Presign AWS S3 request stalling in Chrome

Hello all,

Maybe someone can help me; I am not very good at analyzing JS and XHR in the browser.

In Firefox, the presign request mywebsite/mycontroller/presign/s3/params?filename=splash.jpg&type=image%2Fjpeg returns its JSON almost instantly, but in Chrome it invariably takes 20 seconds. Under Chrome's Network/Timing tab for the XHR request I see “stalled: 20 secs”, but I still get the proper JSON in the end.

I only noticed this behavior today (because Uppy was taking ages to upload the original file to S3) and thought it was my mistake, because I have made a lot of things asynchronous and return the Attachment and versions data through ActionCable.

But the problem actually seems to happen before the async promotion and versioning, so the additional websockets are probably not the reason…
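For context, my flow looks roughly like this (a simplified sketch; PromoteJob and the channel name are placeholders, and I've left out the versions processing):

# config/initializers/shrine.rb
Shrine.plugin :backgrounding
Shrine::Attacher.promote { |data| PromoteJob.perform_async(data) }

# app/jobs/promote_job.rb
class PromoteJob
  include Sidekiq::Worker

  def perform(data)
    # re-fetches the record and moves the cached file to :store
    attacher = Shrine::Attacher.promote(data)
    return unless attacher # record deleted or attachment changed meanwhile

    # push the promoted attachment data back to the browser over ActionCable
    ActionCable.server.broadcast("attachments_#{attacher.record.id}",
                                 attacher.record.photo_data)
  end
end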

Here is my presign code:

def presign
  # delegates to Shrine's presign_endpoint plugin, which builds the
  # presigned S3 upload parameters as a Rack response triple
  set_rack_response PhotoUploader.presign_response(:cache, request.env)
end

# unpacks a Rack response triple ([status, headers, body]) into the Rails response
def set_rack_response((status, headers, body))
  self.status = status
  self.headers.merge!(headers)
  self.response_body = body
end

(this happens in development; I will investigate tomorrow whether it is reproducible in production)

Hi Maxence,

When I started testing stuff on Rails 6, I sometimes noticed stalled requests when loading new pages. I assumed it was Webpacker compiling, though it could also be ActionCable, I'm not sure.

If you move presigning out of the controller and mount it directly in your routes, does the delay still occur? I expect it will, actually, because requests will still go through all the Rack middleware.
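With the presign_endpoint plugin that would look something like this (the mount path is just an example):

# config/routes.rb
Rails.application.routes.draw do
  # Rack app generated by Shrine's presign_endpoint plugin,
  # served directly without going through a Rails controller
  mount Shrine.presign_endpoint(:cache) => "/presign"
end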

I would also try disabling Webpacker and ActionCable and seeing whether the delay goes away. Rack middleware can be removed from the stack in the app configuration, so you could try removing Webpacker::DevServerProxy that way (see the sketch below). For ActionCable I don't know how you would disable it.
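E.g. something along these lines in config/environments/development.rb (untested):

# config/environments/development.rb
Rails.application.configure do
  # take the webpack-dev-server proxy out of the middleware stack
  config.middleware.delete Webpacker::DevServerProxy
end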

Hi Janko,

I am on Rails 5.2 and not using Webpacker. Today I started afresh and couldn't see any latency when pinging the presign controller action in either browser! Yet Uppy was stalling indefinitely. I could also spot some unexpected websocket connection closings in Chrome. Puma wouldn't close “gracefully” and I had to kill the pid.

I realised I was maybe running on old stuff, so I updated:
aws-sdk-s3 from 1.30.1 (January 2019) to the newest version
puma from 3.11.4 (April 2018) to the newest version

I have also reverted the changes I had made to the Puma initializer in development (I had raised the number of workers).
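In practice that just means going back to something like the stock Rails config/puma.rb, without a workers line (so Puma runs in single mode in development):

# config/puma.rb -- roughly the default generated by Rails
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count

port        ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RAILS_ENV") { "development" }

# no `workers` call: single mode, one process
plugin :tmp_restart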

It works more consistently now. Puma still needs 15 seconds to close properly, but I can't see any problem with Uppy and S3.
The problem might have come from the websockets: when I close Puma, the websocket connections close, and if I restart Puma and drop files into Uppy without reloading the page (i.e. without letting the websockets reconnect), Uppy stalls. After a page refresh it works properly again.

If the problem comes back I will try moving the presign into the routes as you suggest, and I'll keep you posted.