Dedicated app for processing derivation on-the-fly

Hi,

We are in the process of migrating our application from Refile to Shrine, and we were wondering about the usefulness and feasibility of running the uploader derivation endpoint in a dedicated app.

Refile provides a Rack application written with Sinatra (see the Refile README, section 3, "Rack Application") for that exact purpose, and it is what we have been running for many years with success. We still use a CDN to cache all the attachments processed on the fly by the dedicated app, but since that app’s median response time (~550 ms) and memory usage aren’t great, it seems like a good idea to keep it separate from our main app.

I looked for such a setup on the Shrine website (which is great btw) but wasn’t able to find anything about it. Is it a recommended setup? Any chance that the response time and memory usage of the Shrine uploader derivation endpoint would be better than Refile’s? I suppose it mostly comes down to the mini_magick asset processing, so I expect them to be in the same ballpark.

Do you think we would just be fine processing all our files on the fly inside our main app?

Sorry for the late reply, this went off my radar.

I looked for such a setup on the Shrine website (which is great btw) but wasn’t able to find something about it. Is it something recommended?

The derivation_endpoint plugin was designed with this use case in mind, but honestly I’ve never run it, nor heard of anybody running it, as a standalone service, so I don’t know if it’s fully supported.

Any chance that the response time and memory usage of the Shrine uploader derivation endpoint would be better than the Refile one?

I haven’t compared the speed or memory usage with Refile, so I wouldn’t know. The derivation_endpoint plugin does have more options if the default streaming doesn’t work for you: the processed file can be cached to storage, and the endpoint can optionally redirect to the uploaded processed file instead of streaming it through the app (the latter strategy is what Active Storage uses). These options can also help with potential limitations of the CDN you’re using, such as a cache TTL that is too short.
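For reference, a minimal sketch of what enabling those two behaviours looks like, assuming the `upload` and `upload_redirect` options documented for the derivation_endpoint plugin (storage and option values here are illustrative):

```ruby
# Sketch: cache derivatives to storage and redirect to them,
# instead of streaming every response through the app.
Shrine.plugin :derivation_endpoint,
  secret_key:      ENV["SHRINE_SECRET_KEY"],
  upload:          true,  # persist the processed file to storage on first request
  upload_redirect: true   # afterwards, redirect to the uploaded derivative
```

With `upload: true` alone the derivative is still streamed through the app (just not reprocessed); adding `upload_redirect: true` offloads serving to the storage/CDN entirely.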

Do you think we would just be fine processing all our files on the fly inside our main app?

This really depends on your workload and on the image processing library you’re using. I personally haven’t had big processing requirements at work, so I cannot say at what point performance becomes an issue.
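One library-level lever worth noting: the image_processing gem also ships a libvips backend with the same chainable API as the MiniMagick one, and libvips is generally faster and lighter on memory than ImageMagick. A sketch of the thumbnail derivation rewritten against it (assumes libvips is installed on the host):

```ruby
# Sketch: same derivation, but using the libvips backend of image_processing
# instead of MiniMagick. The chainable API is the same; auto_orient is a
# MiniMagick-specific step (vips thumbnailing handles EXIF rotation itself).
require "image_processing/vips"

class Uploader < Shrine
  derivation :thumbnail do |file, width, height|
    ImageProcessing::Vips
      .source(file)
      .resize_to_fit(width.to_i, height.to_i)
      .call
  end
end
```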

Hey, thanks for your answer.

I actually already gave it a try on our staging environment and it has been working fine so far; a minimal config.ru served by Puma did the trick:

require 'bundler/setup'

require 'shrine'
require 'shrine/storage/s3'
require 'image_processing/mini_magick'
s3_options = {
  bucket: ENV['S3_ATTACHMENTS_BUCKET'],
  region: 'eu-west-1',
  access_key_id: ENV['S3_ACCESS_ID'],
  secret_access_key: ENV['S3_ACCESS_KEY']
}
Shrine.storages = {
  cache: Shrine::Storage::S3.new(prefix: 'cache', **s3_options),
  store: Shrine::Storage::S3.new(**s3_options),
}
Shrine.plugin :derivation_endpoint, secret_key: ENV['SHRINE_SECRET_KEY']

class Uploader < Shrine
  derivation :thumbnail do |file, width, height|
    ImageProcessing::MiniMagick
      .source(file)
      .resize_to_fit(width.to_i, height.to_i)
      .auto_orient
      .call
  end
end

map '/derivations/image' do
  # 1 week in seconds (`1.week` would require ActiveSupport)
  run Uploader.derivation_endpoint(cache_control: "public, max-age=#{60 * 60 * 24 * 7}")
end
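For completeness, the main app would then generate signed URLs pointing at this service with `derivation_url`. A sketch, assuming both apps share the same `SHRINE_SECRET_KEY` and that the `host` and `prefix` values below are adapted to your deployment:

```ruby
# Sketch: main-app side configuration so derivation URLs point at the
# dedicated service (hostname is illustrative).
Shrine.plugin :derivation_endpoint,
  secret_key: ENV["SHRINE_SECRET_KEY"],
  host:       "https://derivations.example.com",
  prefix:     "derivations/image"

# Generates a signed URL served by the standalone app:
photo.image.derivation_url(:thumbnail, 300, 300)
```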

Our app handles a lot of low-latency requests, and having high-latency image processing in the same process could put it under stress, which is why we were investigating this.
