Shrine backgrounding with Lambdakiq on AWS

Hi,

I am trying to get the Shrine promote and destroy jobs to work with S3 storage and backgrounding via Lambdakiq. My app successfully invokes the AWS Lambda (container image) version of the app, but it usually times out while the promote job is being processed. I am testing with a video upload that adds metadata, following the example on the Shrine website:

require "streamio-ffmpeg" # https://github.com/streamio/streamio-ffmpeg 
 
class VideoUploader < Shrine
  add_metadata do |io|
    movie = Shrine.with_file(io) do |file|
      FFMPEG::Movie.new(file.path)
    end
 
    { "duration"   => movie.duration,
      "bitrate"    => movie.bitrate,
      "resolution" => movie.resolution,
      "frame_rate" => movie.frame_rate }
  end
end

The promote job is the following:

class Attachment::PromoteJob < ApplicationJob
  def perform(attacher_class, record_class, record_id, name, file_data)
    attacher_class = Object.const_get(attacher_class)
    record         = Object.const_get(record_class).find(record_id)

    attacher = attacher_class.retrieve(model: record, name: name, file: file_data)
    attacher.atomic_promote
  rescue Shrine::AttachmentChanged, ActiveRecord::RecordNotFound
    # The attachment has changed or the record has been deleted, nothing to do
  end
end
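For reference, the backgrounding hook-up in my initializer looks roughly like this (adapted from the Shrine backgrounding guide; the job class names and argument order match my PromoteJob above):

```ruby
# config/initializers/shrine.rb (excerpt)
Shrine.plugin :backgrounding

# Enqueue promotion as an ActiveJob job, which Lambdakiq then
# routes to the Lambda function via SQS.
Shrine::Attacher.promote_block do
  Attachment::PromoteJob.perform_later(
    self.class.name, record.class.name, record.id, name.to_s, file_data
  )
end

Shrine::Attacher.destroy_block do
  Attachment::DestroyJob.perform_later(self.class.name, data)
end
```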

It is unclear to me when the additional metadata is added. Also, what exactly is the "io" argument passed to add_metadata? The docs say it isn't always a file object. If I wanted to extract metadata from only a partial or incomplete chunk of the uploaded video, how would you suggest I do that?

AWS Lambda has a limit of 512MB temporary storage and a video upload may be larger than that.

@janko or should I use the Down gem inside the add_metadata block to limit how much of the file is downloaded?

@janko does Shrine stream files from S3 by default when using Shrine.with_file?

The S3 bucket I am using for uploads has encryption at rest enabled.