Assigning raw JSON ignores storage attribute

As the title suggests, if I do something like:

```ruby
@my_object.update(
  image: {
    id: params[:key],
    storage: "cache",
    metadata: {}
  }
)
```

then storage becomes “store” upon saving, despite being set to “cache”. But if I instead assign the data raw (to the underlying image_data column rather than image), like:

```ruby
@my_object.update(
  image_data: {
    id: params[:key],
    storage: "cache",
    metadata: {}
  }
)
```

Then it works. According to the docs, I should be able to assign the file data without needing to write to the underlying column: https://shrinerb.com/docs/metadata#direct-uploads

Shrine’s activerecord plugin automatically promotes assigned cached files to permanent storage when the record is saved; that’s why you see the attached file uploaded to :store.

If you want to promote to permanent storage manually, you can disable or override callbacks.
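For example, the activerecord plugin accepts a :callbacks option (a minimal sketch, assuming Shrine 3.x):

```ruby
# Load the ORM integration without the save/destroy callbacks it
# normally adds to the model. Cached files then stay on :cache until
# you call Attacher#promote (or #atomic_promote) yourself.
Shrine.plugin :activerecord, callbacks: false
```

With callbacks disabled, validations are still wired into the model, but promotion and deletion of replaced files become your responsibility.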

Thank you! Is there any reason I couldn’t/shouldn’t just assign the data column manually to avoid the callbacks in certain scenarios?

If you’re changing the attached file on an existing record, then with direct column assignment any previously attached file won’t be automatically deleted. File validations won’t be triggered automatically either, nor will the restore_cached_data behaviour if you have that plugin loaded. All of these can be triggered manually, though; it’s just extra work.
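To make the extra work concrete, here’s a rough sketch of replacing an attachment through the attacher while triggering those steps by hand (assumes the Shrine 3.x attacher API and that the validation plugin is loaded):

```ruby
attacher      = @my_object.image_attacher
previous_file = attacher.file

# set the new file data without dirty tracking, so saving the record
# won't trigger promotion or deletion of the previous file
attacher.set Shrine.uploaded_file(
  id:       params[:key],
  storage:  "cache",
  metadata: {}
)

attacher.validate        # run file validations manually

@my_object.save
previous_file&.delete    # delete the replaced file ourselves
```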

Might I ask why you want to trigger promotion manually?

Gotcha. Basically, I have two separate S3 buckets: one acts as the cache storage, the other as the main store. When anything is uploaded directly to the S3 cache bucket, a Lambda function is triggered. It generates derivatives, extracts metadata, moves everything to the store bucket, and then essentially fires a webhook to my application, passing the metadata and storage info along.

So when a file is uploaded directly to S3, my application’s front-end fires a request to the back-end, which saves the metadata from the S3 object and marks the attachment as being on the cache storage. Then, when the Lambda function succeeds, it sends the updated metadata to my application, which includes updating the storage attribute to the store bucket.

Basically, all of the metadata extraction and promotion happens inside AWS’ ecosystem, and my application is just saving attachment data that’s either derived from the successful S3 upload (in the cache step) or sent to it (when the Lambda function completes, at which point the files have been moved to the final bucket).
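Roughly, the webhook handler just persists whatever the Lambda function sends; a simplified sketch (controller, model, and parameter names are all placeholders):

```ruby
class AttachmentWebhooksController < ApplicationController
  skip_forgery_protection # the request comes from AWS, not a browser form

  # called by the Lambda function once processing has finished and the
  # file has been moved from the cache bucket to the store bucket
  def create
    record = MyModel.find(params[:id])

    record.update!(
      image_data: {
        id:       params[:key],
        storage:  "store",
        metadata: params[:metadata].to_unsafe_h
      }
    )

    head :ok
  end
end
```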

@janko any reason I shouldn’t be doing it like this?

Hi Trevor! No, it’s a completely valid use case; it’s just one I hadn’t considered, so I wasn’t able to offer a good solution. Just curious: how do you link the processed files back to the original record in the webhook, considering the record might not yet be persisted at the time of direct upload?

In the shrine-transloadit gem, I’m actually showing an example where the Transloadit service effectively does the promotion together with processing, and your app receives a webhook with the stored file data. So it’s pretty much the same use case.

There we avoid promotion by using Attacher#atomic_persist instead of Attacher#atomic_promote in the background job. However, this assumes backgrounding is used; it would be great if it were that simple in the default synchronous scenario.
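For reference, a rough sketch of such a job (assumes Sidekiq, Shrine’s backgrounding setup, and that stored_file_data is a placeholder for the file data your webhook received):

```ruby
class FinalizeUploadJob
  include Sidekiq::Worker

  def perform(attacher_class, record_class, record_id, name, file_data, stored_file_data)
    attacher_class = Object.const_get(attacher_class)
    record         = Object.const_get(record_class).find(record_id)

    # raises Shrine::AttachmentChanged if the cached file was replaced
    attacher = attacher_class.retrieve(model: record, name: name, file: file_data)

    # the file is already on :store, so instead of #atomic_promote we
    # swap in the stored file data and persist; the block runs after
    # the attachment change check but before persistence
    attacher.atomic_persist do
      attacher.set Shrine.uploaded_file(stored_file_data)
    end
  rescue Shrine::AttachmentChanged, ActiveRecord::RecordNotFound
    # attachment changed or record was deleted in the meantime; nothing to do
  end
end
```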

So, I would like to provide the ability to skip promotion while still keeping other features like dirty tracking, validation, etc. I’m just not sure how to cleanly add this functionality in a way that makes sense.

Basically, I create the record in our database before the upload to S3 happens, so it already has an ID to tie the upload to (the ID is in the folder path that the object is uploaded to on S3 [i.e. modelname/id/attachmentname/file.whatever], so all sides have access to the ID both before and after the upload).
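On the webhook side, recovering the record from that key is basically just string parsing; purely as an illustration (assumes the first path segment maps cleanly onto a model class name via Rails’ ActiveSupport inflections):

```ruby
# key format: "modelname/id/attachmentname/file.whatever"
model_name, record_id, _attachment_name, _filename = params[:key].split("/", 4)

# illustrative only: look up the model class and record from the key
record = model_name.classify.constantize.find(record_id)
```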
