Assigning raw JSON ignores storage attribute

As the title suggests, if I do something like:

@my_object.update(
  image: {
    id: params[:key],
    storage: "cache",
    metadata: {}
  }
)

then storage becomes “store” upon saving, despite being set to “cache”. However, if I assign the data directly (to image_data instead of image), like:

@my_object.update(
  image_data: {
    id: params[:key],
    storage: "cache",
    metadata: {}
  }
)

Then it works. According to the docs, I should be able to assign the file data without needing to write to the underlying column: https://shrinerb.com/docs/metadata#direct-uploads

Shrine’s activerecord plugin automatically promotes assigned cached files to permanent storage when the record is saved; that’s why you see the attached file uploaded to store.

If you want to promote to permanent storage manually, you can disable or override callbacks.
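For reference, here’s a configuration sketch of what that could look like: disabling the automatic callbacks when loading the activerecord plugin, then promoting explicitly where you want it (uploader and attachment names are just examples):

```ruby
# In the uploader (sketch): skip Shrine's automatic save/destroy callbacks.
class ImageUploader < Shrine
  plugin :activerecord, callbacks: false
end

# Later, wherever promotion should actually happen:
# @my_object.image_attacher.promote
```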

Thank you! Is there any reason I couldn’t/shouldn’t just assign the data column manually to avoid the callbacks in certain scenarios?

If you’re changing the attached file on an existing record, with direct column assignment the previous attached file (if any) won’t be automatically deleted. File validations won’t be triggered automatically either, nor will restore_cached_data behaviour if you have that plugin loaded. All of these can be triggered manually, it’s just extra work.
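To illustrate what that extra work looks like, here’s a rough, untested sketch assuming Shrine 3’s attacher API (reload, validate, and UploadedFile#delete); new_data is a hypothetical attachment data hash:

```ruby
attacher = @my_object.image_attacher
previous = attacher.file                  # remember the currently attached file

@my_object.image_data = new_data.to_json  # direct column assignment
attacher.reload                           # re-read attachment state from the column
attacher.validate                         # run file validations manually

@my_object.save
previous.delete if previous && previous != attacher.file # delete the replaced file
```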

Might I ask why you want to trigger promotion manually?

I gotcha. Basically, I have two separate S3 buckets: one acts as the cache storage, the other as the main store. When anything is uploaded directly to the S3 cache bucket, a Lambda function is triggered. It generates derivatives, extracts metadata, moves everything to the store bucket, and then essentially fires a webhook to my application with the metadata and storage info.

So when a file is uploaded directly to S3, my application’s front-end fires a request to the back-end, which saves the metadata from the S3 object and marks it as being on cache storage. Then when the Lambda function succeeds, it sends the updated metadata to my application, which includes updating the storage attribute to the store bucket.

Basically, all of the metadata extraction and promotion happens in AWS’ ecosystem, and my application is just saving attachment data that’s either derived from the successful S3 upload (the cache step) or sent to it (when the Lambda function completes and the files have been moved to the final bucket).
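In that setup the webhook handler’s job is essentially just to build Shrine’s attachment data hash from the Lambda payload and write it to the column. A minimal self-contained sketch (the payload field names and helper are hypothetical, not part of Shrine):

```ruby
require "json"

# Hypothetical helper: build Shrine-style attachment data from a Lambda
# webhook payload that reports the object key and extracted metadata after
# the file has been moved to the store bucket.
def attachment_data_from_webhook(payload)
  {
    "id"       => payload.fetch("key"),
    "storage"  => "store",                     # the Lambda already moved the file
    "metadata" => payload.fetch("metadata", {})
  }
end

payload = {
  "key"      => "uploads/abc123.jpg",
  "metadata" => { "size" => 1024, "mime_type" => "image/jpeg" }
}

data = attachment_data_from_webhook(payload)
json = JSON.generate(data) # what gets assigned to image_data
```

The resulting JSON would then be written with something like @my_object.update(image_data: json), with the manual-cleanup caveats mentioned above.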