Change the storage right before a spec

Hi,
I am looking for ways to set a different storage only to a specific spec.

For all specs I use Memory storage. For one spec I need s3 storage.

How could I set it?
If right before the spec I do

Shrine.storages[:cache] = Shrine::Storage::S3.new(prefix: "cache", **s3_options) # temporary
Shrine.storages[:store] = Shrine::Storage::S3.new(**s3_options)                  # permanent
...
picture = FactoryBot.create(:content_picture)

the storage is not changed. The file is still uploaded to the memory storage.

Thanks

I assume you are setting your storages in your initialiser?

Try registering the S3 storages as extra entries alongside the memory ones:

Shrine.storages = {
  cache:    Shrine::Storage::Memory.new, # temporary
  store:    Shrine::Storage::Memory.new, # permanent
  s3_cache: Shrine::Storage::S3.new(prefix: "cache",   bucket: "my-app", region: "eu-west-1", access_key_id: "abc", secret_access_key: "xyz"),
  s3_store: Shrine::Storage::S3.new(prefix: "uploads", bucket: "my-app", region: "eu-west-1", access_key_id: "abc", secret_access_key: "xyz")
}

(Note that Shrine::Storage::Memory takes no arguments; the "public" directory and prefix: options belong to Shrine::Storage::FileSystem.)

And then right before the spec where you want to upload to S3, load the plugin:

# s3_spec.rb
Shrine.plugin :default_storage, cache: :s3_cache, store: :s3_store

This changes the default cache and store storages just for that particular spec: https://shrinerb.com/docs/plugins/default_storage
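One caveat: a default_storage change is global, so you will want to restore the memory storages after that spec runs. A minimal sketch of the swap-and-restore pattern in plain Ruby (STORAGES and with_storages are hypothetical stand-ins, not Shrine API; an RSpec around hook would do the same thing with Shrine.storages):

```ruby
# Hypothetical registry standing in for Shrine.storages.
STORAGES = { cache: :memory, store: :memory }

def with_storages(overrides)
  originals = STORAGES.slice(*overrides.keys) # remember what we replace
  STORAGES.merge!(overrides)
  yield
ensure
  STORAGES.merge!(originals)                  # restore even if the spec fails
end

with_storages(cache: :s3_cache, store: :s3_store) do
  puts STORAGES[:cache]  # s3_cache while the block runs
end
puts STORAGES[:cache]    # memory again afterwards
```

Wrapping the override in a block like this keeps the S3 storages scoped to the one spec instead of leaking into the rest of the suite.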

See if that works.

(Or perhaps someone more knowledgeable will be able to point you in the right direction).

But generally speaking, I would avoid hitting a real S3 bucket while running tests. If you don't want to use memory storage in tests, you can perhaps use Minio instead.

Refer here for Minio: https://shrinerb.com/docs/testing#minio
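Following that guide, pointing Shrine's S3 storage at a local Minio server is mostly a matter of client options. A sketch, assuming Minio on its default port; the bucket name and credentials are placeholders:

```ruby
require "shrine/storage/s3"

# Point the aws-sdk-s3 client at a local Minio server instead of AWS.
# force_path_style is needed because Minio does not serve
# bucket-subdomain URLs by default.
Shrine::Storage::S3.new(
  bucket:            "my-app",
  region:            "us-east-1",
  access_key_id:     "minio-access-key",
  secret_access_key: "minio-secret-key",
  endpoint:          "http://localhost:9000",
  force_path_style:  true,
)
```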

Thanks. Minio is great, I love it. But for this spec I actually need to touch the real S3: all specs should use memory storage, except this one, which should go to S3. Long story short, the spec calls an external tool running inside a Docker container on a different machine, and that tool uses curl to fetch the file. So the file really needs to be on S3.


Here is a real-life example of why I cannot trust the whole stack between me and the user seeing the picture.

I was not changing the storage because I thought "well, things work", and it turned out a difference between AWS and Shrine left us without derivatives being generated for a few days.


And then right before the spec where you want to upload to S3, load the plugin:

Can you expand on this? I'm having a similar issue where, for one particular spec, I need the file to actually exist on disk for a call to File.open.

@Jakanapes Hi Patrick,

I am not sure I understand your particular use case.

If you just want to check whether a file exists, you can simply use Pathname#exist?: https://ruby-doc.org/stdlib-1.9.3/libdoc/pathname/rdoc/Pathname.html#method-i-exist-3F ...or perhaps I am not understanding something?
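If the real need is a file on disk for File.open to read, one option is to write the uploaded data to a Tempfile first; Shrine's UploadedFile#download returns a Tempfile in much the same way. A minimal plain-Ruby sketch of the idea (the data string is a placeholder):

```ruby
require "tempfile"

# Materialise in-memory data onto disk so File.open works on a real path.
data = "fake image bytes"
tempfile = Tempfile.new(["picture", ".jpg"])
tempfile.binmode
tempfile.write(data)
tempfile.flush

puts File.open(tempfile.path, "rb", &:read)  # prints "fake image bytes"
```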