Shrine

S3 EncryptionV2 - unable to locate encryption envelope (during download)

Hello! I’m not sure where I’m making a mistake, but I want to use client-side encryption and it’s giving me headaches.

  1. I’ve configured my storage to use S3 with an encryption client
  2. The encryption client uses an RSA key to encrypt the AES keys, which in turn encrypt the files. All according to: docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/EncryptionV2.html
encryption_client = Aws::S3::EncryptionV2::Client.new(
  encryption_key: OpenSSL::PKey::RSA.new(ENV.fetch('S3_ENCRYPTION_KEY')),
  key_wrap_schema: :rsa_oaep_sha1,
  content_encryption_schema: :aes_gcm_no_padding,
  security_profile: :v2
)
  3. Storage is configured to use S3 with the encryption client, nothing special. I use S3 via a Heroku add-on called CloudCube (also tested with Bucketeer), if that matters.
Shrine.storages = {
  cache: Shrine::Storage::S3.new(
    client: encryption_client,
    bucket: ENV.fetch('AWS_BUCKET_NAME'),
    prefix: "#{ENV.fetch('AWS_CUBE_NAME') { '' }}/uploads/cache",
    region: ENV.fetch('AWS_REGION'),
    access_key_id: ENV.fetch('AWS_ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY')
  ),
  store: Shrine::Storage::S3.new(
    client: encryption_client,
    bucket: ENV.fetch('AWS_BUCKET_NAME'),
    prefix: "#{ENV.fetch('AWS_CUBE_NAME') { '' }}/uploads",
    region: ENV.fetch('AWS_REGION'),
    access_key_id: ENV.fetch('AWS_ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY')
  )
}

There is no problem during upload, but during download I get: Aws::S3::EncryptionV2::Errors::DecryptionError (unable to locate encryption envelope)

I’m new to S3 (and Shrine) and not sure how to proceed. From what I understand, the EncryptionV2 client should automatically upload the AES key, encrypted with the provided RSA key, as object metadata. During download it should fetch that metadata, decrypt the AES key using the RSA key, and finally decrypt the file.
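To make sure my mental model is right, here is the envelope scheme sketched locally with plain OpenSSL. This is only an illustration of the concept, not the SDK’s actual code; the key size and padding are my assumptions based on the `:rsa_oaep_sha1` / `:aes_gcm_no_padding` schemas:

```ruby
require "openssl"
require "securerandom"

# Local sketch of envelope encryption: a random AES-256-GCM content key
# encrypts the payload; the RSA key wraps (encrypts) that content key.
# Only the wrapped key would be stored alongside the object as metadata.
rsa = OpenSSL::PKey::RSA.new(2048)

# Encrypt side: random content key, AES-GCM over the payload.
content_key = SecureRandom.random_bytes(32)
cipher = OpenSSL::Cipher.new("aes-256-gcm").encrypt
cipher.key = content_key
iv = cipher.random_iv
ciphertext = cipher.update("hello") + cipher.final
tag = cipher.auth_tag

# The "envelope": the content key wrapped with RSA-OAEP (SHA-1 by default),
# which is what the SDK would persist in the object metadata.
wrapped_key = rsa.public_encrypt(content_key, OpenSSL::PKey::RSA::PKCS1_OAEP_PADDING)

# Decrypt side: unwrap the content key, then decrypt the payload.
unwrapped = rsa.private_decrypt(wrapped_key, OpenSSL::PKey::RSA::PKCS1_OAEP_PADDING)
decipher = OpenSSL::Cipher.new("aes-256-gcm").decrypt
decipher.key = unwrapped
decipher.iv = iv
decipher.auth_tag = tag
plaintext = decipher.update(ciphertext) + decipher.final
puts plaintext  # => "hello"
```

If the SDK works like this, then the “unable to locate encryption envelope” error would mean the wrapped key never makes it back from the object metadata during download.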

I’m not sure what I’m doing wrong. I’ve tried to verify whether the metadata is uploaded correctly, but to be honest I don’t know how (yet). I’m starting to believe the problem lies with Bucketeer or CloudCube not giving me permission to read/write metadata. Hopefully that’s not the case (as I’d really rather not manage S3 myself) and you can help me find my mistake. :wink:
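One way I’m thinking of checking the metadata is a `head_object` call with a plain (non-encryption) S3 client. This is just a sketch; the object key is a placeholder, and the envelope metadata names are my assumption from the SDK docs:

```ruby
require "aws-sdk-s3"

# Inspect the raw object metadata with a plain S3 client to verify the
# encryption envelope was actually written during upload.
s3 = Aws::S3::Client.new(
  region: ENV.fetch("AWS_REGION"),
  access_key_id: ENV.fetch("AWS_ACCESS_KEY_ID"),
  secret_access_key: ENV.fetch("AWS_SECRET_ACCESS_KEY")
)

resp = s3.head_object(
  bucket: ENV.fetch("AWS_BUCKET_NAME"),
  key: "some/object/key" # placeholder: a key uploaded via the encryption client
)

# An EncryptionV2 upload should show envelope entries such as
# "x-amz-key-v2", "x-amz-iv", "x-amz-cek-alg", "x-amz-wrap-alg".
pp resp.metadata
```

If that hash comes back empty for an object uploaded through the encryption client, it would point to the metadata being dropped somewhere (which is what I suspect about the Heroku add-ons).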

Cheers
OG