Promoting from S3 cache storage to CloudFlare Stream tus server as store

Hello,

Thanks Janko for your awesome work on Shrine.

We are attempting to integrate CloudFlare Stream into our application for our users. CloudFlare Stream provides a tus server endpoint to which videos can be uploaded. This works great with Uppy, and we can then store the metadata and attachment URL in Shrine by passing the uploaded file data to a hidden field on upload success in our Rails app.

We wish we could stop there, but unfortunately CloudFlare’s authentication is a static API key, which would then be exposed in our frontend JS with Uppy. So we can’t upload directly to the CloudFlare tus endpoint, or users could take our API token and abuse it for unlimited CloudFlare Stream access anywhere.

To get around this and keep our account secure, we’d like to upload to S3 as a cache storage and then use our Shrine uploader to pass the data to the CloudFlare tus server as the permanent storage when the record is saved.

I’m not sure how to do this, though. Everything I have read uses the tus server as the cache storage, which makes sense, but unfortunately we can’t do this with CloudFlare because they haven’t implemented anything like the signature authentication that Transloadit provides for using Uppy securely.

If what I need to do is take the URL of the file in cache storage, pass it to CloudFlare’s tus endpoint with an authorization header, and then use the storage URL and metadata from the response to set the attachment attributes, how would this look in my uploader? (I’m not sure the promote method is even the right place for this, because I don’t want my web server to spend resources reading the file data; I simply want to pass the cache storage URL to CloudFlare’s endpoint and save CloudFlare’s response.)

Finally, I am still using Shrine version 2, in case that changes anything.

Thank you!

Hi,

Thanks Janko for your awesome work on Shrine.

You’re welcome :heart:

To get around this and keep our account secure, we’d like to upload to S3 as a cache storage and then use our Shrine uploader to pass the data to the CloudFlare tus server as the permanent storage when the record is saved.

Normally, if you needed to use the tus API to upload the file to the CloudFlare Stream API, and you wanted to do it from Ruby, you’d need a Ruby tus client (the equivalent of tus-js-client, which exists for JavaScript/Node, but in Ruby). If you look at the list of implementations, you’ll see that nobody has written a Ruby client yet.

As it happens, I started writing a tus-ruby-client locally some time ago, and got pretty far. However, I got stuck implementing one part of the protocol, and I still need to verify whether the current API is correct. It’s currently a private repo on GitHub; I plan to open source it once I’ve released it as a gem and written documentation. The reason I haven’t yet is that nobody asked for it, and I ran out of steam :stuck_out_tongue:

However, the command-line examples in the CloudFlare Stream docs suggest that you can give CloudFlare a remote URL instead, which is a much simpler solution. So, if your file is cached on S3 storage, you can just give CloudFlare the S3 URL of your file.
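
A rough sketch of that request from Ruby, using the same `http` gem and `/stream/copy` endpoint as the storage class below (the account ID, API key, email, and S3 URL here are placeholders, and the exact request format should be checked against the Stream docs):

require "http"

# hypothetical values, just to show the shape of the request
account_id = "YOUR_ACCOUNT_ID"
s3_url     = "https://my-bucket.s3.amazonaws.com/cache/video.mp4"

HTTP.headers("X-Auth-Key" => "YOUR_API_KEY", "X-Auth-Email" => "you@example.com")
    .post("https://api.cloudflare.com/client/v4/accounts/#{account_id}/stream/copy",
          json: { url: s3_url })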

Since in your case CloudFlare will act as the permanent storage, you can create a Shrine storage that encapsulates this, and Shrine’s promotion will then use it with no additional code changes. For example:

# Gemfile
gem "http", "~> 4.3"

require "http"

class Shrine
  module Storage
    class CloudflareStream
      CLOUDFLARE_API = "https://api.cloudflare.com/client/v4"

      attr_reader :account_id, :api_key, :email

      def initialize(account_id:, api_key:, email:)
        @account_id = account_id
        @api_key    = api_key
        @email      = email
      end

      def upload(io, id, **)
        # `io` is the cached Shrine::UploadedFile, so `io.url` is its S3 URL,
        # which we hand to CloudFlare Stream's copy-from-URL endpoint
        response = http.post("#{stream_url}/copy", json: { url: io.url })

        # the v4 API wraps responses in { "result" => ... }; the video
        # identifier should be under "uid" (verify against the Stream docs)
        identifier = response.parse.dig("result", "uid")

        id.replace(identifier) # make the video identifier the stored id
      end

      def open(id, **)
        fail NotImplementedError
      end

      def exists?(id, **)
        fail NotImplementedError
      end

      def url(id)
        "#{stream_url}/#{id}"
      end

      def delete(id)
        http.delete(url(id))
      end

      private

      def http
        HTTP.headers(
          "X-Auth-Key"   => api_key,
          "X-Auth-Email" => email,
          "Content-Type" => "application/json",
        )
      end

      def stream_url
        "#{CLOUDFLARE_API}/accounts/#{account_id}/stream"
      end
    end
  end
end

Shrine.storages = {
  cache: Shrine::Storage::S3.new(...),
  store: Shrine::Storage::CloudflareStream.new(
    account_id: "...",
    api_key: "...",
    email: "...",
  ),
}
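
With that in place, the regular cache-to-store promotion flow needs no special handling. A minimal sketch of the attachment side, assuming a hypothetical VideoUploader and Movie model on Shrine 2.x with the activerecord plugin, and that the controller receives the cached file JSON from Uppy's hidden field:

class VideoUploader < Shrine
  plugin :activerecord # Shrine 2.x ORM integration
end

class Movie < ActiveRecord::Base
  include VideoUploader::Attachment.new(:video)
end

# params[:video] holds the cached file JSON that Uppy wrote into the hidden field
movie = Movie.create(video: params[:video])

# after save, Shrine's default promotion re-uploads the cached file to :store,
# i.e. calls CloudflareStream#upload with the cached S3 file
movie.video_url # e.g. "https://api.cloudflare.com/client/v4/accounts/.../stream/<video uid>"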

Hope that helps :slightly_smiling_face:

Thank you so much for the help and fast response! This worked perfectly.

I was worried that a solution would require a Ruby tus client, and had missed the line in the CloudFlare docs about being able to use a regular POST request.
