
s3-beam

Usage | Changes

s3-beam is a Clojure/ClojureScript library designed to help you upload files from the browser directly to S3 (CORS upload). s3-beam can also upload files from the browser to DigitalOcean Spaces.

[org.martinklepsch/s3-beam "0.6.0-alpha5"] ;; latest release

Usage

To upload files directly to S3 you need to send special request parameters that are based on your AWS credentials, the file name, the MIME type, the date, etc. Since we don't want to store credentials in the client, these parameters need to be generated on the server side. For this reason the library consists of two parts:

  1. A pluggable route that will send back the required parameters for a given file-name & mime-type
  2. A client-side core.async pipeline setup that will retrieve the special parameters for a given File object, upload it to S3 and report back to you

1. Enable CORS on your S3 bucket

Please follow Amazon's official documentation.

For DigitalOcean Spaces, please follow DigitalOcean's official documentation.
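For reference, a CORS policy for browser POST uploads might look like the following (shown in the JSON format used by the current S3 console; the origin is a placeholder you should replace with your app's domain):

```json
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["POST"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```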

2. Plug-in the route to sign uploads

(ns your.server
  (:require [s3-beam.handler :as s3b]
            [compojure.core :refer [GET defroutes]]
            [compojure.route :refer [resources]]))

(def bucket "your-bucket")
(def aws-zone "eu-west-1")
(def access-key "your-aws-access-key")
(def secret-key "your-aws-secret-key")

(defroutes routes
  (resources "/")
  (GET "/sign" {params :params} (s3b/s3-sign bucket aws-zone access-key secret-key)))

If you want to use a route different from /sign, define it in the handler, (GET "/my-cool-route" ...), and then pass it in the options map to s3-pipe in the frontend.

If you are serving your S3 bucket from DigitalOcean Spaces, through CloudFront, or behind another CDN/proxy, you can pass upload-url as a fifth parameter to s3-sign so that the ClojureScript client is directed to upload through that URL. You still need to pass the bucket name, as the policy that is created and signed is based on the bucket name.
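For example, a signing route that directs uploads through a Spaces or CDN endpoint could look like this sketch (the endpoint URL is a placeholder):

```clojure
;; Sketch only: the fifth argument tells the client where to POST the upload,
;; while the policy is still signed against the bucket name.
(GET "/sign" {params :params}
  (s3b/s3-sign bucket aws-zone access-key secret-key
               "https://your-space.ams3.digitaloceanspaces.com"))
```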

3. Integrate the upload pipeline into your frontend

In your frontend code you can now use s3-beam.client/s3-pipe. s3-pipe's argument is a channel where completed uploads will be reported. The function returns a channel onto which you can put File objects or file maps that should get uploaded. It can also take an extra options map with the previously mentioned :server-url, like so:

(s3/s3-pipe uploaded {:server-url "/my-cool-route"}) ; assuming s3-beam.client is NS aliased as s3

The full options map spec is:

  • :server-url the signing server URL; defaults to "/sign"
  • :response-parser a function to parse the signing server's response into EDN; defaults to read-string
  • :key-fn a function used to generate the object key for the uploaded file on S3; defaults to nil, which means the passed filename is used as the object key
  • :headers-fn a function used to create the headers for the GET request to the signing server. The returned headers should be a Clojure map of header name Strings to corresponding header value Strings.
  • :progress-events? if set to true, progress events will be pushed onto the channel during the transfer; defaults to false
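As an illustration, a :key-fn that files every upload under a prefix might look like this (the prefix scheme is hypothetical, not something s3-beam requires):

```clojure
(defn prefixed-key
  "Illustrative :key-fn: store every upload under an uploads/ prefix."
  [file-name]
  (str "uploads/" file-name))
```

It would then be passed in the options map, e.g. (s3/s3-pipe uploaded {:key-fn prefixed-key}).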

If you choose to place a file map instead of a File object, your file map should follow this spec:

  • :file A File object
  • :identifier (optional) A variable used to uniquely identify this file upload. This will be included in the response channel.
  • :key (optional) The file-name parameter that is sent to the signing server. If a :key key exists in the input-map it will be used instead of the key-fn as an object-key.
  • :metadata (optional) Metadata for the object. See Amazon's API docs for full details on which keys are supported. Keys and values can be strings or keywords. N.B. Keys not on that list will not be accepted. If you want to set arbitrary metadata, it needs to be prefixed with x-amz-meta-*.
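Putting the pieces together, a file map placed onto the upload channel might look like this sketch (the identifier, key, and metadata values are illustrative):

```clojure
(put! upload-queue
      {:file       js-file                 ; a File object from an input or drop event
       :identifier :avatar-upload          ; echoed back on the response channel
       :key        "avatars/user-123.png"  ; used instead of :key-fn for this upload
       :metadata   {:content-disposition "attachment"
                    :x-amz-meta-user-id  "123"}})
```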

An example using it within an Om component:

(ns your.client
  (:require [s3-beam.client :as s3]
            [cljs.core.async :refer [chan put! alts!]]
            ...)
  (:require-macros [cljs.core.async.macros :refer [go]]))

(defcomponent upload-form [app-state owner]
  (init-state [_]
    (let [uploaded (chan 20)]
      {:dropped-queue (chan 20)
       :upload-queue (s3/s3-pipe uploaded)
       :uploaded uploaded
       :uploads []}))
  (did-mount [_]
    (listen-file-drop js/document (om/get-state owner :dropped-queue))
    (go (while true
          (let [{:keys [dropped-queue upload-queue uploaded uploads]} (om/get-state owner)]
            (let [[v ch] (alts! [dropped-queue uploaded])]
              (cond
               (= ch dropped-queue) (put! upload-queue v)
               (= ch uploaded) (om/set-state! owner :uploads (conj uploads v))))))))
  (render-state [this state]
    ; ....
    ))

Return values

The spec for the returned map (in the example above the returned map is v):

  • :type :success
  • :file The File object from the uploaded file
  • :response The upload response from S3 as a map with:
    • :location The S3 URL of the uploaded file
    • :bucket The S3 bucket where the file is located
    • :key The S3 key for the file
    • :etag The etag for the file
  • :xhr The XhrIo object used to POST to S3
  • :identifier A value used to uniquely identify the uploaded file

Or, if an error occurs during upload processing, an error-map will be placed on the response channel:

  • :type :error
  • :identifier A variable used to uniquely identify this file upload. This will be included in the response channel.
  • :error-code The error code from the XHR
  • :error-message The debug message from the error code
  • :http-error-code The HTTP error code

If :progress-events? is set to true, progress events from XhrIo will also be forwarded:

  • :type :progress
  • :file The File object from the uploaded file
  • :bytes-sent Bytes uploaded
  • :bytes-total Total file size in bytes
  • :xhr The XhrIo object used to POST to S3
  • :identifier A value used to uniquely identify the uploaded file
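A consumer of the response channel can dispatch on :type to handle all three cases; here log-progress, handle-success, and handle-error are hypothetical handlers:

```clojure
(go-loop []
  (let [{:keys [type] :as event} (<! uploaded)]
    (case type
      :progress (log-progress (:bytes-sent event) (:bytes-total event))
      :success  (handle-success (:response event))
      :error    (handle-error (:error-message event)))
    (recur)))
```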

Changes

0.6.0-alpha5

  • Fix compilation issues with shadow-cljs (#47)
  • Upgrade dependencies (#48)

0.6.0-alpha4

  • Add support for DigitalOcean Spaces (#44)

0.6.0-alpha3

  • Add support for progress events (#40)

0.6.0-alpha1

  • Add support for assigning metadata to files when uploading them. See the file-map spec above for more details. #37
  • Tweak keys and parameters for communication between the client and server parts of the library. This is backwards and forwards compatible between clients and servers running 0.5.2 and 0.6.0-alpha1.

0.5.2

  • Allow the user to upload to S3 through a custom URL as an extra parameter to sign-upload
  • Support bucket names with a '.' in them
  • Add asserts that arguments are provided

0.5.1

  • Allow the upload-queue to be passed an input-map instead of a file. This input-map follows the spec:

    • :file A File object
    • :identifier (optional) A variable used to uniquely identify this file upload. This will be included in the response channel.
    • :key (optional) The file-name parameter that is sent to the signing server. If a :key key exists in the input-map it will be used instead of the key-fn as an object-key.
  • Introduce error handling. When an error has been thrown while uploading a file to S3 an error-map will be put onto the channel. The error-map follows the spec:

    • :identifier A variable used to uniquely identify this file upload. This will be included in the response channel.
    • :error-code The error code from the XHR
    • :error-message The debug message from the error code
    • :http-error-code The HTTP error code
  • New options are available in the options map:

    • :response-parser a function to parse the signing server's response into EDN; defaults to read-string
    • :key-fn a function used to generate the object key for the uploaded file on S3; defaults to nil, which means the passed filename is used as the object key
    • :headers-fn a function used to create the headers for the GET request to the signing server.
  • Places a map into the upload-channel with:

    • :file The File object from the uploaded file
    • :response The upload response from S3 as a map with:
      • :location The S3 URL of the uploaded file
      • :bucket The S3 bucket where the file is located
      • :key The S3 key for the file
      • :etag The etag for the file
    • :xhr The XhrIo object used to POST to S3
    • :identifier A value used to uniquely identify the uploaded file

0.4.0

  • Support custom ACLs. The sign-upload function that can be used to implement custom signing routes now supports an additional :acl key to upload assets with a different ACL than public-read.

      (sign-upload {:file-name "xyz.html" :mime-type "text/html"}
                   {:bucket bucket
                    :aws-zone aws-zone
                    :aws-access-key access-key
                    :aws-secret-key secret-key
                    :acl "authenticated-read"})
    
  • Change the arity of the s3-beam.handler/policy function.

0.3.1

  • Correctly look up endpoints given a zone parameter (#10)

0.3.0

  • Allow customization of server-side endpoint (1cb9b27)

     (s3/s3-pipe uploaded {:server-url "/my-cool-route"})
    

0.2.0

  • Allow passing of aws-zone parameter to s3-sign handler function (b880736)

Contributing

Pull requests and issues are welcome. There are a few things I'd like to improve:

  • Testing: currently there are no tests
  • Error handling: what happens when the request fails?

Maintainers

Martin Klepsch, Daniel Compton

License

Copyright © 2014 Martin Klepsch

Distributed under the Eclipse Public License either version 1.0 or (at your option) any later version.
