
AWS S3 (legacy)

The @uppy/aws-s3 plugin can be used to upload files directly to an S3 bucket or to an S3-compatible provider, such as Google Cloud Storage or DigitalOcean Spaces. Uploads can be signed using either Companion or a custom signing function.

This page documents the legacy version of this plugin, which we plan to remove in the next major version.

When should I use it?

tip

Not sure which uploader is best for you? Read “Choosing the uploader you need”.

warning

This plugin is deprecated; you should switch to the modern version of this plugin.

You can use this plugin when you prefer a client-to-storage setup over a client-to-server-to-storage one (such as Transloadit or Tus). This can be preferable, for instance, to reduce costs or to avoid the complexity of running a server and load balancer with Tus.

This plugin can be used with AWS S3, DigitalOcean Spaces, Google Cloud Storage, or any S3-compatible provider. Although all S3-compatible providers are supported, we don’t test against them; this plugin was developed against S3, so a small risk is involved in using the others.

@uppy/aws-s3 is best suited for small files and/or lots of files. If you are planning to upload mostly large files (100 MB+), consider using @uppy/aws-s3-multipart.

Install

npm install @uppy/aws-s3

Use

A quick overview of the complete API.

import Uppy from '@uppy/core';
import Dashboard from '@uppy/dashboard';
import AwsS3 from '@uppy/aws-s3';

import '@uppy/core/dist/style.min.css';
import '@uppy/dashboard/dist/style.min.css';

const uppy = new Uppy()
  .use(Dashboard, { inline: true, target: 'body' })
  .use(AwsS3, { companionUrl: 'http://companion.uppy.io' });

With an AWS S3 bucket

To use this plugin with S3, we need to set up a bucket with the right permissions and CORS settings.

S3 buckets do not allow public uploads for security reasons. To allow Uppy and the browser to upload directly to a bucket, its CORS permissions need to be configured.

CORS permissions can be found in the S3 Management Console. Click the bucket that will receive the uploads, then go into the Permissions tab and select the CORS configuration button. A JSON document will be shown that defines the CORS configuration. (AWS used to use XML but now only allows JSON.) More information about the S3 CORS format is available in the AWS documentation.

The configuration required for Uppy and Companion is this:

[
  {
    "AllowedOrigins": ["https://my-app.com"],
    "AllowedMethods": ["GET", "POST"],
    "MaxAgeSeconds": 3000,
    "AllowedHeaders": [
      "Authorization",
      "x-amz-date",
      "x-amz-content-sha256",
      "content-type"
    ]
  },
  {
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["GET"],
    "MaxAgeSeconds": 3000
  }
]

A good practice is to use two CORS rules: one for viewing the uploaded files, and one for uploading files. This is done above: the first object in the array defines the rules for uploading, and the second the rules for viewing. The example above makes files publicly viewable. You can change it according to your needs.

If you are using an IAM policy to allow access to the S3 bucket, the policy must have at least the s3:PutObject and s3:PutObjectAcl permissions scoped to the bucket in question. In-depth documentation about CORS rules is available on the AWS documentation site.
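As a minimal sketch, such a policy statement could look like this (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}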

With a DigitalOcean Spaces bucket

tip

Check out the DigitalOcean Spaces example in the Uppy repository for a complete, runnable example.

DigitalOcean Spaces is S3-compatible so you only need to change the endpoint and bucket. Make sure you have a key and secret. If not, refer to “How To Create a DigitalOcean Space and API Key”.

When using Companion as standalone, you can set these as environment variables:

export COMPANION_AWS_KEY="xxx"
export COMPANION_AWS_SECRET="xxx"
export COMPANION_AWS_REGION="us-east-1"
export COMPANION_AWS_ENDPOINT="https://{region}.digitaloceanspaces.com"
export COMPANION_AWS_BUCKET="my-space-name"

The {region} string will be replaced by the contents of the COMPANION_AWS_REGION environment variable.

When using Companion as an Express integration, configure the s3 options:

const options = {
  s3: {
    key: 'xxx',
    secret: 'xxx',
    bucket: 'my-space-name',
    region: 'us-east-1',
    endpoint: 'https://{region}.digitaloceanspaces.com',
  },
};
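As a rough sketch, assuming a recent @uppy/companion where companion.app() returns an object with an app property, these options could be mounted like this (the secret and host values are placeholders; see the Companion docs for the full set of required options):

import express from 'express';
import companion from '@uppy/companion';

const app = express();
app.use(express.json());

// Mount Companion with the `s3` options defined above.
const { app: companionApp } = companion.app({
  ...options, // the object from above, containing the `s3` settings
  secret: 'use-a-real-secret', // placeholder
  server: { host: 'localhost:3020' }, // placeholder
  filePath: '/tmp',
});

app.use(companionApp);
const server = app.listen(3020);
// Companion uses websockets for upload progress events.
companion.socket(server);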

With a Google Cloud Storage bucket

For the @uppy/aws-s3 plugin to be able to upload to a GCS bucket, it needs the Interoperability setting enabled. You can enable the Interoperability setting and generate interoperable storage access keys by going to Google Cloud Storage » Settings » Interoperability. Then set the environment variables for Companion like this:

export COMPANION_AWS_ENDPOINT="https://storage.googleapis.com"
export COMPANION_AWS_BUCKET="YOUR-GCS-BUCKET-NAME"
export COMPANION_AWS_KEY="GOOGxxxxxxxxx" # The Access Key
export COMPANION_AWS_SECRET="YOUR-GCS-SECRET" # The Secret

You do not need to configure the region with GCS.

You also need to configure CORS with their HTTP API. If you haven’t done this already, see Configuring CORS on a Bucket in the GCS documentation, or follow the steps below to do it using Google’s API playground.

The JSON format consists of an object with a cors array of CORS configuration objects. For instance:

{
  "cors": [
    {
      "origin": ["https://my-app.com"],
      "method": ["GET", "POST"],
      "maxAgeSeconds": 3000
    },
    {
      "origin": ["*"],
      "method": ["GET"],
      "maxAgeSeconds": 3000
    }
  ]
}

When using presigned PUT uploads, replace the "POST" method with "PUT" in the first entry.

If you have the gsutil command-line tool, you can apply this configuration using the gsutil cors command.

gsutil cors set THAT-FILE.json gs://BUCKET-NAME

Otherwise, you can manually apply it through the OAuth playground:

  1. Get a temporary API token from the Google OAuth2.0 playground
  2. Select the Cloud Storage JSON API v1 » devstorage.full_control scope
  3. Press Authorize APIs and allow access
  4. Click Step 3 - Configure request to API
  5. Configure it as follows:
    1. HTTP Method: PATCH
    2. Request URI: https://www.googleapis.com/storage/v1/b/YOUR_BUCKET_NAME
    3. Content-Type: application/json (should be the default)
    4. Press Enter request body and input your CORS configuration
  6. Press Send the request.

Use with your own server

The recommended approach is to integrate @uppy/aws-s3 with your own server. You will need to do the following things:

  1. Set up a bucket
  2. Create endpoints on your server, as sketched below. You can create them as edge functions (such as AWS Lambda), inside Next.js as an API route, or wherever your server runs
    • POST > /uppy/s3: get upload parameters
  3. Set up Uppy
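
As a minimal sketch of step 2, the endpoint could presign a PUT URL with the AWS SDK and reply in the { method, url, fields, headers } shape that getUploadParameters (documented below) expects. The bucket name and region are placeholders:

import express from 'express';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const app = express();
app.use(express.json());
const client = new S3Client({ region: 'us-east-1' }); // placeholder region

// POST /uppy/s3: presign a PUT URL for one file.
app.post('/uppy/s3', async (req, res) => {
  const { filename, contentType } = req.body;
  const url = await getSignedUrl(
    client,
    new PutObjectCommand({
      Bucket: 'my-bucket-name', // placeholder bucket
      Key: filename,
      ContentType: contentType,
    }),
    { expiresIn: 300 }, // the presigned URL stays valid for 5 minutes
  );
  res.json({
    method: 'PUT',
    url,
    fields: {},
    headers: { 'content-type': contentType },
  });
});

app.listen(8080);

On the client, getUploadParameters would then fetch from this endpoint; the PHP-oriented example in the FAQ below shows the same pattern.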

Use with Companion

Companion has S3 routes built-in for a plug-and-play experience with Uppy.

caution

Generally it’s better for access control, observability, and scaling to integrate @uppy/aws-s3 with your own server. You may want to use Companion for creating, signing, and completing your S3 uploads if you already need Companion for remote files (such as from Google Drive). Otherwise it’s not worth the hosting effort.

import Uppy from '@uppy/core';
import Dashboard from '@uppy/dashboard';
import AwsS3 from '@uppy/aws-s3';

import '@uppy/core/dist/style.min.css';
import '@uppy/dashboard/dist/style.min.css';

const uppy = new Uppy()
  .use(Dashboard, { inline: true, target: 'body' })
  .use(AwsS3, { companionUrl: 'http://companion.uppy.io' });

Options

The @uppy/aws-s3 plugin has the following configurable options:

id

A unique identifier for this plugin (string, default: 'AwsS3').

companionUrl

Companion instance to use for signing S3 uploads (string, default: null).

companionHeaders

Custom headers that should be sent along to Companion on every request (Object, default: {}).

allowedMetaFields

Pass an array of field names to limit the metadata fields that will be added to the upload as query parameters (Array, default: null).

  • Set this to ['name'] to only send the name field.
  • Set this to null (the default) to send all metadata fields.
  • Set this to an empty array [] to not send any fields.

getUploadParameters(file)

note

When using Companion to sign S3 uploads, do not define this option.

A function that returns upload parameters for a file (Promise, default: null).

Parameters should be returned as an object, or as a Promise that fulfills with an object, with keys { method, url, fields, headers }.

  • The method field is the HTTP method to be used for the upload. This should be one of either PUT or POST, depending on the type of upload used.
  • The url field is the URL to which the upload request will be sent. When using a presigned PUT upload, this should be the URL to the S3 object with signing parameters included in the query string. When using a POST upload with a policy document, this should be the root URL of the bucket.
  • The fields field is an object with form fields to send along with the upload request. For presigned PUT uploads, this should be left empty.
  • The headers field is an object with request headers to send along with the upload request. When using a presigned PUT upload, it’s a good idea to provide headers['content-type']. That will make sure that the request uses the same content-type that was used to generate the signature. Without it, the browser may decide on a different content-type instead, causing S3 to reject the upload.
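
For instance, for a presigned PUT upload the returned object could look like this (the URL and signature are illustrative):

{
  method: 'PUT',
  url: 'https://my-bucket-name.s3.amazonaws.com/path/to/file.jpg?X-Amz-Signature=...',
  fields: {},
  headers: { 'content-type': 'image/jpeg' },
}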

timeout

When no upload progress events have been received for this number of milliseconds, assume the connection has an issue and abort the upload (number, default: 30_000).

This is passed through to XHRUpload; see its documentation page for details. Set to 0 to disable this check.

limit

Limit the number of uploads going on at the same time (number, default: 5).

Setting this to 0 means no limit on concurrent uploads, but we recommend a value between 5 and 20.
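
Both this option and timeout above are plain numbers passed when installing the plugin, for example (values are illustrative):

uppy.use(AwsS3, {
  companionUrl: 'http://companion.uppy.io',
  timeout: 60_000, // tolerate up to a minute without progress events
  limit: 10, // at most 10 concurrent uploads
});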

getResponseData(responseText, response)

note

This is an advanced option intended for use with storage solutions that are almost, but not fully, S3-compatible.

Customize response handling once an upload is completed. This passes the function through to @uppy/xhr-upload; see its documentation for API details.

This option is useful when uploading to an S3-like service that doesn’t reply with an XML document, but with something else such as JSON.
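
As a sketch, assuming a hypothetical service that replies with a JSON body containing a url field, the response could be handled like this (mySigningFunction is a hypothetical custom signing function):

uppy.use(AwsS3, {
  getUploadParameters: mySigningFunction,
  getResponseData(responseText, response) {
    // Parse the JSON body instead of the XML that S3 itself would return.
    const data = JSON.parse(responseText);
    // `location` mirrors the field the default XML handling produces.
    return { location: data.url };
  },
});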

locale: {}

export default {
  strings: {
    timedOut: 'Upload stalled for %{seconds} seconds, aborting.',
  },
};

Frequently Asked Questions

How can I generate a presigned URL server-side?

The getUploadParameters function can return a Promise, so upload parameters can be prepared server-side. That way, no private keys to the S3 bucket need to be shared on the client. For example, there could be a PHP server endpoint that prepares a presigned URL for a file:

uppy.use(AwsS3, {
  getUploadParameters(file) {
    // Send a request to our PHP signing endpoint.
    return fetch('/s3-sign.php', {
      method: 'post',
      // Send and receive JSON.
      headers: {
        accept: 'application/json',
        'content-type': 'application/json',
      },
      body: JSON.stringify({
        filename: file.name,
        contentType: file.type,
      }),
    })
      .then((response) => {
        // Parse the JSON response.
        return response.json();
      })
      .then((data) => {
        // Return an object in the correct shape.
        return {
          method: data.method,
          url: data.url,
          fields: data.fields,
          // Provide content type header required by S3
          headers: {
            'Content-Type': file.type,
          },
        };
      });
  },
});

See either the aws-nodejs or aws-php examples in the Uppy repository for a demonstration of how to implement handling of presigned URLs on both the server-side and the client-side.

How can I retrieve the presigned parameters of the uploaded file?

Once the file is uploaded, it’s possible to retrieve the parameters that were generated in getUploadParameters(file) via the file.meta field:

uppy.on('upload-success', (file, data) => {
  const s3Key = file.meta['key']; // the S3 object key of the uploaded file
});
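
If the bucket’s contents are publicly readable, the key can be turned into a URL; the bucket name and region below are placeholders, and keys containing special characters may need URL-encoding:

uppy.on('upload-success', (file, data) => {
  const s3Key = file.meta['key'];
  // Assumes a public bucket; adjust the host for your own setup.
  const publicUrl = `https://my-bucket-name.s3.us-east-1.amazonaws.com/${s3Key}`;
  console.log(publicUrl);
});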