AWS S3

The @uppy/aws-s3 plugin can be used to upload files directly to an S3 bucket or an S3-compatible provider, such as Google Cloud Storage or DigitalOcean Spaces. Uploads can be signed using Companion, temporary credentials, or a custom signing function.

When should I use it?

tip

Not sure which uploader is best for you? Read “Choosing the uploader you need”.

You can use this plugin when you prefer a client-to-storage setup over a client-to-server-to-storage setup (such as Transloadit or Tus). This may in some cases be preferable, for instance to reduce costs or to avoid the complexity of running a server and load balancer for Tus.

Multipart uploads start to become valuable for larger files (100 MiB+), as they upload a single object as a set of parts. This has certain benefits, such as improved throughput (uploading parts in parallel) and quick recovery from network issues (only the failed parts need to be retried). The downside is request overhead: besides the upload requests, the plugin needs to send creation, signing (unless you are signing on the client), and completion requests. For example, if you are uploading files that are only a couple of kilobytes with a 100ms roundtrip latency, you are spending 400ms on overhead (four sequential roundtrips) and only a few milliseconds on actually transferring the data.

In short

  • We recommend setting shouldUseMultipart to enable multipart uploads only for large files.
  • If you prefer less overhead (roughly 20% faster uploads), you can use temporary S3 credentials with getTemporarySecurityCredentials. Users then get a single longer-lived token that allows them to do bucket operations, instead of a short-lived signed URL per resource. This is a security trade-off.

Install

npm install @uppy/aws-s3

Use

Setting up your S3 bucket

To use this plugin with S3, we need to set up a bucket with the right permissions and CORS settings.

S3 buckets do not allow public uploads for security reasons. To allow Uppy and the browser to upload directly to a bucket, its CORS permissions need to be configured.

CORS permissions can be found in the S3 Management Console. Click the bucket that will receive the uploads, then go to the Permissions tab and select the CORS configuration button. A JSON document will be shown that defines the bucket’s CORS configuration. (AWS used to use XML, but now only allows JSON.) More information about the S3 CORS format is available in the AWS documentation.

The configuration required for Uppy and Companion is this:

[
  {
    "AllowedOrigins": ["https://my-app.com"],
    "AllowedMethods": ["GET", "PUT"],
    "MaxAgeSeconds": 3000,
    "AllowedHeaders": [
      "Authorization",
      "x-amz-date",
      "x-amz-content-sha256",
      "content-type"
    ],
    "ExposeHeaders": ["ETag", "Location"]
  },
  {
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["GET"],
    "MaxAgeSeconds": 3000
  }
]

A good practice is to use two CORS rules: one for uploading files, and one for viewing them. In the example above, the first object in the array defines the rules for uploading and the second the rules for viewing. The example makes files publicly viewable; you can change it according to your needs.

If you are using an IAM policy to allow access to the S3 bucket, the policy must have at least the s3:PutObject and s3:PutObjectAcl permissions scoped to the bucket in question. In-depth documentation about CORS rules is available on the AWS documentation site.

Use with your own server

The recommended approach is to integrate @uppy/aws-s3 with your own server. You will need to do the following things:

  1. Set up an S3 bucket.
  2. Create endpoints on your server. You can create them as edge functions (such as AWS Lambda functions), inside Next.js as an API route, or wherever your server runs.
    • GET > /uppy/sts: get the temporary security credentials (optional).
    • POST > /uppy/s3: get parameters and pre-signed URL for non-multipart upload.
    • POST > /uppy/s3-multipart: create the multipart upload.
    • GET > /uppy/s3-multipart/:id: get the uploaded parts.
    • GET > /uppy/s3-multipart/:id/:partNumber: sign the part and return a pre-signed URL.
    • POST > /uppy/s3-multipart/:id/complete: complete the multipart upload.
    • DELETE > /uppy/s3-multipart/:id: abort the multipart upload.
  3. Set up Uppy (a minimal sketch follows below).
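
For step 3, here is a minimal client-side sketch that wires the plugin’s option functions (each documented in the API section below) to the endpoints above. The endpoint paths follow step 2; the JSON request bodies and the query parameter used to pass the object key are assumptions, so adapt them to your server.

import Uppy from '@uppy/core';
import AwsS3 from '@uppy/aws-s3';

// Small helper for the hypothetical endpoints listed in step 2.
async function apiRequest(path, options = {}) {
  const response = await fetch(path, options);
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json();
}

const uppy = new Uppy().use(AwsS3, {
  shouldUseMultipart: (file) => file.size > 100 * 2 ** 20,

  // POST /uppy/s3: parameters and pre-signed URL for non-multipart uploads.
  getUploadParameters: (file, { signal }) =>
    apiRequest('/uppy/s3', {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ filename: file.name, contentType: file.type }),
      signal,
    }),

  // POST /uppy/s3-multipart: create the multipart upload.
  createMultipartUpload: (file) =>
    apiRequest('/uppy/s3-multipart', {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ filename: file.name, contentType: file.type }),
    }),

  // GET /uppy/s3-multipart/:id/:partNumber: sign a single part.
  signPart: (file, { uploadId, key, partNumber, signal }) =>
    apiRequest(
      `/uppy/s3-multipart/${encodeURIComponent(uploadId)}/${partNumber}?key=${encodeURIComponent(key)}`,
      { signal },
    ),

  // GET /uppy/s3-multipart/:id: list the parts uploaded so far.
  listParts: (file, { uploadId, key }) =>
    apiRequest(
      `/uppy/s3-multipart/${encodeURIComponent(uploadId)}?key=${encodeURIComponent(key)}`,
    ),

  // POST /uppy/s3-multipart/:id/complete: complete the multipart upload.
  completeMultipartUpload: (file, { uploadId, key, parts }) =>
    apiRequest(
      `/uppy/s3-multipart/${encodeURIComponent(uploadId)}/complete?key=${encodeURIComponent(key)}`,
      {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({ parts }),
      },
    ),

  // DELETE /uppy/s3-multipart/:id: abort the multipart upload.
  // The result is ignored by Uppy, so no response parsing is needed.
  async abortMultipartUpload(file, { uploadId, key }) {
    await fetch(
      `/uppy/s3-multipart/${encodeURIComponent(uploadId)}?key=${encodeURIComponent(key)}`,
      { method: 'DELETE' },
    );
  },
});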

Use with Companion

Companion has S3 routes built-in for a plug-and-play experience with Uppy.

caution

Generally it’s better for access control, observability, and scaling to integrate @uppy/aws-s3 with your own server. You may want to use Companion for creating, signing, and completing your S3 uploads if you already need Companion for remote files (such as from Google Drive). Otherwise it’s not worth the hosting effort.

import Uppy from '@uppy/core';
import Dashboard from '@uppy/dashboard';
import AwsS3 from '@uppy/aws-s3';

import '@uppy/core/dist/style.min.css';
import '@uppy/dashboard/dist/style.min.css';

const uppy = new Uppy()
  .use(Dashboard, { inline: true, target: 'body' })
  .use(AwsS3, {
    shouldUseMultipart: (file) => file.size > 100 * 2 ** 20,
    companionUrl: 'https://companion.uppy.io',
  });

API

Options

shouldUseMultipart(file)

warning

Until the next major version, not setting this option uses the legacy version of this plugin. This is a suboptimal experience for some of your users’ uploads. It’s best for speed and stability to upload large (100 MiB+) files with multipart and small files with regular uploads.

A boolean, or a function that returns a boolean, called for each file that is uploaded, with the corresponding UppyFile instance as its argument.

By default, all files are uploaded as multipart. In a future version, all files with a file.size ≤ 100 MiB will be uploaded in a single chunk, and all larger files as multipart.

Here’s how to use it:

uppy.use(AwsS3, {
  shouldUseMultipart(file) {
    // Use multipart only for files larger than 100 MiB.
    return file.size > 100 * 2 ** 20;
  },
});

limit

The maximum number of files to upload in parallel (number, default: 6).

Note that the number of files is not the same as the number of concurrent connections. Multipart uploads can use many requests per file. For example, for a 100 MiB file with a part size of 5 MiB:

  • 1 createMultipartUpload request
  • 100/5 = 20 sign requests (unless you are signing on the client)
  • 100/5 = 20 upload requests
  • 1 completeMultipartUpload request

caution

Unless you have a good reason and are well informed about the average internet speed of your users, do not set this higher. S3 uses HTTP/1.1, which limits the number of concurrent connections, and your pre-signed URLs may expire before the uploads complete.

companionUrl

URL to a Companion instance (string, default: null).

companionHeaders

Custom headers that should be sent along to Companion on every request (Object, default: {}).

companionCookiesRule

This option correlates to the RequestCredentials value (string, default: 'same-origin').

This tells the plugin whether to send cookies to Companion.

retryDelays

retryDelays are the intervals in milliseconds used to retry a failed chunk (array, default: [0, 1000, 3000, 5000]).

This is also used for signPart(). Set to null to disable automatic retries, and fail instantly if any chunk fails to upload.

getChunkSize(file)

A function that returns the minimum chunk size to use when uploading the given file.

The S3 Multipart plugin uploads files in chunks. Chunks are sent in batches to have presigned URLs generated with signPart(). To reduce the number of requests for large files, you can choose a larger chunk size, at the cost of having to re-upload more data if one chunk fails to upload.

S3 requires a minimum chunk size of 5 MiB and supports at most 10,000 chunks per multipart upload. If getChunkSize() returns a size that’s too small, Uppy will increase it to S3’s minimum requirements.
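
For example, a minimal sketch that keeps chunks at or above S3’s minimum while targeting at most roughly 1,000 parts per file (the 1,000-part target is an arbitrary choice):

uppy.use(AwsS3, {
  getChunkSize(file) {
    // Stay at or above S3's 5 MiB minimum, but use larger chunks for
    // large files so we stay well below the 10,000-part limit.
    return Math.max(5 * 2 ** 20, Math.ceil(file.size / 1000));
  },
});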

getUploadParameters(file, options)

note

When using Companion to sign S3 uploads, you should not define this option.

A function that will be called for each non-multipart upload.

  • file: UppyFile the file that will be uploaded
  • options: object
    • signal: AbortSignal
  • Returns: object | Promise<object>
    • method: string, the HTTP method to be used for the upload. This should be either PUT or POST, depending on the type of upload used.
    • url: string, the URL to which the upload request will be sent. When using a presigned PUT upload, this should be the URL to the S3 object with signing parameters included in the query string. When using a POST upload with a policy document, this should be the root URL of the bucket.
    • fields: object, an object with form fields to send along with the upload request. For presigned PUT uploads (the default), this should be left empty.
    • headers: object, an object with request headers to send along with the upload request. When using a presigned PUT upload, it’s a good idea to provide headers['content-type']. That makes sure the request uses the same content-type that was used to generate the signature. Without it, the browser may decide on a different content-type, causing S3 to reject the upload.
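
As an illustration, here is a minimal sketch for a presigned PUT upload, assuming a hypothetical POST /uppy/s3 endpoint on your server that responds with { method, url }:

uppy.use(AwsS3, {
  async getUploadParameters(file, { signal }) {
    const response = await fetch('/uppy/s3', {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ filename: file.name, contentType: file.type }),
      signal,
    });
    if (!response.ok) throw new Error('Failed to sign the upload');
    const { method, url } = await response.json();
    return {
      method,
      url,
      fields: {},
      // Send the same content-type that was used to generate the signature.
      headers: { 'content-type': file.type },
    };
  },
});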

createMultipartUpload(file)

A function that calls the S3 Multipart API to create a new upload.

file is the file object from Uppy’s state. The most relevant keys are file.name and file.type.

Return a Promise for an object with keys:

  • uploadId - The UploadID returned by S3.
  • key - The object key for the file. This needs to be returned to allow it to be different from the file.name.
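
An example of what the return value could look like:

{
  "uploadId": "ExampleUploadId",
  "key": "path/to/file.jpg"
}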

The default implementation calls out to Companion’s S3 signing endpoints.

listParts(file, { uploadId, key })

A function that calls the S3 Multipart API to list the parts of a file that have already been uploaded.

Receives the file object from Uppy’s state, and an object with keys:

  • uploadId - The UploadID of this Multipart upload.
  • key - The object key of this Multipart upload.

Return a Promise for an array of S3 Part objects, as returned by the S3 Multipart API. Each object has keys:

  • PartNumber - The index in the file of the uploaded part.
  • Size - The size of the part in bytes.
  • ETag - The ETag of the part, used to identify it when completing the multipart upload and combining all parts into a single file.
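
An example of what the return value could look like, for a file uploaded in 5 MiB parts:

[
  { "PartNumber": 1, "Size": 5242880, "ETag": "abc123" },
  { "PartNumber": 2, "Size": 5242880, "ETag": "def456" }
]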

The default implementation calls out to Companion’s S3 signing endpoints.

signPart(file, partData)

A function that generates a signed URL for the specified part number. The partData argument is an object with the keys:

  • uploadId - The UploadID of this Multipart upload.
  • key - The object key in the S3 bucket.
  • partNumber - The number of the part, starting at 1 (can’t be zero).
  • body - The data that will be signed.
  • signal - An AbortSignal that may be used to abort an ongoing request.

This function should return an object, or a promise that resolves to an object, with the following keys:

  • url - The presigned URL, as a string.
  • headers - (Optional) Custom headers to send along with the request to the S3 endpoint.

An example of what the return value should look like:

{
  "url": "https://bucket.region.amazonaws.com/path/to/file.jpg?partNumber=1&...",
  "headers": { "Content-MD5": "foo" }
}
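
If you are not signing through Companion, a minimal sketch could fetch the presigned URL from your own endpoint (the path follows the hypothetical server layout from “Use with your own server” and is an assumption):

uppy.use(AwsS3, {
  async signPart(file, { uploadId, key, partNumber, signal }) {
    const response = await fetch(
      `/uppy/s3-multipart/${encodeURIComponent(uploadId)}/${partNumber}?key=${encodeURIComponent(key)}`,
      { signal },
    );
    if (!response.ok) throw new Error('Failed to sign the part');
    // Expected to resolve to { url } and optionally { headers }.
    return response.json();
  },
});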

abortMultipartUpload(file, { uploadId, key })

A function that calls the S3 Multipart API to abort a Multipart upload, and removes all parts that have been uploaded so far.

Receives the file object from Uppy’s state, and an object with keys:

  • uploadId - The UploadID of this Multipart upload.
  • key - The object key of this Multipart upload.

This is typically called when the user cancels an upload. Cancellation cannot fail in Uppy, so the result of this function is ignored.

The default implementation calls out to Companion’s S3 signing endpoints.

completeMultipartUpload(file, { uploadId, key, parts })

A function that calls the S3 Multipart API to complete a Multipart upload, combining all parts into a single object in the S3 bucket.

Receives the file object from Uppy’s state, and an object with keys:

  • uploadId - The UploadID of this Multipart upload.
  • key - The object key of this Multipart upload.
  • parts - S3-style list of parts, an array of objects with ETag and PartNumber properties. This can be passed straight to S3’s Multipart API.

Return a Promise for an object with properties:

  • location - (Optional) A publicly accessible URL to the object in the S3 bucket.
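
An example of what the return value could look like:

{
  "location": "https://bucket.region.amazonaws.com/path/to/file.jpg"
}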

The default implementation calls out to Companion’s S3 signing endpoints.

allowedMetaFields: null

Pass an array of field names to limit the metadata fields that will be added to the upload as query parameters.

  • Set this to ['name'] to only send the name field.
  • Set this to null (the default) to send all metadata fields.
  • Set this to an empty array [] to not send any fields.
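
For example, to only send the name field:

uppy.use(AwsS3, {
  // Only the `name` metadata field is sent along with uploads.
  allowedMetaFields: ['name'],
});
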
Deprecated options

prepareUploadParts(file, partData)

A function that generates a batch of signed URLs for the specified part numbers.

Receives the file object from Uppy’s state. The partData argument is an object with keys:

  • uploadId - The UploadID of this Multipart upload.
  • key - The object key in the S3 bucket.
  • parts - An array of objects with the part number and chunk (Array<{ number: number, chunk: blob }>). number can’t be zero.

prepareUploadParts should return a Promise for an object with keys:

  • presignedUrls - A JavaScript object with the part numbers as keys and the presigned URL for each part as the value.
  • headers - (Optional) Custom headers to send along with every request per part ({ 1: { 'Content-MD5': 'hash' }}). These are indexed by (1-based) part number too, so you can, for instance, send an MD5 hash for validating each part to S3.

An example of what the return value should look like:

{
  "presignedUrls": {
    "1": "https://bucket.region.amazonaws.com/path/to/file.jpg?partNumber=1&...",
    "2": "https://bucket.region.amazonaws.com/path/to/file.jpg?partNumber=2&...",
    "3": "https://bucket.region.amazonaws.com/path/to/file.jpg?partNumber=3&..."
  },
  "headers": {
    "1": { "Content-MD5": "foo" },
    "2": { "Content-MD5": "bar" },
    "3": { "Content-MD5": "baz" }
  }
}

If an error occurred, reject the Promise with an Object with the following keys:

{ "source": { "status": 500 } }

status is the HTTP code and is required for determining whether to retry the request. prepareUploadParts will be retried if the code is 0, 409, 423, or between 500 and 600.

getTemporarySecurityCredentials(options)

note

When using Companion as a backend, you can pass true instead of a function. Note that Companion then only provides the temporary credentials; the signing itself still happens on the client.

A boolean (when using Companion), or an (async) function to retrieve temporary security credentials used for all uploads, instead of signing every part. This results in less request overhead, which can lead to around 20% faster uploads. This is a security trade-off. We recommend not using this option unless you are familiar with the security implications of temporary credentials and how to set up your bucket to make them work. See the Requesting temporary security credentials AWS guide for more information.

It’s strongly recommended to have some sort of caching process to avoid requesting more temporary tokens than necessary (a sketch follows at the end of this section).

  • options: object
    • signal: AbortSignal
  • Returns: object | Promise<object>
    • credentials: object
      • AccessKeyId: string
      • SecretAccessKey: string
      • SessionToken: string
      • Expiration: string
    • bucket: string
    • region: string

If you are using Companion (for example because you want to support remote upload sources), you can pass a boolean:

uppy.use(AwsS3, {
  // This is an example using Companion:
  companionUrl: 'https://companion.uppy.io',
  getTemporarySecurityCredentials: true,
  shouldUseMultipart: (file) => file.size > 100 * 2 ** 20,
});

In the most common case, you are using a different backend, in which case you need to specify a function:

uppy.use(AwsS3, {
  // This is an example not using Companion:
  async getTemporarySecurityCredentials({ signal }) {
    const response = await fetch('/sts-token', { signal });
    if (!response.ok)
      throw new Error('Failed to fetch STS', { cause: response });
    return response.json();
  },
  shouldUseMultipart: (file) => file.size > 100 * 2 ** 20,
});
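
As recommended above, it’s a good idea to cache the credentials instead of requesting them for every upload. A minimal sketch that reuses the response until shortly before its Expiration (the /sts-token endpoint and the one-minute safety margin are assumptions):

let cachedCredentials;

async function getTemporarySecurityCredentials({ signal }) {
  // Refetch only when there is no cached response, or when the cached
  // credentials expire within the next minute.
  const expiresSoon =
    cachedCredentials == null ||
    new Date(cachedCredentials.credentials.Expiration).getTime() - Date.now() <
      60_000;
  if (expiresSoon) {
    const response = await fetch('/sts-token', { signal });
    if (!response.ok)
      throw new Error('Failed to fetch STS', { cause: response });
    cachedCredentials = await response.json();
  }
  return cachedCredentials;
}

uppy.use(AwsS3, {
  getTemporarySecurityCredentials,
  shouldUseMultipart: (file) => file.size > 100 * 2 ** 20,
});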