AWS S3
The @uppy/aws-s3 plugin can be used to upload files directly to an S3 bucket or an S3-compatible provider, such as Google Cloud Storage or DigitalOcean Spaces. Uploads can be signed using Companion, temporary credentials, or a custom signing function.
When should I use it?
Not sure which uploader is best for you? Read “Choosing the uploader you need”.
You can use this plugin when you prefer a client-to-storage setup over a client-to-server-to-storage setup (such as Transloadit or Tus). This may be preferable in some cases, for instance to reduce costs or the complexity of running a server and load balancer with Tus.
Multipart uploads start to become valuable for larger files (100 MiB+), as they upload a single object as a set of parts. This has certain benefits, such as improved throughput (uploading parts in parallel) and quick recovery from network issues (only the failed parts need to be retried). The downside is request overhead: besides the upload requests themselves, multipart uploads need creation, signing (unless you are signing on the client), and completion requests. For example, if you are uploading files that are only a couple of kilobytes with a 100 ms round-trip latency, you are spending 400 ms on overhead and only a few milliseconds on uploading.
In short
- We recommend the default value of shouldUseMultipart, which enables multipart uploads only for large files.
- If you prefer less overhead (roughly 20% faster uploads), you can use temporary S3 credentials with getTemporarySecurityCredentials. This means users get a single token which allows them to do bucket operations for longer, instead of a short-lived signed URL per resource. This is a security trade-off.
Install
Install with NPM:

npm install @uppy/aws-s3

or with Yarn:

yarn add @uppy/aws-s3

Alternatively, you can load Uppy from a CDN.
The bundle consists of most Uppy plugins, so this method is not recommended for production, as your users will have to download all plugins when you are likely using only a few.
It can be useful to speed up your development environment, so don't hesitate to use it to get you started.
<!-- 1. Add CSS to `<head>` -->
<link href="https://releases.transloadit.com/uppy/v4.9.0/uppy.min.css" rel="stylesheet">

<!-- 2. Initialize -->
<div id="uppy"></div>
<script type="module">
  import { Uppy, AwsS3 } from "https://releases.transloadit.com/uppy/v4.9.0/uppy.min.mjs"
  new Uppy().use(AwsS3, { /* see options */ })
</script>
Use
Setting up your S3 bucket
To use this plugin with S3, we need to set up a bucket with the right permissions and CORS settings.
S3 buckets do not allow public uploads for security reasons. To allow Uppy and the browser to upload directly to a bucket, its CORS permissions need to be configured.
CORS permissions can be found in the S3 Management Console. Click the bucket that will receive the uploads, then go into the Permissions tab and select the CORS configuration button. A JSON document will be shown that defines the CORS configuration. (AWS used to use XML, but now only allows JSON.) More information about the S3 CORS format is available in the AWS documentation.
The configuration required for Uppy and Companion is this:
[
  {
    "AllowedOrigins": ["https://my-app.com"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "MaxAgeSeconds": 3000,
    "AllowedHeaders": [
      "Authorization",
      "x-amz-date",
      "x-amz-content-sha256",
      "content-type"
    ],
    "ExposeHeaders": ["ETag", "Location"]
  },
  {
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["GET"],
    "MaxAgeSeconds": 3000
  }
]
A good practice is to use two CORS rules: one for uploading files and one for viewing them. This is done above: the first object in the array defines the rules for uploading, and the second the rules for viewing. The example above makes files publicly viewable; you can change it according to your needs.
If you are using an IAM policy to allow access to the S3 bucket, the policy must have at least the s3:PutObject and s3:PutObjectAcl permissions scoped to the bucket in question. In-depth documentation about CORS rules is available on the AWS documentation site.
Use with your own server
The recommended approach is to integrate @uppy/aws-s3 with your own server. You will need to implement the signing endpoints yourself and point the corresponding options (documented below) at them: getUploadParameters for regular uploads, and createMultipartUpload, signPart, listParts, completeMultipartUpload, and abortMultipartUpload for multipart uploads.
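For example, here is a minimal sketch for the non-multipart case. The /s3/params endpoint is a hypothetical route on your server that presigns a PUT URL and responds with the { method, url, fields, headers } shape described under getUploadParameters below:

import Uppy from '@uppy/core';
import AwsS3 from '@uppy/aws-s3';

const uppy = new Uppy().use(AwsS3, {
  // Called once per file for regular (non-multipart) uploads.
  async getUploadParameters(file, { signal }) {
    // `/s3/params` is a hypothetical endpoint on your server.
    const response = await fetch('/s3/params', {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ filename: file.name, contentType: file.type }),
      signal,
    });
    if (!response.ok) throw new Error('Failed to presign', { cause: response });
    return response.json(); // { method, url, fields, headers }
  },
});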
Use with Companion
Companion has S3 routes built in for a plug-and-play experience with Uppy. Generally it's better for access control, observability, and scaling to integrate @uppy/aws-s3 with your own server. You may want to use Companion for creating, signing, and completing your S3 uploads if you already need Companion for remote files (such as from Google Drive). Otherwise it's not worth the hosting effort.
import Uppy from '@uppy/core';
import Dashboard from '@uppy/dashboard';
import AwsS3 from '@uppy/aws-s3';

import '@uppy/core/dist/style.min.css';
import '@uppy/dashboard/dist/style.min.css';

const uppy = new Uppy()
  .use(Dashboard, { inline: true, target: 'body' })
  .use(AwsS3, {
    endpoint: 'https://companion.uppy.io',
  });
Use with TypeScript
Uppy always puts the response to an upload in file.response.body. If you want this to be type safe with @uppy/aws-s3, you can import the AwsBody type and pass it as the second generic to Uppy.
import Uppy from '@uppy/core';
import Dashboard from '@uppy/dashboard';
import AwsS3, { type AwsBody } from '@uppy/aws-s3';

import '@uppy/core/dist/style.min.css';
import '@uppy/dashboard/dist/style.min.css';

// Set this to `any` or `Record<string, unknown>`
// if you do not set any metadata yourself
type Meta = { license: string };

const uppy = new Uppy<Meta, AwsBody>()
  .use(Dashboard, { inline: true, target: 'body' })
  .use(AwsS3, { endpoint: '...' });

const id = uppy.addFile(/* ... */);
await uppy.upload();

const body = uppy.getFile(id).response!.body;
const { location } = body; // This is now type safe
API
Options
shouldUseMultipart(file)
A boolean, or a function that returns a boolean, which is called for each file that is uploaded, with the corresponding UppyFile instance as argument.

By default, all files with a file.size ≤ 100 MiB are uploaded in a single chunk, and all files larger than that as multipart.
Here’s how to use it:
uppy.use(AwsS3, {
shouldUseMultipart(file) {
// Use multipart only for files larger than 100MiB.
return file.size > 100 * 2 ** 20;
},
});
limit
The maximum number of files to upload in parallel (number, default: 6).

Note that the number of files is not the same as the number of concurrent connections. Multipart uploads can use many requests per file. For example, for a 100 MiB file with a part size of 5 MiB:

- 1 createMultipartUpload request
- 100/5 = 20 sign requests (unless you are signing on the client)
- 100/5 = 20 upload requests
- 1 completeMultipartUpload request

Unless you have a good reason and are well informed about the average internet speed of your users, do not set this higher. S3 uses HTTP/1.1, which limits the number of concurrent connections, and presigned URLs may expire before their uploads get a chance to start.
endpoint
URL to your backend or to Companion (string, default: null).
headers
Custom headers that should be sent along to the endpoint on every request (Object, default: {}).
cookiesRule
This option correlates to the RequestCredentials value (string, default: 'same-origin'). This tells the plugin whether to send cookies to the endpoint.
retryDelays
retryDelays are the intervals in milliseconds used to retry a failed chunk (array, default: [0, 1000, 3000, 5000]). This is also used for signPart(). Set to null to disable automatic retries and fail instantly if any chunk fails to upload.
getChunkSize(file)
A function that returns the minimum chunk size to use when uploading the given file as multipart.
For multipart uploads, chunks are sent in batches to have presigned URLs generated with signPart(). To reduce the number of requests for large files, you can choose a larger chunk size, at the cost of having to re-upload more data if one chunk fails to upload.

S3 requires a minimum chunk size of 5 MiB and supports at most 10,000 chunks per multipart upload. If getChunkSize() returns a size that's too small, Uppy will increase it to S3's minimum requirements.
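For example, a sketch that scales the part size up for very large files so the whole file fits within S3's 10,000-part limit:

uppy.use(AwsS3, {
  getChunkSize(file) {
    const MiB = 2 ** 20;
    // 5 MiB parts cover files up to ~48.8 GiB (10,000 × 5 MiB).
    // For larger files, grow the part size to stay under 10,000 parts.
    return Math.max(5 * MiB, Math.ceil(file.size / 10_000));
  },
});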
getUploadParameters(file, options)
When using Companion to sign S3 uploads, you should not define this option.
A function that will be called for each non-multipart upload.
- file: UppyFile, the file that will be uploaded
- options: object
  - signal: AbortSignal
- Returns: object | Promise<object>
  - method: string, the HTTP method to be used for the upload. This should be either PUT or POST, depending on the type of upload used.
  - url: string, the URL to which the upload request will be sent. When using a presigned PUT upload, this should be the URL to the S3 object with signing parameters included in the query string. When using a POST upload with a policy document, this should be the root URL of the bucket.
  - fields: object, an object with form fields to send along with the upload request. For presigned PUT uploads (the default), this should be left empty.
  - headers: object, an object with request headers to send along with the upload request. When using a presigned PUT upload, it's a good idea to provide headers['content-type']. That will make sure that the request uses the same content-type that was used to generate the signature. Without it, the browser may decide on a different content-type instead, causing S3 to reject the upload.
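For a presigned PUT upload, the return value could look like this (the URL is illustrative):

{
  "method": "PUT",
  "url": "https://bucket.region.amazonaws.com/path/to/file.jpg?X-Amz-Signature=...",
  "fields": {},
  "headers": { "content-type": "image/jpeg" }
}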
createMultipartUpload(file)
A function that calls the S3 Multipart API to create a new upload.
file is the file object from Uppy's state. The most relevant keys are file.name and file.type.

Return a Promise for an object with keys:

- uploadId: the UploadID returned by S3.
- key: the object key for the file. This needs to be returned to allow it to be different from file.name.
The default implementation calls out to Companion’s S3 signing endpoints.
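If you are not using Companion, a custom implementation could look like this sketch, where /s3/multipart is a hypothetical route on your server that calls S3's CreateMultipartUpload:

uppy.use(AwsS3, {
  async createMultipartUpload(file) {
    const response = await fetch('/s3/multipart', {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ filename: file.name, contentType: file.type }),
    });
    if (!response.ok) throw new Error('Could not create multipart upload', { cause: response });
    return response.json(); // { uploadId, key }
  },
});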
listParts(file, { uploadId, key })
A function that calls the S3 Multipart API to list the parts of a file that have already been uploaded.
Receives the file object from Uppy's state, and an object with keys:

- uploadId: the UploadID of this Multipart upload.
- key: the object key of this Multipart upload.
Return a Promise for an array of S3 Part objects, as returned by the S3 Multipart API. Each object has keys:
- PartNumber: the index in the file of the uploaded part.
- Size: the size of the part in bytes.
- ETag: the ETag of the part, used to identify it when completing the multipart upload and combining all parts into a single file.
The default implementation calls out to Companion’s S3 signing endpoints.
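A custom implementation could look like this sketch, assuming a hypothetical route on your server that proxies S3's ListParts call:

uppy.use(AwsS3, {
  async listParts(file, { uploadId, key }) {
    const query = new URLSearchParams({ key });
    const response = await fetch(`/s3/multipart/${uploadId}/parts?${query}`);
    if (!response.ok) throw new Error('Could not list parts', { cause: response });
    return response.json(); // [{ PartNumber, Size, ETag }, ...]
  },
});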
signPart(file, partData)
A function that generates a signed URL for the specified part number. The partData argument is an object with the keys:

- uploadId: the UploadID of this Multipart upload.
- key: the object key in the S3 bucket.
- partNumber: the number of this part; can't be zero.
- body: the data that will be signed.
- signal: an AbortSignal that may be used to abort an ongoing request.
This function should return an object, or a promise that resolves to an object, with the following keys:

- url: the presigned URL, as a string.
- headers: (optional) custom headers to send along with the request to the S3 endpoint.
An example of what the return value should look like:
{
"url": "https://bucket.region.amazonaws.com/path/to/file.jpg?partNumber=1&...",
"headers": { "Content-MD5": "foo" }
}
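And a sketch of a custom implementation, again assuming a hypothetical signing route on your server:

uppy.use(AwsS3, {
  async signPart(file, { uploadId, key, partNumber, signal }) {
    const query = new URLSearchParams({ key, partNumber });
    const response = await fetch(`/s3/multipart/${uploadId}/sign?${query}`, { signal });
    if (!response.ok) throw new Error('Could not sign part', { cause: response });
    return response.json(); // { url, headers? }
  },
});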
abortMultipartUpload(file, { uploadId, key })
A function that calls the S3 Multipart API to abort a Multipart upload, and removes all parts that have been uploaded so far.
Receives the file object from Uppy's state, and an object with keys:

- uploadId: the UploadID of this Multipart upload.
- key: the object key of this Multipart upload.
This is typically called when the user cancels an upload. Cancellation cannot fail in Uppy, so the result of this function is ignored.
The default implementation calls out to Companion’s S3 signing endpoints.
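Since the result is ignored, a custom implementation can be a simple fire-and-forget request, as in this sketch (hypothetical route):

uppy.use(AwsS3, {
  async abortMultipartUpload(file, { uploadId, key }) {
    const query = new URLSearchParams({ key });
    // Errors are ignored by Uppy, since cancellation cannot fail.
    await fetch(`/s3/multipart/${uploadId}?${query}`, { method: 'DELETE' });
  },
});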
completeMultipartUpload(file, { uploadId, key, parts })
A function that calls the S3 Multipart API to complete a Multipart upload, combining all parts into a single object in the S3 bucket.
Receives the file object from Uppy's state, and an object with keys:

- uploadId: the UploadID of this Multipart upload.
- key: the object key of this Multipart upload.
- parts: an S3-style list of parts, an array of objects with ETag and PartNumber properties. This can be passed straight to S3's Multipart API.
Return a Promise for an object with properties:
- location: (optional) a publicly accessible URL to the object in the S3 bucket.
The default implementation calls out to Companion’s S3 signing endpoints.
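A custom implementation could look like this sketch, assuming a hypothetical completion route on your server that calls S3's CompleteMultipartUpload:

uppy.use(AwsS3, {
  async completeMultipartUpload(file, { uploadId, key, parts }) {
    const query = new URLSearchParams({ key });
    const response = await fetch(`/s3/multipart/${uploadId}/complete?${query}`, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ parts }),
    });
    if (!response.ok) throw new Error('Could not complete multipart upload', { cause: response });
    return response.json(); // { location }
  },
});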
allowedMetaFields

Pass an array of field names to limit the metadata fields that will be added to the upload as query parameters.

- Set it to false (or an empty array) to not send any fields.
- Set it to ['name'] to only send the name field.
- Set it to true (the default) to send all metadata fields.
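For example, to send only the name metadata field along with each upload:

uppy.use(AwsS3, {
  allowedMetaFields: ['name'],
});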
getTemporarySecurityCredentials(options)
When using Companion as a backend, you can pass true instead of a function. Note that setting up Companion will not simplify the process compared to signing on the client.

A boolean (when using Companion), or an (async) function to retrieve temporary security credentials used for all uploads, instead of signing every part. This results in less request overhead, which can lead to around 20% faster uploads. This is a security trade-off. We recommend not using this option unless you are familiar with the security implications of temporary credentials and how to set up your bucket to make it work. See the "Requesting temporary security credentials" AWS guide for more information.

It's strongly recommended to have some sort of caching process to avoid requesting more temporary tokens than necessary.
- options: object
  - signal: AbortSignal
- Returns: object | Promise<object>
  - credentials: object
    - AccessKeyId: string
    - SecretAccessKey: string
    - SessionToken: string
    - Expiration: string
  - bucket: string
  - region: string
If you are using Companion (for example because you want to support remote upload sources), you can pass a boolean:
uppy.use(AwsS3, {
// This is an example using Companion:
endpoint: 'https://companion.uppy.io',
getTemporarySecurityCredentials: true,
shouldUseMultipart: (file) => file.size > 100 * 2 ** 20,
});
In the most common case, you are using a different backend, in which case you need to specify a function:
uppy.use(AwsS3, {
// This is an example not using Companion:
async getTemporarySecurityCredentials({ signal }) {
const response = await fetch('/sts-token', { signal });
if (!response.ok)
throw new Error('Failed to fetch STS', { cause: response });
return response.json();
},
shouldUseMultipart: (file) => file.size > 100 * 2 ** 20,
});
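To follow the caching recommendation above, you could memoize the credentials until shortly before they expire. A minimal sketch, reusing the hypothetical /sts-token endpoint:

let cachedCredentials;

uppy.use(AwsS3, {
  async getTemporarySecurityCredentials({ signal }) {
    const oneMinute = 60_000;
    // Reuse the cached token until a minute before it expires.
    if (
      cachedCredentials &&
      new Date(cachedCredentials.credentials.Expiration).getTime() - Date.now() > oneMinute
    ) {
      return cachedCredentials;
    }
    const response = await fetch('/sts-token', { signal });
    if (!response.ok) throw new Error('Failed to fetch STS', { cause: response });
    cachedCredentials = await response.json();
    return cachedCredentials;
  },
  shouldUseMultipart: (file) => file.size > 100 * 2 ** 20,
});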