Generate an Amazon S3 pre-signed URL with AWS Lambda and Amazon Api Gateway

Raffaele Garofalo
4 min read · Nov 18, 2022


Following up on my previous post (Handle File Upload with AWS Lambda and Amazon Api Gateway), I received a few questions along the lines of “how can I securely upload a file without passing it through an AWS Lambda?”.

Here you go: the technique is called an Amazon S3 pre-signed URL. You can find more details on what it is and how it works on the official AWS Documentation portal. You can generate a pre-signed URL in multiple ways (AWS CLI, AWS Console), but my example shows how to expose an HTTP endpoint, secured by Amazon Api Gateway, that returns a pre-signed URL with a 10-minute expiration.
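
For a quick comparison, the CLI route is a one-liner. Keep in mind that aws s3 presign only generates a download (GET) URL, whereas the Lambda below signs a putObject so the URL can be used for uploads; the bucket and key here are placeholders:

aws s3 presign s3://my-draft-bucket/report.pdf --expires-in 600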

Why? Because you may want to shield this HTTP API with Amazon Api Gateway, Cognito and a specific IAM Role, so that not just anyone can get a pre-signed URL and upload or download data from a private S3 Bucket 😄

The Design

The design for generating a pre-signed URL is slightly different from my previous one. We still leverage Amazon Api Gateway and AWS Lambda, but only to obtain a pre-signed URL that expires after 10 minutes. The duration is a personal choice: I figured 10 minutes should be enough to upload large files, but it’s a parameter, so you can tailor it to your needs.

Solution for generating pre-signed URLs

AWS Lambda code to generate pre-signed URL

The first step is to use the SAM CLI (or any other framework of your choice) to build a solution composed of:

  • An Amazon Api Gateway
  • An AWS Lambda which generates the pre-signed URL
  • An Amazon S3 bucket

To do that, I simply declare my function inside the template.yaml file as follows:

  getDocumentUrl:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/get-document-url.getDocumentUrlHandler
      Runtime: nodejs16.x
      Architectures:
        - x86_64
      MemorySize: 128
      Timeout: 100
      Description: Get the Document Signed URL
      Policies:
        - S3CrudPolicy:
            BucketName: !Sub "${BucketNamePrefix}-draftbucket"
      Environment:
        Variables:
          DRAFT_BUCKET: !Ref DraftBucket
      Events:
        Api:
          Type: Api
          Properties:
            Path: /api/documents/signed/
            Method: GET
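
The function above references a DraftBucket resource and a BucketNamePrefix parameter that are not shown in this post; a minimal sketch of how they could be declared in the same template.yaml looks like this:

Parameters:
  BucketNamePrefix:
    Type: String

Resources:
  DraftBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "${BucketNamePrefix}-draftbucket"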

And the implementation code in Node.js is extremely simple: I use the aws-sdk to connect to the S3 bucket and generate a signed URL:

// use the aws-sdk (v2) bundled with the Node.js 16.x runtime
const AWS = require('aws-sdk');
const bucket = new AWS.S3();
// the bucket name is injected via the DRAFT_BUCKET environment variable
const bucketName = process.env.DRAFT_BUCKET;

// inside the handler: get the name of the file to be uploaded
const key = event.queryStringParameters.key;
// generate a 10 minute (60 sec. * 10) pre-signed URL
const params = {
    Bucket: bucketName,
    Key: key,
    Expires: 600,
    ContentType: 'multipart/form-data'
};
const url = await bucket.getSignedUrlPromise('putObject', params);
...
response.body = JSON.stringify({ url: url });
return response;

Policy note: did you notice that I assign a CRUD policy to my AWS Lambda? The reason is that the pre-signed URL acts on behalf of the identity that generates it, so the Lambda needs the entitlement to upload a file in order to generate a working pre-signed URL.
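
Purely as a hypothetical, narrower alternative to S3CrudPolicy, an IAM statement limited to what the URL actually needs would grant s3:PutObject (and s3:GetObject, if you also sign download URLs) on the bucket objects; the Resource ARN below is a placeholder following the bucket naming used above:

{
  "Effect": "Allow",
  "Action": ["s3:PutObject", "s3:GetObject"],
  "Resource": "arn:aws:s3:::<your-prefix>-draftbucket/*"
}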

The response from the HTTP GET call is something similar to this:

Pre-signed URL with a 10-minute expiration
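
Since the original screenshot is not reproduced here, an illustrative response body (placeholder bucket and region, truncated query string) looks roughly like this:

{
  "url": "https://<your-prefix>-draftbucket.s3.<region>.amazonaws.com/report.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=600&X-Amz-Signature=..."
}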

Upload a File via Typescript

Now it’s time to upload a file via Typescript (for the front-end I tend to use Typescript). There are a million ways to do this; for me the most convenient is to execute two promises and chain them via Redux actions:

  • Call the AWS Lambda that returns a pre-signed URL
  • Upload the File to S3 using the pre-signed URL

I use Redux, so if something isn’t clear, just pay attention to the promise section of the code below:

import { createAsyncThunk } from '@reduxjs/toolkit';
// rootUrl (API base URL) and KnwonError (error type) are defined elsewhere in the app

export const uploadFile = createAsyncThunk<
  string,
  File,
  { rejectValue: KnwonError }
>(
  'document/uploadFile',
  async (file: File, { rejectWithValue }) => {
    try {
      // get the pre-signed URL from the Lambda
      const urlResponse = await fetch(
        `${rootUrl}api/documents/signed?key=${file.name}`, {
          method: 'GET',
          headers: {
            'Content-Type': 'application/json'
          }
        }
      );
      const url = await urlResponse.json();
      console.log(url);
      // upload the file to S3 using the pre-signed URL
      const formData = new FormData();
      formData.append('file', file);
      const uploadResponse = await fetch(url.url, {
        method: 'PUT',
        headers: {
          'Content-Type': 'multipart/form-data'
        },
        body: formData
      });
      // S3 answers a successful PUT with an empty body, so don't parse JSON here
      if (!uploadResponse.ok) {
        throw new Error(`Upload failed with status ${uploadResponse.status}`);
      }
      return file.name;
    } catch (err) {
      return rejectWithValue({
        errorTitle: 'Fetch Error',
        errorMessage: (err as Error).message,
        type: 'application'
      });
    }
  }
);
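
To give an idea of how this thunk is consumed, here is a hypothetical dispatch from a React component; dispatch and AppDispatch are assumed to come from your own Redux store setup and are not part of the original code:

// Hypothetical usage inside a React component with an <input type="file">
const onFileSelected = async (e: React.ChangeEvent<HTMLInputElement>) => {
  const file = e.target.files?.[0];
  if (!file) return;
  // dispatch is obtained via useDispatch<AppDispatch>() in the component
  const result = await dispatch(uploadFile(file));
  if (uploadFile.fulfilled.match(result)) {
    console.log('Upload completed for', file.name);
  } else {
    console.error('Upload failed', result.payload);
  }
};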

A few points need to be mentioned, because I encountered some problems in React:

  • You need to specify the Content-Type, and it must be the same one you specify when you generate the pre-signed URL, otherwise you get a “The request signature we calculated does not match the signature …” error
  • Do not play around with the Payload. Let the Browser do its job. Pass a FormData object and the Browser will send the correct Boundary

CORS: If you are testing this from a different DNS Alias, or your web application is simply hosted on a different Domain than your Amazon S3 Bucket, then you need to configure your S3 CORS Headers!

Example of Amazon S3 CORS

In my case I set “AllowedOrigin”: [“*”], but you probably want to list only the array of DNS Aliases that will interact with your Bucket 😬
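
For reference, a CORS configuration along those lines, in the JSON format accepted by the S3 console, could look like the sketch below; tighten AllowedOrigins and AllowedMethods to your own needs:

[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["PUT", "GET"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": []
  }
]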

As usual, Happy coding everyone 😎
