
Examples

A collection of example apps for reference.

Below is a collection of example SST apps. These are available in the examples/ directory of the repo.

The descriptions for these examples are generated using the comments in the sst.config.ts of the app.

Contributing

To contribute an example or to edit one, submit a PR to the repo. Make sure to document the sst.config.ts in your example.


API Gateway auth

Enable IAM and JWT authorizers for API Gateway routes.

sst.config.ts
const api = new sst.aws.ApiGatewayV2("MyApi", {
domain: {
name: "api.ion.sst.sh",
path: "v1",
},
});
api.route("GET /", {
handler: "route.handler",
});
api.route("GET /foo", "route.handler", { auth: { iam: true } });
api.route("GET /bar", "route.handler", {
auth: {
jwt: {
issuer:
"https://cognito-idp.us-east-1.amazonaws.com/us-east-1_Rq4d8zILG",
audiences: ["user@example.com"],
},
},
});
api.route("$default", "route.handler");
return {
api: api.url,
};
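
To call the IAM-protected route, the request must be signed with SigV4. For example, using curl’s built-in signing, assuming your AWS credentials are set in the environment and using the domain configured above:

Terminal window
curl --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
  --aws-sigv4 "aws:amz:us-east-1:execute-api" \
  https://api.ion.sst.sh/v1/foo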

View the full example.


AWS Astro streaming

Follows the Astro Streaming guide to create an app that streams HTML.

The responseMode in the astro-sst adapter is set to enable streaming.

astro.config.mjs
adapter: aws({
responseMode: "stream"
})

Now any components that return promises will be streamed.

src/components/Friends.astro
---
import type { Character } from "./character";
const friends: Character[] = await new Promise((resolve) => {
  setTimeout(() => {
    resolve([
      { name: "Patrick Star", image: "patrick.png" },
      { name: "Sandy Cheeks", image: "sandy.png" },
      { name: "Squidward Tentacles", image: "squidward.png" },
      { name: "Mr. Krabs", image: "mr-krabs.png" },
    ]);
  }, 3000);
});
---
<div class="grid">
{friends.map((friend) => (
<div class="card">
<img class="img" src={friend.image} alt={friend.name} />
<p>{friend.name}</p>
</div>
))}
</div>

You should see the friends section load after a 3 second delay.

Safari uses a different heuristic to determine when to stream data. You need to render enough initial HTML to trigger streaming. This is typically only a problem for demo apps.

There’s nothing to configure for streaming in the Astro component.

sst.config.ts
new sst.aws.Astro("MyWeb");

View the full example.


Bucket policy

Create an S3 bucket and transform its bucket policy.

sst.config.ts
const bucket = new sst.aws.Bucket("MyBucket", {
transform: {
policy: (args) => {
// use sst.aws.iamEdit helper function to manipulate IAM policy
// containing Output values from components
args.policy = sst.aws.iamEdit(args.policy, (policy) => {
policy.Statement.push({
Effect: "Allow",
Principal: { Service: "ses.amazonaws.com" },
Action: "s3:PutObject",
Resource: $interpolate`arn:aws:s3:::${args.bucket}/*`,
});
});
},
},
});
return {
bucket: bucket.name,
};

View the full example.


Bucket queue notifications

Create an S3 bucket and subscribe to its events with an SQS queue.

sst.config.ts
const queue = new sst.aws.Queue("MyQueue");
queue.subscribe("subscriber.handler");
const bucket = new sst.aws.Bucket("MyBucket");
bucket.subscribeQueue(queue.arn, {
events: ["s3:ObjectCreated:*"],
});
return {
bucket: bucket.name,
queue: queue.url,
};

View the full example.


Bucket notifications

Create an S3 bucket and subscribe to its events with a function.

sst.config.ts
const bucket = new sst.aws.Bucket("MyBucket");
bucket.subscribe("subscriber.handler", {
events: ["s3:ObjectCreated:*"],
});
return {
bucket: bucket.name,
};
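
A sketch of what subscriber.ts might look like, using the S3Event type from the aws-lambda package; the log format is illustrative:

subscriber.ts
import type { S3Event } from "aws-lambda";

export const handler = async (event: S3Event) => {
  for (const record of event.Records) {
    // For example: "s3:ObjectCreated:Put: my-bucket/my-key"
    console.log(`${record.eventName}: ${record.s3.bucket.name}/${record.s3.object.key}`);
  }
};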

View the full example.


Bucket topic notifications

Create an S3 bucket and subscribe to its events with an SNS topic.

sst.config.ts
const topic = new sst.aws.SnsTopic("MyTopic");
topic.subscribe("MySubscriber", "subscriber.handler");
const bucket = new sst.aws.Bucket("MyBucket");
bucket.subscribeTopic(topic.arn, {
events: ["s3:ObjectCreated:*"],
});
return {
bucket: bucket.name,
topic: topic.name,
};

View the full example.


AWS Bun Elysia container

Deploys a Bun Elysia API to AWS.

You can get started by running.

Terminal window
bun create elysia aws-bun-elysia
cd aws-bun-elysia
bunx sst init

Now you can add a service.

sst.config.ts
cluster.addService("MyService", {
loadBalancer: {
ports: [{ listen: "80/http", forward: "3000/http" }],
},
dev: {
command: "bun dev",
},
});

Start your app locally.

Terminal window
bun sst dev

This example lets you upload a file to S3 and then download it.

Terminal window
curl -F file=@elysia.png http://localhost:3000/
curl http://localhost:3000/latest

Finally, you can deploy it using bun sst deploy --stage production.

sst.config.ts
const bucket = new sst.aws.Bucket("MyBucket");
const vpc = new sst.aws.Vpc("MyVpc");
const cluster = new sst.aws.Cluster("MyCluster", { vpc });
cluster.addService("MyService", {
loadBalancer: {
ports: [{ listen: "80/http", forward: "3000/http" }],
},
dev: {
command: "bun dev",
},
link: [bucket],
});

View the full example.


AWS Bun file upload

Deploys a Bun app to AWS.

You can get started by running.

Terminal window
mkdir aws-bun-file-upload && cd aws-bun-file-upload
bun init -y
bunx sst init

Now you can add a service.

sst.config.ts
cluster.addService("MyService", {
loadBalancer: {
ports: [{ listen: "80/http", forward: "3000/http" }],
},
dev: {
command: "bun dev",
},
link: [bucket],
});

Start your app locally.

Terminal window
bun sst dev

This example lets you upload a file to S3 and then download it.

Terminal window
curl -F file=@package.json http://localhost:3000/
curl http://localhost:3000/latest

Finally, you can deploy it using bun sst deploy --stage production.

sst.config.ts
const bucket = new sst.aws.Bucket("MyBucket");
const vpc = new sst.aws.Vpc("MyVpc");
const cluster = new sst.aws.Cluster("MyCluster", { vpc });
cluster.addService("MyService", {
loadBalancer: {
ports: [{ listen: "80/http", forward: "3000/http" }],
},
dev: {
command: "bun dev",
},
link: [bucket],
});

View the full example.


AWS Cluster private service

Adds a private load balancer to a service by setting the loadBalancer.public prop to false.

This allows you to create internal services that can only be accessed inside a VPC.

sst.config.ts
const vpc = new sst.aws.Vpc("MyVpc", { bastion: true });
const cluster = new sst.aws.Cluster("MyCluster", { vpc });
cluster.addService("MyService", {
loadBalancer: {
public: false,
ports: [{ listen: "80/http" }],
},
});

View the full example.


AWS Cluster with API Gateway

Expose a service through API Gateway HTTP API using a VPC link.

This is an alternative to using a load balancer. Since API Gateway is pay per request, it works out a lot cheaper for services that don’t get a lot of traffic.

You need to specify which port in your service will be exposed through API Gateway.

sst.config.ts
const service = cluster.addService("MyService", {
serviceRegistry: {
port: 80,
},
});

Your API Gateway HTTP API also needs to be in the same VPC as the service.

sst.config.ts
const vpc = new sst.aws.Vpc("MyVpc");
const cluster = new sst.aws.Cluster("MyCluster", { vpc });
const service = cluster.addService("MyService", {
serviceRegistry: {
port: 80,
},
});
const api = new sst.aws.ApiGatewayV2("MyApi", { vpc });
api.routePrivate("$default", service.nodes.cloudmapService.arn);

View the full example.


Subscribe to queues with a DLQ

Create an SQS queue with a dead letter queue, subscribe to both, and publish to the main queue from a function.

sst.config.ts
// create dead letter queue
const dlq = new sst.aws.Queue("DeadLetterQueue");
dlq.subscribe("subscriber.dlq");
// create main queue
const queue = new sst.aws.Queue("MyQueue", {
dlq: dlq.arn,
});
queue.subscribe("subscriber.main");
const app = new sst.aws.Function("MyApp", {
handler: "publisher.handler",
link: [queue],
url: true,
});
return {
app: app.url,
queue: queue.url,
dlq: dlq.url,
};
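
As a sketch of the two subscribers, assuming subscriber.ts exports a main and a dlq handler that just log what they receive:

subscriber.ts
import type { SQSEvent } from "aws-lambda";

export const main = async (event: SQSEvent) => {
  for (const record of event.Records) {
    console.log("main received:", record.body);
  }
};

export const dlq = async (event: SQSEvent) => {
  // Messages land here after failed deliveries from the main queue
  for (const record of event.Records) {
    console.log("dead letter:", record.body);
  }
};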

View the full example.


DynamoDB streams

Create a DynamoDB table, enable streams, and subscribe to it with a function.

sst.config.ts
const table = new sst.aws.Dynamo("MyTable", {
fields: {
id: "string",
},
primaryIndex: { hashKey: "id" },
stream: "new-and-old-images",
});
table.subscribe("MySubscriber", "subscriber.handler", {
filters: [
{
dynamodb: {
NewImage: {
message: {
S: ["Hello"],
},
},
},
},
],
});
const app = new sst.aws.Function("MyApp", {
handler: "publisher.handler",
link: [table],
url: true,
});
return {
app: app.url,
table: table.name,
};
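
A sketch of what subscriber.ts might look like; only records matching the filter above are delivered to the handler:

subscriber.ts
import type { DynamoDBStreamEvent } from "aws-lambda";

export const handler = async (event: DynamoDBStreamEvent) => {
  for (const record of event.Records) {
    // Only records whose NewImage.message.S equals "Hello" pass the filter
    console.log(record.eventName, JSON.stringify(record.dynamodb?.NewImage));
  }
};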

View the full example.


EC2 with Pulumi

Use raw Pulumi resources to create an EC2 instance.

sst.config.ts
// Notice you don't need to import pulumi, it is already part of sst.
const securityGroup = new aws.ec2.SecurityGroup("web-secgrp", {
ingress: [
{
protocol: "tcp",
fromPort: 80,
toPort: 80,
cidrBlocks: ["0.0.0.0/0"],
},
],
});
// Find the latest Ubuntu AMI
const ami = aws.ec2.getAmi({
filters: [
{
name: "name",
values: ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"],
},
],
mostRecent: true,
owners: ["099720109477"], // Canonical
});
// User data to set up a simple web server
const userData = `#!/bin/bash
echo "Hello, World!" > index.html
nohup python3 -m http.server 80 &`;
// Create an EC2 instance
const server = new aws.ec2.Instance("web-server", {
instanceType: "t2.micro",
ami: ami.then((ami) => ami.id),
userData: userData,
vpcSecurityGroupIds: [securityGroup.id],
associatePublicIpAddress: true,
});
return {
app: server.publicIp,
};

View the full example.


AWS EFS with SQLite

Mount an EFS file system to a function and write to a SQLite database.

index.ts
const db = sqlite3("/mnt/efs/mydb.sqlite");

The file system is mounted to /mnt/efs in the function.
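
The rest of the handler might look something like this sketch; the counter table is an assumption for illustration:

index.ts
import sqlite3 from "better-sqlite3";

// Opens (or creates) the database on the mounted EFS volume
const db = sqlite3("/mnt/efs/mydb.sqlite");
db.exec("CREATE TABLE IF NOT EXISTS counter (value INTEGER)");

export const handler = async () => {
  const row = db.prepare("SELECT value FROM counter").get() as
    | { value: number }
    | undefined;
  const value = (row?.value ?? 0) + 1;
  if (row) db.prepare("UPDATE counter SET value = ?").run(value);
  else db.prepare("INSERT INTO counter (value) VALUES (?)").run(value);
  return { statusCode: 200, body: `Count: ${value}` };
};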

This example is for demonstration purposes only. It’s not recommended to use EFS for databases in production.

sst.config.ts
// NAT Gateways are required for Lambda functions
const vpc = new sst.aws.Vpc("MyVpc", { nat: "managed" });
// Create an EFS file system to store the SQLite database
const efs = new sst.aws.Efs("MyEfs", { vpc });
// Create a Lambda function that queries the database
new sst.aws.Function("MyFunction", {
vpc,
url: true,
volume: {
efs,
path: "/mnt/efs",
},
handler: "index.handler",
nodejs: {
install: ["better-sqlite3"],
},
});

View the full example.


AWS EFS with SurrealDB

We use the SurrealDB docker image to run a server in a container and use EFS as the file system.

sst.config.ts
const server = cluster.addService("MyService", {
architecture: "arm64",
image: "surrealdb/surrealdb:v2.0.2",
// ...
volumes: [
{ efs, path: "/data" },
],
});

We then connect to the server from a Lambda function.

index.ts
const endpoint = `http://${Resource.MyConfig.host}:${Resource.MyConfig.port}`;
const db = new Surreal();
await db.connect(endpoint);

This uses the SurrealDB client to connect to the server.

This example is for demonstration purposes only. It’s not recommended to use EFS for databases in production.

sst.config.ts
const { RandomPassword } = await import("@pulumi/random");
// SurrealDB Credentials
const PORT = 8080;
const NAMESPACE = "test";
const DATABASE = "test";
const USERNAME = "root";
const PASSWORD = new RandomPassword("Password", {
length: 32,
}).result;
// NAT Gateways are required for Lambda functions
const vpc = new sst.aws.Vpc("MyVpc", { nat: "managed" });
// Store SurrealDB data in EFS
const efs = new sst.aws.Efs("MyEfs", { vpc });
// Run SurrealDB server in a container
const cluster = new sst.aws.Cluster("MyCluster", { vpc });
const server = cluster.addService("MyService", {
architecture: "arm64",
image: "surrealdb/surrealdb:v2.0.2",
command: [
"start",
"--bind",
$interpolate`0.0.0.0:${PORT}`,
"--log",
"info",
"--user",
USERNAME,
"--pass",
PASSWORD,
"surrealkv://data/data.skv",
"--allow-scripting",
],
volumes: [
{ efs, path: "/data" },
],
});
// Lambda client to connect to SurrealDB
const config = new sst.Linkable("MyConfig", {
properties: {
username: USERNAME,
password: PASSWORD,
namespace: NAMESPACE,
database: DATABASE,
port: PORT,
host: server.service,
},
});
new sst.aws.Function("MyApp", {
handler: "index.handler",
link: [config],
url: true,
vpc,
});

View the full example.


AWS EFS

Mount an EFS file system to a function and a container.

This allows both your function and the container to access the same file system. Here they both update a counter that’s stored in the file system.

common.mjs
await writeFile("/mnt/efs/counter", newValue.toString());

The file system is mounted to /mnt/efs in both the function and the container.
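
A sketch of what common.mjs might contain, assuming it reads, increments, and writes back the counter:

common.mjs
import { readFile, writeFile } from "node:fs/promises";

export async function increment() {
  let current = 0;
  try {
    current = parseInt(await readFile("/mnt/efs/counter", "utf8"), 10);
  } catch {
    // First run: the counter file doesn't exist yet
  }
  const newValue = current + 1;
  await writeFile("/mnt/efs/counter", newValue.toString());
  return newValue;
}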

sst.config.ts
// NAT Gateways are required for Lambda functions
const vpc = new sst.aws.Vpc("MyVpc", { nat: "managed" });
// Create an EFS file system to store a counter
const efs = new sst.aws.Efs("MyEfs", { vpc });
// Create a Lambda function that increments the counter
new sst.aws.Function("MyFunction", {
handler: "lambda.handler",
url: true,
vpc,
volume: {
efs,
path: "/mnt/efs",
},
});
// Create a service that increments the same counter
const cluster = new sst.aws.Cluster("MyCluster", { vpc });
cluster.addService("MyService", {
loadBalancer: {
ports: [{ listen: "80/http" }],
},
volumes: [
{
efs,
path: "/mnt/efs",
},
],
});

View the full example.


AWS Express file upload

Deploys an Express app to AWS.

You can get started by running.

Terminal window
mkdir aws-express-file-upload && cd aws-express-file-upload
npm init -y
npm install express
npx sst@latest init

Now you can add a service.

sst.config.ts
cluster.addService("MyService", {
loadBalancer: {
ports: [{ listen: "80/http", forward: "3000/http" }],
},
dev: {
command: "node --watch index.mjs",
},
});

Start your app locally.

Terminal window
npx sst dev

This example lets you upload a file to S3 and then download it.

Terminal window
curl -F file=@package.json http://localhost:80/
curl http://localhost:80/latest

Finally, you can deploy it using npx sst deploy --stage production.

sst.config.ts
const bucket = new sst.aws.Bucket("MyBucket");
const vpc = new sst.aws.Vpc("MyVpc");
const cluster = new sst.aws.Cluster("MyCluster", { vpc });
cluster.addService("MyService", {
loadBalancer: {
ports: [{ listen: "80/http" }],
},
dev: {
command: "node --watch index.mjs",
},
link: [bucket],
});

View the full example.


FFmpeg in Lambda

Uses FFmpeg to process videos. In this example, it takes a clip.mp4 and grabs a single frame from it.

We use the ffmpeg-static package that contains pre-built binaries for all architectures.

index.ts
import ffmpeg from "ffmpeg-static";

We can use this to spawn a child process and run FFmpeg.

index.ts
spawnSync(ffmpeg, ffmpegParams, { stdio: "pipe" });
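
For example, the params to grab a single frame might look something like this sketch; the exact flags in the example may differ:

index.ts
// Hypothetical params: grab one frame at the 1 second mark
const ffmpegParams = [
  "-i", "clip.mp4",
  "-ss", "00:00:01",
  "-vframes", "1",
  "/tmp/frame.jpg",
];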

We don’t need a layer when we deploy this because SST will use the right binary for the target Lambda architecture, including arm64.

sst.config.ts
{
nodejs: { install: ["ffmpeg-static"] }
}

All this is handled by nodejs.install.

sst.config.ts
const func = new sst.aws.Function("MyFunction", {
url: true,
memory: "2 GB",
timeout: "15 minutes",
handler: "index.handler",
copyFiles: [{ from: "clip.mp4" }],
nodejs: { install: ["ffmpeg-static"] },
});
return {
url: func.url,
};

View the full example.


AWS Hono streaming

An example on how to enable streaming for Lambda functions using Hono.

sst.config.ts
{
streaming: true
}

While sst dev doesn’t support streaming, we can conditionally enable it on deploy.

index.ts
export const handler = process.env.SST_LIVE ? handle(app) : streamHandle(app);

This will return the standard handler for sst dev.
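
For context, a minimal streaming route in Hono might look like the following sketch; the route body and 3 second delay are illustrative, only the handler export comes from the example:

index.ts
import { Hono } from "hono";
import { streamText } from "hono/streaming";
import { handle, streamHandle } from "hono/aws-lambda";

const app = new Hono();

app.get("/", (c) =>
  streamText(c, async (stream) => {
    await stream.writeln("Hello");
    await stream.sleep(3000);
    await stream.writeln("World");
  })
);

// Standard handler in sst dev, streaming handler when deployed
export const handler = process.env.SST_LIVE ? handle(app) : streamHandle(app);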

To test this in your terminal, use the curl command with the --no-buffer option.

Terminal window
curl --no-buffer https://u3dyblk457ghskwbmzrbylpxoi0ayrbb.lambda-url.us-east-1.on.aws

Here we are using a Function URL directly because API Gateway doesn’t support streaming.

sst.config.ts
const hono = new sst.aws.Function("Hono", {
url: true,
streaming: true,
timeout: "15 minutes",
handler: "index.handler",
});
return {
api: hono.url,
};

View the full example.


IAM permissions boundaries

Use permissions boundaries to set the maximum permissions for all IAM roles that’ll be created in your app.

In this example, the Function has the s3:ListAllMyBuckets and sqs:ListQueues permissions. However, we create a permissions boundary that only allows s3:ListAllMyBuckets. And we apply it to all Roles in the app using the global $transform.

As a result, the Function is only allowed to list S3 buckets. If you open the deployed URL, you’ll see that the SQS list call fails.

Learn more about AWS IAM permissions boundaries.
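
As a sketch, the function’s handler might look like this; ListBuckets succeeds while ListQueues is denied by the boundary:

index.ts
import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";
import { SQSClient, ListQueuesCommand } from "@aws-sdk/client-sqs";

const s3 = new S3Client({});
const sqs = new SQSClient({});

export const handler = async () => {
  // Allowed: the boundary permits s3:ListAllMyBuckets
  const buckets = await s3.send(new ListBucketsCommand({}));
  let queues: unknown;
  try {
    // Denied: sqs:ListQueues is outside the boundary
    queues = await sqs.send(new ListQueuesCommand({}));
  } catch (e) {
    queues = `Denied: ${(e as Error).message}`;
  }
  return {
    statusCode: 200,
    body: JSON.stringify({ buckets: buckets.Buckets, queues }),
  };
};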

sst.config.ts
// Create a permission boundary
const permissionsBoundary = new aws.iam.Policy("MyPermissionsBoundary", {
policy: aws.iam.getPolicyDocumentOutput({
statements: [
{
actions: ["s3:ListAllMyBuckets"],
resources: ["*"],
},
],
}).json,
});
// Apply the boundary to all roles
$transform(aws.iam.Role, (args) => {
args.permissionsBoundary = permissionsBoundary;
});
// The boundary automatically applies to this Function's role
const app = new sst.aws.Function("MyApp", {
handler: "index.handler",
permissions: [
{
actions: ["s3:ListAllMyBuckets", "sqs:ListQueues"],
resources: ["*"],
},
],
url: true,
});
return {
app: app.url,
};

View the full example.


Current AWS account

You can use the aws.getXXXXOutput() provider functions to get info about the current AWS account. Learn more about provider functions.

sst.config.ts
return {
region: aws.getRegionOutput().name,
account: aws.getCallerIdentityOutput({}).accountId,
};

View the full example.


AWS JSX Email

Uses JSX Email and the Email component to design and send emails.

To test this example, change the sst.config.ts to use your own email address.

sst.config.ts
sender: "email@example.com"

Then run.

Terminal window
npm install
npx sst dev

You’ll get an email from AWS asking you to confirm your email address. Click the link to verify it.

Next, go to the URL in the sst dev CLI output. You should now receive an email rendered using JSX Email.

index.ts
import { Template } from "./templates/email";
await render(Template({
email: "spongebob@example.com",
name: "Spongebob Squarepants"
}))

Once you are ready to go to production, you can:

sst.config.ts
const email = new sst.aws.Email("MyEmail", {
sender: "email@example.com",
});
const api = new sst.aws.Function("MyApi", {
handler: "index.handler",
link: [email],
url: true,
});
return {
api: api.url,
};

View the full example.


Kinesis streams

Create a Kinesis stream, and subscribe to it with a function.

sst.config.ts
const stream = new sst.aws.KinesisStream("MyStream");
// Create a function subscribing to all events
stream.subscribe("AllSub", "subscriber.all");
// Create a function subscribing to events of `bar` type
stream.subscribe("FilteredSub", "subscriber.filtered", {
filters: [
{
data: {
type: ["bar"],
},
},
],
});
const app = new sst.aws.Function("MyApp", {
handler: "publisher.handler",
link: [stream],
url: true,
});
return {
app: app.url,
stream: stream.name,
};
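
A sketch of what publisher.ts might look like, assuming the linked stream exposes its name; events with type "bar" will also reach the filtered subscriber:

publisher.ts
import { Resource } from "sst";
import { KinesisClient, PutRecordCommand } from "@aws-sdk/client-kinesis";

const client = new KinesisClient({});

export const handler = async () => {
  await client.send(
    new PutRecordCommand({
      StreamName: Resource.MyStream.name,
      PartitionKey: "1",
      Data: Buffer.from(JSON.stringify({ type: "bar" })),
    })
  );
  return { statusCode: 200, body: "Published!" };
};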

View the full example.


AWS Lambda streaming

An example on how to enable streaming for Lambda functions.

sst.config.ts
{
streaming: true
}

While sst dev doesn’t support streaming, you can use the lambda-stream package to test locally.

Terminal window
npm install lambda-stream

Then, you can use the streamifyResponse function to wrap your handler:

index.ts
import { APIGatewayProxyEventV2 } from "aws-lambda";
import { streamifyResponse, ResponseStream } from "lambda-stream";
export const handler = streamifyResponse(myHandler);
async function myHandler(
_event: APIGatewayProxyEventV2,
responseStream: ResponseStream
): Promise<void> {
return new Promise((resolve, _reject) => {
responseStream.setContentType('text/plain')
responseStream.write('Hello')
setTimeout(() => {
responseStream.write(' World')
responseStream.end()
resolve()
}, 3000)
})
}

When deployed, this will use the awslambda.streamifyResponse.

To test this in your terminal, use the curl command with the --no-buffer option.

Terminal window
curl --no-buffer https://u3dyblk457ghskwbmzrbylpxoi0ayrbb.lambda-url.us-east-1.on.aws

Here we are using a Function URL directly because API Gateway doesn’t support streaming.

sst.config.ts
const fn = new sst.aws.Function("MyFunction", {
url: true,
streaming: true,
timeout: "15 minutes",
handler: "index.handler",
});
return {
url: fn.url,
};

View the full example.


AWS Lambda in a VPC

You can use SST to locally work on Lambda functions that are in a VPC. To do so, you’ll need to enable bastion and nat on the Vpc component.

sst.config.ts
new sst.aws.Vpc("MyVpc", { bastion: true, nat: "managed" });

The NAT gateway is necessary to allow your Lambda function to connect to the internet, while the bastion host is necessary for your local machine to be able to tunnel to the VPC.

You’ll need to install the tunnel if you haven’t done this before.

Terminal window
sudo sst tunnel install

This needs sudo to create the network interface on your machine. You’ll only need to do this once.

Now when you run sst dev, your function can access resources in the VPC. For example, here we are connecting to a Redis cluster.

index.ts
const redis = new Cluster(
[{ host: Resource.MyRedis.host, port: Resource.MyRedis.port }],
{
dnsLookup: (address, callback) => callback(null, address),
redisOptions: {
tls: {},
username: Resource.MyRedis.username,
password: Resource.MyRedis.password,
},
}
);

The Redis cluster is in the same VPC as the function.

sst.config.ts
const vpc = new sst.aws.Vpc("MyVpc", { bastion: true, nat: "managed" });
const redis = new sst.aws.Redis("MyRedis", { vpc });
const api = new sst.aws.Function("MyFunction", {
vpc,
url: true,
link: [redis],
handler: "index.handler"
});
return {
url: api.url,
};

View the full example.


AWS multi-region

To deploy resources to multiple AWS regions, you can create a new provider for the region you want to deploy to.

sst.config.ts
const provider = new aws.Provider("MyProvider", { region: "us-west-2" });

And then pass that in to the resource.

sst.config.ts
new sst.aws.Function("MyFunction", { handler: "index.handler" }, { provider });

If no provider is passed in, the default provider will be used. And if no region is specified, the default region from your credentials will be used.

sst.config.ts
const east = new sst.aws.Function("MyEastFunction", {
url: true,
handler: "index.handler",
});
const provider = new aws.Provider("MyWestProvider", { region: "us-west-2" });
const west = new sst.aws.Function(
"MyWestFunction",
{
url: true,
handler: "index.handler",
},
{ provider }
);
return {
east: east.url,
west: west.url,
};

View the full example.


AWS Next.js add behavior

Here’s how to add additional routes or cache behaviors to the CDN of a Next.js app deployed with OpenNext to AWS.

Specify the path pattern that you want to forward to your new origin. For example, to forward all requests to the /blog path to a different origin.

sst.config.ts
pathPattern: "/blog/*"

And then specify the domain of the new origin.

sst.config.ts
domainName: "blog.example.com"

We use this to transform our site’s CDN and add the additional behaviors.

sst.config.ts
const blogOrigin = {
// The domain of the new origin
domainName: "blog.example.com",
originId: "blogCustomOrigin",
customOriginConfig: {
httpPort: 80,
httpsPort: 443,
originSslProtocols: ["TLSv1.2"],
// If HTTPS is supported
originProtocolPolicy: "https-only",
},
};
const cacheBehavior = {
// The path to forward to the new origin
pathPattern: "/blog/*",
targetOriginId: blogOrigin.originId,
viewerProtocolPolicy: "redirect-to-https",
allowedMethods: ["GET", "HEAD", "OPTIONS"],
cachedMethods: ["GET", "HEAD"],
forwardedValues: {
queryString: true,
cookies: {
forward: "all",
},
},
};
new sst.aws.Nextjs("MyWeb", {
transform: {
cdn: (options: sst.aws.CdnArgs) => {
options.origins = $resolve(options.origins).apply(val => [...val, blogOrigin]);
options.orderedCacheBehaviors = $resolve(
options.orderedCacheBehaviors || []
).apply(val => [...val, cacheBehavior]);
},
},
});

View the full example.


AWS Next.js basic auth

Deploys a simple Next.js app and adds basic auth to it.

This is useful for dev environments where you want to share your app with your team but ensure that it’s not publicly accessible.

This works by injecting some code into a CloudFront function that checks the basic auth header and matches it against the USERNAME and PASSWORD secrets.

sst.config.ts
{
injection: $interpolate`
if (
!event.request.headers.authorization
|| event.request.headers.authorization.value !== "Basic ${basicAuth}"
) {
return {
statusCode: 401,
headers: {
"www-authenticate": { value: "Basic" }
}
};
}`,
}

To deploy this, you need to first set the USERNAME and PASSWORD secrets.

Terminal window
sst secret set USERNAME my-username
sst secret set PASSWORD my-password

If you are deploying this to preview environments, you might want to set the secrets using the --fallback flag.

sst.config.ts
const username = new sst.Secret("USERNAME");
const password = new sst.Secret("PASSWORD");
const basicAuth = $resolve([username.value, password.value]).apply(
([username, password]) =>
Buffer.from(`${username}:${password}`).toString("base64")
);
new sst.aws.Nextjs("MyWeb", {
server: {
// Don't password protect prod
edge: $app.stage !== "production"
? {
viewerRequest: {
injection: $interpolate`
if (
!event.request.headers.authorization
|| event.request.headers.authorization.value !== "Basic ${basicAuth}"
) {
return {
statusCode: 401,
headers: {
"www-authenticate": { value: "Basic" }
}
};
}`,
},
}
: undefined,
},
});

View the full example.


AWS Next.js streaming

An example of how to use streaming with Next.js RSC. Uses Suspense to stream an async component.

app/page.tsx
<Suspense fallback={<div>Loading...</div>}>
<Friends />
</Suspense>

For this demo we also need to make sure the route is not statically built.

app/page.tsx
export const dynamic = "force-dynamic";

This is deployed with OpenNext, which needs a config to enable streaming.

open-next.config.ts
export default {
default: {
override: {
wrapper: "aws-lambda-streaming"
}
}
};

You should see the friends section load after a 3 second delay.

Safari uses a different heuristic to determine when to stream data. You need to render enough initial HTML to trigger streaming. This is typically only a problem for demo apps.

sst.config.ts
new sst.aws.Nextjs("MyWeb");

View the full example.


AWS Postgres local

In this example, we use a local Docker Postgres instance for dev. On deploy, we use RDS.

We use the Pulumi Docker provider to create a local container with Postgres when running sst dev.

sst.config.ts
if ($dev) {
new docker.Container("LocalPostgres", {
name: `postgres-${$app.name}`,
restart: "always",
image: "postgres:16.4",
ports: [{
internal: 5432,
external: port,
}],
envs: [
`POSTGRES_PASSWORD=${password}`,
`POSTGRES_USER=${username}`,
`POSTGRES_DB=${database}`,
],
volumes: [{
hostPath: "/tmp/postgres-data",
containerPath: "/var/lib/postgresql/data",
}],
});
}

We then use the Linkable component to expose the credentials.

sst.config.ts
local = new sst.Linkable("MyPostgres", {
properties: {
host: "localhost",
port,
username,
password,
database,
},
});

On deploy, we create a Postgres RDS database. And we conditionally link the database to our Lambda function.

sst.config.ts
new sst.aws.Function("MyFunction", {
url: true,
handler: "index.handler",
link: [$dev ? local : rds],
vpc: $dev ? undefined : vpc,
});

Our Lambda function connects to the right database through the link.

index.ts
const pool = new Pool({
host: Resource.MyPostgres.host,
port: Resource.MyPostgres.port,
user: Resource.MyPostgres.username,
password: Resource.MyPostgres.password,
database: Resource.MyPostgres.database,
});

Finally, when we run sst remove, the local Postgres container is also removed.

sst.config.ts
let vpc, rds, local;
if ($dev) {
const password = "password";
const username = "postgres";
const database = "local";
const port = 5432;
new docker.Container("LocalPostgres", {
// Unique container name
name: `postgres-${$app.name}`,
restart: "always",
image: "postgres:16.4",
ports: [{
internal: 5432,
external: port,
}],
envs: [
`POSTGRES_PASSWORD=${password}`,
`POSTGRES_USER=${username}`,
`POSTGRES_DB=${database}`,
],
volumes: [{
// Where to store the data locally
hostPath: "/tmp/postgres-data",
containerPath: "/var/lib/postgresql/data",
}],
});
local = new sst.Linkable("MyPostgres", {
properties: {
host: "localhost",
port,
username,
password,
database,
},
});
}
else {
vpc = new sst.aws.Vpc("MyVpc", { bastion: true, nat: "ec2" });
rds = new sst.aws.Postgres("MyPostgres", { vpc });
}
new sst.aws.Function("MyFunction", {
url: true,
handler: "index.handler",
link: [$dev ? local : rds],
vpc: $dev ? undefined : vpc,
});

View the full example.


Prisma in Lambda

To use Prisma in a Lambda function you need to

  • Generate the Prisma Client with the right architecture
  • Copy the generated client to the function
  • Run the function inside a VPC

You can set the architecture using the binaryTargets option in prisma/schema.prisma.

prisma/schema.prisma
// For x86
binaryTargets = ["native", "rhel-openssl-3.0.x"]
// For ARM
// binaryTargets = ["native", "linux-arm64-openssl-3.0.x"]

You can also switch to ARM, just make sure to also change the function architecture in your sst.config.ts.

sst.config.ts
{
// For ARM
architecture: "arm64"
}

To generate the client, you need to run prisma generate when you make changes to the schema.

Since this needs to be done on every deploy, we add a postinstall script to the package.json.

package.json
"scripts": {
"postinstall": "prisma generate"
}

This runs the command on npm install.

We then need to copy the generated client to the function when we deploy.

sst.config.ts
{
copyFiles: [{ from: "node_modules/.prisma/client/" }]
}

Our function also needs to run inside a VPC, since Prisma doesn’t support the Data API.

sst.config.ts
{
vpc
}
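
For reference, a handler might look something like this sketch; the User model and the way the connection string is assembled from the linked database are assumptions:

index.ts
import { Resource } from "sst";
import { PrismaClient } from "@prisma/client";

// Hypothetical: build the connection string from the linked Postgres
// resource instead of a .env file
const prisma = new PrismaClient({
  datasourceUrl: `postgresql://${Resource.MyPostgres.username}:${Resource.MyPostgres.password}@${Resource.MyPostgres.host}:${Resource.MyPostgres.port}/${Resource.MyPostgres.database}`,
});

export const handler = async () => {
  // Assumes a User model in prisma/schema.prisma
  const users = await prisma.user.findMany();
  return { statusCode: 200, body: JSON.stringify(users) };
};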

Prisma in serverless environments

Prisma is not great in serverless environments. For a couple of reasons:

  1. It doesn’t support the Data API, so you need to manage the connection pool on your own.
  2. Without the Data API, your functions need to run inside a VPC.
  3. Due to the internal architecture of their client, it also has slower cold starts.

Instead we recommend using Drizzle. This example is here for reference for people that are already using Prisma.

sst.config.ts
const vpc = new sst.aws.Vpc("MyVpc", { nat: "managed" });
const rds = new sst.aws.Postgres("MyPostgres", { vpc });
const api = new sst.aws.Function("MyApi", {
vpc,
url: true,
link: [rds],
// For ARM
// architecture: "arm64",
handler: "index.handler",
copyFiles: [{ from: "node_modules/.prisma/client/" }],
});
return {
api: api.url,
};

View the full example.


Puppeteer in Lambda

To use Puppeteer in a Lambda function you need:

  1. puppeteer-core
  2. Chromium
    • In sst dev, we’ll use a locally installed Chromium version.
    • In sst deploy, we’ll use the @sparticuz/chromium package. It comes with a pre-built binary for Lambda.

Chromium version

Since Puppeteer has a preferred version of Chromium, we’ll need to check the version of Chrome that a given version of Puppeteer supports. Head over to Puppeteer’s Chromium Support page and check which versions work together.

For example, Puppeteer v23.1.1 supports Chrome for Testing 127.0.6533.119. So, we’ll use the v127 of @sparticuz/chromium.

Terminal window
npm install puppeteer-core@23.1.1 @sparticuz/chromium@127.0.0

Install Chromium locally

To use this locally, you’ll need to install Chromium.

Terminal window
npx @puppeteer/browsers install chromium@latest --path /tmp/localChromium

Once installed you’ll see the location of the Chromium binary, /tmp/localChromium/chromium/mac_arm-1350406/chrome-mac/Chromium.app/Contents/MacOS/Chromium.

Update this in your Lambda function.

index.ts
// This is the path to the local Chromium binary
const YOUR_LOCAL_CHROMIUM_PATH = "/tmp/localChromium/chromium/mac_arm-1350406/chrome-mac/Chromium.app/Contents/MacOS/Chromium";

You’ll notice we are using the right binary with the SST_DEV environment variable.

index.ts
const browser = await puppeteer.launch({
args: chromium.args,
defaultViewport: chromium.defaultViewport,
executablePath: process.env.SST_DEV
? YOUR_LOCAL_CHROMIUM_PATH
: await chromium.executablePath(),
headless: chromium.headless,
});

Deploy

We don’t need a layer to deploy this because @sparticuz/chromium comes with a pre-built binary for Lambda.

We just need to set it in the nodejs.install.

sst.config.ts
{
nodejs: {
install: ["@sparticuz/chromium"]
}
}

And on deploy, SST will use the right binary.

We are giving our function more memory and a longer timeout since running Puppeteer can take a while.

sst.config.ts
const api = new sst.aws.Function("MyFunction", {
url: true,
memory: "2 GB",
timeout: "15 minutes",
handler: "index.handler",
nodejs: {
install: ["@sparticuz/chromium"],
},
});
return {
url: api.url,
};

View the full example.


Subscribe to queues

Create an SQS queue, subscribe to it, and publish to it from a function.

sst.config.ts
const queue = new sst.aws.Queue("MyQueue");
queue.subscribe("subscriber.handler");
const app = new sst.aws.Function("MyApp", {
handler: "publisher.handler",
link: [queue],
url: true,
});
return {
app: app.url,
queue: queue.url,
};

View the full example.


AWS Remix streaming

Follows the Remix Streaming guide to create an app that streams data.

Uses the defer utility to stream data through the loader function.

app/routes/_index.tsx
return defer({
spongebob,
friends: friendsPromise,
});

Then uses the Suspense and Await components to render the data.

app/routes/_index.tsx
<Suspense fallback={<div>Loading...</div>}>
<Await resolve={friends}>
{ /* ... */ }
</Await>
</Suspense>

You should see the friends section load after a 3 second delay.

Safari uses a different heuristic to determine when to stream data. You need to render enough initial HTML to trigger streaming. This is typically only a problem for demo apps.

Streaming works out of the box with the Remix component.

sst.config.ts
new sst.aws.Remix("MyWeb");

View the full example.


Router and bucket

Creates a router that serves static files from the public folder of a given bucket.

sst.config.ts
// Create a bucket that CloudFront can access
const bucket = new sst.aws.Bucket("MyBucket", {
access: "cloudfront",
});
// Upload the image to the `public` folder
new aws.s3.BucketObjectv2("MyImage", {
bucket: bucket.name,
key: "public/spongebob.svg",
contentType: "image/svg+xml",
source: $asset("spongebob.svg"),
});
const router = new sst.aws.Router("MyRouter", {
routes: {
"/*": {
bucket,
rewrite: { regex: "^/(.*)$", to: "/public/$1" },
},
},
});
return {
image: $interpolate`${router.url}/spongebob.svg`,
};

View the full example.


Router and function URL

Creates a router that routes all requests to a function with a URL.

sst.config.ts
const api = new sst.aws.Function("MyApi", {
handler: "api.handler",
url: true,
});
const bucket = new sst.aws.Bucket("MyBucket", {
access: "public",
});
const router = new sst.aws.Router("MyRouter", {
domain: "router.ion.dev.sst.dev",
routes: {
"/api/*": api.url,
"/*": $interpolate`https://${bucket.domain}`,
},
});
return {
router: router.url,
bucket: bucket.domain,
};

View the full example.


AWS Cluster Service Discovery

In this example, we are connecting to a service running on a cluster using its AWS Cloud Map service host name. This is useful for service discovery.

We are deploying a service to a cluster in a VPC. And we can access it within the VPC using the service’s Cloud Map hostname.

lambda.ts
const response = await fetch(`http://${Resource.MyService.service}`);

Here we are accessing it through a Lambda function that’s linked to the service and is deployed to the same VPC.

sst.config.ts
const vpc = new sst.aws.Vpc("MyVpc", { nat: "ec2" });
const cluster = new sst.aws.Cluster("MyCluster", { vpc });
const service = cluster.addService("MyService");
new sst.aws.Function("MyFunction", {
vpc,
url: true,
link: [service],
handler: "lambda.handler",
});

View the full example.


Sharp in Lambda

Uses the Sharp library to resize images. In this example, it resizes a logo.png local file to 100x100 pixels.

sst.config.ts
{
nodejs: { install: ["sharp"] }
}

We don’t need a layer to deploy this because sharp comes with a pre-built binary for Lambda. This is handled by nodejs.install.

In dev, this uses the sharp npm package locally.

package.json
{
"dependencies": {
"sharp": "^0.33.5"
}
}

On deploy, SST will use the right binary from the sharp package for the target Lambda architecture.
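
The handler itself might look like this sketch: it resizes the bundled logo.png and returns the result as a base64-encoded response.

index.ts
import sharp from "sharp";

export const handler = async () => {
  // logo.png is bundled via copyFiles in sst.config.ts
  const resized = await sharp("logo.png").resize(100, 100).png().toBuffer();
  return {
    statusCode: 200,
    headers: { "Content-Type": "image/png" },
    body: resized.toString("base64"),
    isBase64Encoded: true,
  };
};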

sst.config.ts
const func = new sst.aws.Function("MyFunction", {
url: true,
handler: "index.handler",
nodejs: { install: ["sharp"] },
copyFiles: [{ from: "logo.png" }],
});
return {
url: func.url,
};

View the full example.


AWS SolidStart WebSocket endpoint

Deploys a SolidStart app with a WebSocket endpoint in a container to AWS.

Uses the experimental WebSocket support in Nitro.

app.config.ts
export default defineConfig({
server: {
experimental: {
websocket: true,
},
},
}).addRouter({
name: "ws",
type: "http",
handler: "./src/ws.ts",
target: "server",
base: "/ws",
});

Once deployed you can test the /ws endpoint and it’ll send a message back after a 3s delay.

sst.config.ts
const vpc = new sst.aws.Vpc("MyVpc", { bastion: true });
const cluster = new sst.aws.Cluster("MyCluster", { vpc });
cluster.addService("MyService", {
loadBalancer: {
ports: [{ listen: "80/http", forward: "3000/http" }],
},
dev: {
command: "npm run dev",
},
});

View the full example.


AWS static site basic auth

This deploys a simple static site and adds basic auth to it.

This is useful for dev environments where you want to share a static site with your team but ensure that it’s not publicly accessible.

This works by injecting some code into a CloudFront function that checks the basic auth header and matches it against the USERNAME and PASSWORD secrets.

sst.config.ts
{
injection: $interpolate`
if (
!event.request.headers.authorization
|| event.request.headers.authorization.value !== "Basic ${basicAuth}"
) {
return {
statusCode: 401,
headers: {
"www-authenticate": { value: "Basic" }
}
};
}`,
}

To deploy this, you need to first set the USERNAME and PASSWORD secrets.

Terminal window
sst secret set USERNAME my-username
sst secret set PASSWORD my-password

If you are deploying this to preview environments, you might want to set the secrets using the --fallback flag.

sst.config.ts
const username = new sst.Secret("USERNAME");
const password = new sst.Secret("PASSWORD");
const basicAuth = $resolve([username.value, password.value]).apply(
([username, password]) =>
Buffer.from(`${username}:${password}`).toString("base64")
);
new sst.aws.StaticSite("MySite", {
path: "site",
// Don't password protect prod
edge: $app.stage !== "production"
? {
viewerRequest: {
injection: $interpolate`
if (
!event.request.headers.authorization
|| event.request.headers.authorization.value !== "Basic ${basicAuth}"
) {
return {
statusCode: 401,
headers: {
"www-authenticate": { value: "Basic" }
}
};
}`,
},
}
: undefined,
});

View the full example.


AWS static site

Deploy a simple HTML file as a static site with S3 and CloudFront. The website is stored in the site/ directory.

sst.config.ts
new sst.aws.StaticSite("MySite", {
path: "site",
});

View the full example.


Swift in Lambda

Deploys a simple Swift application to Lambda using the al2023 runtime.

Check out the README in the repo for more details.

sst.config.ts
const swift = new sst.aws.Function("Swift", {
runtime: "provided.al2023",
architecture: process.arch === "arm64" ? "arm64" : "x86_64",
bundle: build("app"),
handler: "bootstrap",
url: true,
});
const router = new sst.aws.Router("SwiftRouter", {
routes: {
"/*": swift.url,
},
domain: "swift.dev.sst.dev",
});
return {
url: router.url,
};

View the full example.


T3 Stack in AWS

Deploy T3 stack with Drizzle and Postgres to AWS.

This example was created using create-t3-app and the following options: tRPC, Drizzle, no auth, Tailwind, Postgres, and the App Router.

Instead of a local database, we’ll be using an RDS Postgres database.

src/server/db/index.ts
const pool = new Pool({
host: Resource.MyPostgres.host,
port: Resource.MyPostgres.port,
user: Resource.MyPostgres.username,
password: Resource.MyPostgres.password,
database: Resource.MyPostgres.database,
});

Similarly, for Drizzle Kit.

drizzle.config.ts
export default {
schema: "./src/server/db/schema.ts",
dialect: "postgresql",
dbCredentials: {
ssl: {
rejectUnauthorized: false,
},
host: Resource.MyPostgres.host,
port: Resource.MyPostgres.port,
user: Resource.MyPostgres.username,
password: Resource.MyPostgres.password,
database: Resource.MyPostgres.database,
},
tablesFilter: ["aws-t3_*"],
} satisfies Config;

In our Next.js app we can access our Postgres database because we link them both. We don’t need to use our .env files.

sst.config.ts
const rds = new sst.aws.Postgres("MyPostgres", { vpc, proxy: true });
new sst.aws.Nextjs("MyWeb", {
vpc,
link: [rds]
});

To run this in dev mode run:

Terminal window
npm install
npx sst dev

It’ll take a few minutes to deploy the database and the VPC.

This also starts a tunnel to let your local machine connect to the RDS Postgres database. Make sure you have it installed; you only need to do this once for your local machine.

Terminal window
sudo npx sst tunnel install

Now in a new terminal you can run the database migrations.

Terminal window
npm run db:push

We also have Drizzle Studio start automatically in dev mode under the Studio tab.

sst.config.ts
new sst.x.DevCommand("Studio", {
link: [rds],
dev: {
command: "npx drizzle-kit studio",
},
});

And to make sure our credentials are available, we update our package.json with the sst shell CLI.

package.json
"db:generate": "sst shell drizzle-kit generate",
"db:migrate": "sst shell drizzle-kit migrate",
"db:push": "sst shell drizzle-kit push",
"db:studio": "sst shell drizzle-kit studio",

So running npm run db:push will run Drizzle Kit with the right credentials.

To deploy this to production run:

Terminal window
npx sst deploy --stage production

Then run the migrations.

Terminal window
npx sst shell --stage production npx drizzle-kit push

If you are running this locally, you’ll need to have a tunnel running.

Terminal window
npx sst tunnel --stage production

If you are doing this in a CI/CD pipeline, you’d want your build containers to be in the same VPC.

sst.config.ts
const vpc = new sst.aws.Vpc("MyVpc", { bastion: true, nat: "ec2" });
const rds = new sst.aws.Postgres("MyPostgres", { vpc, proxy: true });
new sst.aws.Nextjs("MyWeb", {
vpc,
link: [rds]
});
new sst.x.DevCommand("Studio", {
link: [rds],
dev: {
command: "npx drizzle-kit studio",
},
});

View the full example.


Subscribe to topics

Create an SNS topic, publish to it from a function, and subscribe to it with a function and a queue.

sst.config.ts
const queue = new sst.aws.Queue("MyQueue");
queue.subscribe("subscriber.handler");
const topic = new sst.aws.SnsTopic("MyTopic");
topic.subscribe("MySubscriber1", "subscriber.handler", {});
topic.subscribeQueue("MySubscriber2", queue.arn);
const app = new sst.aws.Function("MyApp", {
handler: "publisher.handler",
link: [topic],
url: true,
});
return {
app: app.url,
topic: topic.name,
};
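
As a sketch, the publisher might use the SNS client with the linked topic’s ARN; the message body is illustrative:

publisher.ts
import { Resource } from "sst";
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({});

export const handler = async () => {
  // Fans out to both the function and queue subscribers
  await sns.send(
    new PublishCommand({
      TopicArn: Resource.MyTopic.arn,
      Message: JSON.stringify({ greeting: "Hello from SST" }),
    })
  );
  return { statusCode: 200, body: "Published!" };
};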

View the full example.


Vector search

Store and search for vector data using the Vector component. Includes a seeder API that uses an LLM to generate embeddings for some movies and optionally their posters.

Once seeded, you can call the search API to query the vector database.

sst.config.ts
const OpenAiApiKey = new sst.Secret("OpenAiApiKey");
const vector = new sst.aws.Vector("MyVectorDB", {
dimension: 1536,
});
const seeder = new sst.aws.Function("Seeder", {
handler: "index.seeder",
link: [OpenAiApiKey, vector],
copyFiles: [
{ from: "iron-man.jpg", to: "iron-man.jpg" },
{
from: "black-widow.jpg",
to: "black-widow.jpg",
},
{
from: "spider-man.jpg",
to: "spider-man.jpg",
},
{ from: "thor.jpg", to: "thor.jpg" },
{
from: "captain-america.jpg",
to: "captain-america.jpg",
},
],
url: true,
});
const app = new sst.aws.Function("MyApp", {
handler: "index.app",
link: [OpenAiApiKey, vector],
url: true,
});
return { seeder: seeder.url, app: app.url };

View the full example.


React SPA with Vite

Deploy a React single-page app (SPA) with Vite to S3 and CloudFront.

sst.config.ts
new sst.aws.StaticSite("Web", {
build: {
command: "pnpm run build",
output: "dist",
},
});

View the full example.


Cloudflare Cron

This example creates a Cloudflare Worker that runs on a schedule.

sst.config.ts
const cron = new sst.cloudflare.Cron("Cron", {
job: "index.ts",
schedules: ["* * * * *"]
});
return {};
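
The job itself is a worker with a scheduled handler; a minimal sketch, assuming @cloudflare/workers-types is set up for the types:

index.ts
export default {
  async scheduled(event: ScheduledEvent) {
    // Runs every minute per the schedule above
    console.log(`Cron fired at ${new Date(event.scheduledTime).toISOString()}`);
  },
};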

View the full example.


Cloudflare KV

This example creates a Cloudflare KV namespace and links it to a worker. Now you can use the SDK to interact with the KV namespace in your worker.

sst.config.ts
const storage = new sst.cloudflare.Kv("MyStorage");
const worker = new sst.cloudflare.Worker("Worker", {
url: true,
link: [storage],
handler: "index.ts",
});
return {
url: worker.url,
};
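
For example, a worker using the linked namespace through the SDK might look like this sketch:

index.ts
import { Resource } from "sst";

export default {
  async fetch() {
    // Write and read back a value from the linked KV namespace
    await Resource.MyStorage.put("greeting", "Hello, world!");
    const value = await Resource.MyStorage.get("greeting");
    return new Response(value);
  },
};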

View the full example.


Sharing secrets

You might have multiple secrets that need to be used across your app. It can be tedious to create a new secret and link it to each function or resource.

A common pattern to address this is to create an object with all your secrets and then link them all at once. Now when you have a new secret, you can add it to the object and it will be automatically available to all your resources.

sst.config.ts
// Manage all secrets together
const secrets = {
secret1: new sst.Secret("Secret1", "some-secret-value-1"),
secret2: new sst.Secret("Secret2", "some-secret-value-2"),
};
const allSecrets = Object.values(secrets);
const bucket = new sst.aws.Bucket("MyBucket");
const api = new sst.aws.Function("MyApi", {
link: [bucket, ...allSecrets],
handler: "index.handler",
url: true,
});
return {
url: api.url,
};
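
The linked function can then read any of them through the SDK; a minimal sketch:

index.ts
import { Resource } from "sst";

// For demonstration only: returns the linked secret values
export const handler = async () => ({
  statusCode: 200,
  body: JSON.stringify({
    secret1: Resource.Secret1.value,
    secret2: Resource.Secret2.value,
  }),
});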

View the full example.


Default function props

Set default props for all the functions in your app using the global $transform.

sst.config.ts
$transform(sst.aws.Function, (args) => {
args.runtime = "nodejs14.x";
args.environment = {
FOO: "BAR",
};
});
new sst.aws.Function("MyFunction", {
handler: "index.ts",
});

View the full example.


Vercel domains

Creates a router that uses domains purchased through and hosted in your Vercel account. Ensure the VERCEL_API_TOKEN and VERCEL_TEAM_ID environment variables are set.

sst.config.ts
const router = new sst.aws.Router("MyRouter", {
domain: {
name: "ion.sst.moe",
dns: sst.vercel.dns({ domain: "sst.moe" }),
},
routes: {
"/*": "https://sst.dev",
},
});
return {
router: router.url,
};

View the full example.