Cluster

Reference doc for the `sst.aws.Cluster` component.

The Cluster component lets you create a cluster of containers and add services and tasks to them. It uses Amazon ECS on AWS Fargate.

Create a Cluster

sst.config.ts
const vpc = new sst.aws.Vpc("MyVpc");
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

Add a service

sst.config.ts
cluster.addService("MyService");

Configure the container image

By default, the service will look for a Dockerfile in the root directory. Optionally configure the image context and dockerfile.

sst.config.ts
cluster.addService("MyService", {
  image: {
    context: "./app",
    dockerfile: "Dockerfile"
  }
});

Enable auto-scaling

sst.config.ts
cluster.addService("MyService", {
  scaling: {
    min: 4,
    max: 16,
    cpuUtilization: 50,
    memoryUtilization: 50,
  }
});

Expose through API Gateway

You can give your service a public URL by exposing it through API Gateway HTTP API. You can also optionally give it a custom domain.

sst.config.ts
const service = cluster.addService("MyService", {
  serviceRegistry: {
    port: 80,
  },
});
const api = new sst.aws.ApiGatewayV2("MyApi", {
  vpc,
  domain: "example.com"
});
api.routePrivate("$default", service.nodes.cloudmapService.arn);

Add a load balancer

You can also expose your service by adding a load balancer to it and optionally adding a custom domain.

sst.config.ts
cluster.addService("MyService", {
  loadBalancer: {
    domain: "example.com",
    ports: [
      { listen: "80/http" },
      { listen: "443/https", forward: "80/http" },
    ]
  }
});

Link resources

Link resources to your service. This will grant permissions to the resources and allow you to access them in your app.

sst.config.ts
const bucket = new sst.aws.Bucket("MyBucket");
cluster.addService("MyService", {
  link: [bucket],
});

If your service is written in Node.js, you can use the SDK to access the linked resources.

app.ts
import { Resource } from "sst";
console.log(Resource.MyBucket.name);

Cost

By default, this uses a Linux/X86 Fargate container with 0.25 vCPUs at $0.04048 per vCPU per hour and 0.5 GB of memory at $0.004445 per GB per hour. It includes 20GB of Ephemeral Storage for free with additional storage at $0.000111 per GB per hour. Each container also gets a public IPv4 address at $0.005 per hour.

That works out to $0.04048 x 0.25 x 24 x 30 + $0.004445 x 0.5 x 24 x 30 + $0.005 x 24 x 30 or $13 per month.
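
As a rough sketch, the same estimate can be worked out directly; the rates below are the us-east-1 prices listed above.

// Rough monthly estimate for the default 0.25 vCPU / 0.5 GB service in us-east-1
const hours = 24 * 30;
const cpu = 0.25 * 0.04048 * hours;     // ~$7.29 for vCPU
const memory = 0.5 * 0.004445 * hours;  // ~$1.60 for memory
const ipv4 = 0.005 * hours;             // ~$3.60 for the public IPv4 address
console.log(cpu + memory + ipv4);       // ~12.49, roughly $13 per month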

Adjust this for the cpu, memory and storage you are using. And check the prices for Linux/ARM if you are using arm64 as your architecture.

The above are rough estimates for us-east-1, check out the Fargate pricing and the Public IPv4 Address pricing for more details.

Scaling

By default, scaling is disabled. If enabled, adjust the above for the number of containers.

API Gateway

If you expose your service through API Gateway, you’ll need to add the cost of API Gateway HTTP API as well. For services that don’t get a lot of traffic, this ends up being a lot cheaper since API Gateway is pay per request.

Learn more about using Cluster with API Gateway.

Application Load Balancer

If you add loadBalancer HTTP or HTTPS ports, an ALB is created at $0.0225 per hour, $0.008 per LCU-hour, and $0.005 per hour if HTTPS with a custom domain is used. An LCU is a measure of how much traffic is processed.

That works out to $0.0225 x 24 x 30 or $16 per month. Add $0.005 x 24 x 30 or $4 per month for HTTPS. Also add the LCU-hour used.

The above are rough estimates for us-east-1, check out the Application Load Balancer pricing for more details.

Network Load Balancer

If you add loadBalancer TCP, UDP, or TLS ports, an NLB is created at $0.0225 per hour and $0.006 per NLCU-hour. An NLCU is a measure of how much traffic is processed.

That works out to $0.0225 x 24 x 30 or $16 per month. Also add the NLCU-hour used.

The above are rough estimates for us-east-1, check out the Network Load Balancer pricing for more details.


Constructor

new Cluster(name, args, opts?)

Parameters

  • name string
  • args ClusterArgs
  • opts? ComponentResourceOptions

ClusterArgs

transform?

Type Object

Transform how this component creates its underlying resources.

transform.cluster?

Type ClusterArgs | (args: ClusterArgs, opts: ComponentResourceOptions, name: string) => void

Transform the ECS Cluster resource.
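
For example, a minimal sketch that uses transform.cluster to enable Container Insights; settings is an arg on the underlying Pulumi aws.ecs.Cluster resource.

{
  transform: {
    cluster: (args) => {
      // Enable Container Insights on the ECS cluster
      args.settings = [{ name: "containerInsights", value: "enabled" }];
    }
  }
}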

vpc

Type Vpc | Input<Object>

The VPC to use for the cluster.

Create a Vpc component.

sst.config.ts
const myVpc = new sst.aws.Vpc("MyVpc");

And pass it in.

{
vpc: myVpc
}

By default, both the load balancer and the services are deployed in public subnets. The above is equivalent to:

{
  vpc: {
    id: myVpc.id,
    securityGroups: myVpc.securityGroups,
    containerSubnets: myVpc.publicSubnets,
    loadBalancerSubnets: myVpc.publicSubnets,
    cloudmapNamespaceId: myVpc.nodes.cloudmapNamespace.id,
    cloudmapNamespaceName: myVpc.nodes.cloudmapNamespace.name,
  }
}

vpc.cloudmapNamespaceId

Type Input<string>

The ID of the Cloud Map namespace to use for the service.

vpc.cloudmapNamespaceName

Type Input<string>

The name of the Cloud Map namespace to use for the service.

vpc.containerSubnets?

Type Input<Input<string>[]>

A list of subnet IDs in the VPC to place the containers in.

vpc.id

Type Input<string>

The ID of the VPC.

vpc.loadBalancerSubnets

Type Input<Input<string>[]>

A list of subnet IDs in the VPC to place the load balancer in.

vpc.securityGroups

Type Input<Input<string>[]>

A list of VPC security group IDs for the service.

Properties

nodes

Type Object

The underlying resources this component creates.

nodes.cluster

Type Cluster

The Amazon ECS Cluster.
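
For example, a minimal sketch that references the underlying ECS cluster from your config; the arn output comes from the Pulumi aws.ecs.Cluster resource.

sst.config.ts
const vpc = new sst.aws.Vpc("MyVpc");
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

// Reference the underlying aws.ecs.Cluster, for example its ARN
const clusterArn = cluster.nodes.cluster.arn;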

Methods

addService

addService(name, args?)

Parameters

  • name string
  • args? ClusterServiceArgs

Returns Service

Add a service to the cluster.

sst.config.ts
cluster.addService("MyService");

You can also configure the service. For example, set a custom domain.

sst.config.ts
cluster.addService("MyService", {
domain: "example.com"
});

Enable auto-scaling.

sst.config.ts
cluster.addService("MyService", {
  scaling: {
    min: 4,
    max: 16,
    cpuUtilization: 50,
    memoryUtilization: 50,
  }
});

By default this starts a single container. To add multiple containers in the service, pass in an array of containers args.

sst.config.ts
cluster.addService("MyService", {
  architecture: "arm64",
  containers: [
    {
      name: "app",
      image: "nginxdemos/hello:plain-text"
    },
    {
      name: "admin",
      image: {
        context: "./admin",
        dockerfile: "Dockerfile"
      }
    }
  ]
});

This is useful for running sidecar containers.

ClusterServiceArgs

architecture?

Type Input<x86_64 | arm64>

Default “x86_64”

The CPU architecture of the container in this service.

{
architecture: "arm64"
}

command?

Type Input<Input<string>[]>

The command to override the default command in the container.

{
command: ["npm", "run", "start"]
}

containers?

Type Input<Object>[]

The containers to run in the service.

By default this starts a single container. To add multiple containers in the service, pass in an array of containers args.

{
  containers: [
    {
      name: "app",
      image: "nginxdemos/hello:plain-text"
    },
    {
      name: "admin",
      image: {
        context: "./admin",
        dockerfile: "Dockerfile"
      }
    }
  ]
}

If you specify containers, you cannot list the above args at the top-level. For example, you cannot pass in image at the top level.

{
  image: "nginxdemos/hello:plain-text",
  containers: [
    {
      name: "app",
      image: "nginxdemos/hello:plain-text"
    },
    {
      name: "admin",
      image: "nginxdemos/hello:plain-text"
    }
  ]
}

Instead, you will need to pass in image as part of each container.

containers[].command?

Type Input<string[]>

The command to override the default command in the container. Same as the top-level command.

containers[].cpu?

Type ${number} vCPU

The amount of CPU allocated to the container.

By default, a container can use up to all the CPU allocated to the service. If set, the container is capped at this allocation even if the service has idle CPU available.

Note that the sum of all the containers’ CPU must be less than or equal to the service’s CPU.

{
  cpu: "0.25 vCPU"
}
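
For example, a sketch that splits a 0.5 vCPU service across two containers; the names and images are just placeholders.

{
  cpu: "0.5 vCPU",
  containers: [
    {
      name: "app",
      image: "nginxdemos/hello:plain-text",
      cpu: "0.25 vCPU"
    },
    {
      name: "admin",
      image: "nginxdemos/hello:plain-text",
      cpu: "0.25 vCPU"
    }
  ]
}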

containers[].dev?

Type Object

Configure how this container works in sst dev. Same as the top-level dev.

containers[].dev.autostart?

Type Input<boolean>

Configure if you want to automatically start this when sst dev starts. Same as the top-level dev.autostart.

containers[].dev.command

Type Input<string>

The command that sst dev runs to start this in dev mode. Same as the top-level dev.command.

containers[].dev.directory?

Type Input<string>

Change the directory from where the command is run. Same as the top-level dev.directory.

containers[].entrypoint?

Type Input<string[]>

The entrypoint to override the default entrypoint in the container. Same as the top-level entrypoint.

containers[].environment?

Type Input<Record<string, Input<string>>>

Key-value pairs of values that are set as container environment variables. Same as the top-level environment.

containers[].health?

Type Input<Object>

Configure the health check for the container. Same as the top-level health.

containers[].health.command

Type Input<string[]>

A string array representing the command that the container runs to determine if it is healthy.

It must start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container’s default shell.

{
command: ["CMD-SHELL", "curl -f http://localhost:3000/ || exit 1"]
}
containers[].health.interval?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “30 seconds”

The time between running the command for the health check. Must be between 5 seconds and 300 seconds.

containers[].health.retries?

Type Input<number>

Default 3

The number of consecutive failures required to consider the check to have failed. Must be between 1 and 10.

containers[].health.startPeriod?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “0 seconds”

The grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. Must be between 0 seconds and 300 seconds.

containers[].health.timeout?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “5 seconds”

The maximum time to allow one command to run. Must be between 2 seconds and 60 seconds.

containers[].image?

Type Input<string | Object>

Configure the Docker image for the container. Same as the top-level image.

containers[].image.args?

Type Input<Record<string, Input<string>>>

Key-value pairs of build args. Same as the top-level image.args.

containers[].image.context?

Type Input<string>

The path to the Docker build context. Same as the top-level image.context.

containers[].image.dockerfile?

Type Input<string>

The path to the Dockerfile. Same as the top-level image.dockerfile.

containers[].logging?

Type Input<Object>

Configure the service’s logs in CloudWatch. Same as the top-level logging.

containers[].logging.retention?

Type Input<1 day | 3 days | 5 days | 1 week | 2 weeks | 1 month | 2 months | 3 months | 4 months | 5 months | 6 months | 1 year | 13 months | 18 months | 2 years | 3 years | 5 years | 6 years | 7 years | 8 years | 9 years | 10 years | forever>

The duration the logs are kept in CloudWatch. Same as the top-level logging.retention.

containers[].memory?

Type ${number} GB

The amount of memory allocated to the container.

By default, a container can use up to all the memory allocated to the service. If set, the container is capped at this allocation. If exceeded, the container will be killed even if the service has idle memory available.

Note that the sum of all the containers’ memory must be less than or equal to the service’s memory.

{
memory: "0.5 GB"
}

containers[].name

Type Input<string>

The name of the container.

This is used as the --name option in the Docker run command.

containers[].ssm?

Type Input<Record<string, Input<string>>>

Key-value pairs of AWS Systems Manager Parameter Store parameter ARNs or AWS Secrets Manager secret ARNs. The values will be loaded into the container as environment variables. Same as the top-level ssm.

containers[].volumes?

Type Input<Object>[]

Mount Amazon EFS file systems into the container. Same as the top-level efs.

containers[].volumes[].efs

Type Input<Efs | Object>

The Amazon EFS file system to mount.

containers[].volumes[].efs.accessPoint

Type Input<string>

The ID of the EFS access point.

containers[].volumes[].efs.fileSystem

Type Input<string>

The ID of the EFS file system.

containers[].volumes[].path

Type Input<string>

The path to mount the volume.
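
For example, a sketch of a container that mounts an EFS file system; fileSystem is assumed to be an sst.aws.Efs component, as in the top-level volumes example.

{
  containers: [
    {
      name: "app",
      image: "nginxdemos/hello:plain-text",
      volumes: [
        {
          efs: fileSystem,
          path: "/mnt/efs"
        }
      ]
    }
  ]
}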

cpu?

Type 0.25 vCPU | 0.5 vCPU | 1 vCPU | 2 vCPU | 4 vCPU | 8 vCPU | 16 vCPU

Default “0.25 vCPU”

The amount of CPU allocated to the container in this service. If there are multiple containers in the service, this is the total amount of CPU shared across all the containers.

{
cpu: "1 vCPU"
}

dev?

Type false | Object

Configure how this component works in sst dev.

By default, your service is not deployed in sst dev. Instead, you can use the dev.command to start your app locally. It’ll be run as a separate process in the sst dev multiplexer. Read more about sst dev.

To disable dev mode and deploy your service, pass in false.
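
For example, to always deploy the service, even in sst dev.

{
  dev: false
}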

dev.autostart?

Type Input<boolean>

Default true

Configure if you want to automatically start this when sst dev starts. You can still start it manually later.

dev.command?

Type Input<string>

The command that sst dev runs to start this in dev mode. This is the command you run when you want to run your service locally.
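
For example, a sketch that runs a hypothetical npm script in dev mode.

{
  dev: {
    command: "npm run dev"
  }
}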

dev.directory?

Type Input<string>

Default Uses the image.dockerfile path

Change the directory from where the command is run.

dev.url?

Type Input<string>

Default http://url-unavailable-in-dev.mode

The url when this is running in dev mode.

Since this component is not deployed in sst dev, there is no real URL. But if you are using this component’s url or linking to this component’s url, it can be useful to have a placeholder URL. It avoids having to handle it being undefined.
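
For example, a placeholder that points at a locally running server; the port here is just an assumption.

{
  dev: {
    url: "http://localhost:3000"
  }
}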

entrypoint?

Type Input<string[]>

The entrypoint to override the default entrypoint in the container.

{
entrypoint: ["/usr/bin/my-entrypoint"]
}

environment?

Type Input<Record<string, Input<string>>>

Key-value pairs of values that are set as container environment variables. The keys need to:

  • Start with a letter
  • Be at least 2 characters long
  • Contain only letters, numbers, or underscores
{
  environment: {
    DEBUG: "true"
  }
}

executionRole?

Type Input<string>

Default Creates a new role

Assigns the given IAM role name to AWS ECS to launch and manage the containers in the service. This allows you to pass in a previously created role.

By default, the service creates a new IAM role when it’s created.

{
executionRole: "my-execution-role"
}

health?

Type Input<Object>

Default Health check is disabled

Configure the health check that ECS runs on your containers.

This health check is run by ECS, while loadBalancer.health is run by the load balancer, if you are using one. This one is off by default, while the load balancer health check cannot be disabled.

This config maps to the HEALTHCHECK parameter of the docker run command. Learn more about container health checks.

{
  health: {
    command: ["CMD-SHELL", "curl -f http://localhost:3000/ || exit 1"],
    startPeriod: "60 seconds",
    timeout: "5 seconds",
    interval: "30 seconds",
    retries: 3
  }
}

health.command

Type Input<string[]>

A string array representing the command that the container runs to determine if it is healthy.

It must start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container’s default shell.

{
command: ["CMD-SHELL", "curl -f http://localhost:3000/ || exit 1"]
}

health.interval?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “30 seconds”

The time between running the command for the health check. Must be between 5 seconds and 300 seconds.

health.retries?

Type Input<number>

Default 3

The number of consecutive failures required to consider the check to have failed. Must be between 1 and 10.

health.startPeriod?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “0 seconds”

The grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. Must be between 0 seconds and 300 seconds.

health.timeout?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “5 seconds”

The maximum time to allow one command to run. Must be between 2 seconds and 60 seconds.

image?

Type Input<string | Object>

Default Build a Docker image from the Dockerfile in the root directory.

Configure the Docker build command for building the image or specify a pre-built image.

Building a Docker image.

Prior to building the image, SST will automatically add the .sst directory to the .dockerignore if not already present.

{
  image: {
    context: "./app",
    dockerfile: "Dockerfile",
    args: {
      MY_VAR: "value"
    }
  }
}

Alternatively, you can pass in a pre-built image.

{
image: "nginxdemos/hello:plain-text"
}

image.args?

Type Input<Record<string, Input<string>>>

Key-value pairs of build args to pass to the Docker build command.

{
  args: {
    MY_VAR: "value"
  }
}

image.context?

Type Input<string>

Default “.”

The path to the Docker build context. The path is relative to your project’s sst.config.ts.

To change where the Docker build context is located.

{
context: "./app"
}

image.dockerfile?

Type Input<string>

Default “Dockerfile”

The path to the Dockerfile. The path is relative to the build context.

To use a different Dockerfile.

{
dockerfile: "Dockerfile.prod"
}

image.tags?

Type Input<Input<string>[]>

Tags to apply to the Docker image.

{
tags: ["v1.0.0", "commit-613c1b2"]
}

link?

Type Input<any[]>

Link resources to your service. This will:

  1. Grant the permissions needed to access the resources.
  2. Allow you to access it in your app using the SDK.

Takes a list of components to link to the service.

{
link: [bucket, stripeKey]
}

loadBalancer?

Type Input<Object>

Default Load balancer is not created

Configure a load balancer to route traffic to the containers.

While you can expose a service through API Gateway, it’s better to use a load balancer for most traditional web applications. It is more expensive to start but at higher levels of traffic it ends up being more cost effective.

Also, if you need to listen on network layer protocols like tcp or udp, you have to expose it through a load balancer.

By default, the endpoint is an autogenerated load balancer URL. You can also add a custom domain for the endpoint.

{
  loadBalancer: {
    domain: "example.com",
    ports: [
      { listen: "80/http", redirect: "443/https" },
      { listen: "443/https", forward: "80/http" }
    ]
  }
}

loadBalancer.domain?

Type Input<string | Object>

Set a custom domain for your load balancer endpoint.

Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you’ll need to pass in a cert that validates domain ownership and add the DNS records.

By default this assumes the domain is hosted on Route 53.

{
domain: "example.com"
}

For domains hosted on Cloudflare.

{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}

loadBalancer.domain.aliases?

Type Input<string[]>

Alias domains that should be used.

{
  domain: {
    name: "app1.example.com",
    aliases: ["app2.example.com"]
  }
}

loadBalancer.domain.cert?

Type Input<string>

The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically.

To manually set up a domain on an unsupported provider, you’ll need to:

  1. Validate that you own the domain by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner.
  2. Once validated, set the certificate ARN as the cert and set dns to false.
  3. Add the DNS records in your provider to point to the load balancer endpoint.
{
  domain: {
    name: "example.com",
    dns: false,
    cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63"
  }
}

loadBalancer.domain.dns?

Type Input<false | sst.aws.dns | sst.cloudflare.dns | sst.vercel.dns>

Default sst.aws.dns

The DNS provider to use for the domain. Defaults to the AWS adapter.

Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing.

Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you’ll need to set dns to false and pass in a certificate validating ownership via cert.

Specify the hosted zone ID for the Route 53 domain.

{
  domain: {
    name: "example.com",
    dns: sst.aws.dns({
      zone: "Z2FDTNDATAQYW2"
    })
  }
}

Use a domain hosted on Cloudflare, needs the Cloudflare provider.

{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}

Use a domain hosted on Vercel, needs the Vercel provider.

{
  domain: {
    name: "example.com",
    dns: sst.vercel.dns()
  }
}

loadBalancer.domain.name

Type Input<string>

The custom domain you want to use.

{
  domain: {
    name: "example.com"
  }
}

Can also include subdomains based on the current stage.

{
  domain: {
    name: `${$app.stage}.example.com`
  }
}

Wildcard domains are supported.

{
  domain: {
    name: "*.example.com"
  }
}

loadBalancer.health?

Type Input<Record<${number}/https | ${number}/http | ${number}/tcp | ${number}/udp | ${number}/tcp_udp | ${number}/tls, Input<Object>>>

Configure the health check that the load balancer runs on your containers.

This health check is run by the load balancer, while health is run by ECS. This one cannot be disabled if you are using a load balancer, while the ECS health check is off by default.

Since this cannot be disabled, here are some tips on how to debug an unhealthy health check.

How to debug a load balancer health check

If you notice an Unhealthy: Health checks failed error, it’s because the health check has failed. When it fails, the load balancer terminates the containers, causing any requests to fail.

Here’s how to debug it:

  1. Verify the health check path.

    By default, the load balancer checks the / path. Ensure it’s accessible in your containers. If your application runs on a different path, then update the path in the health check config accordingly.

  2. Confirm the containers are operational.

    Navigate to the ECS console > select the cluster > go to the Tasks tab > choose Any desired status under the Filter desired status dropdown > select a task and check for errors under the Logs tab. If there are errors, the container failed to start.

  3. If the container was terminated by the load balancer while still starting up, try increasing the health check interval and timeout.

For http and https the default is:

{
  path: "/",
  healthyThreshold: 5,
  successCodes: "200",
  timeout: "5 seconds",
  unhealthyThreshold: 2,
  interval: "30 seconds"
}

For tcp and udp the default is:

{
  healthyThreshold: 5,
  timeout: "6 seconds",
  unhealthyThreshold: 2,
  interval: "30 seconds"
}

To configure the health check, we use the port/protocol format. Here we are configuring a health check that pings the /health path on port 8080 every 10 seconds.

{
  ports: [
    { listen: "80/http", forward: "8080/http" }
  ],
  health: {
    "8080/http": {
      path: "/health",
      interval: "10 seconds"
    }
  }
}

loadBalancer.health[].healthyThreshold?

Type Input<number>

Default 5

The number of consecutive successful health check requests required to consider the target healthy. Must be between 2 and 10.

loadBalancer.health[].interval?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “30 seconds”

The time period between each health check request. Must be between 5 seconds and 300 seconds.

loadBalancer.health[].path?

Type Input<string>

Default “/”

The URL path to ping on the service for health checks. Only applicable to http and https protocols.

loadBalancer.health[].successCodes?

Type Input<string>

Default “200”

One or more HTTP response codes the health check treats as successful. Only applicable to http and https protocols.

{
successCodes: "200-299"
}
loadBalancer.health[].timeout?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “5 seconds”

The timeout for each health check request. If no response is received within this time, it is considered failed. Must be between 2 seconds and 120 seconds.

loadBalancer.health[].unhealthyThreshold?

Type Input<number>

Default 2

The number of consecutive failed health check requests required to consider the target unhealthy. Must be between 2 and 10.

loadBalancer.ports

Type Input<Object[]>

Configure the mapping for the ports the load balancer listens to, forwards, or redirects to the service. This supports two types of protocols:

  1. Application Layer Protocols: http and https. This’ll create an Application Load Balancer.
  2. Network Layer Protocols: tcp, udp, tcp_udp, and tls. This’ll create a Network Load Balancer.

You cannot configure both application and network layer protocols for the same service.

Here we are listening on port 80 and forwarding it to the service on port 8080.

{
  ports: [
    { listen: "80/http", forward: "8080/http" }
  ]
}

The forward port and protocol defaults to the listen port and protocol. So in this case both are 80/http.

{
  ports: [
    { listen: "80/http" }
  ]
}

If multiple containers are configured via the containers argument, you need to specify which container the traffic should be forwarded to.

{
  ports: [
    { listen: "80/http", container: "app" },
    { listen: "8000/http", container: "admin" }
  ]
}

You can also route the same port to multiple containers via path-based routing.

{
  ports: [
    { listen: "80/http", container: "app", path: "/api/*" },
    { listen: "80/http", container: "admin", path: "/admin/*" }
  ]
}

Additionally, you can redirect traffic from one port to another. This is commonly used to redirect http to https.

{
  ports: [
    { listen: "80/http", redirect: "443/https" },
    { listen: "443/https", forward: "80/http" }
  ]
}

loadBalancer.ports[].container?

Type Input<string>

The name of the container to forward the traffic to.

You need this if there’s more than one container.

If there is only one container, the traffic is automatically forwarded to that container.

loadBalancer.ports[].forward?

Type Input<${number}/https | ${number}/http | ${number}/tcp | ${number}/udp | ${number}/tcp_udp | ${number}/tls>

Default The same port and protocol as listen.

The port and protocol of the container the service forwards the traffic to. Uses the format {port}/{protocol}.

loadBalancer.ports[].listen

Type Input<${number}/https | ${number}/http | ${number}/tcp | ${number}/udp | ${number}/tcp_udp | ${number}/tls>

The port and protocol the service listens on. Uses the format {port}/{protocol}.

loadBalancer.ports[].path?

Type Input<string>

Default Requests to all paths are forwarded.

Configure path-based routing. Only requests matching the path are forwarded to the container. Only applicable to “http” protocols.

The path pattern is case-sensitive, supports wildcards, and can be up to 128 characters.

  • * matches 0 or more characters.
  • ? matches exactly 1 character.

For example:

  • /api/*
  • /api/*.png

loadBalancer.ports[].redirect?

Type Input<${number}/https | ${number}/http | ${number}/tcp | ${number}/udp | ${number}/tcp_udp | ${number}/tls>

The port and protocol to redirect the traffic to. Uses the format {port}/{protocol}.

loadBalancer.public?

Type Input<boolean>

Default true

Configure if the load balancer should be public or private.

When set to false, the load balancer endpoint will only be accessible within the VPC.
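
For example, a sketch of an internal-only load balancer.

{
  loadBalancer: {
    ports: [
      { listen: "80/http" }
    ],
    public: false
  }
}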

logging?

Type Input<Object>

Default { retention: “1 month” }

Configure the service’s logs in CloudWatch.

{
  logging: {
    retention: "forever"
  }
}

logging.retention?

Type Input<1 day | 3 days | 5 days | 1 week | 2 weeks | 1 month | 2 months | 3 months | 4 months | 5 months | 6 months | 1 year | 13 months | 18 months | 2 years | 3 years | 5 years | 6 years | 7 years | 8 years | 9 years | 10 years | forever>

Default “1 month”

The duration the logs are kept in CloudWatch.

memory?

Type ${number} GB

Default “0.5 GB”

The amount of memory allocated to the container in this service. If there are multiple containers in the service, this is the total amount of memory shared across all the containers.

{
memory: "2 GB"
}

permissions?

Type Input<Object[]>

Permissions and the resources that the service needs to access. These permissions are used to create the service’s task role.

Allow the service to read and write to an S3 bucket called my-bucket.

{
  permissions: [
    {
      actions: ["s3:GetObject", "s3:PutObject"],
      resources: ["arn:aws:s3:::my-bucket/*"]
    },
  ]
}

Allow the service to perform all actions on an S3 bucket called my-bucket.

{
  permissions: [
    {
      actions: ["s3:*"],
      resources: ["arn:aws:s3:::my-bucket/*"]
    },
  ]
}

Grant the service permissions to access all resources.

{
  permissions: [
    {
      actions: ["*"],
      resources: ["*"]
    },
  ]
}

permissions[].actions

Type string[]

The IAM actions that can be performed.

{
actions: ["s3:*"]
}

permissions[].resources

Type Input<string>[]

The resources specified using the IAM ARN format.

{
resources: ["arn:aws:s3:::my-bucket/*"]
}

scaling?

Type Input<Object>

Default { min: 1, max: 1 }

Configure the service to automatically scale up or down based on the CPU or memory utilization of a container. By default, scaling is disabled and the service will run in a single container.

{
  scaling: {
    min: 4,
    max: 16,
    cpuUtilization: 50,
    memoryUtilization: 50
  }
}

scaling.cpuUtilization?

Type Input<number | false>

Default 70

The target CPU utilization percentage to scale up or down. It’ll scale up when the CPU utilization is above the target and scale down when it’s below the target.

{
  scaling: {
    cpuUtilization: 50
  }
}

scaling.max?

Type Input<number>

Default 1

The maximum number of containers to scale up to.

{
  scaling: {
    max: 16
  }
}

scaling.memoryUtilization?

Type Input<number | false>

Default 70

The target memory utilization percentage to scale up or down. It’ll scale up when the memory utilization is above the target and scale down when it’s below the target.

{
  scaling: {
    memoryUtilization: 50
  }
}

scaling.min?

Type Input<number>

Default 1

The minimum number of containers to scale down to.

{
  scaling: {
    min: 4
  }
}

serviceRegistry?

Type Input<Object>

Configure the CloudMap service registry for the service.

This creates an SRV record in the CloudMap service. This is needed if you want to connect an ApiGatewayV2 VPC link to the service.

API Gateway will forward requests to the given port on the service.

{
  serviceRegistry: {
    port: 80
  }
}

serviceRegistry.port

Type number

The port in the service to forward requests to.

ssm?

Type Input<Record<string, Input<string>>>

Key-value pairs of AWS Systems Manager Parameter Store parameter ARNs or AWS Secrets Manager secret ARNs. The values will be loaded into the container as environment variables.

{
  ssm: {
    DATABASE_PASSWORD: "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret-123abc"
  }
}

storage?

Type ${number} GB

Default “20 GB”

The amount of ephemeral storage (in GB) allocated to the container in this service.

{
storage: "100 GB"
}

taskRole?

Type Input<string>

Default Creates a new role

Assigns the given IAM role name to the containers running in the service. This allows you to pass in a previously created role.

By default, the service creates a new IAM role when it’s created. It’ll update this role if you add permissions or link resources.

However, if you pass in a role, you’ll need to update it manually if you add permissions or link resources.

{
taskRole: "my-task-role"
}

transform?

Type Object

Transform how this component creates its underlying resources.

transform.autoScalingTarget?

Type TargetArgs | (args: TargetArgs, opts: ComponentResourceOptions, name: string) => void

Transform the AWS Application Auto Scaling target resource.

transform.executionRole?

Type RoleArgs | (args: RoleArgs, opts: ComponentResourceOptions, name: string) => void

Transform the ECS Execution IAM Role resource.

transform.image?

Type ImageArgs | (args: ImageArgs, opts: ComponentResourceOptions, name: string) => void

Transform the Docker Image resource.

transform.listener?

Type ListenerArgs | (args: ListenerArgs, opts: ComponentResourceOptions, name: string) => void

Transform the AWS Load Balancer listener resource.

transform.loadBalancer?

Type LoadBalancerArgs | (args: LoadBalancerArgs, opts: ComponentResourceOptions, name: string) => void

Transform the AWS Load Balancer resource.

transform.loadBalancerSecurityGroup?

Type SecurityGroupArgs | (args: SecurityGroupArgs, opts: ComponentResourceOptions, name: string) => void

Transform the AWS Security Group resource for the Load Balancer.

transform.logGroup?

Type LogGroupArgs | (args: LogGroupArgs, opts: ComponentResourceOptions, name: string) => void

Transform the CloudWatch log group resource.

transform.service?

Type ServiceArgs | (args: ServiceArgs, opts: ComponentResourceOptions, name: string) => void

Transform the ECS Service resource.
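
For example, a minimal sketch that enables ECS Exec on the containers; enableExecuteCommand is an arg on the underlying Pulumi aws.ecs.Service resource.

{
  transform: {
    service: (args) => {
      // Turn on ECS Exec for the underlying service
      args.enableExecuteCommand = true;
    }
  }
}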

transform.target?

Type TargetGroupArgs | (args: TargetGroupArgs, opts: ComponentResourceOptions, name: string) => void

Transform the AWS Load Balancer target group resource.

transform.taskDefinition?

Type TaskDefinitionArgs | (args: TaskDefinitionArgs, opts: ComponentResourceOptions, name: string) => void

Transform the ECS Task Definition resource.

transform.taskRole?

Type RoleArgs | (args: RoleArgs, opts: ComponentResourceOptions, name: string) => void

Transform the ECS Task IAM Role resource.

volumes?

Type Input<Object>[]

Mount Amazon EFS file systems into the container.

Create an EFS file system.

sst.config.ts
const vpc = new sst.aws.Vpc("MyVpc");
const fileSystem = new sst.aws.Efs("MyFileSystem", { vpc });

And pass it in.

{
  volumes: [
    {
      efs: fileSystem,
      path: "/mnt/efs"
    }
  ]
}

Or pass in the EFS file system ID.

{
  volumes: [
    {
      efs: {
        fileSystem: "fs-12345678",
        accessPoint: "fsap-12345678"
      },
      path: "/mnt/efs"
    }
  ]
}

volumes[].efs

Type Input<Efs | Object>

The Amazon EFS file system to mount.

volumes[].efs.accessPoint

Type Input<string>

The ID of the EFS access point.

volumes[].efs.fileSystem

Type Input<string>

The ID of the EFS file system.

volumes[].path

Type Input<string>

The path to mount the volume.