
Service

Reference doc for the `sst.aws.Service` component.

The Service component is internally used by the Cluster component to deploy services to Amazon ECS. It uses AWS Fargate.

This component is returned by the addService method of the Cluster component.
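For example, you might create and add a service in your sst.config.ts like this; the names "MyVpc", "MyCluster", and "MyWeb" are placeholders.

```typescript
// sst.config.ts — a minimal sketch; component names are hypothetical
const vpc = new sst.aws.Vpc("MyVpc");
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

// addService returns an sst.aws.Service
const service = cluster.addService("MyWeb", {
  loadBalancer: {
    rules: [{ listen: "80/http" }]
  }
});
```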


Constructor

new Service(name, args, opts?)

Parameters

ServiceArgs

architecture?

Type Input<x86_64 | arm64>

Default “x86_64”

The CPU architecture of the container.

{
  architecture: "arm64"
}

capacity?

Type Input<spot | Object>

Default Regular Fargate

Configure the capacity provider, regular Fargate or Fargate Spot, for this service.

Fargate Spot allows you to run containers on spare AWS capacity at around a 50% discount compared to regular Fargate. Learn more about Fargate pricing.

There are a couple of caveats:

  1. AWS may reclaim this capacity and turn off your service after a two-minute warning. This is rare, but it can happen.
  2. If there’s no spare capacity, you’ll get an error.

This makes Fargate Spot a good option for dev or PR environments. You can set this using:

{
  capacity: "spot"
}

You can also configure the ratio of regular vs spot capacity through the weight prop. And optionally set base, the number of tasks that'll be started using a given capacity provider first.

For example, base: 1 means the first task uses regular Fargate, and from that point on tasks are split evenly between the two capacity providers.

{
  capacity: {
    fargate: { weight: 1, base: 1 },
    spot: { weight: 1 }
  }
}

The base works in tandem with the scaling prop. So setting base to X doesn't mean those tasks are started right away. It means that as your service scales up, according to the scaling prop, the first X tasks will use the given capacity provider.

This is also why the base can be set for only one capacity provider. So the following is not allowed.

{
  capacity: {
    fargate: { weight: 1, base: 1 },
    // This will give you an error
    spot: { weight: 1, base: 1 }
  }
}

When you change the capacity, the ECS service is terminated and recreated. This will cause some temporary downtime.

Here are some example settings.

  • Use only Fargate Spot.

    {
      capacity: "spot"
    }

  • Use 50% regular Fargate and 50% Fargate Spot.

    {
      capacity: {
        fargate: { weight: 1 },
        spot: { weight: 1 }
      }
    }

  • Use 50% regular Fargate and 50% Fargate Spot. And ensure that the first 2 tasks use regular Fargate.

    {
      capacity: {
        fargate: { weight: 1, base: 2 },
        spot: { weight: 1 }
      }
    }

capacity.fargate?

Type Input<Object>

Configure how the regular Fargate capacity is allocated.

capacity.fargate.base?

Type Input<number>

Start the first base number of tasks with the given capacity.

capacity.fargate.weight

Type Input<number>

Ensure the given ratio of tasks are started for this capacity.

capacity.spot?

Type Input<Object>

Configure how the Fargate spot capacity is allocated.

capacity.spot.base?

Type Input<number>

Start the first base number of tasks with the given capacity.

capacity.spot.weight

Type Input<number>

Ensure the given ratio of tasks are started for this capacity.

cluster

Type Cluster

The cluster to use for the service.

command?

Type Input<Input<string>[]>

The command to override the default command in the container.

{
  command: ["npm", "run", "start"]
}

containers?

Type Input<Object>[]

The containers to run in the service.

By default this starts a single container. To add multiple containers to the service, pass in an array of container args.

{
  containers: [
    {
      name: "app",
      image: "nginxdemos/hello:plain-text"
    },
    {
      name: "admin",
      image: {
        context: "./admin",
        dockerfile: "Dockerfile"
      }
    }
  ]
}

If you specify containers, you cannot list the above args at the top level. For example, you cannot pass in image at the top level.

{
  image: "nginxdemos/hello:plain-text",
  containers: [
    {
      name: "app",
      image: "nginxdemos/hello:plain-text"
    },
    {
      name: "admin",
      image: "nginxdemos/hello:plain-text"
    }
  ]
}

You will need to pass in image as part of each container instead.

containers[].command?

Type Input<string[]>

The command to override the default command in the container. Same as the top-level command.

containers[].cpu?

Type ${number} vCPU

The amount of CPU allocated to the container.

By default, a container can use up to all the CPU allocated to all the containers. If set, this container is capped at this allocation even if more idle CPU is available.

The sum of all the containers’ CPU must be less than or equal to the total available CPU.

{
  cpu: "0.25 vCPU"
}

containers[].dev?

Type Object

Configure how this container works in sst dev. Same as the top-level dev.

containers[].dev.autostart?

Type Input<boolean>

Configure if you want to automatically start this when sst dev starts. Same as the top-level dev.autostart.

containers[].dev.command

Type Input<string>

The command that sst dev runs to start this in dev mode. Same as the top-level dev.command.

containers[].dev.directory?

Type Input<string>

Change the directory from where the command is run. Same as the top-level dev.directory.

containers[].entrypoint?

Type Input<string[]>

The entrypoint to override the default entrypoint in the container. Same as the top-level entrypoint.

containers[].environment?

Type Input<Record<string, Input<string>>>

Key-value pairs of values that are set as container environment variables. Same as the top-level environment.

containers[].health?

Type Input<Object>

Configure the health check for the container. Same as the top-level health.

containers[].health.command

Type Input<string[]>

A string array representing the command that the container runs to determine if it is healthy.

It must start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container’s default shell.

{
  command: ["CMD-SHELL", "curl -f http://localhost:3000/ || exit 1"]
}
containers[].health.interval?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “30 seconds”

The time between running the command for the health check. Must be between 5 seconds and 300 seconds.

containers[].health.retries?

Type Input<number>

Default 3

The number of consecutive failures required to consider the check to have failed. Must be between 1 and 10.

containers[].health.startPeriod?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “0 seconds”

The grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. Must be between 0 seconds and 300 seconds.

containers[].health.timeout?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “5 seconds”

The maximum time to allow one command to run. Must be between 2 seconds and 60 seconds.

containers[].image?

Type Input<string | Object>

Configure the Docker image for the container. Same as the top-level image.

containers[].image.args?

Type Input<Record<string, Input<string>>>

Key-value pairs of build args. Same as the top-level image.args.

containers[].image.context?

Type Input<string>

The path to the Docker build context. Same as the top-level image.context.

containers[].image.dockerfile?

Type Input<string>

The path to the Dockerfile. Same as the top-level image.dockerfile.

containers[].image.target?

Type Input<string>

The stage to build up to. Same as the top-level image.target.

containers[].logging?

Type Input<Object>

Configure the logs in CloudWatch. Same as the top-level logging.

containers[].logging.retention?

Type Input<1 day | 3 days | 5 days | 1 week | 2 weeks | 1 month | 2 months | 3 months | 4 months | 5 months | 6 months | 1 year | 13 months | 18 months | 2 years | 3 years | 5 years | 6 years | 7 years | 8 years | 9 years | 10 years | forever>

The duration the logs are kept in CloudWatch. Same as the top-level logging.retention.

containers[].memory?

Type ${number} GB

The amount of memory allocated to the container.

By default, a container can use up to all the memory allocated to all the containers. If set, the container is capped at this allocation. If exceeded, the container will be killed even if there is idle memory available.

The sum of all the containers’ memory must be less than or equal to the total available memory.

{
  memory: "0.5 GB"
}

containers[].name

Type Input<string>

The name of the container.

This is used as the --name option in the Docker run command.

containers[].ssm?

Type Input<Record<string, Input<string>>>

Key-value pairs of AWS Systems Manager Parameter Store parameter ARNs or AWS Secrets Manager secret ARNs. The values will be loaded into the container as environment variables. Same as the top-level ssm.

containers[].volumes?

Type Input<Object>[]

Mount Amazon EFS file systems into the container. Same as the top-level efs.

containers[].volumes[].efs

Type Input<Efs | Object>

The Amazon EFS file system to mount.

containers[].volumes[].efs.accessPoint

Type Input<string>

The ID of the EFS access point.

containers[].volumes[].efs.fileSystem

Type Input<string>

The ID of the EFS file system.

containers[].volumes[].path

Type Input<string>

The path to mount the volume.

cpu?

Type 0.25 vCPU | 0.5 vCPU | 1 vCPU | 2 vCPU | 4 vCPU | 8 vCPU | 16 vCPU

Default “0.25 vCPU”

The amount of CPU allocated to the container. If there are multiple containers, this is the total amount of CPU shared across all the containers.

{
  cpu: "1 vCPU"
}

dev?

Type false | Object

Configure how this component works in sst dev.

By default, your service is not deployed in sst dev. Instead, you can set dev.command and it’ll be started locally in a separate tab in the sst dev multiplexer. Read more about sst dev.

This makes it so that the container doesn’t have to be redeployed on every change. To disable this and deploy your service in sst dev, pass in false.
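For example, a minimal sketch of running the service locally; the npm script name is an assumption about your app.

```typescript
{
  dev: {
    // Started in its own tab in the sst dev multiplexer
    command: "npm run dev"
  }
}
```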

dev.autostart?

Type Input<boolean>

Default true

Configure if you want to automatically start this when sst dev starts. You can still start it manually later.

dev.command?

Type Input<string>

The command that sst dev runs to start this in dev mode. This is the command you run when you want to run your service locally.

dev.directory?

Type Input<string>

Default Uses the image.dockerfile path

Change the directory from where the command is run.

dev.url?

Type Input<string>

Default http://url-unavailable-in-dev.mode

The url when this is running in dev mode.

Since this component is not deployed in sst dev, there is no real URL. But if you are using this component’s url or linking to this component’s url, it can be useful to have a placeholder URL. It avoids having to handle it being undefined.
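For example, a sketch that points the placeholder at your local dev server; the port is an assumption.

```typescript
{
  dev: {
    url: "http://localhost:3000"
  }
}
```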

entrypoint?

Type Input<string[]>

The entrypoint that overrides the default entrypoint in the container.

{
  entrypoint: ["/usr/bin/my-entrypoint"]
}

environment?

Type Input<Record<string, Input<string>>>

Key-value pairs of values that are set as container environment variables. The keys need to:

  1. Start with a letter.
  2. Be at least 2 characters long.
  3. Contain only letters, numbers, or underscores.
{
  environment: {
    DEBUG: "true"
  }
}

executionRole?

Type Input<string>

Default Creates a new role

Assigns the given IAM role name to AWS ECS to launch and manage the containers. This allows you to pass in a previously created role.

By default, a new IAM role is created.

{
  executionRole: "my-execution-role"
}

health?

Type Input<Object>

Default Health check is disabled

Configure the health check that ECS runs on your containers.

This health check is run by ECS, while loadBalancer.health is run by the load balancer, if you are using one. This check is off by default, while the load balancer check cannot be disabled.

This config maps to the HEALTHCHECK parameter of the docker run command. Learn more about container health checks.

{
  health: {
    command: ["CMD-SHELL", "curl -f http://localhost:3000/ || exit 1"],
    startPeriod: "60 seconds",
    timeout: "5 seconds",
    interval: "30 seconds",
    retries: 3
  }
}

health.command

Type Input<string[]>

A string array representing the command that the container runs to determine if it is healthy.

It must start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container’s default shell.

{
  command: ["CMD-SHELL", "curl -f http://localhost:3000/ || exit 1"]
}

health.interval?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “30 seconds”

The time between running the command for the health check. Must be between 5 seconds and 300 seconds.

health.retries?

Type Input<number>

Default 3

The number of consecutive failures required to consider the check to have failed. Must be between 1 and 10.

health.startPeriod?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “0 seconds”

The grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. Must be between 0 seconds and 300 seconds.

health.timeout?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “5 seconds”

The maximum time to allow one command to run. Must be between 2 seconds and 60 seconds.

image?

Type Input<string | Object>

Default Build a Docker image from the Dockerfile in the root directory.

Configure the Docker build command for building the image or specify a pre-built image.

Building a Docker image.

Prior to building the image, SST will automatically add the .sst directory to the .dockerignore if not already present.

{
  image: {
    context: "./app",
    dockerfile: "Dockerfile",
    args: {
      MY_VAR: "value"
    }
  }
}

Alternatively, you can pass in a pre-built image.

{
  image: "nginxdemos/hello:plain-text"
}

image.args?

Type Input<Record<string, Input<string>>>

Key-value pairs of build args to pass to the Docker build command.

{
  args: {
    MY_VAR: "value"
  }
}

image.context?

Type Input<string>

Default “.”

The path to the Docker build context. The path is relative to your project’s sst.config.ts.

To change where the Docker build context is located.

{
  context: "./app"
}

image.dockerfile?

Type Input<string>

Default “Dockerfile”

The path to the Dockerfile. The path is relative to the build context.

To use a different Dockerfile.

{
  dockerfile: "Dockerfile.prod"
}

image.tags?

Type Input<Input<string>[]>

Tags to apply to the Docker image.

{
  tags: ["v1.0.0", "commit-613c1b2"]
}

image.target?

Type Input<string>

The stage to build up to in a multi-stage Dockerfile.

{
  target: "stage1"
}

link?

Type Input<any[]>

Link resources to your containers. This will:

  1. Grant the permissions needed to access the resources.
  2. Allow you to access it in your app using the SDK.

Takes a list of components to link to the containers.

{
  link: [bucket, stripeKey]
}

loadBalancer?

Type Input<Object>

Default Load balancer is not created

Configure a load balancer to route traffic to the containers.

While you can expose a service through API Gateway, it’s better to use a load balancer for most traditional web applications. It costs more at low traffic, but at higher levels of traffic it ends up being more cost effective.

Also, if you need to listen on network layer protocols like tcp or udp, you have to expose it through a load balancer.

By default, the endpoint is an autogenerated load balancer URL. You can also add a custom domain for the endpoint.

{
  loadBalancer: {
    domain: "example.com",
    rules: [
      { listen: "80/http", redirect: "443/https" },
      { listen: "443/https", forward: "80/http" }
    ]
  }
}

loadBalancer.domain?

Type Input<string | Object>

Set a custom domain for your load balancer endpoint.

Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you’ll need to pass in a cert that validates domain ownership and add the DNS records.

By default this assumes the domain is hosted on Route 53.

{
  domain: "example.com"
}

For domains hosted on Cloudflare.

{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}
loadBalancer.domain.aliases?

Type Input<string[]>

Alias domains that should be used.

{
  domain: {
    name: "app1.example.com",
    aliases: ["app2.example.com"]
  }
}
loadBalancer.domain.cert?

Type Input<string>

The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically.

To manually set up a domain on an unsupported provider, you’ll need to:

  1. Validate that you own the domain by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner.
  2. Once validated, set the certificate ARN as the cert and set dns to false.
  3. Add the DNS records in your provider to point to the load balancer endpoint.
{
  domain: {
    name: "example.com",
    dns: false,
    cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63"
  }
}
loadBalancer.domain.dns?

Type Input<false | sst.aws.dns | sst.cloudflare.dns | sst.vercel.dns>

Default sst.aws.dns

The DNS provider to use for the domain. Defaults to AWS Route 53.

Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing.

Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you’ll need to set dns to false and pass in a certificate validating ownership via cert.

Specify the hosted zone ID for the Route 53 domain.

{
  domain: {
    name: "example.com",
    dns: sst.aws.dns({
      zone: "Z2FDTNDATAQYW2"
    })
  }
}

Use a domain hosted on Cloudflare, needs the Cloudflare provider.

{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}

Use a domain hosted on Vercel, needs the Vercel provider.

{
  domain: {
    name: "example.com",
    dns: sst.vercel.dns()
  }
}
loadBalancer.domain.name

Type Input<string>

The custom domain you want to use.

{
  domain: {
    name: "example.com"
  }
}

Can also include subdomains based on the current stage.

{
  domain: {
    name: `${$app.stage}.example.com`
  }
}

Wildcard domains are supported.

{
  domain: {
    name: "*.example.com"
  }
}

loadBalancer.health?

Type Input<Record<${number}/https | ${number}/http | ${number}/tcp | ${number}/udp | ${number}/tcp_udp | ${number}/tls, Input<Object>>>

Configure the health check that the load balancer runs on your containers.

This health check is run by the load balancer, while health is run by ECS. This one cannot be disabled if you are using a load balancer, while the ECS one is off by default.

Since this cannot be disabled, here are some tips on how to debug an unhealthy health check.

How to debug a load balancer health check

If you notice an Unhealthy: Health checks failed error, it’s because the health check has failed. When it fails, the load balancer terminates the containers, causing any requests to fail.

Here’s how to debug it:

  1. Verify the health check path.

    By default, the load balancer checks the / path. Ensure it’s accessible in your containers. If your application runs on a different path, then update the path in the health check config accordingly.

  2. Confirm the containers are operational.

    Navigate to the ECS console > select the cluster > go to the Tasks tab > choose Any desired status under the Filter desired status dropdown > select a task and check for errors under the Logs tab. If there are errors, the container failed to start.

  3. If the container was terminated by the load balancer while still starting up, try increasing the health check interval and timeout.

For http and https the default is:

{
  path: "/",
  healthyThreshold: 5,
  successCodes: "200",
  timeout: "5 seconds",
  unhealthyThreshold: 2,
  interval: "30 seconds"
}

For tcp and udp the default is:

{
  healthyThreshold: 5,
  timeout: "6 seconds",
  unhealthyThreshold: 2,
  interval: "30 seconds"
}

To configure the health check, we use the port/protocol format. Here we are configuring a health check that pings the /health path on port 8080 every 10 seconds.

{
  rules: [
    { listen: "80/http", forward: "8080/http" }
  ],
  health: {
    "8080/http": {
      path: "/health",
      interval: "10 seconds"
    }
  }
}
loadBalancer.health[].healthyThreshold?

Type Input<number>

Default 5

The number of consecutive successful health check requests required to consider the target healthy. Must be between 2 and 10.

loadBalancer.health[].interval?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “30 seconds”

The time period between each health check request. Must be between 5 seconds and 300 seconds.

loadBalancer.health[].path?

Type Input<string>

Default “/”

The URL path to ping on the service for health checks. Only applicable to http and https protocols.

loadBalancer.health[].successCodes?

Type Input<string>

Default “200”

One or more HTTP response codes the health check treats as successful. Only applicable to http and https protocols.

{
  successCodes: "200-299"
}
loadBalancer.health[].timeout?

Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds>

Default “5 seconds”

The timeout for each health check request. If no response is received within this time, it is considered failed. Must be between 2 seconds and 120 seconds.

loadBalancer.health[].unhealthyThreshold?

Type Input<number>

Default 2

The number of consecutive failed health check requests required to consider the target unhealthy. Must be between 2 and 10.

loadBalancer.public?

Type Input<boolean>

Default true

Configure if the load balancer should be public or private.

When set to false, the load balancer endpoint will only be accessible within the VPC.

loadBalancer.rules?

Type Input<Object[]>

Configure the mapping for the ports the load balancer listens to, forwards, or redirects to the service. This supports two types of protocols:

  1. Application Layer Protocols: http and https. This’ll create an Application Load Balancer.
  2. Network Layer Protocols: tcp, udp, tcp_udp, and tls. This’ll create a Network Load Balancer.

You cannot configure both application and network layer protocols for the same service.

Here we are listening on port 80 and forwarding it to the service on port 8080.

{
  rules: [
    { listen: "80/http", forward: "8080/http" }
  ]
}

The forward port and protocol defaults to the listen port and protocol. So in this case both are 80/http.

{
  rules: [
    { listen: "80/http" }
  ]
}

If multiple containers are configured via the containers argument, you need to specify which container the traffic should be forwarded to.

{
  rules: [
    { listen: "80/http", container: "app" },
    { listen: "8000/http", container: "admin" }
  ]
}

You can also route the same port to multiple containers via path-based routing.

{
  rules: [
    {
      listen: "80/http",
      container: "app",
      conditions: { path: "/api/*" }
    },
    {
      listen: "80/http",
      container: "admin",
      conditions: { path: "/admin/*" }
    }
  ]
}

Additionally, you can redirect traffic from one port to another. This is commonly used to redirect http to https.

{
  rules: [
    { listen: "80/http", redirect: "443/https" },
    { listen: "443/https", forward: "80/http" }
  ]
}
loadBalancer.rules[].conditions?

Type Input<Object>

The conditions for the rule. Only applicable to http and https protocols.

loadBalancer.rules[].conditions.path?

Type Input<string>

Default Requests to all paths are forwarded.

Configure path-based routing. Only requests matching the path are forwarded to the container.

{
  path: "/api/*"
}

The path pattern is case-sensitive, supports wildcards, and can be up to 128 characters.

  • * matches 0 or more characters. For example, /api/* matches /api/ or /api/orders.
  • ? matches exactly 1 character. For example, /api/?.png matches /api/a.png.
loadBalancer.rules[].conditions.query?

Type Input<Input<Object>[]>

Default Query string is not checked when forwarding requests.

Configure query string based routing. Only requests matching one of the query string conditions are forwarded to the container.

Takes a list of key-value pairs, where key is the name of the query string parameter and value is the value of that parameter. The value can also be a pattern.

If multiple key and value pairs are provided, it’ll match requests with any of the query string parameters.

For example, to match requests with query string version=v1.

{
  query: [
    { key: "version", value: "v1" }
  ]
}

Or match requests with query string matching env=test*.

{
  query: [
    { key: "env", value: "test*" }
  ]
}

Match requests with query string version=v1 or env=test*.

{
  query: [
    { key: "version", value: "v1" },
    { key: "env", value: "test*" }
  ]
}

Match requests with any query string key with value example.

{
  query: [
    { value: "example" }
  ]
}
loadBalancer.rules[].conditions.query[].key?

Type Input<string>

The name of the query string parameter.

loadBalancer.rules[].conditions.query[].value

Type Input<string>

The value of the query string parameter.

If no key is provided, it’ll match any request where a query string parameter with the given value exists.

loadBalancer.rules[].container?

Type Input<string>

The name of the container to forward the traffic to. This maps to the name defined in the container prop.

You only need this if there’s more than one container. If there’s only one container, the traffic is automatically forwarded there.

loadBalancer.rules[].forward?

Type Input<${number}/https | ${number}/http | ${number}/tcp | ${number}/udp | ${number}/tcp_udp | ${number}/tls>

Default The same port and protocol as listen.

The port and protocol of the container the service forwards the traffic to. Uses the format {port}/{protocol}.

{
  forward: "80/http"
}
loadBalancer.rules[].listen

Type Input<${number}/https | ${number}/http | ${number}/tcp | ${number}/udp | ${number}/tcp_udp | ${number}/tls>

The port and protocol the service listens on. Uses the format {port}/{protocol}.

{
  listen: "80/http"
}
loadBalancer.rules[].redirect?

Type Input<${number}/https | ${number}/http | ${number}/tcp | ${number}/udp | ${number}/tcp_udp | ${number}/tls>

The port and protocol to redirect the traffic to. Uses the format {port}/{protocol}.

{
  redirect: "80/http"
}

logging?

Type Input<Object>

Default { retention: “1 month” }

Configure the logs in CloudWatch.

{
  logging: {
    retention: "forever"
  }
}

logging.retention?

Type Input<1 day | 3 days | 5 days | 1 week | 2 weeks | 1 month | 2 months | 3 months | 4 months | 5 months | 6 months | 1 year | 13 months | 18 months | 2 years | 3 years | 5 years | 6 years | 7 years | 8 years | 9 years | 10 years | forever>

Default “1 month”

The duration the logs are kept in CloudWatch.

memory?

Type ${number} GB

Default “0.5 GB”

The amount of memory allocated to the container. If there are multiple containers, this is the total amount of memory shared across all the containers.

{
  memory: "2 GB"
}

permissions?

Type Input<Object[]>

Permissions and the resources that you need to access. These permissions are used to create the task role.

Allow the container to read and write to an S3 bucket called my-bucket.

{
  permissions: [
    {
      actions: ["s3:GetObject", "s3:PutObject"],
      resources: ["arn:aws:s3:::my-bucket/*"]
    }
  ]
}

Allow the container to perform all actions on an S3 bucket called my-bucket.

{
  permissions: [
    {
      actions: ["s3:*"],
      resources: ["arn:aws:s3:::my-bucket/*"]
    }
  ]
}

Grant the container permissions to access all resources.

{
  permissions: [
    {
      actions: ["*"],
      resources: ["*"]
    }
  ]
}

permissions[].actions

Type string[]

The IAM actions that can be performed.

{
  actions: ["s3:*"]
}

permissions[].effect?

Type allow | deny

Default “allow”

Configures whether the permission is allowed or denied.

{
  effect: "deny"
}

permissions[].resources

Type Input<string>[]

The resources, specified using the IAM ARN format.

{
  resources: ["arn:aws:s3:::my-bucket/*"]
}

scaling?

Type Input<Object>

Default { min: 1, max: 1 }

Configure the service to automatically scale up or down based on the CPU or memory utilization of a container. By default, scaling is disabled and the service will run in a single container.

{
  scaling: {
    min: 4,
    max: 16,
    cpuUtilization: 50,
    memoryUtilization: 50
  }
}

scaling.cpuUtilization?

Type Input<number | false>

Default 70

The target CPU utilization percentage to scale up or down. It’ll scale up when the CPU utilization is above the target and scale down when it’s below the target.

{
  scaling: {
    cpuUtilization: 50
  }
}

scaling.max?

Type Input<number>

Default 1

The maximum number of containers to scale up to.

{
  scaling: {
    max: 16
  }
}

scaling.memoryUtilization?

Type Input<number | false>

Default 70

The target memory utilization percentage to scale up or down. It’ll scale up when the memory utilization is above the target and scale down when it’s below the target.

{
  scaling: {
    memoryUtilization: 50
  }
}

scaling.min?

Type Input<number>

Default 1

The minimum number of containers to scale down to.

{
  scaling: {
    min: 4
  }
}

scaling.requestCount?

Type Input<number | false>

Default false

The target request count to scale up or down. It’ll scale up when the request count is above the target and scale down when it’s below the target.

{
  scaling: {
    requestCount: 1500
  }
}

serviceRegistry?

Type Input<Object>

Configure the CloudMap service registry for the service.

This creates an SRV record in the Cloud Map service. This is needed if you want to connect an ApiGatewayV2 VPC link to the service.

API Gateway will forward requests to the given port on the service.

{
  serviceRegistry: {
    port: 80
  }
}

serviceRegistry.port

Type number

The port in the service to forward requests to.

ssm?

Type Input<Record<string, Input<string>>>

Key-value pairs of AWS Systems Manager Parameter Store parameter ARNs or AWS Secrets Manager secret ARNs. The values will be loaded into the container as environment variables.

{
  ssm: {
    DATABASE_PASSWORD: "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret-123abc"
  }
}

storage?

Type ${number} GB

Default “20 GB”

The amount of ephemeral storage (in GB) allocated to the container.

{
  storage: "100 GB"
}

taskRole?

Type Input<string>

Default Creates a new role

Assigns the given IAM role name to the containers. This allows you to pass in a previously created role.

By default, a new IAM role is created. It’ll update this role if you add permissions or link resources.

However, if you pass in a role, you’ll need to update it manually if you add permissions or link resources.

{
  taskRole: "my-task-role"
}

transform?

transform.autoScalingTarget?

Type TargetArgs | (args: TargetArgs, opts: ComponentResourceOptions, name: string) => void

Transform the AWS Application Auto Scaling target resource.

transform.executionRole?

Type RoleArgs | (args: RoleArgs, opts: ComponentResourceOptions, name: string) => void

Transform the ECS Execution IAM Role resource.

transform.image?

Type ImageArgs | (args: ImageArgs, opts: ComponentResourceOptions, name: string) => void

Transform the Docker Image resource.

transform.listener?

Type ListenerArgs | (args: ListenerArgs, opts: ComponentResourceOptions, name: string) => void

Transform the AWS Load Balancer listener resource.

transform.loadBalancer?

Type LoadBalancerArgs | (args: LoadBalancerArgs, opts: ComponentResourceOptions, name: string) => void

Transform the AWS Load Balancer resource.

transform.loadBalancerSecurityGroup?

Type SecurityGroupArgs | (args: SecurityGroupArgs, opts: ComponentResourceOptions, name: string) => void

Transform the AWS Security Group resource for the Load Balancer.

transform.logGroup?

Type LogGroupArgs | (args: LogGroupArgs, opts: ComponentResourceOptions, name: string) => void

Transform the CloudWatch log group resource.

transform.service?

Type ServiceArgs | (args: ServiceArgs, opts: ComponentResourceOptions, name: string) => void

Transform the ECS Service resource.

transform.target?

Type TargetGroupArgs | (args: TargetGroupArgs, opts: ComponentResourceOptions, name: string) => void

Transform the AWS Load Balancer target group resource.

transform.taskDefinition?

Type TaskDefinitionArgs | (args: TaskDefinitionArgs, opts: ComponentResourceOptions, name: string) => void

Transform the ECS Task Definition resource.

transform.taskRole?

Type RoleArgs | (args: RoleArgs, opts: ComponentResourceOptions, name: string) => void

Transform the ECS Task IAM Role resource.
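As a sketch of how transforms are used, here's a hypothetical example that mutates the underlying ECS service args; the enableExecuteCommand setting is just an illustration.

```typescript
{
  transform: {
    service: (args) => {
      // Tweak the aws.ecs.Service args before the resource is created
      args.enableExecuteCommand = true;
    }
  }
}
```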

volumes?

Type Input<Object>[]

Mount Amazon EFS file systems into the container.

Create an EFS file system.

sst.config.ts
const vpc = new sst.aws.Vpc("MyVpc");
const fileSystem = new sst.aws.Efs("MyFileSystem", { vpc });

And pass it in.

{
  volumes: [
    {
      efs: fileSystem,
      path: "/mnt/efs"
    }
  ]
}

Or pass in the EFS file system ID.

{
  volumes: [
    {
      efs: {
        fileSystem: "fs-12345678",
        accessPoint: "fsap-12345678"
      },
      path: "/mnt/efs"
    }
  ]
}

volumes[].efs

Type Input<Efs | Object>

The Amazon EFS file system to mount.

volumes[].efs.accessPoint

Type Input<string>

The ID of the EFS access point.

volumes[].efs.fileSystem

Type Input<string>

The ID of the EFS file system.

volumes[].path

Type Input<string>

The path to mount the volume.

vpc

vpc.cloudmapNamespaceId

Type Input<string>

The ID of the Cloud Map namespace to use for the service.

vpc.cloudmapNamespaceName

Type Input<string>

The name of the Cloud Map namespace to use for the service.

vpc.containerSubnets

Type Input<Input<string>[]>

A list of subnet IDs in the VPC to place the containers in.

vpc.id

Type Input<string>

The ID of the VPC.

vpc.loadBalancerSubnets

Type Input<Input<string>[]>

A list of subnet IDs in the VPC to place the load balancer in.

vpc.securityGroups

Type Input<Input<string>[]>

A list of VPC security group IDs for the service.
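For example, a sketch of passing in an existing VPC; all of the IDs below are placeholders.

```typescript
{
  vpc: {
    id: "vpc-0123456789abcdef0",
    securityGroups: ["sg-0123456789abcdef0"],
    containerSubnets: ["subnet-0123456789abcdef0"],
    loadBalancerSubnets: ["subnet-0123456789abcdef0"],
    cloudmapNamespaceId: "ns-abcdef1234567890",
    cloudmapNamespaceName: "example.internal"
  }
}
```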

Properties

nodes

nodes.executionRole

Type undefined | Role

The Amazon ECS Execution Role.

nodes.taskRole

Type Role

The Amazon ECS Task Role.

nodes.autoScalingTarget

Type Target

The Amazon Application Auto Scaling target.

nodes.cloudmapService

Type Service

The Amazon Cloud Map service.

nodes.loadBalancer

Type LoadBalancer

The Amazon Elastic Load Balancer.

nodes.service

Type Service

The Amazon ECS Service.

nodes.taskDefinition

Type Output<TaskDefinition>

The Amazon ECS Task Definition.

service

Type Output<string>

The name of the Cloud Map service. This is useful for service discovery.

url

Type Output<string>

The URL of the service.

If loadBalancer.domain is set, this is the URL with the custom domain. Otherwise, it’s the autogenerated load balancer URL.

SDK

Use the SDK in your runtime to interact with your infrastructure.


This is accessible through the Resource object in the SDK.

  • service string

    The name of the Cloud Map service. This is useful for service discovery.

  • url undefined | string

    The URL of the service.

    If loadBalancer.domain is set, this is the URL with the custom domain. Otherwise, it’s the autogenerated load balancer URL.
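For instance, a sketch of reading these values in your application code; "MyService" is whatever you named the component.

```typescript
import { Resource } from "sst";

// Logs the service's URL, or the dev placeholder in sst dev
console.log(Resource.MyService.url);
```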