# SST Documentation > The complete SST documentation for building full-stack applications on AWS and Cloudflare. ## All Providers Use 150+ Pulumi or Terraform providers in your app. https://sst.dev/docs/all-providers Aside from the [built-in](/docs/components#built-in) components, SST supports any of the **150+** Pulumi and Terraform providers. Check out the full list in the [Directory](#directory). --- ## Add a provider To add a provider to your app, run: ```bash sst add <provider> ``` This command adds the provider to your config, installs the packages, and adds the namespace of the provider to your globals. :::caution You don't need to `import` the provider packages in your `sst.config.ts`. ::: SST manages these packages internally, so there's nothing to import in your `sst.config.ts`. For example, to add the Stripe provider: ```bash sst add stripe ``` Read more about [providers](/docs/providers). --- ### Preloaded SST comes preloaded with the following providers, so you **don't need to add them**. - [AWS](https://www.pulumi.com/registry/packages/aws/) - [Cloudflare](https://www.pulumi.com/registry/packages/cloudflare/) These are used internally to power the [built-in](/docs/components#built-in) components. --- ## Use a resource Once added, you can use a resource from the provider in your `sst.config.ts`. For example, use a Stripe resource in your config's `run` function. ```ts title="sst.config.ts" {4-7} // ... async run() { new stripe.Product("MyStripeProduct", { name: "SST Paid Plan", description: "This is how SST makes money", }); }, }); ``` As mentioned above, since the AWS provider comes preloaded, you can use any AWS resource directly as well. 
```ts title="sst.config.ts" new aws.apprunner.Service("MyService", { serviceName: "example", sourceConfiguration: { imageRepository: { imageConfiguration: { port: "8000" }, imageIdentifier: "public.ecr.aws/aws-containers/hello-app-runner:latest", imageRepositoryType: "ECR_PUBLIC" } } }); ``` --- ## Directory Below is the full list of providers that SST supports. ```bash sst add <provider> ``` Install any of the following using the package name as the `provider`. For example, `sst add auth0`. If you want SST to support a Terraform provider or update a version, you can **submit a PR** to the [sst/provider](https://github.com/sst/provider) repo. --- | Provider | Package | |----------------------------------|------------------------------------------------------------| | [ACI](https://www.pulumi.com/registry/packages/aci) | `@netascode/aci` | | [ACME](https://www.pulumi.com/registry/packages/acme) | `@pulumiverse/acme` | | [Aiven](https://www.pulumi.com/registry/packages/aiven) | `aiven` | | [Akamai](https://www.pulumi.com/registry/packages/akamai) | `akamai` | | [Alibaba Cloud](https://www.pulumi.com/registry/packages/alicloud) | `alicloud` | | [Amazon EKS](https://www.pulumi.com/registry/packages/eks) | `eks` | | [Aquasec](https://www.pulumi.com/registry/packages/aquasec) | `@pulumiverse/aquasec` | | [Artifactory](https://www.pulumi.com/registry/packages/artifactory) | `artifactory` | | [Astra DB](https://www.pulumi.com/registry/packages/astra) | `@pulumiverse/astra` | | [Auth0](https://www.pulumi.com/registry/packages/auth0) | `auth0` | | [Auto Deploy](https://www.pulumi.com/registry/packages/auto-deploy) | `auto-deploy` | | [AWS API Gateway](https://www.pulumi.com/registry/packages/aws-apigateway) | `aws-apigateway` | | [AWS](https://www.pulumi.com/registry/packages/aws/) | `aws` | | [AWS Control Tower](https://www.pulumi.com/registry/packages/awscontroltower) | `@lbrlabs/pulumi-awscontroltower` | | [AWS IAM](https://www.pulumi.com/registry/packages/aws-iam) | `aws-iam` | | 
[AWS Cloud Control](https://www.pulumi.com/registry/packages/aws-native) | `aws-native` | | [AWS QuickStart Aurora Postgres](https://www.pulumi.com/registry/packages/aws-quickstart-aurora-postgres) | `aws-quickstart-aurora-postgres` | | [AWS QuickStart Redshift](https://www.pulumi.com/registry/packages/aws-quickstart-redshift) | `aws-quickstart-redshift` | | [AWS QuickStart VPC](https://www.pulumi.com/registry/packages/aws-quickstart-vpc) | `aws-quickstart-vpc` | | [AWS S3 Replicated Bucket](https://www.pulumi.com/registry/packages/aws-s3-replicated-bucket) | `aws-s3-replicated-bucket` | | [AWS Static Website](https://www.pulumi.com/registry/packages/aws-static-website) | `aws-static-website` | | [AWSx](https://www.pulumi.com/registry/packages/awsx) | `awsx` | | [AzAPI](https://www.pulumi.com/registry/packages/azapi) | `@ediri/azapi` | | [Azure Active Directory](https://www.pulumi.com/registry/packages/azuread) | `azuread` | | [Azure Classic](https://www.pulumi.com/registry/packages/azure) | `azure` | | [Azure Justrun](https://www.pulumi.com/registry/packages/azure-justrun) | `pulumi-azure-justrun` | | [Azure Native](https://www.pulumi.com/registry/packages/azure-native) | `azure-native` | | [Azure Quickstart ACR Geo Replication](https://www.pulumi.com/registry/packages/azure-quickstart-acr-geo-replication) | `azure-quickstart-acr-geo-replication` | | [Azure Static Website](https://www.pulumi.com/registry/packages/azure-static-website) | `azure-static-website` | | [AzureDevOps](https://www.pulumi.com/registry/packages/azuredevops) | `azuredevops` | | [Buildkite](https://www.pulumi.com/registry/packages/buildkite) | `@pulumiverse/buildkite` | | [Checkly](https://www.pulumi.com/registry/packages/checkly) | `@checkly/pulumi` | | [Cisco Catalyst SD-WAN](https://www.pulumi.com/registry/packages/sdwan) | 
`sdwan` | | [Cisco ISE](https://www.pulumi.com/registry/packages/ise/) | `ise` | | [Civo](https://www.pulumi.com/registry/packages/civo) | `civo` | | [Cloud-Init](https://www.pulumi.com/registry/packages/cloudinit) | `cloudinit` | | [CloudAMQP](https://www.pulumi.com/registry/packages/cloudamqp) | `cloudamqp` | | [Cloudflare](https://www.pulumi.com/registry/packages/cloudflare/) | `cloudflare` | | [CockroachDB](https://www.pulumi.com/registry/packages/cockroach/) | `@pulumiverse/cockroach` | | [Command](https://www.pulumi.com/registry/packages/command/) | `command` | | [Confluent](https://www.pulumi.com/registry/packages/confluentcloud/) | `confluentcloud` | | [Consul](https://www.pulumi.com/registry/packages/consul) | `consul` | | [Control Plane](https://www.pulumi.com/registry/packages/cpln/) | `@pulumiverse/cpln` | | [Databricks](https://www.pulumi.com/registry/packages/databricks) | `databricks` | | [Datadog](https://www.pulumi.com/registry/packages/datadog) | `datadog` | | [dbt Cloud](https://www.pulumi.com/registry/packages/dbtcloud/) | `dbtcloud` | | [DigitalOcean](https://www.pulumi.com/registry/packages/digitalocean) | `digitalocean` | | [DNSimple](https://www.pulumi.com/registry/packages/dnsimple) | `dnsimple` | | [Docker](https://www.pulumi.com/registry/packages/docker) | `docker` | | [Docker Build](https://www.pulumi.com/registry/packages/docker-build) | `docker-build` | | [Doppler](https://www.pulumi.com/registry/packages/doppler) | `@pulumiverse/doppler` | | [Dynatrace](https://www.pulumi.com/registry/packages/dynatrace) | `@pulumiverse/dynatrace` | | [Elastic Cloud](https://www.pulumi.com/registry/packages/ec/) | `ec` | | [Equinix](https://www.pulumi.com/registry/packages/equinix/) | `@equinix-labs/pulumi-equinix` | | [ESXi Native](https://www.pulumi.com/registry/packages/esxi-native) | `@pulumiverse/esxi-native` | | [Event Store Cloud](https://www.pulumi.com/registry/packages/eventstorecloud/) | `@eventstore/pulumi-eventstorecloud` | | 
[Exoscale](https://www.pulumi.com/registry/packages/exoscale) | `@pulumiverse/exoscale` | | [F5 BIG-IP](https://www.pulumi.com/registry/packages/f5bigip) | `f5bigip` | | [Fastly](https://www.pulumi.com/registry/packages/fastly) | `fastly` | | [Flux](https://www.pulumi.com/registry/packages/flux) | `@worawat/flux` | | [Fortios](https://www.pulumi.com/registry/packages/fortios) | `@pulumiverse/fortios` | | [FusionAuth](https://www.pulumi.com/registry/packages/fusionauth) | `pulumi-fusionauth` | | [Gandi](https://www.pulumi.com/registry/packages/gandi) | `@pulumiverse/gandi` | | [GCP Global CloudRun](https://www.pulumi.com/registry/packages/gcp-global-cloudrun) | `gcp-global-cloudrun` | | [Genesis Cloud](https://www.pulumi.com/registry/packages/genesiscloud/) | `@genesiscloud/pulumi-genesiscloud` | | [GitHub](https://www.pulumi.com/registry/packages/github) | `github` | | [GitLab](https://www.pulumi.com/registry/packages/gitlab) | `gitlab` | | [Google Cloud Classic](https://www.pulumi.com/registry/packages/gcp) | `gcp` | | [Google Cloud Native](https://www.pulumi.com/registry/packages/google-native/) | `google-native` | | [Google Cloud Static Website](https://www.pulumi.com/registry/packages/google-cloud-static-website/) | `google-cloud-static-website` | | [Grafana](https://www.pulumi.com/registry/packages/grafana) | `@pulumiverse/grafana` | | [Harbor](https://www.pulumi.com/registry/packages/harbor) | `@pulumiverse/harbor` | | [Harness](https://www.pulumi.com/registry/packages/harness) | `harness` | | [HashiCorp Vault](https://www.pulumi.com/registry/packages/vault) | `vault` | | [HCP](https://www.pulumi.com/registry/packages/hcp) | `@grapl/pulumi-hcp` | | [Hetzner Cloud](https://www.pulumi.com/registry/packages/hcloud) | `hcloud` | | [Impart Security](https://www.pulumi.com/registry/packages/impart/) | `@impart-security/pulumi-impart` | | [InfluxDB](https://www.pulumi.com/registry/packages/influxdb) | `@komminarlabs/influxdb` | | 
[Kafka](https://www.pulumi.com/registry/packages/kafka) | `kafka` | | [Keycloak](https://www.pulumi.com/registry/packages/keycloak) | `keycloak` | | [Kong](https://www.pulumi.com/registry/packages/kong) | `kong` | | [Koyeb](https://www.pulumi.com/registry/packages/koyeb) | `@koyeb/pulumi-koyeb` | | [Kubernetes](https://www.pulumi.com/registry/packages/kubernetes) | `kubernetes` | | [Kubernetes Cert Manager](https://www.pulumi.com/registry/packages/kubernetes-cert-manager) | `kubernetes-cert-manager` | | [Kubernetes CoreDNS](https://www.pulumi.com/registry/packages/kubernetes-coredns) | `kubernetes-coredns` | | [LaunchDarkly](https://registry.terraform.io/providers/launchdarkly/launchdarkly) | `launchdarkly` | | [LBr Labs EKS](https://www.pulumi.com/registry/packages/lbrlabs-eks) | `@lbrlabs/pulumi-eks` | | [libvirt](https://www.pulumi.com/registry/packages/libvirt) | `libvirt` | | [Linode](https://www.pulumi.com/registry/packages/linode) | `linode` | | [Mailgun](https://www.pulumi.com/registry/packages/mailgun) | `mailgun` | | [Matchbox](https://www.pulumi.com/registry/packages/matchbox) | `@pulumiverse/matchbox` | | [Miniflux](https://www.pulumi.com/registry/packages/aws-miniflux/) | `aws-miniflux` | | [MinIO](https://www.pulumi.com/registry/packages/minio) | `minio` | | [MongoDB Atlas](https://www.pulumi.com/registry/packages/mongodbatlas) | `mongodbatlas` | | [MSSQL](https://www.pulumi.com/registry/packages/mssql) | `@pulumiverse/mssql` | | [MySQL](https://www.pulumi.com/registry/packages/mysql) | `mysql` | | [Neon](https://www.pulumi.com/registry/packages/neon) | `neon` | | [New Relic](https://www.pulumi.com/registry/packages/newrelic) | `newrelic` | | [NGINX Ingress Controller](https://www.pulumi.com/registry/packages/kubernetes-ingress-nginx/) | `kubernetes-ingress-nginx` | | [ngrok](https://www.pulumi.com/registry/packages/ngrok) | `@pierskarsenbarg/ngrok` | | [Nomad](https://www.pulumi.com/registry/packages/nomad) | `nomad` | | 
[NS1](https://www.pulumi.com/registry/packages/ns1) | `ns1` | | [Nuage](https://www.pulumi.com/registry/packages/nuage) | `nuage` | | [Nutanix](https://www.pulumi.com/registry/packages/nutanix) | `@pierskarsenbarg/nutanix` | | [Okta](https://www.pulumi.com/registry/packages/okta) | `okta` | | [OneLogin](https://www.pulumi.com/registry/packages/onelogin) | `onelogin` | | [OpenStack](https://www.pulumi.com/registry/packages/openstack) | `openstack` | | [Opsgenie](https://www.pulumi.com/registry/packages/opsgenie) | `opsgenie` | | [Oracle Cloud Infrastructure](https://www.pulumi.com/registry/packages/oci) | `oci` | | [OVHCloud](https://www.pulumi.com/registry/packages/ovh) | `@ovh-devrelteam/pulumi-ovh` | | [PagerDuty](https://www.pulumi.com/registry/packages/pagerduty) | `pagerduty` | | [Pinecone](https://www.pulumi.com/registry/packages/pinecone) | `@pinecone-database/pulumi` | | [PlanetScale](https://github.com/sst/pulumi-planetscale) | `planetscale` | | [Port](https://www.pulumi.com/registry/packages/port) | `@port-labs/port` | | [PostgreSQL](https://www.pulumi.com/registry/packages/postgresql) | `postgresql` | | [Prodvana](https://www.pulumi.com/registry/packages/prodvana) | `@prodvana/pulumi-prodvana` | | [Proxmox Virtual Environment](https://www.pulumi.com/registry/packages/proxmoxve) | `@muhlba91/pulumi-proxmoxve` | | [Pulumi Cloud](https://www.pulumi.com/registry/packages/pulumiservice) | `pulumiservice` | | [purrl](https://www.pulumi.com/registry/packages/purrl) | `@pulumiverse/purrl` | | [Qovery](https://www.pulumi.com/registry/packages/qovery) | `@ediri/qovery` | | [RabbitMQ](https://www.pulumi.com/registry/packages/rabbitmq) | `rabbitmq` | | [Rancher2](https://www.pulumi.com/registry/packages/rancher2) | `rancher2` | | [Railway](https://registry.terraform.io/providers/terraform-community-providers/railway/latest) | `railway` | | [random](https://www.pulumi.com/registry/packages/random) | `random` | | [Redis 
Cloud](https://www.pulumi.com/registry/packages/rediscloud) | `@rediscloud/pulumi-rediscloud` | | [Rootly](https://www.pulumi.com/registry/packages/rootly) | `@rootly/pulumi` | | [Runpod](https://www.pulumi.com/registry/packages/runpod) | `@runpod-infra/pulumi` | | [Scaleway](https://www.pulumi.com/registry/packages/scaleway) | `@pulumiverse/scaleway` | | [Sentry](https://www.pulumi.com/registry/packages/sentry) | `@pulumiverse/sentry` | | [SignalFx](https://www.pulumi.com/registry/packages/signalfx) | `signalfx` | | [Slack](https://www.pulumi.com/registry/packages/slack) | `slack` | | [Snowflake](https://www.pulumi.com/registry/packages/snowflake) | `snowflake` | | [Splight](https://www.pulumi.com/registry/packages/splight) | `@splightplatform/pulumi-splight` | | [Splunk](https://www.pulumi.com/registry/packages/splunk) | `splunk` | | [Spotinst](https://www.pulumi.com/registry/packages/spotinst) | `spotinst` | | [Statuscake](https://www.pulumi.com/registry/packages/statuscake) | `@pulumiverse/statuscake` | | [Strata Cloud Manager](https://www.pulumi.com/registry/packages/scm) | `scm` | | [Stripe](https://github.com/georgegebbett/pulumi-stripe) | `stripe` | | [Stripe Official](https://github.com/stripe/terraform-provider-stripe) | `stripe-official` | | [StrongDM](https://www.pulumi.com/registry/packages/sdm/) | `@pierskarsenbarg/sdm` | | [Sumo Logic](https://www.pulumi.com/registry/packages/sumologic) | `sumologic` | | [Supabase](https://github.com/sst/pulumi-supabase) | `supabase` | | [Symbiosis](https://www.pulumi.com/registry/packages/symbiosis) | `@symbiosis-cloud/symbiosis-pulumi` | | [Synced Folder](https://www.pulumi.com/registry/packages/synced-folder) | `synced-folder` | | [Tailscale](https://www.pulumi.com/registry/packages/tailscale) | `tailscale` | | [Talos Linux](https://www.pulumi.com/registry/packages/talos) | `@pulumiverse/talos` | | [Time](https://www.pulumi.com/registry/packages/time) | `@pulumiverse/time` | | 
[TLS](https://www.pulumi.com/registry/packages/tls) | `tls` | | [Twingate](https://www.pulumi.com/registry/packages/twingate) | `@twingate/pulumi-twingate` | | [Unifi](https://www.pulumi.com/registry/packages/unifi) | `@pulumiverse/unifi` | | [Upstash](https://www.pulumi.com/registry/packages/upstash) | `@upstash/pulumi` | | [Venafi](https://www.pulumi.com/registry/packages/venafi) | `venafi` | | [Vercel](https://www.pulumi.com/registry/packages/vercel) | `vercel` | | [VMware vSphere](https://www.pulumi.com/registry/packages/vsphere) | `vsphere` | | [Volcengine](https://www.pulumi.com/registry/packages/volcengine) | `@volcengine/pulumi` | | [Vultr](https://www.pulumi.com/registry/packages/vultr) | `@ediri/vultr` | | [Wavefront](https://www.pulumi.com/registry/packages/wavefront) | `wavefront` | | [Yandex](https://www.pulumi.com/registry/packages/yandex) | `yandex` | | [Zitadel](https://www.pulumi.com/registry/packages/zitadel) | `@pulumiverse/zitadel` | | [Zscaler Internet Access](https://www.pulumi.com/registry/packages/zia/) | `@bdzscaler/pulumi-zia` | | [Zscaler Private Access](https://www.pulumi.com/registry/packages/zpa/) | `@bdzscaler/pulumi-zpa` | Any missing providers or typos? Feel free to _Edit this page_ and submit a PR. --- ## Set up AWS Accounts A simple and secure guide to setting up AWS accounts. https://sst.dev/docs/aws-accounts Unsurprisingly, there are multiple ways to set up AWS accounts. And unfortunately, the default process misses a few things that would make this a lot easier for your team. :::tip If you are using IAM users or have credential files, this guide is for you. ::: --- The ideal setup is to have multiple AWS accounts grouped under a single AWS Organization, with your team authenticating through SSO to access the Console and the CLI. While this sounds complicated, it's a one-time process that you'll never have to think about again. Let's get started. 
--- ## Management account The first step is to [**create a management account**](https://portal.aws.amazon.com/billing/signup?type=enterprise#/start/email). 1. Start by using a **work email alias**. For example, `aws@acme.com`. This'll forward to your real email. It allows you to give other people access to it in the future. 2. The **account name** should be your company name, for example `acme`. 3. Enter your **billing info** and **confirm your identity**. 4. Choose **basic support**. You can upgrade this later. Once you're done, you should be able to log in and access the AWS Console. These credentials are overly powerful. You should rarely ever need them again. Feel free to throw away the password after completing this guide. You can always do a password reset if it's needed. :::tip The Management account is what you'll use to manage the users in your organization. ::: This account won't have anything deployed to it besides the IAM Identity Center, which is how we'll manage the users in our organization. --- ### AWS Organization Next, we'll create an organization. This allows you to manage multiple AWS accounts together. We'll need this as we create separate accounts for dev and prod. Search **AWS Organization** in the search bar to go to its dashboard and click **Create an organization**. You'll see that the management account is already in the organization. --- ### IAM Identity Center Now let's enable IAM Identity Center. 1. Search **IAM Identity Center** and go to its dashboard. Click **Enable**. :::note Make a note of the region you're in for the IAM Identity Center. ::: This'll be created in one region and you cannot change it. However, it doesn't matter too much which one it is. You'll just need to navigate to that region when you are trying to find this again. 2. Click **Enable**. This will give your organization a unique URL to log in. :::note Make a note of the URL that IAM Identity Center gives you. 
::: This is auto-generated but you can click **Customize** to select a unique name. You'll want to bookmark this for later. --- ## Root user Now we'll create a root user in IAM Identity Center. 1. Click **Users** on the left and then **Add user** to create a user for yourself. Make your username your work email, for example `dax@acme.com`, and fill out the required fields. 2. Skip adding the user to groups. 3. Finish creating the user. We've created the user. Now let's give it access to our management account. --- ### User access Go to the left panel and click **AWS Accounts**. 1. Select your management account. It should be tagged as such. And click **Assign users or groups**. 2. Select the Users tab, make sure your user is selected and hit **Next**. 3. Now we'll need to create a new permission set. We only need to do this once. Click **Create permission set**. 4. In the new tab select **Predefined permission set** and **AdministratorAccess**. Click **Next**. 5. Increase the session duration to 12 hours. This is the most convenient option. Click **Next** and then **Create**. 6. Close the tab, return to the previous one and hit the refresh icon. Select **AdministratorAccess** and click **Next** and then **Submit**. This might seem complicated but all we did was grant the user the _AdministratorAccess_ role in the management account. Now you're ready to log in to your user account. --- ### Login Check your email and you should have an invite. 1. **Accept the invite** and **create a new password**. Be sure to save it in your password manager. This is important because this account has access to the management account. :::note If you already have an SSO provider, like Google, you can allow your team to _Login with Google_. Let us know if you'd like us to document that as well. ::: 2. Sign in and you should see your organization with a **list of accounts** below it. You currently only have access to the management account we created above. 
So click it and you should see the AdministratorAccess role. 3. Click **Management Console** to log in to the AWS Console. You're now done setting up the root user account! --- ## Dev and prod accounts As mentioned earlier, your management account isn't meant to deploy any resources. It's meant to manage users. So a good initial setup is to create separate `dev` and `production` accounts. This helps create some isolation. The `dev` account will be shared between your team while the `production` account is just for production. You can also create a staging account or an account per developer but we'll start simple. --- Navigate back to **AWS Organizations** by searching for it. 1. Click **Add an AWS account**. 2. For the account name, append `-dev` to whatever you called your management account. For example, `acme-dev`. 3. For the email address, choose a new email alias. If you're using Google for email, you can do `aws+dev@acme.com` and it'll still go to your `aws@acme.com` email. 4. Click **Create AWS account**. **Repeat this step** and create the `-production` account as well. So you should now have an `acme-dev` and an `acme-production`. It'll take a few seconds to finish creating. --- ### Assign users Once it's done, head over to **IAM Identity Center** to grant your user access to these accounts. 1. Select the **AWS Accounts** tab on the left. 2. Select your newly created `acme-dev` and `acme-production` accounts and click **Assign users or groups**. 3. In the **Users** tab select your user and click **Next**. 4. Select the **AdministratorAccess** permission set and click **Next** and **Submit**. Now you can go back to your SSO URL. You should now see three different accounts and you'll be able to log in to whichever one you want. :::tip You can find your SSO URL by clicking **Dashboard** on the left. ::: You can create additional users and add them to these accounts using the steps above. You can reuse the role or create one with stricter permissions. 
Next, let's configure the AWS CLI and SST to use this setup. --- ## Configure AWS CLI The great thing about this setup is that you no longer need to generate AWS IAM credentials for your local machine; you can just use SSO. This is both simpler and more secure. :::tip You can [download](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) the AWS CLI from the AWS docs. ::: All you need is a single configuration file for the AWS CLI, SST, or any random scripts you want to run. And there will never be any long-lived credentials stored on your machine. --- 1. Add the following block to a `~/.aws/config` file. ```bash title="~/.aws/config" [sso-session acme] sso_start_url = https://acme.awsapps.com/start sso_region = us-east-1 ``` Make sure to replace the `sso_start_url` with your SSO URL that you bookmarked. And set the region where you created IAM Identity Center as the `sso_region`. 2. Add an entry for each environment, in this case `dev` and `production`. ```bash title="~/.aws/config" [profile acme-dev] sso_session = acme sso_account_id = <account-id> sso_role_name = AdministratorAccess region = us-east-1 [profile acme-production] sso_session = acme sso_account_id = <account-id> sso_role_name = AdministratorAccess region = us-east-1 ``` You can find the account ID from your SSO login URL. If you expand the account, you'll see it listed with a `#` sign. The region specified in the config is the default region that the CLI will use when one isn't specified. :::tip With this setup you won't need to save your AWS credentials locally. ::: And the role name is the one we created above. If you created a different role, you'd need to change this. 3. Now you can log in by running: ```bash aws sso login --sso-session=acme ``` This'll open your browser and prompt you to allow access. The sessions will last 12 hours, as we configured previously. If you're using Windows with WSL, you can add a script to open the login browser of the host machine.
```sh title="login.sh" #!/bin/bash if grep -q WSL /proc/version; then export BROWSER=wslview fi aws sso login --sso-session=acme ```
4. Optionally, for Node.js projects, it can be helpful to add this to a `package.json` script so your team can just run `npm run sso` to login. ```json title="package.json" "scripts": { "sso": "aws sso login --sso-session=acme" } ``` 5. Finally, test that everything is working with a simple CLI command that targets your dev account. ```bash aws sts get-caller-identity --profile=acme-dev ``` Next, let's configure SST to use these profiles. --- ## Configure SST In your `sst.config.ts` file check which stage you are deploying to and return the right profile. ```ts title="sst.config.ts" {8} app(input) { return { name: "my-sst-app", home: "aws", providers: { aws: { profile: input.stage === "production" ? "acme-production" : "acme-dev" } } }; }, async run() { // Your resources } }); ``` This will use the `acme-production` profile just for production and use `acme-dev` for everything else. :::note The `AWS_PROFILE` environment variable will override the profile set in your `sst.config.ts`. ::: If you've configured AWS credentials previously through the `AWS_PROFILE` environment variable or through a `.env` file, that will override the profile set in your `sst.config.ts`. So make sure to remove any references to `AWS_PROFILE`. Now to deploy to your production account you just pass in the stage. ```bash sst deploy --stage production ``` And we are done! --- To summarize, here's what we've created: 1. A management account to manage the users in our organization. 2. A root user that can login to the management account. 3. Dev and production accounts for our apps. 4. Finally, root user access to both accounts. You can extend this setup by adding more users, creating additional accounts, or modifying the roles you grant. --- ## Basics The basics of building apps with SST. https://sst.dev/docs/basics The main difference between working with SST and any other framework is that everything related to your app is **defined in code**. 1. 
SST **automatically manages** the resources in AWS (or any provider) defined in your app. 2. You don't need to **make any manual changes** to them in your cloud provider's console. This idea of _automating everything_ can feel unfamiliar at first. So let's go through the basics and look at some core concepts. --- ## Setup Before you start working on your app, there are a couple of things we recommend setting up. Starting with your code editor. --- ### Editor SST apps are configured through a file called `sst.config.ts`. It's a TypeScript file and it can work with your editor to type check and autocomplete your code. It can also show you inline help. **Type check** ![Editor typecheck](../../../assets/docs/basics/editor-typecheck.png) **Autocomplete** ![Editor autocomplete](../../../assets/docs/basics/editor-autocomplete.png) **Inline help** ![Editor help](../../../assets/docs/basics/editor-help.png) Most modern editors, VS Code and Neovim included, should do the above automatically. But you should start by making sure that your editor has been set up. --- ### Credentials SST apps are deployed to your infrastructure. So whether you are deploying to AWS, or Cloudflare, or any other cloud provider, make sure you have their credentials configured locally. Learn more about how to [configure your AWS credentials](/docs/iam-credentials/). --- ### Console SST also comes with a [Console](/docs/console/). It shows you all your apps and the resources in them, lets you configure _git push to deploy_, and sends you alerts when there are any issues. While it is optional, we recommend creating a free account and linking it to your AWS account. Learn more about the [SST Console](/docs/console/). --- ## sst.config.ts Now that you are ready to work on your app and your `sst.config.ts`, let's take a look at what it means to _configure everything in code_. --- ### IaC Infrastructure as Code or _IaC_ is a process of automating the management of infrastructure through code. 
Rather than doing it manually through a console or user interface. :::tip You won't need to use the AWS Console to configure your SST app. ::: Say your app has a Function and an S3 bucket; you would define that in your `sst.config.ts`. ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.Function("MyFunction", { handler: "index.handler" }); ``` You won't need to go to the Lambda and S3 parts of the AWS Console. SST will do the work for you. In the above snippet, `sst.aws.Function` and `sst.aws.Bucket` are called Components. Learn more about [Components](/docs/components/). --- ### Resources The reason this works is that when SST deploys the above app, it'll convert it into a set of commands. These then call AWS with your credentials to create the underlying resources. So the above components get transformed into a list of low level resources in AWS. :::tip You are not directly responsible for the low level resources that SST creates. ::: If you log in to your AWS Console you can see what gets created internally. While these might look a little intimidating, they are all managed by SST and you are not directly responsible for them. SST will create, track, and remove all the low level resources defined in your app. --- #### Exceptions There are some exceptions to this. You might have resources that are not defined in your SST config. These could include the following resources: 1. **Previously created** You might've previously created some resources by hand that you would like to use in your new SST app. You can import these resources into your app. Moving forward, SST will manage them for you. Learn more about [importing resources](/docs/import-resources/). 2. **Externally managed** You might have resources that are managed by a different team. In this case, you don't want SST to manage them. You simply want to reference them in your app. Learn more about [referencing resources](/docs/reference-resources/). 3. 
**Shared across stages** If you are creating preview environments, you might not want to make copies of certain resources, like your database. You might want to share these across stages. Learn more about [sharing across stages](/docs/share-across-stages/). --- ### Linking Let's say you wanted your function from the above example to upload a file to the S3 bucket. You'd need to hardcode the name of the bucket in your function code. SST avoids this by allowing you to **link resources** together. ```ts title="sst.config.ts" {3} new sst.aws.Function("MyFunction", { handler: "index.handler", link: [bucket] }); ``` Now in your function you can access the bucket using SST's [SDK](/docs/reference/sdk/). ```ts title="index.ts" "Resource.MyBucket.name" import { Resource } from "sst"; console.log(Resource.MyBucket.name); ``` There's a difference between the two snippets above. One is your **infrastructure code** and the other is your **runtime code**. One runs while creating your app, while the other runs when your users use your app. :::tip You can access your infrastructure in your runtime using the SST SDK. ::: The _link_ allows you to access your **infrastructure** in your **runtime code**. Learn more about [resource linking](/docs/linking/). --- ### State When you make a change to your `sst.config.ts`, like we did above, SST only deploys the changes. ```diff lang="ts" title="sst.config.ts" new sst.aws.Function("MyFunction", { handler: "index.handler", + link: [bucket] }); ``` It does this by maintaining a _state_ of your app. The state is a tree of all the resources in your app and all their properties. The state is stored in a file locally and backed up to a bucket in your AWS (or Cloudflare) account. :::tip You can view the state of your app and its history in the SST Console. ::: A word of caution: if for some reason you delete your state locally and in your provider, SST won't be able to manage the resources anymore. To SST, this app won't exist anymore. 
:::danger
Do not delete the bucket that stores your app's state.
:::

To fix this, you'll have to manually re-import all those resources back into your app. Learn more about [how state works](/docs/state/).

---

#### Out of sync

We mentioned above that you are not directly responsible for the low-level resources that SST creates. This isn't just a convenience; you should not modify these resources yourself.

:::caution
Do not manually make changes to the low-level resources that SST creates.
:::

This is because SST only applies the diffs when your `sst.config.ts` changes. So if you manually change the resources, they'll be out of sync with your state. You can fix some of this by running [`sst refresh`](/docs/reference/cli/#refresh), but in general you should avoid making manual changes.

---

## App

Now that we know how IaC works, a lot of the workflow and concepts will begin to make sense. Let's start with the key parts of an app.

---

### Name

Every app has a name. The name is used as a namespace. It allows SST to deploy multiple apps to the same cloud provider account, while isolating the resources in each app.

If you change the name of your app in your `sst.config.ts`, SST will create a completely new set of resources for it. It **does not** rename the resources.

:::caution
To rename an app, you'll need to remove the resources from the old one and deploy to the new one.
:::

So if you:

1. Create an app with the name `my-sst-app` in your `sst.config.ts` and deploy it.
2. Rename the app in your `sst.config.ts` to `my-new-sst-app` and deploy again.

You will now have two apps in your AWS account called `my-sst-app` and `my-new-sst-app`.

If you want to rename your app, you'll need to [remove](/docs/basics/#remove) the old app first and then deploy a new one with the new name.

---

### Stage

An app can have multiple stages. A stage is like an _environment_; it's a separate version of your app. For example, you might have a dev stage, a production stage, or a personal stage.
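As a sketch, you get a new isolated copy of your app just by deploying with a different `--stage` value; the stage names here are illustrative.

```shell
# Each stage gets its own namespaced set of resources
sst deploy --stage dev
sst deploy --stage production
```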
It's useful to have multiple versions of your app because it lets you make changes and test in one version while your users continue to use the other.

You create a new stage by deploying to it with the `--stage <stage>` CLI option. The stage name is used as a namespace to create a new version of your app. It's similar to how the app name is used as a namespace.

:::caution
To rename a stage, you'll need to [remove](/docs/basics/#remove) the resources from the old one and deploy to the new one.
:::

Similar to app names, stages cannot be renamed. So if you wanted to rename a `development` stage to `dev`, you'll need to first remove `development` and then deploy `dev`.

---

#### Personal stages

By default, if no stage is passed in, SST creates a stage using the username on your computer. This is called a **personal stage**.

Personal stages are typically used in _dev_ mode and every developer on your team should use their own personal stage. We'll look at this in detail below.

---

### Region

Most resources that are created in AWS (and many other providers) belong to a specific region. So when you deploy your app, it's deployed to a specific region.

:::caution
To switch regions, you'll need to [remove](/docs/basics/#remove) the resources from one region and deploy to the new one.
:::

For AWS, the region comes from your AWS credentials but it can be specified in the `sst.config.ts`.

```ts title="sst.config.ts" {5-7}
export default $config({
  app(input) {
    return {
      name: "my-sst-app",
      providers: {
        aws: { region: "us-west-2" }
      }
    };
  },
  // ...
});
```

Similar to the app and stage, if you want to switch regions, you'll need to remove your app in the old region and deploy it to the new one.

---

## Commands

With this background, let's look at the workflow of building an SST app.

Let's say you've created an app by running:

```bash
sst init
```

---

### Dev

To start with, you'll run your app in dev.

```bash
sst dev
```

This deploys your app to your _personal_ stage in _dev mode_.
It brings up a multiplexer that deploys your app, runs your functions, creates a tunnel, and starts your frontend and container services. It deploys your app a little differently and is optimized for local development.

1. It runs the functions in your app [_Live_](/docs/live/) by deploying a **_stub_ version**. These proxy any requests to your local machine.
2. It **does not deploy** your frontends or container services. Instead, it starts them locally.
3. It also creates a [_tunnel_](/docs/reference/cli#tunnel) that allows them to connect to any resources that are deployed in a VPC.

:::note
Only use `sst dev` in your personal stage.
:::

For this reason, we recommend only using your personal stage for local development, and deploying to a separate stage when you want to share your app with your users.

Learn more about [`sst dev`](/docs/reference/cli/#dev).

---

### Deploy

Once you are ready to go to production you can run:

```bash
sst deploy --stage production
```

You can use any stage name for production here.

---

### Remove

If you want to remove your app and all the resources in it, you can run:

```bash
sst remove --stage <stage>
```

:::caution
Be careful while running `sst remove` since it permanently removes all the resources from your AWS (or cloud provider) account.
:::

To prevent accidental removal, our template `sst.config.ts` comes with the following.

```ts title="sst.config.ts"
removal: input?.stage === "production" ? "retain" : "remove",
```

This tells SST that if the stage is called `production`, then on remove, retain critical resources like buckets and databases. This should avoid any accidental data loss.

Learn more about [removal policies](/docs/reference/config/#removal).

---

## With a team

This workflow really shines when working with a team. Here's what it looks like with a basic git workflow.

1. Every developer on the team uses `sst dev` to work in their own isolated personal stage.
2. You commit your changes to a branch called `dev`.
3. Any changes to the `dev` branch are auto-deployed using `sst deploy --stage dev`.
4. Your team tests changes made to the `dev` stage of your app.
5. If they look good, `dev` is merged into a branch called `production`.
6. And any changes to the `production` branch are auto-deployed to the `production` stage with `sst deploy --stage production`.

In this setup, you have a separate stage per developer, a _dev_ stage for testing, and a _production_ stage.

---

### Autodeploy

To have a branch automatically deploy to a stage when commits are pushed to it, you can configure GitHub Actions. Or you can connect your repo to the SST Console and it'll auto-deploy your app for you.

![SST Console Autodeploy](../../../assets/docs/basics/sst-console-autodeploy.png)

Learn more about [Autodeploy](/docs/console/#autodeploy).

---

### PR environments

You can also set it up to create preview environments. So when a pull request (say PR#12) is created, you auto-deploy a new stage using `sst deploy --stage pr-12`. And once the PR is merged, the preview environment or stage gets removed using `sst remove --stage pr-12`.

Just like above, you can configure this using GitHub Actions or let the SST Console do it for you.

---

And there you have it. You are now ready to build apps the _SST way_.

---

## Cloudflare

Learn how to use SST with Cloudflare

https://sst.dev/docs/cloudflare

[Cloudflare](https://cloudflare.com) lets you deploy apps with Workers and connect services like D1, R2, and DNS. This guide covers how to set it up with SST.

---

## Install

Add the Cloudflare provider to your SST app. Learn more about [providers](/docs/providers).

```bash
sst add cloudflare
```

This adds the provider to your `sst.config.ts`.
```ts title="sst.config.ts" {3}
{
  providers: {
    cloudflare: "5.37.1",
  },
}
```

If Cloudflare should store your app state, set [`home`](/docs/reference/config/#home) to `"cloudflare"`. This is useful in setups where you plan to use Cloudflare as your main cloud provider.

---

## Credentials

You can create an account token in the Cloudflare dashboard under [Manage Account > API Tokens](https://dash.cloudflare.com/profile/api-tokens). Start with the **Edit Cloudflare Workers** template and add these permissions:

- *Account - D1 - Edit*
- *Zone - DNS - Edit*

:::tip
If your app uses other Cloudflare products, add the permissions those features need.
:::

Give the token access to the account your application will be deploying to. If you are using Cloudflare DNS with SST, include the zones that SST should update.

Set `CLOUDFLARE_DEFAULT_ACCOUNT_ID` to the Cloudflare account ID that SST should use. If you leave it unset, SST falls back to the first account that Cloudflare returns for that token.

Then set these variables in your shell, `.env`, or CI environment before you deploy:

```bash
export CLOUDFLARE_API_TOKEN=<your-api-token>
export CLOUDFLARE_DEFAULT_ACCOUNT_ID=<your-account-id>
```

---

## Components

SST includes Cloudflare components for Workers, storage, queues, cron jobs, AI bindings, and more.

### Worker

Create a Cloudflare Worker and enable a URL so it can handle HTTP requests.

```ts title="sst.config.ts"
const worker = new sst.cloudflare.Worker("MyWorker", {
  handler: "index.ts",
  url: true,
});

return {
  url: worker.url,
};
```

Use the [`Worker`](/docs/component/cloudflare/worker/) component to build APIs and edge handlers on Cloudflare.

### Storage

Create Cloudflare storage resources and link them to your Worker. For example, here's a D1 database.

```ts title="sst.config.ts"
const db = new sst.cloudflare.D1("MyDatabase");

new sst.cloudflare.Worker("MyWorker", {
  handler: "index.ts",
  link: [db],
  url: true,
});
```

Then access it in your handler through `Resource`.
```ts title="index.ts"
import { Resource } from "sst";

export default {
  async fetch() {
    const row = await Resource.MyDatabase.prepare(
      "SELECT id FROM todo ORDER BY id DESC LIMIT 1",
    ).first();
    return Response.json(row);
  },
};
```

The same pattern works with [`Bucket`](/docs/component/cloudflare/bucket/) for R2 and [`Kv`](/docs/component/cloudflare/kv/) for KV namespaces.

### Queue

Use [`Queue`](/docs/component/cloudflare/queue/) for async work.

```ts title="sst.config.ts"
const queue = new sst.cloudflare.Queue("MyQueue");
queue.subscribe("consumer.ts");

const producer = new sst.cloudflare.Worker("Producer", {
  handler: "producer.ts",
  link: [queue],
  url: true,
});
```

For scheduled work, use [`Cron`](/docs/component/cloudflare/cron/) to run a worker on a cron expression.

### More components

Browse the component docs for [`Worker`](/docs/component/cloudflare/worker/), [`Astro`](/docs/component/cloudflare/astro/), [`Bucket`](/docs/component/cloudflare/bucket/), [`D1`](/docs/component/cloudflare/d1/), [`Kv`](/docs/component/cloudflare/kv/), [`Queue`](/docs/component/cloudflare/queue/), [`Cron`](/docs/component/cloudflare/cron/), and [`Ai`](/docs/component/cloudflare/ai/).

If you are using Cloudflare DNS with SST, use [`sst.cloudflare.dns`](/docs/component/cloudflare/dns/) with [custom domains](/docs/custom-domains/).

---

## Cloudflare Vite plugin

Cloudflare SSR components like [Astro](/docs/component/cloudflare/astro/) or [TanStack Start](/docs/component/cloudflare/tanstack-start/) need the Cloudflare Vite plugin to work correctly.

:::caution
Do not include any Wrangler configuration files (`wrangler.toml`, `wrangler.json`) in your project. SST manages these for you and will generate them as needed.
:::

You need to configure the Vite plugin to use the SST-managed Wrangler config.

```ts title="vite.config.ts"
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      configPath: process.env.SST_WRANGLER_CONFIG,
    }),
  ],
});
```

The `SST_WRANGLER_CONFIG` environment variable is set by SST and ensures the plugin uses the generated Wrangler configuration.
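If the plugin isn't already part of your project, you can add it as a dev dependency. This sketch assumes the `@cloudflare/vite-plugin` package and npm as your package manager.

```shell
# Install the Cloudflare Vite plugin as a dev dependency
npm install -D @cloudflare/vite-plugin
```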
:::tip
There is an [open PR](https://github.com/cloudflare/workers-sdk/pull/13587) on the Cloudflare Workers SDK that will add automatic detection of the SST-managed Wrangler config. Once merged, you won't need to explicitly set `configPath`.
:::

---

## Examples

Check out the full examples:

- [Cloudflare Workers with SST](/docs/start/cloudflare/worker/)
- [Hono on Cloudflare with SST](/docs/start/cloudflare/hono/)
- [tRPC on Cloudflare with SST](/docs/start/cloudflare/trpc/)
- [Astro on Cloudflare](https://github.com/sst/sst/tree/dev/examples/cloudflare-astro)
- [Cloudflare D1](https://github.com/sst/sst/tree/dev/examples/cloudflare-d1)
- [Cloudflare KV](https://github.com/sst/sst/tree/dev/examples/cloudflare-kv)
- [Cloudflare Queue](https://github.com/sst/sst/tree/dev/examples/cloudflare-queue)
- [Cloudflare Cron](https://github.com/sst/sst/tree/dev/examples/cloudflare-cron)

---

## Common Errors

A list of CLI error messages and how to fix them.

https://sst.dev/docs/common-errors

Below is a collection of common errors you might encounter when using SST.

:::tip
The error messages in the CLI link to this doc.
:::

The error messages and descriptions in this doc are auto-generated from the CLI.

---

## TooManyCacheBehaviors

> TooManyCacheBehaviors: Your request contains more CacheBehaviors than are allowed per distribution

This error usually happens with the `SvelteKit`, `SolidStart`, `Nuxt`, and `Analog` components.

CloudFront distributions have a **limit of 25 cache behaviors** per distribution. Each top-level file or directory in your frontend app's asset directory creates a cache behavior. For example, in the case of SvelteKit, the static assets are in the `static/` directory. If you have a file and a directory in it, it'll create 2 cache behaviors.

```bash frame="none"
static/
├── icons/      # Cache behavior for /icons/*
└── logo.png    # Cache behavior for /logo.png
```

So if you have many of these at the top level, you'll hit the limit.
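As a quick check, you can count how many cache behaviors your static assets will create; this sketch assumes a SvelteKit-style `static/` directory.

```shell
# Each top-level entry in static/ becomes one CloudFront cache behavior
ls -1 static | wc -l
```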
You can request a limit increase through AWS Support. Alternatively, you can move some of these into subdirectories. For example, moving them into an `images/` directory will only create 1 cache behavior.

```bash frame="none"
static/
└── images/     # Cache behavior for /images/*
    ├── icons/
    └── logo.png
```

Learn more about these [CloudFront limits](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-limits.html#limits-web-distributions).

---

## Alb

Reference doc for the `sst.aws.Alb` component.

https://sst.dev/docs/component/aws/alb

The `Alb` component lets you create a standalone Application Load Balancer that can be shared across multiple services.

#### Create a shared ALB

```ts title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc");

const alb = new sst.aws.Alb("SharedAlb", {
  vpc,
  domain: "app.example.com",
  listeners: [
    { port: 80, protocol: "http" },
    { port: 443, protocol: "https" },
  ],
});
```

#### Attach services to the ALB

```ts title="sst.config.ts"
new sst.aws.Service("Api", {
  cluster,
  image: "api:latest",
  loadBalancer: {
    instance: alb,
    rules: [
      { listen: "443/https", forward: "8080/http", conditions: { path: "/api/*" }, priority: 100 },
    ],
  },
});
```

#### Reference an existing ALB

```ts title="sst.config.ts"
const alb = sst.aws.Alb.get("SharedAlb", "arn:aws:elasticloadbalancing:...");
```

---

## Constructor

```ts
new Alb(name, args, opts?)
```

#### Parameters

- `name` `string`
- `args` [`AlbArgs`](#albargs)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## AlbArgs

### domain?

**Type** `string | Object`

- [`aliases?`](#domain-aliases)
- [`cert?`](#domain-cert)
- [`dns?`](#domain-dns)
- [`name`](#domain-name)

Set a custom domain for the load balancer. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records.
```js
{
  domain: "example.com"
}
```

For domains on Cloudflare:

```js
{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}
```

aliases?

**Type** `string[]`

Alias domains that should also point to this load balancer.

cert?

**Type** `Input<string>`

The ARN of an ACM certificate that proves ownership of the domain. By default, a certificate is created and validated automatically.

dns?

**Type** `false | `[`sst.aws.dns`](/docs/component/aws/dns/)` | `[`sst.cloudflare.dns`](/docs/component/cloudflare/dns/)` | `[`sst.vercel.dns`](/docs/component/vercel/dns/)

**Default** `sst.aws.dns`

The DNS provider to use. Defaults to AWS Route 53. Set to `false` for manual DNS setup.

name

**Type** `string`

The custom domain name.

### listeners

**Type** [`AlbListenerArgs`](#alblistenerargs)`[]`

The listeners for the load balancer. Each entry creates a listener on the specified port and protocol.

```js
{
  listeners: [
    { port: 80, protocol: "http" },
    { port: 443, protocol: "https" }
  ]
}
```

### public?

**Type** `Input<boolean>`

**Default** `true`

Configure if the load balancer should be public (internet-facing) or private (internal).

When set to `false`, the load balancer endpoint will only be accessible within the VPC.

### transform?

**Type** `Object`

- [`listener?`](#transform-listener)
- [`loadBalancer?`](#transform-loadbalancer)
- [`securityGroup?`](#transform-securitygroup)

[Transform](/docs/components#transform) how this component creates its underlying resources.

listener?

**Type** [`ListenerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/listener/#inputs)` | (args: `[`ListenerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/listener/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the AWS Load Balancer listener resource.

loadBalancer?
**Type** [`LoadBalancerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/loadbalancer/#inputs)` | (args: `[`LoadBalancerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/loadbalancer/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the AWS Load Balancer resource.

securityGroup?

**Type** [`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)` | (args: `[`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the AWS Security Group resource for the Load Balancer.

### vpc

**Type** [`Vpc`](/docs/component/aws/vpc)` | Input<Object>`

- [`id`](#vpc-id)
- [`privateSubnets`](#vpc-privatesubnets)
- [`publicSubnets`](#vpc-publicsubnets)

The VPC to deploy the ALB in. Can be an SST `Vpc` component or a custom VPC configuration.

Using an SST Vpc component:

```js
{
  vpc: myVpc
}
```

Using a custom VPC:

```js
{
  vpc: {
    id: "vpc-0123456789abcdef0",
    publicSubnets: ["subnet-abc", "subnet-def"],
    privateSubnets: ["subnet-ghi", "subnet-jkl"]
  }
}
```

id

**Type** `Input<string>`

The VPC ID.

privateSubnets

**Type** `Input<Input<string>[]>`

The private subnet IDs.

publicSubnets

**Type** `Input<Input<string>[]>`

The public subnet IDs.

## Properties

### arn

**Type** `Output<string>`

The ARN of the load balancer.

### dnsName

**Type** `Output<string>`

The DNS name of the load balancer.

### nodes

**Type** `Object`

- [`listeners`](#nodes-listeners)
- [`loadBalancer`](#nodes-loadbalancer)
- [`securityGroup`](#nodes-securitygroup)

The underlying resources this component creates.

listeners

**Type** `Record<string, `[`Listener`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/listener/)`>`

The AWS Listener resources, keyed by `"PROTOCOLPORT"` (e.g. `"HTTPS443"`).

loadBalancer

**Type** [`LoadBalancer`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/loadbalancer/)

The AWS Load Balancer resource.
securityGroup

**Type** [`SecurityGroup`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/)

The AWS Security Group resource.

### securityGroupId

**Type** `Output<string>`

The security group ID of the load balancer.

### url

**Type** `Output<string>`

The URL of the load balancer. If a custom domain is set, this will be the custom domain URL (e.g. `https://app.example.com/`). Otherwise, it's the ALB's DNS name.

### zoneId

**Type** `Output<string>`

The zone ID of the load balancer.

## SDK

Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure.

---

### Links

This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links).

- `url` `string`

  The URL of the load balancer. If a custom domain is set, this will be the custom domain URL (e.g. `https://app.example.com/`). Otherwise, it's the ALB's DNS name.

## Methods

### getListener

```ts
getListener(protocol, port)
```

#### Parameters

- `protocol` `string`
- `port` `number`

**Returns** [`Listener`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/listener/)

Get a specific listener by protocol and port.

```ts
const listener = alb.getListener("https", 443);
```

### static get

```ts
Alb.get(name, loadBalancerArn, opts?)
```

#### Parameters

- `name` `string`

  The name of the component.

- `loadBalancerArn` `Input<string>`

  The ARN of the existing ALB.

- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

  Component resource options.

**Returns** [`Alb`](.)

Reference an existing ALB by its ARN.

```ts
const alb = sst.aws.Alb.get("SharedAlb", "arn:aws:elasticloadbalancing:...");
```

## AlbListenerArgs

### port

**Type** `number`

The port to listen on.

```js
{ port: 443 }
```

### protocol

**Type** `"https" | "http"`

The protocol to listen on. Only `http` and `https` are supported (ALB-only).

```js
{ protocol: "https" }
```

---

## Analog

Reference doc for the `sst.aws.Analog` component.
https://sst.dev/docs/component/aws/analog

The `Analog` component lets you deploy an [Analog](https://analogjs.org) app to AWS.

#### Minimal example

Deploy an Analog app that's in the project root.

```js title="sst.config.ts"
new sst.aws.Analog("MyWeb");
```

#### Change the path

Deploy the Analog app in the `my-analog-app/` directory.

```js {2} title="sst.config.ts"
new sst.aws.Analog("MyWeb", {
  path: "my-analog-app/"
});
```

#### Add a custom domain

Set a custom domain for your Analog app.

```js {2} title="sst.config.ts"
new sst.aws.Analog("MyWeb", {
  domain: "my-app.com"
});
```

#### Redirect www to apex domain

Redirect `www.my-app.com` to `my-app.com`.

```js {4} title="sst.config.ts"
new sst.aws.Analog("MyWeb", {
  domain: {
    name: "my-app.com",
    redirects: ["www.my-app.com"]
  }
});
```

#### Link resources

[Link resources](/docs/linking/) to your Analog app. This will grant permissions to the resources and allow you to access them in your app.

```ts {4} title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");

new sst.aws.Analog("MyWeb", {
  link: [bucket]
});
```

You can use the [SDK](/docs/reference/sdk/) to access the linked resources in your Analog app.

```ts title="src/app/app.config.ts"
import { Resource } from "sst";

console.log(Resource.MyBucket.name);
```

---

## Constructor

```ts
new Analog(name, args?, opts?)
```

#### Parameters

- `name` `string`
- `args?` [`AnalogArgs`](#analogargs)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## AnalogArgs

### assets?
**Type** `Input<Object>`

- [`fileOptions?`](#assets-fileoptions) `Input<Object[]>`
  - [`cacheControl?`](#assets-fileoptions-cachecontrol)
  - [`contentType?`](#assets-fileoptions-contenttype)
  - [`files`](#assets-fileoptions-files)
  - [`ignore?`](#assets-fileoptions-ignore)
- [`nonVersionedFilesCacheHeader?`](#assets-nonversionedfilescacheheader)
- [`purge?`](#assets-purge)
- [`textEncoding?`](#assets-textencoding)
- [`versionedFilesCacheHeader?`](#assets-versionedfilescacheheader)

Configure how the Analog app assets are uploaded to S3. By default, this is set to the following. Read more about these options below.

```js
{
  assets: {
    textEncoding: "utf-8",
    versionedFilesCacheHeader: "public,max-age=31536000,immutable",
    nonVersionedFilesCacheHeader: "public,max-age=0,s-maxage=86400,stale-while-revalidate=8640"
  }
}
```

fileOptions?

**Type** `Input<Object[]>`

Specify the `Content-Type` and `Cache-Control` headers for specific files. This allows you to override the default behavior for specific files using glob patterns.

Apply `Cache-Control` and `Content-Type` to all zip files.

```js
{
  assets: {
    fileOptions: [
      {
        files: "**/*.zip",
        contentType: "application/zip",
        cacheControl: "private,no-cache,no-store,must-revalidate"
      }
    ]
  }
}
```

Apply `Cache-Control` to all CSS and JS files except for CSS files with an `index-` prefix in the `main/` directory.

```js
{
  assets: {
    fileOptions: [
      {
        files: ["**/*.css", "**/*.js"],
        ignore: "main/index-*.css",
        cacheControl: "private,no-cache,no-store,must-revalidate"
      }
    ]
  }
}
```

cacheControl?

**Type** `string`

The `Cache-Control` header to apply to the matched files.

contentType?

**Type** `string`

The `Content-Type` header to apply to the matched files.

files

**Type** `string | string[]`

A glob pattern or array of glob patterns of files to apply these options to.

ignore?

**Type** `string | string[]`

A glob pattern or array of glob patterns of files to exclude from the ones matched by the `files` glob pattern.

nonVersionedFilesCacheHeader?
**Type** `Input<string>`

**Default** `"public,max-age=0,s-maxage=86400,stale-while-revalidate=8640"`

The `Cache-Control` header used for non-versioned files, like `index.html`. This is used by both CloudFront and the browser cache. The default is set to not cache on browsers, and cache for 1 day on CloudFront.

```js
{
  assets: {
    nonVersionedFilesCacheHeader: "public,max-age=0,no-cache"
  }
}
```

purge?

**Type** `Input<boolean>`

**Default** `false`

Configure if files from previous deployments should be purged from the bucket.

```js
{
  assets: {
    purge: false
  }
}
```

textEncoding?

**Type** `Input<"utf-8" | "iso-8859-1" | "windows-1252" | "ascii" | "none">`

**Default** `"utf-8"`

Character encoding for text-based assets, like HTML, CSS, and JS. This is used to set the `Content-Type` header when these files are served out. If set to `"none"`, then no charset will be returned in the header.

```js
{
  assets: {
    textEncoding: "iso-8859-1"
  }
}
```

versionedFilesCacheHeader?

**Type** `Input<string>`

**Default** `"public,max-age=31536000,immutable"`

The `Cache-Control` header used for versioned files, like `main-1234.css`. This is used by both CloudFront and the browser cache. The default `max-age` is set to 1 year.

```js
{
  assets: {
    versionedFilesCacheHeader: "public,max-age=31536000,immutable"
  }
}
```

### buildCommand?

**Type** `Input<string>`

**Default** `"npm run build"`

The command used internally to build your Analog app, if you want to use a different build command.

```js
{
  buildCommand: "yarn build"
}
```

### cachePolicy?

**Type** `Input<string>`

**Default** A new cache policy is created

Configure the Analog app to use an existing CloudFront cache policy.

:::note
CloudFront has a limit of 20 cache policies per account, though you can request a limit increase.
:::

By default, a new cache policy is created. This allows you to reuse an existing policy instead of creating a new one.

```js
{
  cachePolicy: "658327ea-f89d-4fab-a63d-7e88639e58f6"
}
```

### dev?
**Type** `false | Object`

- [`autostart?`](#dev-autostart)
- [`command?`](#dev-command)
- [`directory?`](#dev-directory)
- [`title?`](#dev-title)
- [`url?`](#dev-url)

Configure how this component works in `sst dev`.

:::note
In `sst dev` your Analog app is run in dev mode; it's not deployed.
:::

Instead of deploying your Analog app, this starts it in dev mode. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev).

To disable dev mode, pass in `false`.

autostart?

**Type** `Input<boolean>`

**Default** `true`

Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later.

command?

**Type** `Input<string>`

**Default** `"npm run dev"`

The command that `sst dev` runs to start this in dev mode.

directory?

**Type** `Input<string>`

**Default** Uses the `path`

Change the directory from where the `command` is run.

title?

**Type** `Input<string>`

The title of the tab in the multiplexer.

url?

**Type** `Input<string>`

**Default** `"http://url-unavailable-in-dev.mode"`

The `url` when this is running in dev mode. Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`.

### domain?

**Type** `Input<string | Object>`

- [`aliases?`](#domain-aliases)
- [`cert?`](#domain-cert)
- [`dns?`](#domain-dns)
- [`name`](#domain-name)
- [`redirects?`](#domain-redirects)

Set a custom domain for your Analog app. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records.

:::tip
Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers.
:::

By default this assumes the domain is hosted on Route 53.

```js
{
  domain: "example.com"
}
```

For domains hosted on Cloudflare:
```js
{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}
```

Specify a `www.` version of the custom domain.

```js
{
  domain: {
    name: "domain.com",
    redirects: ["www.domain.com"]
  }
}
```

aliases?

**Type** `Input<string[]>`

Alias domains that should be used. Unlike the `redirects` option, this keeps your visitors on the alias domain. So if your users visit `app2.domain.com`, they will stay on `app2.domain.com` in their browser.

```js {4}
{
  domain: {
    name: "app1.domain.com",
    aliases: ["app2.domain.com"]
  }
}
```

cert?

**Type** `Input<string>`

The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically.

The certificate will be created in the `us-east-1` region, as required by AWS CloudFront. If you are creating your own certificate, you must also create it in `us-east-1`.

:::tip
You need to pass in a `cert` for domains that are not hosted on supported `dns` providers.
:::

To manually set up a domain on an unsupported provider, you'll need to:

1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner.
2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`.
3. Add the DNS records in your provider to point to the CloudFront distribution URL.

```js
{
  domain: {
    name: "domain.com",
    dns: false,
    cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63"
  }
}
```

dns?

**Type** `Input<false | `[`sst.aws.dns`](/docs/component/aws/dns/)` | `[`sst.cloudflare.dns`](/docs/component/cloudflare/dns/)` | `[`sst.vercel.dns`](/docs/component/vercel/dns/)`>`

**Default** `sst.aws.dns`

The DNS provider to use for the domain. Defaults to AWS Route 53. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters.
For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`.

Specify the hosted zone ID for the Route 53 domain.

```js
{
  domain: {
    name: "example.com",
    dns: sst.aws.dns({
      zone: "Z2FDTNDATAQYW2"
    })
  }
}
```

Use a domain hosted on Cloudflare, needs the Cloudflare provider.

```js
{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}
```

Use a domain hosted on Vercel, needs the Vercel provider.

```js
{
  domain: {
    name: "example.com",
    dns: sst.vercel.dns()
  }
}
```

name

**Type** `Input<string>`

The custom domain you want to use.

```js
{
  domain: {
    name: "example.com"
  }
}
```

Can also include subdomains based on the current stage.

```js
{
  domain: {
    name: `${$app.stage}.example.com`
  }
}
```

redirects?

**Type** `Input<string[]>`

Alternate domains to be used. Visitors to the alternate domains will be redirected to the main `name`.

:::note
Unlike the `aliases` option, this will redirect visitors back to the main `name`.
:::

Use this to create a `www.` version of your domain and redirect visitors to the apex domain.

```js {4}
{
  domain: {
    name: "domain.com",
    redirects: ["www.domain.com"]
  }
}
```

### edge?

**Type** `Input<Object>`

- [`viewerRequest?`](#edge-viewerrequest) `Input<Object>`
  - [`injection`](#edge-viewerrequest-injection)
  - [`kvStore?`](#edge-viewerrequest-kvstore)
- [`viewerResponse?`](#edge-viewerresponse) `Input<Object>`
  - [`injection`](#edge-viewerresponse-injection)
  - [`kvStore?`](#edge-viewerresponse-kvstore)

Configure CloudFront Functions to customize the behavior of HTTP requests and responses at the edge.

viewerRequest?

**Type** `Input<Object>`

Configure the viewer request function.

The viewer request function can be used to modify incoming requests before they reach your origin server. For example, you can redirect users, rewrite URLs, or add headers.

injection

**Type** `Input<string>`

The code to inject into the viewer request function.
By default, a viewer request function is created to:

- Disable the default CloudFront URL if a custom domain is set
- Add the `x-forwarded-host` header
- Route asset requests to S3 (static files stored in the bucket)
- Route server requests to server functions (dynamic rendering)

The function manages routing by:

1. First checking if the requested path exists in S3 (with variations like adding index.html)
2. Serving a custom 404 page from S3 if configured and the path isn't found
3. Routing image optimization requests to the image optimizer function
4. Routing all other requests to the nearest server function

The given code will be injected at the beginning of this function.

```js
async function handler(event) {
  // User injected code
  // Default behavior code
  return event.request;
}
```

To add a custom header to all requests.

```js
{
  edge: {
    viewerRequest: {
      injection: `event.request.headers["x-foo"] = { value: "bar" };`
    }
  }
}
```

You can use this to add basic auth, [check out an example](/docs/examples/#aws-nextjs-basic-auth).

kvStore?

**Type** `Input`

The KV store to associate with the viewer request function.

```js
{
  edge: {
    viewerRequest: {
      kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store"
    }
  }
}
```

viewerResponse?

**Type** `Input`

Configure the viewer response function.

The viewer response function can be used to modify outgoing responses before they are sent to the client. For example, you can add security headers or change the response status code.

By default, no viewer response function is set. A new function will be created with the provided code.

injection

**Type** `Input`

The code to inject into the viewer response function.

```js
async function handler(event) {
  // User injected code
  return event.response;
}
```

To add a custom header to all responses.

```js
{
  edge: {
    viewerResponse: {
      injection: `event.response.headers["x-foo"] = { value: "bar" };`
    }
  }
}
```

kvStore?
**Type** `Input`

The KV store to associate with the viewer response function.

```js
{
  edge: {
    viewerResponse: {
      kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store"
    }
  }
}
```

### environment?

**Type** `Input>>`

Set [environment variables](https://analogjs.org/docs/guides/migrating#using-environment-variables) in your Analog app. These are made available:

1. In `ng build`, they are loaded into `process.env`.
2. Locally while running `sst dev ng serve`.

:::tip
You can also `link` resources to your Analog app and access them in a type-safe way with the [SDK](/docs/reference/sdk/). We recommend linking since it's more secure.
:::

Only variables prefixed with `VITE_` are available in the browser.

```js
{
  environment: {
    API_URL: api.url,
    // Accessible in the browser
    VITE_STRIPE_PUBLISHABLE_KEY: "pk_test_123"
  }
}
```

### invalidation?

**Type** `Input`

- [`paths?`](#invalidation-paths)
- [`wait?`](#invalidation-wait)

**Default** `{paths: "all", wait: false}`

Configure how the CloudFront cache invalidations are handled. This is run after your Analog app has been deployed.

:::tip
You get 1000 free invalidations per month. After that you pay $0.005 per invalidation path. [Read more here](https://aws.amazon.com/cloudfront/pricing/).
:::

Wait for all paths to be invalidated.

```js
{
  invalidation: {
    paths: "all",
    wait: true
  }
}
```

paths?

**Type** `Input`

**Default** `"all"`

The paths to invalidate. You can either pass in an array of glob patterns to invalidate specific files. Or you can use one of these built-in options:

- `all`: All files will be invalidated when any file changes
- `versioned`: Only versioned files will be invalidated when versioned files change

:::note
Each glob pattern counts as a single invalidation. Whereas invalidating everything with `/*` counts as just one invalidation.
:::

Invalidate the `index.html` and all files under the `products/` route.

```js
{
  invalidation: {
    paths: ["/index.html", "/products/*"]
  }
}
```

This counts as two invalidations.

wait?
**Type** `Input` **Default** `false` Configure if `sst deploy` should wait for the CloudFront cache invalidation to finish. :::tip For non-prod environments it might make sense to pass in `false`. ::: Waiting for this process to finish ensures that new content will be available after the deploy finishes. However, this process can sometimes take more than 5 mins. ```js { invalidation: { wait: true } } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your Analog app. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access it in your site using the [SDK](/docs/reference/sdk/). Takes a list of resources to link to the function. ```js { link: [bucket, stripeKey] } ``` ### path? **Type** `string` **Default** `"."` Path to the directory where your Analog app is located. This path is relative to your `sst.config.ts`. By default it assumes your Analog app is in the root of your SST app. If your Analog app is in a package in your monorepo. ```js { path: "packages/web" } ``` ### permissions? **Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that the [server function](#nodes-server) in your Analog app needs to access. These permissions are used to create the function's IAM role. :::tip If you `link` the function to a resource, the permissions to access it are automatically added. ::: Allow reading and writing to an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Perform all actions on an S3 bucket called `my-bucket`. 
```js
{
  permissions: [
    {
      actions: ["s3:*"],
      resources: ["arn:aws:s3:::my-bucket/*"]
    },
  ]
}
```

Grant permissions to access all resources.

```js
{
  permissions: [
    {
      actions: ["*"],
      resources: ["*"]
    },
  ]
}
```

actions

**Type** `string[]`

The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed.

```js
{
  actions: ["s3:*"]
}
```

conditions?

**Type** `Input[]>`

Configure specific conditions for when the policy is in effect.

```js
{
  conditions: [
    {
      test: "StringEquals",
      variable: "s3:x-amz-server-side-encryption",
      values: ["AES256"]
    },
    {
      test: "IpAddress",
      variable: "aws:SourceIp",
      values: ["10.0.0.0/16"]
    }
  ]
}
```

test

**Type** `Input`

Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate.

values

**Type** `Input[]>`

The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation.

variable

**Type** `Input`

Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name.

effect?

**Type** `"allow" | "deny"`

**Default** `"allow"`

Configures whether the permission is allowed or denied.

```ts
{
  effect: "deny"
}
```

resources

**Type** `Input[]>`

The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html).

```js
{
  resources: ["arn:aws:s3:::my-bucket/*"]
}
```

### protection?
**Type** `Input<"none" | "oac" | "oac-with-edge-signing" | Object>` - [`edgeFunction?`](#protection-edgefunction) `Object` - [`arn?`](#protection-edgefunction-arn) - [`memory?`](#protection-edgefunction-memory) - [`timeout?`](#protection-edgefunction-timeout) - [`mode`](#protection-mode) **Default** `"none"` The available options are: - `"none"`: Lambda URLs are publicly accessible. - `"oac"`: Lambda URLs protected by CloudFront Origin Access Control. Requires manual `x-amz-content-sha256` header for POST requests. Use when you control all POST requests. - `"oac-with-edge-signing"`: Full protection with automatic header signing via Lambda@Edge. Works with external webhooks and callbacks. Higher cost and latency but works out of the box. :::note When using `"oac-with-edge-signing"`, request bodies are limited to 1MB due to Lambda@Edge payload limits. For file uploads larger than 1MB, consider using presigned S3 URLs or the `"oac"` mode with manual header signing. ::: :::note When removing a stage that uses `"oac-with-edge-signing"`, deletion may take 5-10 minutes while AWS removes the Lambda@Edge replicated functions from all edge locations. ::: Configure Lambda function protection through CloudFront. ```js // No protection (default) { protection: "none" } ``` ```js // OAC protection, manual header signing required { protection: "oac" } ``` ```js // Full protection with automatic Lambda@Edge { protection: "oac-with-edge-signing" } ``` ```js // Custom Lambda@Edge configuration { protection: { mode: "oac-with-edge-signing", edgeFunction: { memory: "256 MB", timeout: "10 seconds" } } } ``` ```js // Use existing Lambda@Edge function { protection: { mode: "oac-with-edge-signing", edgeFunction: { arn: "arn:aws:lambda:us-east-1:123456789012:function:my-signing-function:1" } } } ``` edgeFunction? **Type** `Object` arn? **Type** `Input<"arn:aws:lambda:${string}">` Custom Lambda@Edge function ARN to use for request signing. 
If provided, this function will be used instead of creating a new one. Must be a qualified ARN (with version) and deployed in us-east-1. memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"128 MB"` Memory size for the auto-created Lambda@Edge function. Only used when arn is not provided. timeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` Timeout for the auto-created Lambda@Edge function. Only used when arn is not provided. mode **Type** `"oac-with-edge-signing"` ### regions? **Type** `Input` **Default** The default region of the SST app Regions that the server function will be deployed to. By default, the server function is deployed to a single region, this is the default region of your SST app. :::note This does not use Lambda@Edge, it deploys multiple Lambda functions instead. ::: To deploy it to multiple regions, you can pass in a list of regions. And any requests made will be routed to the nearest region based on the user's location. ```js { regions: ["us-east-1", "eu-west-1"] } ``` ### router? **Type** `Object` - [`connectionAttempts?`](#router-connectionattempts) - [`connectionTimeout?`](#router-connectiontimeout) - [`domain?`](#router-domain) - [`instance`](#router-instance) - [`keepAliveTimeout?`](#router-keepalivetimeout) - [`path?`](#router-path) - [`readTimeout?`](#router-readtimeout) - [`rewrite?`](#router-rewrite) `Input` - [`regex`](#router-rewrite-regex) - [`to`](#router-rewrite-to) Serve your Analog app through a `Router` instead of a standalone CloudFront distribution. By default, this component creates a new CloudFront distribution. But you might want to serve it through the distribution of your `Router` as a: - A path like `/docs` - A subdomain like `docs.example.com` - Or a combined pattern like `dev.example.com/docs` To serve your Analog app **from a path**, you'll need to configure the root domain in your `Router` component. 
```ts title="sst.config.ts" {2}
const router = new sst.aws.Router("Router", {
  domain: "example.com"
});
```

Now set the `router` and the `path`.

```ts {3,4}
{
  router: {
    instance: router,
    path: "/docs"
  }
}
```

You also need to set the `base` and `apiPrefix` options in your `vite.config.ts`. The `apiPrefix` value should not begin with a slash.

:::caution
If routing to a path, you need to set that as the base path in your Analog app as well.
:::

```js title="vite.config.ts" {5,8}
export default defineConfig(({ mode }) => ({
  plugins: [
    analog({
      // Does NOT start with a slash
      apiPrefix: "docs/api"
    })
  ],
  base: "/docs"
}));
```

To serve your Analog app **from a subdomain**, you'll need to configure the domain in your `Router` component to match both the root and the subdomain.

```ts title="sst.config.ts" {3,4}
const router = new sst.aws.Router("Router", {
  domain: {
    name: "example.com",
    aliases: ["*.example.com"]
  }
});
```

Now set the `domain` in the `router` prop.

```ts {4}
{
  router: {
    instance: router,
    domain: "docs.example.com"
  }
}
```

Finally, to serve your Analog app **from a combined pattern** like `dev.example.com/docs`, you'll need to configure the domain in your `Router` to match the subdomain.

```ts title="sst.config.ts" {3,4}
const router = new sst.aws.Router("Router", {
  domain: {
    name: "example.com",
    aliases: ["*.example.com"]
  }
});
```

And set the `domain` and the `path`.

```ts {4,5}
{
  router: {
    instance: router,
    domain: "dev.example.com",
    path: "/docs"
  }
}
```

Also, make sure to set the base path and API prefix in your `vite.config.ts`, like above.

connectionAttempts?

**Type** `Input`

**Default** `3`

The number of times that CloudFront attempts to connect to the origin. Must be between 1 and 3.

```js
router: {
  instance: router,
  connectionAttempts: 1
}
```

connectionTimeout?

**Type** `Input<"$\{number\} second" | "$\{number\} seconds">`

**Default** `"10 seconds"`

The number of seconds that CloudFront waits before timing out and closing the connection to the origin. Must be between 1 and 10 seconds.
```js
router: {
  instance: router,
  connectionTimeout: "3 seconds"
}
```

domain?

**Type** `Input`

Route requests matching a specific domain pattern.

You can serve your resource from a subdomain. For example, if you want to make it available at `https://dev.example.com`, set the `Router` to match the domain or a wildcard.

```ts {2} title="sst.config.ts"
const router = new sst.aws.Router("MyRouter", {
  domain: "*.example.com"
});
```

Then set the domain pattern.

```ts {3}
router: {
  instance: router,
  domain: "dev.example.com"
}
```

While `dev.example.com` matches `*.example.com`, something like `docs.dev.example.com` will not match `*.example.com`.

:::tip
Nested wildcard domain patterns are not supported.
:::

You'll need to add `*.dev.example.com` as an alias.

instance

**Type** `Input<`[`Router`](/docs/component/aws/router)`>`

The `Router` component to use for routing requests.

Let's say you have a Router component.

```ts title="sst.config.ts"
const router = new sst.aws.Router("MyRouter", {
  domain: "example.com"
});
```

You can attach it to the Router, instead of creating a standalone CloudFront distribution.

```ts
router: {
  instance: router
}
```

keepAliveTimeout?

**Type** `Input<"$\{number\} second" | "$\{number\} seconds">`

**Default** `"5 seconds"`

The number of seconds that CloudFront should try to maintain the connection to the destination after receiving the last packet of the response. Must be between 1 and 60 seconds.

```js
router: {
  instance: router,
  keepAliveTimeout: "10 seconds"
}
```

path?

**Type** `Input`

**Default** `"/"`

Route requests matching a specific path prefix.

```ts {3}
router: {
  instance: router,
  path: "/docs"
}
```

readTimeout?

**Type** `Input<"$\{number\} second" | "$\{number\} seconds">`

**Default** `"20 seconds"`

The number of seconds that CloudFront waits for a response after routing a request to the destination. Must be between 1 and 60 seconds. When compared to the `connectionTimeout`, this is the total time for the request.
```js router: { instance: router, readTimeout: "60 seconds" } ``` rewrite? **Type** `Input` Rewrite the request path. If the route path is `/api/*` and a request comes in for `/api/users/profile`, the request path the destination sees is `/api/users/profile`. If you want to serve the route from the root, you can rewrite the request path to `/users/profile`. ```js router: { instance: router, path: "/api", rewrite: { regex: "^/api/(.*)$", to: "/$1" } } ``` regex **Type** `Input` The regex to match the request path. to **Type** `Input` The replacement for the matched path. ### server? **Type** `Object` - [`architecture?`](#server-architecture) - [`install?`](#server-install) - [`layers?`](#server-layers) - [`loader?`](#server-loader) - [`memory?`](#server-memory) - [`runtime?`](#server-runtime) - [`timeout?`](#server-timeout) **Default** `{architecture: "x86_64", memory: "1024 MB"}` Configure the Lambda function used for server. architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The [architecture](https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html) of the server function. ```js { server: { architecture: "arm64" } } ``` install? **Type** `Input` Dependencies that need to be excluded from the server function package. Certain npm packages cannot be bundled using esbuild. This allows you to exclude them from the bundle. Instead they'll be moved into a `node_modules/` directory in the function package. :::tip If esbuild is giving you an error about a package, try adding it to the `install` list. ::: This will allow your functions to be able to use these dependencies when deployed. They just won't be tree shaken. You however still need to have them in your `package.json`. :::caution Packages listed here still need to be in your `package.json`. ::: Esbuild will ignore them while traversing the imports in your code. So these are the **package names as seen in the imports**. 
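For illustration, the effect of `install` can be sketched with a hypothetical helper (`splitImports` is not part of SST or esbuild; it just mirrors how an external-packages list splits your imports at bundle time):

```js
// Hypothetical sketch of what `install` does during bundling: packages
// in the list are treated as externals (left as runtime requires and
// copied into node_modules/ in the function package), the rest are
// bundled by esbuild.
function splitImports(imports, install) {
  return {
    bundled: imports.filter((name) => !install.includes(name)),
    external: imports.filter((name) => install.includes(name)),
  };
}
```

So with `install: ["sharp"]`, an app that imports `sharp` and some other package would bundle the latter and leave `sharp` external.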
It also works on packages that are not directly imported by your code. ```js { server: { install: ["sharp"] } } ``` layers? **Type** `Input[]>` A list of Lambda layer ARNs to add to the server function. ```js { server: { layers: ["arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1"] } } ``` loader? **Type** `Input>` Configure additional esbuild loaders for other file extensions. This is useful when your code is importing non-JS files like `.png`, `.css`, etc. ```js { server: { loader: { ".png": "file" } } } ``` memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"1024 MB"` The amount of memory allocated to the server function. Takes values between 128 MB and 10240 MB in 1 MB increments. ```js { server: { memory: "2048 MB" } } ``` runtime? **Type** `Input<"nodejs18.x" | "nodejs20.x" | "nodejs22.x" | "nodejs24.x">` **Default** `"nodejs24.x"` The runtime environment for the server function. ```js { server: { runtime: "nodejs24.x" } } ``` timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The maximum amount of time the server function can run. While Lambda supports timeouts up to 900 seconds, your requests are served through AWS CloudFront. And it has a default limit of 60 seconds. :::tip If you need a timeout longer than 60 seconds, you'll need to request a limit increase. ::: You can increase this to 180 seconds for your account by contacting AWS Support and [requesting a limit increase](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). ```js { server: { timeout: "50 seconds" } } ``` If you need a timeout longer than what CloudFront supports, we recommend using a separate Lambda `Function` with the `url` enabled instead. ### transform? 
**Type** `Object` - [`assets?`](#transform-assets) - [`cdn?`](#transform-cdn) - [`imageOptimizer?`](#transform-imageoptimizer) - [`server?`](#transform-server) [Transform](/docs/components#transform) how this component creates its underlying resources. assets? **Type** [`BucketArgs`](/docs/component/aws/bucket#bucketargs)` | (args: `[`BucketArgs`](/docs/component/aws/bucket#bucketargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Bucket resource used for uploading the assets. cdn? **Type** [`CdnArgs`](/docs/component/aws/cdn#cdnargs)` | (args: `[`CdnArgs`](/docs/component/aws/cdn#cdnargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudFront CDN resource. imageOptimizer? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the image optimizer Function resource. server? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the server Function resource. ### vpc? **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`privateSubnets`](#vpc-privatesubnets) - [`securityGroups`](#vpc-securitygroups) Configure the server function to connect to private subnets in a virtual private cloud or VPC. This allows it to access private resources. Create a `Vpc` component. ```js title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` Or reference an existing VPC. ```js title="sst.config.ts" const myVpc = sst.aws.Vpc.get("MyVpc", "vpc-12345678901234567"); ``` And pass it in. 
```js { vpc: myVpc } ``` privateSubnets **Type** `Input[]>` A list of VPC subnet IDs. securityGroups **Type** `Input[]>` A list of VPC security group IDs. ### warm? **Type** `Input` **Default** `0` The number of instances of the [server function](#nodes-server) to keep warm. This is useful for cases where you are experiencing long cold starts. The default is to not keep any instances warm. This works by starting a serverless cron job to make _n_ concurrent requests to the server function every few minutes. Where _n_ is the number of instances to keep warm. ## Properties ### nodes **Type** `Object` - [`assets`](#nodes-assets) - [`cdn`](#nodes-cdn) - [`server`](#nodes-server) The underlying [resources](/docs/components/#nodes) this component creates. assets **Type** `undefined | `[`Bucket`](/docs/component/aws/bucket) The Amazon S3 Bucket that stores the assets. cdn **Type** `undefined | `[`Cdn`](/docs/component/aws/cdn) The Amazon CloudFront CDN that serves the site. server **Type** `undefined | Output<`[`Function`](/docs/component/aws/function)`>` The AWS Lambda server function that renders the site. ### url **Type** `Output` The URL of the Analog app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `string` The URL of the Analog app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. --- ## ApiGatewayWebSocketRoute Reference doc for the `sst.aws.ApiGatewayWebSocketRoute` component. 
https://sst.dev/docs/component/aws/apigateway-websocket-route

The `ApiGatewayWebSocketRoute` component is internally used by the `ApiGatewayWebSocket` component to add routes to your [API Gateway WebSocket API](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api.html).

:::note
This component is not intended to be created directly.
:::

You'll find this component returned by the `route` method of the `ApiGatewayWebSocket` component.

---

## Constructor

```ts
new ApiGatewayWebSocketRoute(name, args, opts?)
```

#### Parameters

- `name` `string`
- `args` [`Args`](#args)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## Properties

### nodes

**Type** `Object`

- [`integration`](#nodes-integration)
- [`permission`](#nodes-permission)
- [`route`](#nodes-route)
- [`function`](#nodes-function)

The underlying [resources](/docs/components/#nodes) this component creates.

integration

**Type** [`Integration`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/)

The API Gateway WebSocket API integration.

permission

**Type** [`Permission`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/permission/)

The Lambda permission.

route

**Type** `Output<`[`Route`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/)`>`

The API Gateway WebSocket API route.

function

**Type** `Output<`[`Function`](/docs/component/aws/function)`>`

The Lambda function.

## Args

### api

**Type** `Input`

- [`executionArn`](#api-executionarn)
- [`id`](#api-id)
- [`name`](#api-name)

The API Gateway to use for the service.

executionArn

**Type** `Input`

The execution ARN of the API Gateway.

id

**Type** `Input`

The ID of the API Gateway.

name

**Type** `Input`

The name of the API Gateway.

### auth?

**Type** `Input`

- [`iam?`](#auth-iam)
- [`jwt?`](#auth-jwt) `Input`
  - [`authorizer`](#auth-jwt-authorizer)
  - [`scopes?`](#auth-jwt-scopes)
- [`lambda?`](#auth-lambda)

Enable auth for your WebSocket API.
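The custom Lambda authorizer option (`lambda?`, below) expects an authorizer function that returns an IAM policy for the `$connect` request. A hypothetical sketch, written as a plain synchronous function for brevity (real handlers are typically `async`, and the token scheme here is made up):

```js
// Hypothetical $connect authorizer sketch: allow the connection only if
// a `token` query string parameter matches an expected value. Real
// authorizers are registered with `addAuthorizer` and referenced by ID.
function connectAuthorizer(event, expectedToken) {
  const token = event.queryStringParameters && event.queryStringParameters.token;
  return {
    principalId: "user",
    policyDocument: {
      Version: "2012-10-17",
      Statement: [
        {
          Action: "execute-api:Invoke",
          Effect: token === expectedToken ? "Allow" : "Deny",
          Resource: event.methodArn,
        },
      ],
    },
  };
}
```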
By default, auth is disabled. ```js { auth: { iam: true } } ``` iam? **Type** `Input` Enable IAM authorization for a given API route. When IAM auth is enabled, clients need to use Signature Version 4 to sign their requests with their AWS credentials. jwt? **Type** `Input` Enable JWT or JSON Web Token authorization for a given API route. When JWT auth is enabled, clients need to include a valid JWT in their requests. You can configure JWT auth. ```js { auth: { jwt: { authorizer: myAuthorizer.id, scopes: ["read:profile", "write:profile"] } } } ``` Where `myAuthorizer` is created by calling the `addAuthorizer` method. authorizer **Type** `Input` Authorizer ID of the JWT authorizer. scopes? **Type** `Input[]>` Defines the permissions or access levels that the JWT grants. If the JWT does not have the required scope, the request is rejected. By default it does not require any scopes. lambda? **Type** `Input` Enable custom Lambda authorization for a given API route. Pass in the authorizer ID. ```js { auth: { lambda: myAuthorizer.id } } ``` Where `myAuthorizer` is created by calling the `addAuthorizer` method. ### handler **Type** `Input` The function that’ll be invoked. ### route **Type** `Input` The path for the route. ### transform? **Type** `Object` - [`integration?`](#transform-integration) - [`route?`](#transform-route) [Transform](/docs/components#transform) how this component creates its underlying resources. integration? **Type** [`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/#inputs)` | (args: `[`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway WebSocket API integration resource. route? 
**Type** [`RouteArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/#inputs)` | (args: `[`RouteArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway WebSocket API route resource. --- ## ApiGatewayWebSocket Reference doc for the `sst.aws.ApiGatewayWebSocket` component. https://sst.dev/docs/component/aws/apigateway-websocket The `ApiGatewayWebSocket` component lets you add an [Amazon API Gateway WebSocket API](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-websocket-api.html) to your app. #### Create the API ```ts title="sst.config.ts" const api = new sst.aws.ApiGatewayWebSocket("MyApi"); ``` #### Add a custom domain ```js {2} title="sst.config.ts" new sst.aws.ApiGatewayWebSocket("MyApi", { domain: "api.example.com" }); ``` #### Add routes ```ts title="sst.config.ts" api.route("$connect", "src/connect.handler"); api.route("$disconnect", "src/disconnect.handler"); api.route("$default", "src/default.handler"); api.route("sendMessage", "src/sendMessage.handler"); ``` --- ## Constructor ```ts new ApiGatewayWebSocket(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`ApiGatewayWebSocketArgs`](#apigatewaywebsocketargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## ApiGatewayWebSocketArgs ### accessLog? **Type** `Input` - [`retention?`](#accesslog-retention) **Default** `{retention: "1 month"}` Configure the [API Gateway logs](https://docs.aws.amazon.com/apigateway/latest/developerguide/view-cloudwatch-log-events-in-cloudwatch-console.html) in CloudWatch. By default, access logs are enabled and kept for 1 month. ```js { accessLog: { retention: "forever" } } ``` retention? 
**Type** `Input<"1 day" | "3 days" | "5 days" | "1 week" | "2 weeks" | "1 month" | "2 months" | "3 months" | "4 months" | "5 months" | "6 months" | "1 year" | "13 months" | "18 months" | "2 years" | "3 years" | "5 years" | "6 years" | "7 years" | "8 years" | "9 years" | "10 years" | "forever">` **Default** `1 month` The duration the API Gateway logs are kept in CloudWatch. ### domain? **Type** `Input` - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name?`](#domain-name) - [`nameId?`](#domain-nameid) - [`path?`](#domain-path) Set a custom domain for your WebSocket API. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers. ::: By default this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the API Gateway URL. 
```js
{
  domain: {
    name: "example.com",
    dns: false,
    cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63"
  }
}
```

dns?

**Type** `Input`

**Default** `sst.aws.dns`

The DNS provider to use for the domain. Defaults to AWS.

Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`.

Specify the hosted zone ID for the Route 53 domain.

```js
{
  domain: {
    name: "example.com",
    dns: sst.aws.dns({
      zone: "Z2FDTNDATAQYW2"
    })
  }
}
```

Use a domain hosted on Cloudflare, needs the Cloudflare provider.

```js
{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}
```

Use a domain hosted on Vercel, needs the Vercel provider.

```js
{
  domain: {
    name: "example.com",
    dns: sst.vercel.dns()
  }
}
```

name?

**Type** `Input`

The custom domain you want to use.

```js
{
  domain: {
    name: "example.com"
  }
}
```

Can also include subdomains based on the current stage.

```js
{
  domain: {
    name: `${$app.stage}.example.com`
  }
}
```

nameId?

**Type** `Input`

Use an existing API Gateway domain name.

By default, a new API Gateway domain name is created. If you'd like to use an existing domain name, set the `nameId` to the ID of the domain name and **do not** pass in `name`.

```js
{
  domain: {
    nameId: "example.com"
  }
}
```

path?

**Type** `Input`

The base mapping for the custom domain. This adds a suffix to the URL of the API.

Given the following base path and domain name.

```js
{
  domain: {
    name: "api.example.com",
    path: "v1"
  }
}
```

The full URL of the API will be `https://api.example.com/v1/`.

:::note
There's an extra trailing slash when a base path is set.
:::

By default there is no base path, so if the `name` is `api.example.com`, the full URL will be `https://api.example.com`.

### transform?
**Type** `Object` - [`accessLog?`](#transform-accesslog) - [`api?`](#transform-api) - [`domainName?`](#transform-domainname) - [`route?`](#transform-route) `Object` - [`args?`](#transform-route-args) - [`handler?`](#transform-route-handler) - [`stage?`](#transform-stage) [Transform](/docs/components#transform) how this component creates its underlying resources. accessLog? **Type** [`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)` | (args: `[`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudWatch LogGroup resource used for access logs. api? **Type** [`ApiArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/api/#inputs)` | (args: `[`ApiArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/api/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway WebSocket API resource. domainName? **Type** [`DomainNameArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/domainname/#inputs)` | (args: `[`DomainNameArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/domainname/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway WebSocket API domain name resource. route? **Type** `Object` Transform the routes. This can be used to customize the handler function and the arguments for each route. ```js { transform: { route: { handler: { link: [bucket, stripeKey] }, args: { auth: { iam: true } } } } } ``` args? 
**Type** [`ApiGatewayWebSocketRouteArgs`](#apigatewaywebsocketrouteargs)` | (args: `[`ApiGatewayWebSocketRouteArgs`](#apigatewaywebsocketrouteargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the arguments for the route. handler? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the handler function for the route. stage? **Type** [`StageArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/stage/#inputs)` | (args: `[`StageArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/stage/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway WebSocket API stage resource. ## Properties ### managementEndpoint **Type** `Output` The management endpoint for the API used by the API Gateway Management API client. This is useful for sending messages to connected clients. ### nodes **Type** `Object` - [`api`](#nodes-api) - [`logGroup`](#nodes-loggroup) - [`domainName`](#nodes-domainname) The underlying [resources](/docs/components/#nodes) this component creates. api **Type** [`Api`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/api/) The Amazon API Gateway V2 API. logGroup **Type** [`LogGroup`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/) The CloudWatch LogGroup for the access logs. domainName **Type** `Output<`[`DomainName`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/domainname/)`>` The API Gateway WebSocket API domain name. ### url **Type** `Output` The URL of the API. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated API Gateway URL.
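At runtime, your route handlers use this `managementEndpoint` with the API Gateway Management API client to push messages back to connected clients. As a minimal, self-contained sketch of the `$connect` and `$disconnect` handlers a WebSocket API typically needs — note this is illustrative, not part of the component: the event shape follows API Gateway's WebSocket payload, and the in-memory `Map` is a stand-in for a durable store such as a DynamoDB table.

```typescript
// Hypothetical handlers for the predefined $connect and $disconnect routes.
// A real app would persist connection IDs (e.g. in DynamoDB) so another
// invocation can later send to them via the AWS SDK's
// ApiGatewayManagementApiClient, pointed at the managementEndpoint above.
// The Map here only keeps the sketch self-contained.
const connections = new Map<string, number>();

interface WebSocketEvent {
  requestContext: { connectionId: string; routeKey: string };
}

export async function connect(event: WebSocketEvent) {
  // Remember the client so messages can be pushed to it later.
  connections.set(event.requestContext.connectionId, Date.now());
  return { statusCode: 200 };
}

export async function disconnect(event: WebSocketEvent) {
  // Forget the client once it goes away.
  connections.delete(event.requestContext.connectionId);
  return { statusCode: 200 };
}
```

Assuming this file lives at `src/ws.ts`, you'd wire it up with `api.route("$connect", "src/ws.connect")` and `api.route("$disconnect", "src/ws.disconnect")`.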
## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `managementEndpoint` `string` The management endpoint for the API used by the API Gateway Management API client. This is useful for sending messages to connected clients. - `url` `string` The URL of the API. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated API Gateway URL. ## Methods ### addAuthorizer ```ts addAuthorizer(name, args) ``` #### Parameters - `name` `string` The name of the authorizer. - `args` [`ApiGatewayWebSocketAuthorizerArgs`](#apigatewaywebsocketauthorizerargs) Configure the authorizer. **Returns** [`ApiGatewayV2Authorizer`](/docs/component/aws/apigatewayv2-authorizer) Add an authorizer to the API Gateway WebSocket API. Add a Lambda authorizer. ```js title="sst.config.ts" api.addAuthorizer({ name: "myAuthorizer", lambda: { function: "src/authorizer.index" } }); ``` Add a JWT authorizer. ```js title="sst.config.ts" const authorizer = api.addAuthorizer({ name: "myAuthorizer", jwt: { issuer: "https://issuer.com/", audiences: ["https://api.example.com"], identitySource: "$request.header.AccessToken" } }); ``` Add a Cognito UserPool as a JWT authorizer. ```js title="sst.config.ts" const pool = new sst.aws.CognitoUserPool("MyUserPool"); const poolClient = pool.addClient("Web"); const authorizer = api.addAuthorizer({ name: "myCognitoAuthorizer", jwt: { issuer: $interpolate`https://cognito-idp.${aws.getRegionOutput().region}.amazonaws.com/${pool.id}`, audiences: [poolClient.id] } }); ``` Now you can use the authorizer in your routes. ```js title="sst.config.ts" api.route("$connect", "src/connect.handler", { auth: { jwt: { authorizer: authorizer.id } } }); ``` ### route ```ts route(route, handler, args?) ``` #### Parameters - `route` `string` The path for the route.
- `handler` `Input` The function that'll be invoked. - `args?` [`ApiGatewayWebSocketRouteArgs`](#apigatewaywebsocketrouteargs) Configure the route. **Returns** [`ApiGatewayWebSocketRoute`](/docs/component/aws/apigateway-websocket-route) Add a route to the API Gateway WebSocket API. There are three predefined routes: - `$connect`: When the client connects to the API. - `$disconnect`: When the client or the server disconnects from the API. - `$default`: The default or catch-all route. In addition, you can create custom routes. When a request comes in, the API Gateway will look for the specific route defined by the user. If no route matches, the `$default` route will be invoked. :::caution [API Gateway has strict rate limits](https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html) for creating and updating resources. Creating one Lambda function for every route can significantly slow down your deployments. Use a single Lambda and handle routing in code if you don't need specific API Gateway features. ::: Add a simple route. ```js title="sst.config.ts" api.route("sendMessage", "src/sendMessage.handler"); ``` Add a predefined route. ```js title="sst.config.ts" api.route("$default", "src/default.handler"); ``` Enable auth for a route. ```js title="sst.config.ts" api.route("sendMessage", "src/sendMessage.handler", { auth: { iam: true } }); ``` Customize the route handler. ```js title="sst.config.ts" api.route("sendMessage", { handler: "src/sendMessage.handler", memory: "2048 MB" }); ``` Or pass in the ARN of an existing Lambda function. ```js title="sst.config.ts" api.route("sendMessage", "arn:aws:lambda:us-east-1:123456789012:function:my-function"); ``` ## ApiGatewayWebSocketAuthorizerArgs ### jwt? **Type** `Input` - [`audiences`](#jwt-audiences) - [`identitySource?`](#jwt-identitysource) - [`issuer`](#jwt-issuer) Create a JWT or JSON Web Token authorizer that can be used by the routes. Configure JWT auth. 
```js { jwt: { issuer: "https://issuer.com/", audiences: ["https://api.example.com"], identitySource: "$request.header.AccessToken" } } ``` You can also use Cognito as the identity provider. ```js { jwt: { audiences: [userPoolClient.id], issuer: $interpolate`https://cognito-idp.${aws.getArnOutput({ arn: userPool.arn }).region}.amazonaws.com/${userPool.id}`, } } ``` Where `userPool` and `userPoolClient` are: ```js const userPool = new aws.cognito.UserPool("MyUserPool"); const userPoolClient = new aws.cognito.UserPoolClient("MyUserPoolClient", { userPoolId: userPool.id }); ``` audiences **Type** `Input[]>` List of the intended recipients of the JWT. A valid JWT must provide an `aud` that matches at least one entry in this list. identitySource? **Type** `Input` **Default** `"route.request.header.Authorization"` Specifies where to extract the JWT from the request. issuer **Type** `Input` Base domain of the identity provider that issues JSON Web Tokens. ```js { issuer: "https://issuer.com/" } ``` ### lambda? **Type** `Input` - [`function`](#lambda-function) - [`identitySources?`](#lambda-identitysources) - [`payload?`](#lambda-payload) - [`response?`](#lambda-response) Create a Lambda authorizer that can be used by the routes. Configure Lambda auth. ```js { lambda: { function: "src/authorizer.index" } } ``` function **Type** `Input` The Lambda authorizer function. Takes the handler path or the function args. Add a simple authorizer. ```js { function: "src/authorizer.index" } ``` Customize the authorizer handler. ```js { function: { handler: "src/authorizer.index", memory: "2048 MB" } } ``` identitySources? **Type** `Input[]>` **Default** `["route.request.header.Authorization"]` Specifies where to extract the identity from. ```js { identitySources: ["$request.header.RequestToken"] } ``` payload? **Type** `Input<"1.0" | "2.0">` **Default** `"2.0"` The JWT payload version. ```js { payload: "2.0" } ``` response? **Type** `Input<"simple" | "iam">` **Default** `"simple"` The response type. ```js { response: "iam" } ``` ### transform?
**Type** `Object` - [`authorizer?`](#transform-authorizer) [Transform](/docs/components#transform) how this component creates its underlying resources. authorizer? **Type** [`AuthorizerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/authorizer/#inputs)` | (args: `[`AuthorizerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/authorizer/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway authorizer resource. ## ApiGatewayWebSocketRouteArgs ### auth? **Type** `Input` - [`iam?`](#auth-iam) - [`jwt?`](#auth-jwt) `Input` - [`authorizer`](#auth-jwt-authorizer) - [`scopes?`](#auth-jwt-scopes) - [`lambda?`](#auth-lambda) Enable auth for your WebSocket API. By default, auth is disabled. ```js { auth: { iam: true } } ``` iam? **Type** `Input` Enable IAM authorization for a given API route. When IAM auth is enabled, clients need to use Signature Version 4 to sign their requests with their AWS credentials. jwt? **Type** `Input` Enable JWT or JSON Web Token authorization for a given API route. When JWT auth is enabled, clients need to include a valid JWT in their requests. You can configure JWT auth. ```js { auth: { jwt: { authorizer: myAuthorizer.id, scopes: ["read:profile", "write:profile"] } } } ``` Where `myAuthorizer` is created by calling the `addAuthorizer` method. authorizer **Type** `Input` Authorizer ID of the JWT authorizer. scopes? **Type** `Input[]>` Defines the permissions or access levels that the JWT grants. If the JWT does not have the required scope, the request is rejected. By default it does not require any scopes. lambda? **Type** `Input` Enable custom Lambda authorization for a given API route. Pass in the authorizer ID. ```js { auth: { lambda: myAuthorizer.id } } ``` Where `myAuthorizer` is created by calling the `addAuthorizer` method. ### transform? 
**Type** `Object` - [`integration?`](#transform-integration) - [`route?`](#transform-route-1) [Transform](/docs/components#transform) how this component creates its underlying resources. integration? **Type** [`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/#inputs)` | (args: `[`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway WebSocket API integration resource. route? **Type** [`RouteArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/#inputs)` | (args: `[`RouteArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway WebSocket API route resource. --- ## ApiGatewayV1ApiKey Reference doc for the `sst.aws.ApiGatewayV1ApiKey` component. https://sst.dev/docs/component/aws/apigatewayv1-api-key The `ApiGatewayV1ApiKey` component is internally used by the `ApiGatewayV1UsagePlan` component to add API keys to [Amazon API Gateway REST API](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-rest-api.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `addApiKey` method of the `ApiGatewayV1UsagePlan` component. --- ## Constructor ```ts new ApiGatewayV1ApiKey(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`ApiKeyArgs`](#apikeyargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`apiKey`](#nodes-apikey) The underlying [resources](/docs/components/#nodes) this component creates. 
apiKey **Type** [`ApiKey`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/apikey/) The API Gateway API Key. ### value **Type** `Output` The API key value. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `value` `string` The API key value. ## ApiKeyArgs ### apiId **Type** `Input` The API Gateway REST API to use for the API key. ### usagePlanId **Type** `Input` The API Gateway Usage Plan to use for the API key. ### value? **Type** `Input` The value of the API key. If not provided, it will be generated automatically. ```js { value: "d41d8cd98f00b204e9800998ecf8427e" } ``` --- ## ApiGatewayV1Authorizer Reference doc for the `sst.aws.ApiGatewayV1Authorizer` component. https://sst.dev/docs/component/aws/apigatewayv1-authorizer The `ApiGatewayV1Authorizer` component is internally used by the `ApiGatewayV1` component to add authorizers to [Amazon API Gateway REST API](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-rest-api.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `addAuthorizer` method of the `ApiGatewayV1` component. --- ## Constructor ```ts new ApiGatewayV1Authorizer(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`AuthorizerArgs`](#authorizerargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### id **Type** `Output` The ID of the authorizer. ### nodes **Type** `Object` - [`authorizer`](#nodes-authorizer) - [`function`](#nodes-function) The underlying [resources](/docs/components/#nodes) this component creates. authorizer **Type** [`Authorizer`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/authorizer/) The API Gateway Authorizer. 
function **Type** `Output<`[`Function`](/docs/component/aws/function)`>` The Lambda function used by the authorizer. ## AuthorizerArgs ### api **Type** `Input` - [`executionArn`](#api-executionarn) - [`id`](#api-id) - [`name`](#api-name) The API Gateway to use for the route. executionArn **Type** `Input` The execution ARN of the API Gateway. id **Type** `Input` The ID of the API Gateway. name **Type** `Input` The name of the API Gateway. ### identitySource? **Type** `Input` **Default** `"method.request.header.Authorization"` Specifies where to extract the authorization token from the request. ```js { identitySource: "method.request.header.AccessToken" } ``` ### name **Type** `string` The name of the authorizer. ```js { name: "myAuthorizer" } ``` ### requestFunction? **Type** `Input` The Lambda request authorizer function. Takes the handler path or the function args. ```js { requestFunction: "src/authorizer.index" } ``` ### tokenFunction? **Type** `Input` The Lambda token authorizer function. Takes the handler path or the function args. ```js { tokenFunction: "src/authorizer.index" } ``` ### transform? **Type** `Object` - [`authorizer?`](#transform-authorizer) [Transform](/docs/components#transform) how this component creates its underlying resources. authorizer? **Type** [`AuthorizerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/authorizer/#inputs)` | (args: `[`AuthorizerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/authorizer/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway authorizer resource. ### ttl? **Type** `Input` **Default** `300` Time to live for cached authorizer results in seconds. ```js { ttl: 30 } ``` ### userPools? **Type** `Input[]>` A list of user pools used as the authorizer. 
```js { name: "myAuthorizer", userPools: [userPool.arn] } ``` Where `userPool` is: ```js const userPool = new aws.cognito.UserPool(); ``` --- ## ApiGatewayV1IntegrationRoute Reference doc for the `sst.aws.ApiGatewayV1IntegrationRoute` component. https://sst.dev/docs/component/aws/apigatewayv1-integration-route The `ApiGatewayV1IntegrationRoute` component is internally used by the `ApiGatewayV1` component to add routes to your [API Gateway REST API](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-rest-api.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `routeIntegration` method of the `ApiGatewayV1` component. --- ## Constructor ```ts new ApiGatewayV1IntegrationRoute(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`Args`](#args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`integration`](#nodes-integration) - [`method`](#nodes-method) The underlying [resources](/docs/components/#nodes) this component creates. integration **Type** [`Integration`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/integration/) The API Gateway REST API integration. method **Type** `Output<`[`Method`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/method/)`>` The API Gateway REST API method. ## Args ### api **Type** `Input` - [`executionArn`](#api-executionarn) - [`id`](#api-id) - [`name`](#api-name) The API Gateway to use for the route. executionArn **Type** `Input` The execution ARN of the API Gateway. id **Type** `Input` The ID of the API Gateway. name **Type** `Input` The name of the API Gateway. ### apiKey? **Type** `Input` **Default** `false` Specify if an API key is required for the route. By default, an API key is not required. ```js { apiKey: true } ``` ### auth? 
**Type** `Input` - [`cognito?`](#auth-cognito) `Input` - [`authorizer`](#auth-cognito-authorizer) - [`scopes?`](#auth-cognito-scopes) - [`custom?`](#auth-custom) - [`iam?`](#auth-iam) **Default** `false` Enable auth for your REST API. By default, auth is disabled. ```js { auth: { iam: true } } ``` cognito? **Type** `Input` Enable Cognito User Pool authorization for a given API route. You can configure JWT auth. ```js { auth: { cognito: { authorizer: myAuthorizer.id, scopes: ["read:profile", "write:profile"] } } } ``` Where `myAuthorizer` is: ```js const userPool = new aws.cognito.UserPool("MyUserPool"); const myAuthorizer = api.addAuthorizer({ name: "MyAuthorizer", userPools: [userPool.arn] }); ``` authorizer **Type** `Input` Authorizer ID of the Cognito User Pool authorizer. scopes? **Type** `Input[]>` Defines the permissions or access levels that the authorization token grants. custom? **Type** `Input` Enable custom Lambda authorization for a given API route. Pass in the authorizer ID. ```js { auth: { custom: myAuthorizer.id } } ``` Where `myAuthorizer` is: ```js const userPool = new aws.cognito.UserPool("MyUserPool"); const myAuthorizer = api.addAuthorizer({ name: "MyAuthorizer", userPools: [userPool.arn] }); ``` iam? **Type** `Input` Enable IAM authorization for a given API route. When IAM auth is enabled, clients need to use Signature Version 4 to sign their requests with their AWS credentials. ### integration **Type** [`ApiGatewayV1IntegrationArgs`](/docs/component/aws/apigatewayv1#apigatewayv1integrationargs) The route integration. ### method **Type** `string` The route method. ### path **Type** `string` The route path. ### resourceId **Type** `Input` The route resource ID. ### transform? **Type** `Object` - [`integration?`](#transform-integration) - [`method?`](#transform-method) [Transform](/docs/components#transform) how this component creates its underlying resources. integration?
**Type** [`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/integration/#inputs)` | (args: `[`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/integration/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway REST API integration resource. method? **Type** [`MethodArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/method/#inputs)` | (args: `[`MethodArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/method/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway REST API method resource. --- ## ApiGatewayV1LambdaRoute Reference doc for the `sst.aws.ApiGatewayV1LambdaRoute` component. https://sst.dev/docs/component/aws/apigatewayv1-lambda-route The `ApiGatewayV1LambdaRoute` component is internally used by the `ApiGatewayV1` component to add routes to your [API Gateway REST API](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-rest-api.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `route` method of the `ApiGatewayV1` component. --- ## Constructor ```ts new ApiGatewayV1LambdaRoute(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`Args`](#args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`integration`](#nodes-integration) - [`method`](#nodes-method) - [`permission`](#nodes-permission) - [`function`](#nodes-function) The underlying [resources](/docs/components/#nodes) this component creates. integration **Type** [`Integration`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/integration/) The API Gateway REST API integration. 
method **Type** `Output<`[`Method`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/method/)`>` The API Gateway REST API method. permission **Type** [`Permission`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/permission/) The Lambda permission. function **Type** `Output<`[`Function`](/docs/component/aws/function)`>` The Lambda function. ## Args ### api **Type** `Input` - [`executionArn`](#api-executionarn) - [`id`](#api-id) - [`name`](#api-name) The API Gateway to use for the route. executionArn **Type** `Input` The execution ARN of the API Gateway. id **Type** `Input` The ID of the API Gateway. name **Type** `Input` The name of the API Gateway. ### apiKey? **Type** `Input` **Default** `false` Specify if an API key is required for the route. By default, an API key is not required. ```js { apiKey: true } ``` ### auth? **Type** `Input` - [`cognito?`](#auth-cognito) `Input` - [`authorizer`](#auth-cognito-authorizer) - [`scopes?`](#auth-cognito-scopes) - [`custom?`](#auth-custom) - [`iam?`](#auth-iam) **Default** `false` Enable auth for your REST API. By default, auth is disabled. ```js { auth: { iam: true } } ``` cognito? **Type** `Input` Enable Cognito User Pool authorization for a given API route. You can configure JWT auth. ```js { auth: { cognito: { authorizer: myAuthorizer.id, scopes: ["read:profile", "write:profile"] } } } ``` Where `myAuthorizer` is: ```js const userPool = new aws.cognito.UserPool("MyUserPool"); const myAuthorizer = api.addAuthorizer({ name: "MyAuthorizer", userPools: [userPool.arn] }); ``` authorizer **Type** `Input` Authorizer ID of the Cognito User Pool authorizer. scopes? **Type** `Input[]>` Defines the permissions or access levels that the authorization token grants. custom? **Type** `Input` Enable custom Lambda authorization for a given API route. Pass in the authorizer ID.
```js { auth: { custom: myAuthorizer.id } } ``` Where `myAuthorizer` is: ```js const userPool = new aws.cognito.UserPool("MyUserPool"); const myAuthorizer = api.addAuthorizer({ name: "MyAuthorizer", userPools: [userPool.arn] }); ``` iam? **Type** `Input` Enable IAM authorization for a given API route. When IAM auth is enabled, clients need to use Signature Version 4 to sign their requests with their AWS credentials. ### handler **Type** `Input` The route function. ### method **Type** `string` The route method. ### path **Type** `string` The route path. ### resourceId **Type** `Input` The route resource ID. ### transform? **Type** `Object` - [`integration?`](#transform-integration) - [`method?`](#transform-method) [Transform](/docs/components#transform) how this component creates its underlying resources. integration? **Type** [`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/integration/#inputs)` | (args: `[`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/integration/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway REST API integration resource. method? **Type** [`MethodArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/method/#inputs)` | (args: `[`MethodArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/method/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway REST API method resource. --- ## ApiGatewayV1UsagePlan Reference doc for the `sst.aws.ApiGatewayV1UsagePlan` component. https://sst.dev/docs/component/aws/apigatewayv1-usage-plan The `ApiGatewayV1UsagePlan` component is internally used by the `ApiGatewayV1` component to add usage plans to [Amazon API Gateway REST API](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-rest-api.html).
:::note This component is not intended to be created directly. ::: You'll find this component returned by the `addUsagePlan` method of the `ApiGatewayV1` component. --- ## Constructor ```ts new ApiGatewayV1UsagePlan(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`UsagePlanArgs`](#usageplanargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`usagePlan`](#nodes-usageplan) The underlying [resources](/docs/components/#nodes) this component creates. usagePlan **Type** [`UsagePlan`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/usageplan/) The API Gateway Usage Plan. ## Methods ### addApiKey ```ts addApiKey(name, args?) ``` #### Parameters - `name` `string` The name of the API key. - `args?` [`ApiGatewayV1ApiKeyArgs`](/docs/component/aws/apigatewayv1#apigatewayv1apikeyargs) Configure the API key. **Returns** [`ApiGatewayV1ApiKey`](/docs/component/aws/apigatewayv1-api-key) Add an API key to the API Gateway usage plan. ```js title="sst.config.ts" plan.addApiKey("MyKey", { value: "d41d8cd98f00b204e9800998ecf8427e", }); ``` ## UsagePlanArgs ### apiId **Type** `Input` The API Gateway REST API to use for the usage plan. ### apiStage **Type** `Input` The stage of the API Gateway REST API. ### quota? **Type** `Input` - [`limit`](#quota-limit) - [`offset?`](#quota-offset) - [`period`](#quota-period) Configure a cap on the total number of requests allowed within a specified time period. ```js { quota: { limit: 1000, period: "month", offset: 0 } } ``` limit **Type** `Input` The maximum number of requests that can be made in the specified period of time. offset? **Type** `Input` The number of days into the period when the quota counter is reset. For example, this resets the quota at the beginning of each month. ```js { period: "month", offset: 0 } ``` period **Type** `Input<"day" | "week" | "month">` The time period for which the quota applies. ### throttle? 
**Type** `Input` - [`burst?`](#throttle-burst) - [`rate?`](#throttle-rate) Configure rate limits to protect your API from being overwhelmed by too many requests at once. ```js { throttle: { rate: 100, burst: 200 } } ``` burst? **Type** `Input` The maximum number of requests permitted in a short-term spike beyond the rate limit. rate? **Type** `Input` The steady-state maximum number of requests allowed per second. --- ## ApiGatewayV1 Reference doc for the `sst.aws.ApiGatewayV1` component. https://sst.dev/docs/component/aws/apigatewayv1 The `ApiGatewayV1` component lets you add an [Amazon API Gateway REST API](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-rest-api.html) to your app. #### Create the API ```ts title="sst.config.ts" const api = new sst.aws.ApiGatewayV1("MyApi"); ``` #### Add routes ```ts title="sst.config.ts" api.route("GET /", "src/get.handler"); api.route("POST /", "src/post.handler"); api.deploy(); ``` :::note You need to call `deploy` after you've added all your routes. ::: #### Configure the routes ```ts title="sst.config.ts" api.route("GET /", "src/get.handler", { auth: { iam: true } }); ``` #### Configure the route handler You can configure the Lambda function that'll handle the route. ```ts title="sst.config.ts" api.route("POST /", { handler: "src/post.handler", memory: "2048 MB" }); ``` #### Default props for all routes You can use a `transform` to set some default props for all your routes. For example, instead of setting the `memory` for each route. ```ts title="sst.config.ts" api.route("GET /", { handler: "src/get.handler", memory: "2048 MB" }); api.route("POST /", { handler: "src/post.handler", memory: "2048 MB" }); ``` You can set it through the `transform`. 
```ts title="sst.config.ts" {6} const api = new sst.aws.ApiGatewayV1("MyApi", { transform: { route: { handler: (args, opts) => { // Set the default if it's not set by the route args.memory ??= "2048 MB"; } } } }); api.route("GET /", "src/get.handler"); api.route("POST /", "src/post.handler"); ``` With this we set the `memory` if it's not overridden by the route. --- ## Constructor ```ts new ApiGatewayV1(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`ApiGatewayV1Args`](#apigatewayv1args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## ApiGatewayV1Args ### accessLog? **Type** `Input` - [`retention?`](#accesslog-retention) **Default** `{retention: "1 month"}` Configure the [API Gateway logs](https://docs.aws.amazon.com/apigateway/latest/developerguide/view-cloudwatch-log-events-in-cloudwatch-console.html) in CloudWatch. By default, access logs are enabled and retained for 1 month. ```js { accessLog: { retention: "forever" } } ``` retention? **Type** `Input<"1 day" | "3 days" | "5 days" | "1 week" | "2 weeks" | "1 month" | "2 months" | "3 months" | "4 months" | "5 months" | "6 months" | "1 year" | "13 months" | "18 months" | "2 years" | "3 years" | "5 years" | "6 years" | "7 years" | "8 years" | "9 years" | "10 years" | "forever">` **Default** `1 month` The duration the API Gateway logs are retained in CloudWatch. ### cors? **Type** `Input` **Default** `true` Enable the CORS or Cross-origin resource sharing for your API. Disable CORS. ```js { cors: false } ``` ### domain? **Type** `Input` - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name`](#domain-name) - [`nameId?`](#domain-nameid) - [`path?`](#domain-path) Set a custom domain for your REST API. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. 
And manual setup for other providers. ::: By default this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the API Gateway URL. ```js { domain: { name: "example.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to AWS. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare, needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel, needs the Vercel provider.
```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` nameId? **Type** `Input` Use an existing API Gateway domain name. By default, a new API Gateway domain name is created. If you'd like to use an existing domain name, set the `nameId` to the ID of the domain name and **do not** pass in `name`. ```js { domain: { nameId: "example.com" } } ``` path? **Type** `Input` The base mapping for the custom domain. This adds a suffix to the URL of the API. Given the following base path and domain name. ```js { domain: { name: "api.example.com", path: "v1" } } ``` The full URL of the API will be `https://api.example.com/v1/`. :::note There's an extra trailing slash when a base path is set. ::: By default there is no base path, so if the `name` is `api.example.com`, the full URL will be `https://api.example.com`. ### endpoint? **Type** `Input` - [`type`](#endpoint-type) - [`vpcEndpointIds?`](#endpoint-vpcendpointids) **Default** `{type: "edge"}` Configure the type of API Gateway REST API endpoint. - `edge`: The default; it creates a CloudFront distribution for the API. Useful for cases where requests are geographically distributed. - `regional`: Endpoints are deployed in specific AWS regions and are intended to be accessed directly by clients within or near that region. - `private`: Endpoints allow access to the API only from within a specified Amazon VPC (Virtual Private Cloud) using VPC endpoints. These do not expose the API to the public internet. Learn more about the [different types of endpoints](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-endpoint-types.html). For example, to create a regional endpoint. ```js { endpoint: { type: "regional" } } ``` And to create a private endpoint.
```js { endpoint: { type: "private", vpcEndpointIds: ["vpce-0dccab6fb1e828f36"] } } ``` type **Type** `"edge" | "regional" | "private"` The type of the API Gateway REST API endpoint. vpcEndpointIds? **Type** `Input<Input<string>[]>` The VPC endpoint IDs for the `private` endpoint. ### transform? **Type** `Object` - [`accessLog?`](#transform-accesslog) - [`api?`](#transform-api) - [`deployment?`](#transform-deployment) - [`domainName?`](#transform-domainname) - [`route?`](#transform-route) `Object` - [`args?`](#transform-route-args) - [`handler?`](#transform-route-handler) - [`stage?`](#transform-stage) [Transform](/docs/components#transform) how this component creates its underlying resources. accessLog? **Type** [`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)` | (args: `[`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudWatch LogGroup resource used for access logs. api? **Type** [`RestApiArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/restapi/#inputs)` | (args: `[`RestApiArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/restapi/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway REST API resource. deployment? **Type** [`DeploymentArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/deployment/#inputs)` | (args: `[`DeploymentArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/deployment/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway REST API deployment resource. domainName?
**Type** [`DomainNameArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/domainname/#inputs)` | (args: `[`DomainNameArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/domainname/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway REST API domain name resource. route? **Type** `Object` Transform the routes. This is called for every route that is added. :::note This is applied right before the resource is created. ::: You can use this to set any default props for all the routes and their handler function. Like the other transforms, you can either pass in an object or a callback. Here we are setting a default memory of `2048 MB` for our routes. ```js { transform: { route: { handler: (args, opts) => { // Set the default if it's not set by the route args.memory ??= "2048 MB"; } } } } ``` Defaulting to IAM auth for all our routes. ```js { transform: { route: { args: (props) => { // Set the default if it's not set by the route props.auth ??= { iam: true }; } } } } ``` args? **Type** [`ApiGatewayV1RouteArgs`](#apigatewayv1routeargs)` | (args: `[`ApiGatewayV1RouteArgs`](#apigatewayv1routeargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the arguments for the route. handler? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the handler function of the route. stage? 
**Type** [`StageArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/stage/#inputs)` | (args: `[`StageArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/stage/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway REST API stage resource. ## Properties ### nodes **Type** `Object` - [`api`](#nodes-api) - [`logGroup`](#nodes-loggroup) - [`stage`](#nodes-stage) - [`domainName`](#nodes-domainname) The underlying [resources](/docs/components/#nodes) this component creates. api **Type** [`RestApi`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/restapi/) The Amazon API Gateway REST API logGroup **Type** `undefined | `[`LogGroup`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/) The CloudWatch LogGroup for the access logs. stage **Type** `undefined | `[`Stage`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/stage/) The Amazon API Gateway REST API stage domainName **Type** `Output<`[`DomainName`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/domainname/)`>` The API Gateway REST API domain name. ### url **Type** `Output` The URL of the API. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `string` The URL of the API. ## Methods ### addAuthorizer ```ts addAuthorizer(args) ``` #### Parameters - `args` [`ApiGatewayV1AuthorizerArgs`](#apigatewayv1authorizerargs) Configure the authorizer. **Returns** [`ApiGatewayV1Authorizer`](/docs/component/aws/apigatewayv1-authorizer) Add an authorizer to the API Gateway REST API. For example, add a Lambda token authorizer. ```js title="sst.config.ts" api.addAuthorizer({ name: "myAuthorizer", tokenFunction: "src/authorizer.index" }); ``` Add a Lambda REQUEST authorizer. 
```js title="sst.config.ts" api.addAuthorizer({ name: "myAuthorizer", requestFunction: "src/authorizer.index" }); ``` Add a Cognito User Pool authorizer. ```js title="sst.config.ts" const userPool = new aws.cognito.UserPool("MyUserPool"); api.addAuthorizer({ name: "myAuthorizer", userPools: [userPool.arn] }); ``` You can also customize the authorizer. ```js title="sst.config.ts" api.addAuthorizer({ name: "myAuthorizer", tokenFunction: "src/authorizer.index", ttl: 30 }); ``` ### addUsagePlan ```ts addUsagePlan(name, args) ``` #### Parameters - `name` `string` The name of the usage plan. - `args` [`ApiGatewayV1UsagePlanArgs`](#apigatewayv1usageplanargs) Configure the usage plan. **Returns** [`ApiGatewayV1UsagePlan`](/docs/component/aws/apigatewayv1-usage-plan) Add a usage plan to the API Gateway REST API. To add a usage plan to an API, you need to enable the API key for a route, and then deploy the API. ```ts title="sst.config.ts" {4} const api = new sst.aws.ApiGatewayV1("MyApi"); api.route("GET /", "src/get.handler", { apiKey: true }); api.deploy(); ``` Then define your usage plan. ```js title="sst.config.ts" const plan = api.addUsagePlan("MyPlan", { throttle: { rate: 100, burst: 200 }, quota: { limit: 1000, period: "month", offset: 0 } }); ``` And create the API key for the plan. ```js title="sst.config.ts" const key = plan.addApiKey("MyKey"); ``` You can now link the API and API key to other resources, like a function. ```ts title="sst.config.ts" new sst.aws.Function("MyFunction", { handler: "src/lambda.handler", link: [api, key] }); ``` Once linked, include the key in the `x-api-key` header with your requests. ```ts title="src/lambda.ts" await fetch(Resource.MyApi.url, { headers: { "x-api-key": Resource.MyKey.value } }); ``` ### deploy ```ts deploy() ``` **Returns** `void` Creates a deployment for the API Gateway REST API. :::caution Your routes won't be added if `deploy` isn't called. ::: Call it after you've added all your routes.
This is due to a quirk in the way API Gateway V1 is created internally. ### route ```ts route(route, handler, args?) ``` #### Parameters - `route` `string` The path for the route. - `handler` `Input` The function that'll be invoked. - `args?` [`ApiGatewayV1RouteArgs`](#apigatewayv1routeargs) Configure the route. **Returns** [`ApiGatewayV1LambdaRoute`](/docs/component/aws/apigatewayv1-lambda-route) Add a route to the API Gateway REST API. The route is a combination of an HTTP method and a path, `{METHOD} /{path}`. :::caution [API Gateway has strict rate limits](https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html) for creating and updating resources. Creating one Lambda function for every endpoint can significantly slow down your deployments. Use a single Lambda and handle routing in code if you don't need specific API Gateway features. ::: A method could be one of `GET`, `POST`, `PUT`, `DELETE`, `PATCH`, `HEAD`, `OPTIONS`, or `ANY`. Here `ANY` matches any HTTP method. The path can be a combination of - Literal segments, `/notes`, `/notes/new`, etc. - Parameter segments, `/notes/{noteId}`, `/notes/{noteId}/attachments/{attachmentId}`, etc. - Greedy segments, `/{proxy+}`, `/notes/{proxy+}`, etc. The `{proxy+}` segment is a greedy segment that matches all child paths. It needs to be at the end of the path. :::tip The `{proxy+}` is a greedy segment, it matches all its child paths. ::: When a request comes in, the API Gateway will look for the most specific match. :::note You cannot have duplicate routes. ::: Add a simple route. ```js title="sst.config.ts" api.route("GET /", "src/get.handler"); ``` Match any HTTP method. ```js title="sst.config.ts" api.route("ANY /", "src/route.handler"); ``` Add a default or fallback route. Here for every request other than `GET /hi`, the `default.handler` function will be invoked. 
```js title="sst.config.ts" api.route("GET /hi", "src/get.handler"); api.route("ANY /", "src/default.handler"); api.route("ANY /{proxy+}", "src/default.handler"); ``` The `/{proxy+}` matches all child paths but not the root `/` path itself, so if you want a fallback route for the root path, you need to add an `ANY /` route as well. Add a parameterized route. ```js title="sst.config.ts" api.route("GET /notes/{id}", "src/get.handler"); ``` Add a greedy route. ```js title="sst.config.ts" api.route("GET /notes/{proxy+}", "src/greedy.handler"); ``` Enable auth for a route. ```js title="sst.config.ts" api.route("GET /", "src/get.handler"); api.route("POST /", "src/post.handler", { auth: { iam: true } }); ``` Customize the route handler. ```js title="sst.config.ts" api.route("GET /", { handler: "src/get.handler", memory: "2048 MB" }); ``` Or pass in the ARN of an existing Lambda function. ```js title="sst.config.ts" api.route("GET /", "arn:aws:lambda:us-east-1:123456789012:function:my-function"); ``` ### routeIntegration ```ts routeIntegration(route, integration, args?) ``` #### Parameters - `route` `string` The path for the route. - `integration` [`ApiGatewayV1IntegrationArgs`](#apigatewayv1integrationargs) The integration configuration. - `args?` [`ApiGatewayV1RouteArgs`](#apigatewayv1routeargs) Configure the route. **Returns** [`ApiGatewayV1IntegrationRoute`](/docs/component/aws/apigatewayv1-integration-route) Add a custom integration to the API Gateway REST API. [Learn more about integrations](https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-integration-settings.html). Add a route to trigger a Step Function state machine execution.
```js title="sst.config.ts" api.routeIntegration("POST /run-my-state-machine", { type: "aws", uri: "arn:aws:apigateway:us-east-1:states:startExecution", credentials: "arn:aws:iam::123456789012:role/apigateway-execution-role", integrationHttpMethod: "POST", requestTemplates: { "application/json": JSON.stringify({ input: "$input.json('$')", stateMachineArn: "arn:aws:states:us-east-1:123456789012:stateMachine:MyStateMachine" }) }, passthroughBehavior: "when-no-match" }); ``` ## ApiGatewayV1ApiKeyArgs ### value? **Type** `Input` The value of the API key. If not provided, it will be generated automatically. ```js { value: "d41d8cd98f00b204e9800998ecf8427e" } ``` ## ApiGatewayV1AuthorizerArgs ### identitySource? **Type** `Input` **Default** `"method.request.header.Authorization"` Specifies where to extract the authorization token from the request. ```js { identitySource: "method.request.header.AccessToken" } ``` ### name **Type** `string` The name of the authorizer. ```js { name: "myAuthorizer" } ``` ### requestFunction? **Type** `Input` The Lambda request authorizer function. Takes the handler path or the function args. ```js { requestFunction: "src/authorizer.index" } ``` ### tokenFunction? **Type** `Input` The Lambda token authorizer function. Takes the handler path or the function args. ```js { tokenFunction: "src/authorizer.index" } ``` ### transform? **Type** `Object` - [`authorizer?`](#transform-authorizer) [Transform](/docs/components#transform) how this component creates its underlying resources. authorizer? **Type** [`AuthorizerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/authorizer/#inputs)` | (args: `[`AuthorizerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/authorizer/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway authorizer resource. ### ttl? 
**Type** `Input` **Default** `300` Time to live for cached authorizer results in seconds. ```js { ttl: 30 } ``` ### userPools? **Type** `Input<Input<string>[]>` A list of user pools used as the authorizer. ```js { name: "myAuthorizer", userPools: [userPool.arn] } ``` Where `userPool` is: ```js const userPool = new aws.cognito.UserPool("MyUserPool"); ``` ## ApiGatewayV1DomainArgs ### cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the API Gateway URL. ```js { domain: { name: "example.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` ### dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to AWS. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare, needs the Cloudflare provider.
```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel, needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` ### name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` ### nameId? **Type** `Input` Use an existing API Gateway domain name. By default, a new API Gateway domain name is created. If you'd like to use an existing domain name, set the `nameId` to the ID of the domain name and **do not** pass in `name`. ```js { domain: { nameId: "example.com" } } ``` ### path? **Type** `Input` The base mapping for the custom domain. This adds a suffix to the URL of the API. Given the following base path and domain name. ```js { domain: { name: "api.example.com", path: "v1" } } ``` The full URL of the API will be `https://api.example.com/v1/`. :::note There's an extra trailing slash when a base path is set. ::: By default there is no base path, so if the `name` is `api.example.com`, the full URL will be `https://api.example.com`. ## ApiGatewayV1IntegrationArgs ### credentials? **Type** `Input` The credentials to use to call the AWS service. ### integrationHttpMethod? **Type** `Input<"GET" | "POST" | "PUT" | "DELETE" | "HEAD" | "OPTIONS" | "ANY" | "PATCH">` The HTTP method to use to call the integration. ### passthroughBehavior? **Type** `Input<"when-no-match" | "never" | "when-no-templates">` The passthrough behavior to use to call the integration. Required if `requestTemplates` is set. ### requestParameters? **Type** `Input<Record<string, Input<string>>>` Map of request query string parameters and headers that should be passed to the backend responder. ### requestTemplates? **Type** `Input<Record<string, Input<string>>>` Map of the integration's request templates.
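Together, `requestParameters` and `requestTemplates` let you shape what the backend receives. As a sketch, the header name and template fields below are illustrative; the mapping expressions follow API Gateway's `integration.request.<location>.<name>` / `method.request.<location>.<name>` and `$input.params()` conventions.

```js
{
  requestParameters: {
    // Forward the "id" path parameter of the method request as a header
    "integration.request.header.X-Resource-Id": "method.request.path.id"
  },
  requestTemplates: {
    // Build the integration request body from the incoming request
    "application/json": JSON.stringify({
      id: "$input.params('id')"
    })
  },
  passthroughBehavior: "when-no-match"
}
```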
### type **Type** `Input<"aws" | "http" | "aws-proxy" | "mock" | "http-proxy">` The type of the API Gateway REST API integration. ### uri? **Type** `Input` The URI of the API Gateway REST API integration. ## ApiGatewayV1RouteArgs ### apiKey? **Type** `Input` **Default** `false` Specify if an API key is required for the route. By default, an API key is not required. ```js { apiKey: true } ``` ### auth? **Type** `Input` - [`cognito?`](#auth-cognito) `Input` - [`authorizer`](#auth-cognito-authorizer) - [`scopes?`](#auth-cognito-scopes) - [`custom?`](#auth-custom) - [`iam?`](#auth-iam) **Default** `false` Enable auth for your REST API. By default, auth is disabled. ```js { auth: { iam: true } } ``` cognito? **Type** `Input` Enable Cognito User Pool authorization for a given API route. You can configure JWT auth. ```js { auth: { cognito: { authorizer: myAuthorizer.id, scopes: ["read:profile", "write:profile"] } } } ``` Where `myAuthorizer` is: ```js const userPool = new aws.cognito.UserPool("MyUserPool"); const myAuthorizer = api.addAuthorizer({ name: "MyAuthorizer", userPools: [userPool.arn] }); ``` authorizer **Type** `Input` Authorizer ID of the Cognito User Pool authorizer. scopes? **Type** `Input<Input<string>[]>` Defines the permissions or access levels that the authorization token grants. custom? **Type** `Input` Enable custom Lambda authorization for a given API route. Pass in the authorizer ID. ```js { auth: { custom: myAuthorizer.id } } ``` Where `myAuthorizer` is: ```js const userPool = new aws.cognito.UserPool("MyUserPool"); const myAuthorizer = api.addAuthorizer({ name: "MyAuthorizer", userPools: [userPool.arn] }); ``` iam? **Type** `Input` Enable IAM authorization for a given API route. When IAM auth is enabled, clients need to use Signature Version 4 to sign their requests with their AWS credentials. ### transform?
**Type** `Object` - [`integration?`](#transform-integration) - [`method?`](#transform-method) [Transform](/docs/components#transform) how this component creates its underlying resources. integration? **Type** [`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/integration/#inputs)` | (args: `[`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/integration/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway REST API integration resource. method? **Type** [`MethodArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/method/#inputs)` | (args: `[`MethodArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/method/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway REST API method resource. ## ApiGatewayV1UsagePlanArgs ### quota? **Type** `Input` - [`limit`](#quota-limit) - [`offset?`](#quota-offset) - [`period`](#quota-period) Configure a cap on the total number of requests allowed within a specified time period. ```js { quota: { limit: 1000, period: "month", offset: 0 } } ``` limit **Type** `Input` The maximum number of requests that can be made in the specified period of time. offset? **Type** `Input` The number of days into the period when the quota counter is reset. For example, this resets the quota at the beginning of each month. ```js { period: "month", offset: 0 } ``` period **Type** `Input<"day" | "week" | "month">` The time period for which the quota applies. ### throttle? **Type** `Input` - [`burst?`](#throttle-burst) - [`rate?`](#throttle-rate) Configure rate limits to protect your API from being overwhelmed by too many requests at once. ```js { throttle: { rate: 100, burst: 200 } } ``` burst? 
**Type** `Input` The maximum number of requests permitted in a short-term spike beyond the rate limit. rate? **Type** `Input` The steady-state maximum number of requests allowed per second. --- ## ApiGatewayV2Authorizer Reference doc for the `sst.aws.ApiGatewayV2Authorizer` component. https://sst.dev/docs/component/aws/apigatewayv2-authorizer The `ApiGatewayV2Authorizer` component is internally used by the `ApiGatewayV2` component to add authorizers to [Amazon API Gateway HTTP API](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `addAuthorizer` method of the `ApiGatewayV2` component. --- ## Constructor ```ts new ApiGatewayV2Authorizer(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`AuthorizerArgs`](#authorizerargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### id **Type** `Output` The ID of the authorizer. ### nodes **Type** `Object` - [`authorizer`](#nodes-authorizer) The underlying [resources](/docs/components/#nodes) this component creates. authorizer **Type** [`Authorizer`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/authorizer/) The API Gateway V2 authorizer. ## AuthorizerArgs ### api **Type** `Input` - [`executionArn`](#api-executionarn) - [`id`](#api-id) - [`name`](#api-name) The API Gateway to use for the route. executionArn **Type** `Input` The execution ARN of the API Gateway. id **Type** `Input` The ID of the API Gateway. name **Type** `Input` The name of the API Gateway. ### jwt? **Type** `Input` - [`audiences`](#jwt-audiences) - [`identitySource?`](#jwt-identitysource) - [`issuer`](#jwt-issuer) Create a JWT or JSON Web Token authorizer that can be used by the routes. Configure JWT auth. 
```js { jwt: { issuer: "https://issuer.com/", audiences: ["https://api.example.com"], identitySource: "$request.header.AccessToken" } } ``` You can also use Cognito as the identity provider. ```js { jwt: { audiences: [userPoolClient.id], issuer: $interpolate`https://cognito-idp.${aws.getArnOutput(userPool).region}.amazonaws.com/${userPool.id}`, } } ``` Where `userPool` and `userPoolClient` are: ```js const userPool = new aws.cognito.UserPool("MyUserPool"); const userPoolClient = new aws.cognito.UserPoolClient("MyUserPoolClient", { userPoolId: userPool.id }); ``` audiences **Type** `Input<Input<string>[]>` List of the intended recipients of the JWT. A valid JWT must provide an `aud` that matches at least one entry in this list. identitySource? **Type** `Input` **Default** `"$request.header.Authorization"` Specifies where to extract the JWT from the request. issuer **Type** `Input` Base domain of the identity provider that issues JSON Web Tokens. ```js { issuer: "https://issuer.com/" } ``` ### lambda? **Type** `Input` - [`function`](#lambda-function) - [`identitySources?`](#lambda-identitysources) - [`payload?`](#lambda-payload) - [`response?`](#lambda-response) - [`ttl?`](#lambda-ttl) Create a Lambda authorizer that can be used by the routes. Configure Lambda auth. ```js { lambda: { function: "src/authorizer.index" } } ``` function **Type** `Input` The Lambda authorizer function. Takes the handler path or the function args. Add a simple authorizer. ```js { function: "src/authorizer.index" } ``` Customize the authorizer handler. ```js { function: { handler: "src/authorizer.index", memory: "2048 MB" } } ``` identitySources? **Type** `Input<Input<string>[]>` **Default** `["$request.header.Authorization"]` Specifies where to extract the identity from. ```js { identitySources: ["$request.header.RequestToken"] } ``` payload? **Type** `Input<"1.0" | "2.0">` **Default** `"2.0"` The JWT payload version. ```js { payload: "2.0" } ``` response? **Type** `Input<"simple" | "iam">` **Default** `"simple"` The response type. ```js { response: "iam" } ``` ttl?
**Type** `Input<"${number} minute" | "${number} minutes" | "${number} hour" | "${number} hours" | "${number} second" | "${number} seconds">` **Default** Not cached The time to live (TTL) for the authorizer. ```js { ttl: "300 seconds" } ``` ### name **Type** `string` The name of the authorizer. ```js { name: "myAuthorizer" } ``` ### transform? **Type** `Object` - [`authorizer?`](#transform-authorizer) [Transform](/docs/components#transform) how this component creates its underlying resources. authorizer? **Type** [`AuthorizerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/authorizer/#inputs)` | (args: `[`AuthorizerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/authorizer/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway authorizer resource. ### type **Type** `"websocket" | "http"` The type of the API Gateway. --- ## ApiGatewayV2LambdaRoute Reference doc for the `sst.aws.ApiGatewayV2LambdaRoute` component. https://sst.dev/docs/component/aws/apigatewayv2-lambda-route The `ApiGatewayV2LambdaRoute` component is internally used by the `ApiGatewayV2` component to add routes to your [API Gateway HTTP API](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `route` method of the `ApiGatewayV2` component. --- ## Constructor ```ts new ApiGatewayV2LambdaRoute(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`Args`](#args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`integration`](#nodes-integration) - [`permission`](#nodes-permission) - [`route`](#nodes-route) - [`function`](#nodes-function) The underlying [resources](/docs/components/#nodes) this component creates.
integration **Type** [`Integration`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/) The API Gateway HTTP API integration. permission **Type** [`Permission`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/permission/) The Lambda permission. route **Type** `Output<`[`Route`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/)`>` The API Gateway HTTP API route. function **Type** `Output<`[`Function`](/docs/component/aws/function)`>` The Lambda function. ## Args ### api **Type** `Input` - [`executionArn`](#api-executionarn) - [`id`](#api-id) - [`name`](#api-name) The API Gateway to use for the route. executionArn **Type** `Input` The execution ARN of the API Gateway. id **Type** `Input` The ID of the API Gateway. name **Type** `Input` The name of the API Gateway. ### auth? **Type** `Input` - [`iam?`](#auth-iam) - [`jwt?`](#auth-jwt) `Input` - [`authorizer`](#auth-jwt-authorizer) - [`scopes?`](#auth-jwt-scopes) - [`lambda?`](#auth-lambda) **Default** `false` Enable auth for your HTTP API. By default, auth is disabled. ```js { auth: { iam: true } } ``` iam? **Type** `Input` Enable IAM authorization for a given API route. When IAM auth is enabled, clients need to use Signature Version 4 to sign their requests with their AWS credentials. jwt? **Type** `Input` Enable JWT or JSON Web Token authorization for a given API route. When JWT auth is enabled, clients need to include a valid JWT in their requests. You can configure JWT auth. ```js { auth: { jwt: { authorizer: myAuthorizer.id, scopes: ["read:profile", "write:profile"] } } } ``` Where `myAuthorizer` is created by calling the `addAuthorizer` method. authorizer **Type** `Input` Authorizer ID of the JWT authorizer. scopes? **Type** `Input<Input<string>[]>` Defines the permissions or access levels that the JWT grants. If the JWT does not have the required scope, the request is rejected. By default it does not require any scopes. lambda?
**Type** `Input` Enable custom Lambda authorization for a given API route. Pass in the authorizer ID. ```js { auth: { lambda: myAuthorizer.id } } ``` Where `myAuthorizer` is created by calling the `addAuthorizer` method. ### handler **Type** `Input` The route function. Takes the handler path, the function args, or a function ARN. ### handlerLink? **Type** `Input` The resources to link to the route function. ### route **Type** `Input` The path for the route. ### transform? **Type** `Object` - [`integration?`](#transform-integration) - [`route?`](#transform-route) [Transform](/docs/components#transform) how this component creates its underlying resources. integration? **Type** [`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/#inputs)` | (args: `[`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway HTTP API integration resource. route? **Type** [`RouteArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/#inputs)` | (args: `[`RouteArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway HTTP API route resource. --- ## ApiGatewayV2PrivateRoute Reference doc for the `sst.aws.ApiGatewayV2PrivateRoute` component. https://sst.dev/docs/component/aws/apigatewayv2-private-route The `ApiGatewayV2PrivateRoute` component is internally used by the `ApiGatewayV2` component to add routes to [Amazon API Gateway HTTP API](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api.html). :::note This component is not intended to be created directly. 
::: You'll find this component returned by the `routePrivate` method of the `ApiGatewayV2` component. --- ## Constructor ```ts new ApiGatewayV2PrivateRoute(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`Args`](#args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`integration`](#nodes-integration) - [`route`](#nodes-route) The underlying [resources](/docs/components/#nodes) this component creates. integration **Type** [`Integration`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/) The API Gateway HTTP API integration. route **Type** `Output<`[`Route`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/)`>` The API Gateway HTTP API route. ## Args ### api **Type** `Input` - [`executionArn`](#api-executionarn) - [`id`](#api-id) - [`name`](#api-name) The API Gateway to use for the route. executionArn **Type** `Input` The execution ARN of the API Gateway. id **Type** `Input` The ID of the API Gateway. name **Type** `Input` The name of the API Gateway. ### arn **Type** `Input` The ARN of the AWS Load Balancer or Cloud Map service. ```js { arn: "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-load-balancer/50dc6c495c0c9188" } ``` ### auth? **Type** `Input` - [`iam?`](#auth-iam) - [`jwt?`](#auth-jwt) `Input` - [`authorizer`](#auth-jwt-authorizer) - [`scopes?`](#auth-jwt-scopes) - [`lambda?`](#auth-lambda) **Default** `false` Enable auth for your HTTP API. By default, auth is disabled. ```js { auth: { iam: true } } ``` iam? **Type** `Input` Enable IAM authorization for a given API route. When IAM auth is enabled, clients need to use Signature Version 4 to sign their requests with their AWS credentials. jwt? **Type** `Input` Enable JWT or JSON Web Token authorization for a given API route. When JWT auth is enabled, clients need to include a valid JWT in their requests. 
You can configure JWT auth. ```js { auth: { jwt: { authorizer: myAuthorizer.id, scopes: ["read:profile", "write:profile"] } } } ``` Where `myAuthorizer` is created by calling the `addAuthorizer` method. authorizer **Type** `Input` Authorizer ID of the JWT authorizer. scopes? **Type** `Input[]>` Defines the permissions or access levels that the JWT grants. If the JWT does not have the required scope, the request is rejected. By default it does not require any scopes. lambda? **Type** `Input` Enable custom Lambda authorization for a given API route. Pass in the authorizer ID. ```js { auth: { lambda: myAuthorizer.id } } ``` Where `myAuthorizer` is created by calling the `addAuthorizer` method. ### route **Type** `Input` The path for the route. ### transform? **Type** `Object` - [`integration?`](#transform-integration) - [`route?`](#transform-route) [Transform](/docs/components#transform) how this component creates its underlying resources. integration? **Type** [`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/#inputs)` | (args: `[`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway HTTP API integration resource. route? **Type** [`RouteArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/#inputs)` | (args: `[`RouteArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway HTTP API route resource. ### vpcLink **Type** `Input` The ID of the VPC link. ```js { vpcLink: "vpcl-0123456789abcdef" } ``` --- ## ApiGatewayV2UrlRoute Reference doc for the `sst.aws.ApiGatewayV2UrlRoute` component. 
https://sst.dev/docs/component/aws/apigatewayv2-url-route The `ApiGatewayV2UrlRoute` component is internally used by the `ApiGatewayV2` component to add routes to [Amazon API Gateway HTTP API](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `routeUrl` method of the `ApiGatewayV2` component. --- ## Constructor ```ts new ApiGatewayV2UrlRoute(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`Args`](#args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`integration`](#nodes-integration) - [`route`](#nodes-route) The underlying [resources](/docs/components/#nodes) this component creates. integration **Type** [`Integration`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/) The API Gateway HTTP API integration. route **Type** `Output<`[`Route`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/)`>` The API Gateway HTTP API route. ## Args ### api **Type** `Input` - [`executionArn`](#api-executionarn) - [`id`](#api-id) - [`name`](#api-name) The API Gateway to use for the route. executionArn **Type** `Input` The execution ARN of the API Gateway. id **Type** `Input` The ID of the API Gateway. name **Type** `Input` The name of the API Gateway. ### auth? **Type** `Input` - [`iam?`](#auth-iam) - [`jwt?`](#auth-jwt) `Input` - [`authorizer`](#auth-jwt-authorizer) - [`scopes?`](#auth-jwt-scopes) - [`lambda?`](#auth-lambda) **Default** `false` Enable auth for your HTTP API. By default, auth is disabled. ```js { auth: { iam: true } } ``` iam? **Type** `Input` Enable IAM authorization for a given API route. When IAM auth is enabled, clients need to use Signature Version 4 to sign their requests with their AWS credentials. jwt? 
**Type** `Input` Enable JWT or JSON Web Token authorization for a given API route. When JWT auth is enabled, clients need to include a valid JWT in their requests. You can configure JWT auth. ```js { auth: { jwt: { authorizer: myAuthorizer.id, scopes: ["read:profile", "write:profile"] } } } ``` Where `myAuthorizer` is created by calling the `addAuthorizer` method. authorizer **Type** `Input` Authorizer ID of the JWT authorizer. scopes? **Type** `Input[]>` Defines the permissions or access levels that the JWT grants. If the JWT does not have the required scope, the request is rejected. By default it does not require any scopes. lambda? **Type** `Input` Enable custom Lambda authorization for a given API route. Pass in the authorizer ID. ```js { auth: { lambda: myAuthorizer.id } } ``` Where `myAuthorizer` is created by calling the `addAuthorizer` method. ### route **Type** `Input` The path for the route. ### transform? **Type** `Object` - [`integration?`](#transform-integration) - [`route?`](#transform-route) [Transform](/docs/components#transform) how this component creates its underlying resources. integration? **Type** [`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/#inputs)` | (args: `[`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway HTTP API integration resource. route? **Type** [`RouteArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/#inputs)` | (args: `[`RouteArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway HTTP API route resource. ### url **Type** `Input` The URL to route to. 
```js { url: "https://example.com" } ``` --- ## ApiGatewayV2 Reference doc for the `sst.aws.ApiGatewayV2` component. https://sst.dev/docs/component/aws/apigatewayv2 The `ApiGatewayV2` component lets you add an [Amazon API Gateway HTTP API](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api.html) to your app. #### Create the API ```ts title="sst.config.ts" const api = new sst.aws.ApiGatewayV2("MyApi"); ``` #### Add a custom domain ```js {2} title="sst.config.ts" new sst.aws.ApiGatewayV2("MyApi", { domain: "api.example.com" }); ``` #### Add routes ```ts title="sst.config.ts" api.route("GET /", "src/get.handler"); api.route("POST /", "src/post.handler"); ``` #### Configure the routes You can configure each route. ```ts title="sst.config.ts" api.route("GET /", "src/get.handler", { auth: { iam: true } }); ``` #### Configure the route handler You can configure the route handler function. ```ts title="sst.config.ts" api.route("POST /", { handler: "src/post.handler", memory: "2048 MB" }); ``` #### Default props for all routes You can use the `transform` to set some default props for all your routes. For example, instead of setting the `memory` for each route: ```ts title="sst.config.ts" api.route("GET /", { handler: "src/get.handler", memory: "2048 MB" }); api.route("POST /", { handler: "src/post.handler", memory: "2048 MB" }); ``` You can set it once through the `transform`: ```ts title="sst.config.ts" {6} const api = new sst.aws.ApiGatewayV2("MyApi", { transform: { route: { handler: (args, opts) => { // Set the default if it's not set by the route args.memory ??= "2048 MB"; } } } }); api.route("GET /", "src/get.handler"); api.route("POST /", "src/post.handler"); ``` This sets the `memory` for any route that doesn't override it. --- ## Constructor ```ts new ApiGatewayV2(name, args?, opts?) 
``` #### Parameters - `name` `string` - `args?` [`ApiGatewayV2Args`](#apigatewayv2args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## ApiGatewayV2Args ### accessLog? **Type** `Input` - [`retention?`](#accesslog-retention) **Default** `{retention: "1 month"}` Configure the [API Gateway logs](https://docs.aws.amazon.com/apigateway/latest/developerguide/view-cloudwatch-log-events-in-cloudwatch-console.html) in CloudWatch. By default, access logs are enabled and kept for 1 month. ```js { accessLog: { retention: "forever" } } ``` retention? **Type** `Input<"1 day" | "3 days" | "5 days" | "1 week" | "2 weeks" | "1 month" | "2 months" | "3 months" | "4 months" | "5 months" | "6 months" | "1 year" | "13 months" | "18 months" | "2 years" | "3 years" | "5 years" | "6 years" | "7 years" | "8 years" | "9 years" | "10 years" | "forever">` **Default** `1 month` The duration the API Gateway logs are kept in CloudWatch. ### cors? **Type** `Input` - [`allowCredentials?`](#cors-allowcredentials) - [`allowHeaders?`](#cors-allowheaders) - [`allowMethods?`](#cors-allowmethods) - [`allowOrigins?`](#cors-alloworigins) - [`exposeHeaders?`](#cors-exposeheaders) - [`maxAge?`](#cors-maxage) **Default** `true` Customize the CORS (Cross-origin resource sharing) settings for your HTTP API. Disable CORS. ```js { cors: false } ``` Only enable the `GET` and `POST` methods for `https://example.com`. ```js { cors: { allowMethods: ["GET", "POST"], allowOrigins: ["https://example.com"] } } ``` allowCredentials? **Type** `Input` **Default** `false` Allow cookies or other credentials in requests to the HTTP API. ```js { cors: { allowCredentials: true } } ``` allowHeaders? **Type** `Input[]>` **Default** `["*"]` The HTTP headers that origins can include in requests to the HTTP API. ```js { cors: { allowHeaders: ["date", "keep-alive", "x-custom-header"] } } ``` allowMethods? 
**Type** `Input<string[]>` **Default** `["*"]` The HTTP methods that are allowed when calling the HTTP API. ```js { cors: { allowMethods: ["GET", "POST", "DELETE"] } } ``` Or the wildcard for all methods. ```js { cors: { allowMethods: ["*"] } } ``` allowOrigins? **Type** `Input<string[]>` **Default** `["*"]` The origins that can access the HTTP API. ```js { cors: { allowOrigins: ["https://www.example.com", "http://localhost:60905"] } } ``` Or the wildcard for all origins. ```js { cors: { allowOrigins: ["*"] } } ``` exposeHeaders? **Type** `Input<string[]>` **Default** `[]` The HTTP headers you want to expose in your function to an origin that calls the HTTP API. ```js { cors: { exposeHeaders: ["date", "keep-alive", "x-custom-header"] } } ``` maxAge? **Type** `Input<"${number} minute" | "${number} minutes" | "${number} hour" | "${number} hours" | "${number} second" | "${number} seconds" | "${number} day" | "${number} days">` **Default** `"0 seconds"` The maximum amount of time the browser can cache results of a preflight request. By default the browser doesn't cache the results. The maximum value is `86400 seconds` or `1 day`. ```js { cors: { maxAge: "1 day" } } ``` ### domain? **Type** `Input` - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name?`](#domain-name) - [`nameId?`](#domain-nameid) - [`path?`](#domain-path) Set a custom domain for your HTTP API. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel; manual setup for other providers. ::: By default this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. 
By default, a certificate is created and validated automatically. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the API Gateway URL. ```js { domain: { name: "example.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to AWS Route 53. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare; this needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel; this needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name? **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` nameId? **Type** `Input` Use an existing API Gateway domain name. By default, a new API Gateway domain name is created. 
If you'd like to use an existing domain name, set the `nameId` to the ID of the domain name and **do not** pass in `name`. ```js { domain: { nameId: "example.com" } } ``` path? **Type** `Input` The base mapping for the custom domain. This adds a suffix to the URL of the API. Given the following base path and domain name. ```js { domain: { name: "api.example.com", path: "v1" } } ``` The full URL of the API will be `https://api.example.com/v1/`. :::note There's an extra trailing slash when a base path is set. ::: By default there is no base path, so if the `name` is `api.example.com`, the full URL will be `https://api.example.com`. ### link? **Type** `Input` [Link resources](/docs/linking/) to all your API Gateway routes. Linked resources will be merged with the resources linked to each route. Takes a list of resources to link to all the routes. ```js { link: [bucket, stripeKey] } ``` ### transform? **Type** `Object` - [`api?`](#transform-api) - [`domainName?`](#transform-domainname) - [`logGroup?`](#transform-loggroup) - [`route?`](#transform-route) `Object` - [`args?`](#transform-route-args) - [`handler?`](#transform-route-handler) - [`stage?`](#transform-stage) - [`vpcLink?`](#transform-vpclink) [Transform](/docs/components#transform) how this component creates its underlying resources. api? **Type** [`ApiArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/api/#inputs)` | (args: `[`ApiArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/api/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway HTTP API resource. domainName? 
**Type** [`DomainNameArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/domainname/#inputs)` | (args: `[`DomainNameArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/domainname/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway HTTP API domain name resource. logGroup? **Type** [`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)` | (args: `[`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudWatch LogGroup resource used for access logs. route? **Type** `Object` Transform the routes. This is called for every route that is added. :::note This is applied right before the resource is created. ::: You can use this to set any default props for all the routes and their handler function. Like the other transforms, you can either pass in an object or a callback. Here we are setting a default memory of `2048 MB` for our routes. ```js { transform: { route: { handler: (args, opts) => { // Set the default if it's not set by the route args.memory ??= "2048 MB"; } } } } ``` Defaulting to IAM auth for all our routes. ```js { transform: { route: { args: (props) => { // Set the default if it's not set by the route props.auth ??= { iam: true }; } } } } ``` args? **Type** [`ApiGatewayV2RouteArgs`](#apigatewayv2routeargs)` | (args: `[`ApiGatewayV2RouteArgs`](#apigatewayv2routeargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the arguments for the route. handler? 
**Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the handler function of the route. stage? **Type** [`StageArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/stage/#inputs)` | (args: `[`StageArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/stage/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway HTTP API stage resource. vpcLink? **Type** [`VpcLinkArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/vpclink/#inputs)` | (args: `[`VpcLinkArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/vpclink/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway HTTP API VPC link resource. ### vpc? **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`securityGroups`](#vpc-securitygroups) - [`subnets`](#vpc-subnets) Configure the API to connect to private resources in a virtual private cloud or VPC. This creates a VPC link for your HTTP API. Create a `Vpc` component. ```js title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` Or reference an existing VPC. ```js title="sst.config.ts" const myVpc = sst.aws.Vpc.get("MyVpc", "vpc-12345678901234567"); ``` And pass it in. The VPC link will be placed in the public subnets. ```js { vpc: myVpc } ``` The above is equivalent to: ```js { vpc: { securityGroups: myVpc.securityGroups, subnets: myVpc.publicSubnets } } ``` securityGroups **Type** `Input[]>` A list of VPC security group IDs. subnets **Type** `Input[]>` A list of VPC subnet IDs. 
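For example, to place the VPC link in your private subnets instead of the default public ones, you can pass in the security groups and subnets explicitly. This is a sketch that assumes `myVpc` is the `Vpc` component created above and that its `privateSubnets` output is where you want the VPC link placed. ```js { vpc: { // Reuse the VPC's security groups, but use the private subnets securityGroups: myVpc.securityGroups, subnets: myVpc.privateSubnets } } ```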
## Properties ### nodes **Type** `Object` - [`api`](#nodes-api) - [`logGroup`](#nodes-loggroup) - [`vpcLink`](#nodes-vpclink) - [`domainName`](#nodes-domainname) The underlying [resources](/docs/components/#nodes) this component creates. api **Type** [`Api`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/api/) The Amazon API Gateway HTTP API. logGroup **Type** [`LogGroup`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/) The CloudWatch LogGroup for the access logs. vpcLink **Type** `undefined | `[`VpcLink`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/vpclink/) The API Gateway HTTP API VPC link. domainName **Type** `Output<`[`DomainName`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/domainname/)`>` The API Gateway HTTP API domain name. ### url **Type** `Output` The URL of the API. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated API Gateway URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `string` The URL of the API. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated API Gateway URL. ## Methods ### addAuthorizer ```ts addAuthorizer(args) ``` #### Parameters - `args` [`ApiGatewayV2AuthorizerArgs`](#apigatewayv2authorizerargs) Configure the authorizer. **Returns** [`ApiGatewayV2Authorizer`](/docs/component/aws/apigatewayv2-authorizer) Add an authorizer to the API Gateway HTTP API. Add a Lambda authorizer. ```js title="sst.config.ts" api.addAuthorizer({ name: "myAuthorizer", lambda: { function: "src/authorizer.index" } }); ``` Add a JWT authorizer. 
```js title="sst.config.ts" const authorizer = api.addAuthorizer({ name: "myAuthorizer", jwt: { issuer: "https://issuer.com/", audiences: ["https://api.example.com"], identitySource: "$request.header.AccessToken" } }); ``` Add a Cognito UserPool as a JWT authorizer. ```js title="sst.config.ts" const pool = new sst.aws.CognitoUserPool("MyUserPool"); const poolClient = pool.addClient("Web"); const authorizer = api.addAuthorizer({ name: "myCognitoAuthorizer", jwt: { issuer: $interpolate`https://cognito-idp.${aws.getRegionOutput().region}.amazonaws.com/${pool.id}`, audiences: [poolClient.id] } }); ``` Now you can use the authorizer in your routes. ```js title="sst.config.ts" api.route("GET /", "src/get.handler", { auth: { jwt: { authorizer: authorizer.id } } }); ``` ### route ```ts route(rawRoute, handler, args?) ``` #### Parameters - `rawRoute` `string` The path for the route. - `handler` `Input` The function that'll be invoked. - `args?` [`ApiGatewayV2RouteArgs`](#apigatewayv2routeargs) Configure the route. **Returns** [`ApiGatewayV2LambdaRoute`](/docs/component/aws/apigatewayv2-lambda-route) Add a route to the API Gateway HTTP API. The route is a combination of: - An HTTP method and a path, `{METHOD} /{path}`. - Or a `$default` route. :::caution [API Gateway has strict rate limits](https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html) for creating and updating resources. Creating one Lambda function for every endpoint can significantly slow down your deployments. Use a single Lambda and handle routing in code if you don't need specific API Gateway features. ::: :::tip The `$default` route is a default or catch-all route. It'll match if no other route matches. ::: A method could be one of `GET`, `POST`, `PUT`, `DELETE`, `PATCH`, `HEAD`, `OPTIONS`, or `ANY`. Here `ANY` matches any HTTP method. The path can be a combination of: - Literal segments, `/notes`, `/notes/new`, etc. 
- Parameter segments, `/notes/{noteId}`, `/notes/{noteId}/attachments/{attachmentId}`, etc. - Greedy segments, `/{proxy+}`, `/notes/{proxy+}`, etc. :::tip The `{proxy+}` segment is a greedy segment that matches all child paths. It needs to be at the end of the path. ::: The `$default` is a reserved keyword for the default route. When a request comes in, API Gateway looks for the most specific match; if no route matches, the `$default` route is invoked. :::note You cannot have duplicate routes. ::: Add a simple route. ```js title="sst.config.ts" api.route("GET /", "src/get.handler"); ``` Match any HTTP method. ```js title="sst.config.ts" api.route("ANY /", "src/route.handler"); ``` Add a default or fallback route. Here for every request other than `GET /`, the `$default` route will be invoked. ```js title="sst.config.ts" api.route("GET /", "src/get.handler"); api.route("$default", "src/default.handler"); ``` Add a parameterized route. ```js title="sst.config.ts" api.route("GET /notes/{id}", "src/get.handler"); ``` Add a greedy route. ```js title="sst.config.ts" api.route("GET /notes/{proxy+}", "src/greedy.handler"); ``` Enable auth for a route. ```js title="sst.config.ts" api.route("GET /", "src/get.handler"); api.route("POST /", "src/post.handler", { auth: { iam: true } }); ``` Customize the route handler. ```js title="sst.config.ts" api.route("GET /", { handler: "src/get.handler", memory: "2048 MB" }); ``` Or pass in the ARN of an existing Lambda function. ```js title="sst.config.ts" api.route("GET /", "arn:aws:lambda:us-east-1:123456789012:function:my-function"); ``` ### routePrivate ```ts routePrivate(rawRoute, arn, args?) ``` #### Parameters - `rawRoute` `string` The path for the route. - `arn` `Input` The ARN of the AWS Load Balancer or Cloud Map service. - `args?` [`ApiGatewayV2RouteArgs`](#apigatewayv2routeargs) Configure the route. 
**Returns** [`ApiGatewayV2PrivateRoute`](/docs/component/aws/apigatewayv2-private-route) Add a private route to the API Gateway HTTP API. To add private routes, you need to have a VPC link. Make sure to pass in a `vpc`. Learn more about [adding private routes](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-private.html). :::tip You need to pass `vpc` to add a private route. ::: A few things to note: 1. Your API Gateway HTTP API also needs to be in the **same VPC** as the service. 2. You also need to verify that your VPC's [**availability zones support VPC link**](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vpc-links.html#http-api-vpc-link-availability). 3. Run `aws ec2 describe-availability-zones` to get a list of AZs for your account. 4. Only list the AZ IDs that support VPC link. ```ts title="sst.config.ts" {4} vpc: { az: ["eu-west-3a", "eu-west-3c"] } ``` If the VPC automatically picks an AZ that doesn't support VPC link, you'll get the following error: ``` operation error ApiGatewayV2: BadRequestException: Subnet is in Availability Zone 'euw3-az2' where service is not available ``` Here are a few examples using the private route. Add a route to an Application Load Balancer. ```js title="sst.config.ts" const loadBalancerArn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-load-balancer/50dc6c495c0c9188"; api.routePrivate("GET /", loadBalancerArn); ``` Add a route to an AWS Cloud Map service. ```js title="sst.config.ts" const serviceArn = "arn:aws:servicediscovery:us-east-2:123456789012:service/srv-id?stage=prod&deployment=green_deployment"; api.routePrivate("GET /", serviceArn); ``` Enable IAM authentication for a route. ```js title="sst.config.ts" api.routePrivate("GET /", serviceArn, { auth: { iam: true } }); ``` ### routeUrl ```ts routeUrl(rawRoute, url, args?) ``` #### Parameters - `rawRoute` `string` The path for the route. 
- `url` `Input` The URL to forward to. - `args?` [`ApiGatewayV2RouteArgs`](#apigatewayv2routeargs) Configure the route. **Returns** [`ApiGatewayV2UrlRoute`](/docs/component/aws/apigatewayv2-url-route) Add a URL route to the API Gateway HTTP API. Add a simple route. ```js title="sst.config.ts" api.routeUrl("GET /", "https://google.com"); ``` Enable auth for a route. ```js title="sst.config.ts" api.routeUrl("POST /", "https://google.com", { auth: { iam: true } }); ``` ## ApiGatewayV2AuthorizerArgs ### jwt? **Type** `Input` - [`audiences`](#jwt-audiences) - [`identitySource?`](#jwt-identitysource) - [`issuer`](#jwt-issuer) Create a JWT or JSON Web Token authorizer that can be used by the routes. Configure JWT auth. ```js { jwt: { issuer: "https://issuer.com/", audiences: ["https://api.example.com"], identitySource: "$request.header.AccessToken" } } ``` You can also use Cognito as the identity provider. ```js { jwt: { audiences: [userPoolClient.id], issuer: $interpolate`https://cognito-idp.${aws.getArnOutput({ arn: userPool.arn }).region}.amazonaws.com/${userPool.id}` } } ``` Where `userPool` and `userPoolClient` are: ```js const userPool = new aws.cognito.UserPool("MyUserPool"); const userPoolClient = new aws.cognito.UserPoolClient("MyUserPoolClient", { userPoolId: userPool.id }); ``` audiences **Type** `Input<string[]>` List of the intended recipients of the JWT. A valid JWT must provide an `aud` that matches at least one entry in this list. identitySource? **Type** `Input` **Default** `"$request.header.Authorization"` Specifies where to extract the JWT from the request. issuer **Type** `Input` Base domain of the identity provider that issues JSON Web Tokens. ```js { issuer: "https://issuer.com/" } ``` ### lambda? **Type** `Input` - [`function`](#lambda-function) - [`identitySources?`](#lambda-identitysources) - [`payload?`](#lambda-payload) - [`response?`](#lambda-response) - [`ttl?`](#lambda-ttl) Create a Lambda authorizer that can be used by the routes. Configure Lambda auth. 
```js { lambda: { function: "src/authorizer.index" } } ``` function **Type** `Input<string | FunctionArgs>` The Lambda authorizer function. Takes the handler path or the function args. Add a simple authorizer. ```js { function: "src/authorizer.index" } ``` Customize the authorizer handler. ```js { function: { handler: "src/authorizer.index", memory: "2048 MB" } } ``` identitySources? **Type** `Input<Input<string>[]>` **Default** `["$request.header.Authorization"]` Specifies where to extract the identity from. ```js { identitySources: ["$request.header.RequestToken"] } ``` payload? **Type** `Input<"1.0" | "2.0">` **Default** `"2.0"` The JWT payload version. ```js { payload: "2.0" } ``` response? **Type** `Input<"simple" | "iam">` **Default** `"simple"` The response type. ```js { response: "iam" } ``` ttl? **Type** `Input<"${number} minute" | "${number} minutes" | "${number} hour" | "${number} hours" | "${number} second" | "${number} seconds">` **Default** Not cached The time to live (TTL) for the authorizer. ```js { ttl: "300 seconds" } ``` ### name **Type** `string` The name of the authorizer. ```js { name: "myAuthorizer" } ``` ### transform? **Type** `Object` - [`authorizer?`](#transform-authorizer) [Transform](/docs/components#transform) how this component creates its underlying resources. authorizer? **Type** [`AuthorizerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/authorizer/#inputs)` | (args: `[`AuthorizerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/authorizer/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway authorizer resource. ## ApiGatewayV2RouteArgs ### auth? **Type** `Input<Object>` - [`iam?`](#auth-iam) - [`jwt?`](#auth-jwt) `Input<Object>` - [`authorizer`](#auth-jwt-authorizer) - [`scopes?`](#auth-jwt-scopes) - [`lambda?`](#auth-lambda) **Default** `false` Enable auth for your HTTP API. By default, auth is disabled. 
```js { auth: { iam: true } } ``` iam? **Type** `Input<boolean>` Enable IAM authorization for a given API route. When IAM auth is enabled, clients need to use Signature Version 4 to sign their requests with their AWS credentials. jwt? **Type** `Input<Object>` Enable JWT or JSON Web Token authorization for a given API route. When JWT auth is enabled, clients need to include a valid JWT in their requests. You can configure JWT auth. ```js { auth: { jwt: { authorizer: myAuthorizer.id, scopes: ["read:profile", "write:profile"] } } } ``` Where `myAuthorizer` is created by calling the `addAuthorizer` method. authorizer **Type** `Input<string>` Authorizer ID of the JWT authorizer. scopes? **Type** `Input<Input<string>[]>` Defines the permissions or access levels that the JWT grants. If the JWT does not have the required scope, the request is rejected. By default it does not require any scopes. lambda? **Type** `Input<string>` Enable custom Lambda authorization for a given API route. Pass in the authorizer ID. ```js { auth: { lambda: myAuthorizer.id } } ``` Where `myAuthorizer` is created by calling the `addAuthorizer` method. ### transform? **Type** `Object` - [`integration?`](#transform-integration) - [`route?`](#transform-route-1) [Transform](/docs/components#transform) how this component creates its underlying resources. integration? **Type** [`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/#inputs)` | (args: `[`IntegrationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/integration/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway HTTP API integration resource. route? 
**Type** [`RouteArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/#inputs)` | (args: `[`RouteArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/route/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the API Gateway HTTP API route resource. --- ## AppSyncDataSource Reference doc for the `sst.aws.AppSyncDataSource` component. https://sst.dev/docs/component/aws/app-sync-data-source The `AppSyncDataSource` component is internally used by the `AppSync` component to add data sources to [AWS AppSync](https://docs.aws.amazon.com/appsync/latest/devguide/what-is-appsync.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `addDataSource` method of the `AppSync` component. --- ## Constructor ```ts new AppSyncDataSource(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`DataSourceArgs`](#datasourceargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### name **Type** `Output<string>` The name of the data source. ### nodes **Type** `Object` - [`dataSource`](#nodes-datasource) - [`function`](#nodes-function) - [`serviceRole`](#nodes-servicerole) The underlying [resources](/docs/components/#nodes) this component creates. dataSource **Type** [`DataSource`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/datasource/) The Amazon AppSync DataSource. function **Type** `Output<`[`Function`](/docs/component/aws/function)`>` The Lambda function used by the data source. serviceRole **Type** [`Role`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/) The DataSource service's IAM role. ## DataSourceArgs ### apiComponentName **Type** `string` The AppSync component name. ### apiId **Type** `Input<string>` The AppSync GraphQL API ID. ### dynamodb? **Type** `Input<string>` The ARN for the DynamoDB table. 
```js { dynamodb: "arn:aws:dynamodb:us-east-1:123456789012:table/my-table" } ``` ### elasticSearch? **Type** `Input<string>` The ARN for the Elasticsearch domain. ```js { elasticSearch: "arn:aws:es:us-east-1:123456789012:domain/my-domain" } ``` ### eventBridge? **Type** `Input<string>` The ARN for the EventBridge event bus. ```js { eventBridge: "arn:aws:events:us-east-1:123456789012:event-bus/my-event-bus" } ``` ### http? **Type** `Input<string>` The URL for the HTTP endpoint. ```js { http: "https://api.example.com" } ``` ### lambda? **Type** `Input<string | FunctionArgs>` The handler for the Lambda function. ```js { lambda: "src/lambda.handler" } ``` You can pass in the full function props. ```js { lambda: { handler: "src/lambda.handler", timeout: "60 seconds" } } ``` You can also pass in the function ARN. ```js { lambda: "arn:aws:lambda:us-east-1:123456789012:function:my-function" } ``` ### name **Type** `string` The name of the data source. ```js { name: "lambdaDS" } ``` ### openSearch? **Type** `Input<string>` The ARN for the OpenSearch domain. ```js { openSearch: "arn:aws:opensearch:us-east-1:123456789012:domain/my-domain" } ``` ### rds? **Type** `Input<Object>` - [`cluster`](#rds-cluster) - [`credentials`](#rds-credentials) Configure the RDS data source. ```js { rds: { cluster: "arn:aws:rds:us-east-1:123456789012:cluster:my-cluster", credentials: "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret" } } ``` cluster **Type** `Input<string>` The ARN for the RDS cluster. credentials **Type** `Input<string>` The ARN for the credentials secret store. ### transform? **Type** `Object` - [`dataSource?`](#transform-datasource) - [`serviceRole?`](#transform-servicerole) [Transform](/docs/components#transform) how this component creates its underlying resources. dataSource? 
**Type** [`DataSourceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/datasource/#inputs)` | (args: `[`DataSourceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/datasource/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AppSync DataSource resource. serviceRole? **Type** [`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)` | (args: `[`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AppSync DataSource service role resource. --- ## AppSyncFunction Reference doc for the `sst.aws.AppSyncFunction` component. https://sst.dev/docs/component/aws/app-sync-function The `AppSyncFunction` component is internally used by the `AppSync` component to add functions to [AWS AppSync](https://docs.aws.amazon.com/appsync/latest/devguide/what-is-appsync.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `addFunction` method of the `AppSync` component. --- ## Constructor ```ts new AppSyncFunction(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`FunctionArgs`](#functionargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`function`](#nodes-function) The underlying [resources](/docs/components/#nodes) this component creates. function **Type** [`Function`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/function/) The Amazon AppSync Function. ## FunctionArgs ### apiId **Type** `Input<string>` The AppSync GraphQL API ID. ### code? **Type** `Input<string>` The function code that contains the request and response functions. 
```js { code: fs.readFileSync("functions.js") } ``` ### dataSource **Type** `Input<string>` The data source this resolver is using. ```js { dataSource: "lambdaDS" } ``` ### name **Type** `string` The name of the AppSync function. ```js { name: "myFunction" } ``` ### requestMappingTemplate? **Type** `Input<string>` The function request mapping template. ```js { requestMappingTemplate: `{ "version": "2018-05-29", "operation": "Scan", }`, } ``` ### responseMappingTemplate? **Type** `Input<string>` The function response mapping template. ```js { responseMappingTemplate: `{ "users": $utils.toJson($context.result.items) }`, } ``` ### transform? **Type** `Object` - [`function?`](#transform-function) [Transform](/docs/components#transform) how this component creates its underlying resources. function? **Type** [`FunctionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/function/#inputs)` | (args: `[`FunctionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/function/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AppSync Function resource. --- ## AppSyncResolver Reference doc for the `sst.aws.AppSyncResolver` component. https://sst.dev/docs/component/aws/app-sync-resolver The `AppSyncResolver` component is internally used by the `AppSync` component to add resolvers to [AWS AppSync](https://docs.aws.amazon.com/appsync/latest/devguide/what-is-appsync.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `addResolver` method of the `AppSync` component. --- ## Constructor ```ts new AppSyncResolver(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`ResolverArgs`](#resolverargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`resolver`](#nodes-resolver) The underlying [resources](/docs/components/#nodes) this component creates. 
resolver **Type** [`Resolver`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/resolver/) The Amazon AppSync Resolver. ## ResolverArgs ### apiId **Type** `Input<string>` The AppSync GraphQL API ID. ### code? **Type** `Input<string>` The function code that contains the request and response functions. ```js { code: fs.readFileSync("functions.js") } ``` ### dataSource? **Type** `Input<string>` The data source this resolver is using. This only applies for `unit` resolvers. ```js { dataSource: "lambdaDS" } ``` ### field **Type** `Input<string>` The field name from the schema defined. ### functions? **Type** `Input<Input<string>[]>` The functions this resolver is using. This only applies for `pipeline` resolvers. ```js { functions: ["myFunction1", "myFunction2"] } ``` ### kind? **Type** `Input<"unit" | "pipeline">` **Default** `"unit"` The type of the resolver. ```js { kind: "pipeline" } ``` ### requestTemplate? **Type** `Input<string>` For `unit` resolvers, this is the request mapping template. And for `pipeline` resolvers, this is the before mapping template. ```js { requestTemplate: `{ "version": "2017-02-28", "operation": "Scan" }` } ``` ### responseTemplate? **Type** `Input<string>` For `unit` resolvers, this is the response mapping template. And for `pipeline` resolvers, this is the after mapping template. ```js { responseTemplate: `{ "users": $utils.toJson($context.result.items) }` } ``` ### transform? **Type** `Object` - [`resolver?`](#transform-resolver) [Transform](/docs/components#transform) how this component creates its underlying resources. resolver? **Type** [`ResolverArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/resolver/#inputs)` | (args: `[`ResolverArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/resolver/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AppSync Resolver resource. ### type **Type** `Input<string>` The type name from the schema defined. 
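As a point of reference, the `type` and `field` values correspond to the two parts of the operation string used when adding a resolver. For example, a resolver registered for the `Query user` operation would, as an illustrative sketch, map to the following args:

```js
// Sketch only: the "type" is the schema type and the "field" is the
// field on that type, matching the "Query user" operation string.
{
  type: "Query",
  field: "user"
}
```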
--- ## AppSync Reference doc for the `sst.aws.AppSync` component. https://sst.dev/docs/component/aws/app-sync The `AppSync` component lets you add an [Amazon AppSync GraphQL API](https://docs.aws.amazon.com/appsync/latest/devguide/what-is-appsync.html) to your app. #### Create a GraphQL API ```ts title="sst.config.ts" const api = new sst.aws.AppSync("MyApi", { schema: "schema.graphql", }); ``` #### Add a data source ```ts title="sst.config.ts" const lambdaDS = api.addDataSource({ name: "lambdaDS", lambda: "src/lambda.handler", }); ``` #### Add a resolver ```ts title="sst.config.ts" api.addResolver("Query user", { dataSource: lambdaDS.name, }); ``` --- ## Constructor ```ts new AppSync(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`AppSyncArgs`](#appsyncargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## AppSyncArgs ### domain? **Type** `Input<string | Object>` - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name`](#domain-name) Set a custom domain for your AppSync GraphQL API. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers. ::: By default this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` cert? **Type** `Input<string>` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. The certificate will be created in the `us-east-1` region as required by AWS AppSync. If you are creating your own certificate, you must also create it in `us-east-1`. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. 
::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the AppSync API URL. ```js { domain: { name: "example.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` dns? **Type** `Input<false | sst.aws.dns | sst.cloudflare.dns | sst.vercel.dns>` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to AWS Route 53. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare; this needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel; this needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name **Type** `Input<string>` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` ### schema **Type** `Input<string>` Path to the GraphQL schema file. This path is relative to your `sst.config.ts`. ```js { schema: "schema.graphql", } ``` ### transform? 
**Type** `Object` - [`api?`](#transform-api) - [`domainName?`](#transform-domainname) [Transform](/docs/components#transform) how this component creates its underlying resources. api? **Type** [`GraphQLApiArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/graphqlapi/#inputs)` | (args: `[`GraphQLApiArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/graphqlapi/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AppSync GraphQL API resource. domainName? **Type** [`DomainNameArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/domainname/#inputs)` | (args: `[`DomainNameArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/domainname/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AppSync domain name resource. ## Properties ### id **Type** `Output<string>` The GraphQL API ID. ### nodes **Type** `Object` - [`api`](#nodes-api) The underlying [resources](/docs/components/#nodes) this component creates. api **Type** [`GraphQLApi`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/graphqlapi/) The Amazon AppSync GraphQL API. ### url **Type** `Output<string>` The URL of the GraphQL API. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `string` The URL of the GraphQL API. ## Methods ### addDataSource ```ts addDataSource(args) ``` #### Parameters - `args` [`AppSyncDataSourceArgs`](#appsyncdatasourceargs) Configure the data source. **Returns** [`AppSyncDataSource`](/docs/component/aws/app-sync-data-source) Add a data source to this AppSync API. Add a Lambda function as a data source. 
```js title="sst.config.ts" api.addDataSource({ name: "lambdaDS", lambda: "src/lambda.handler" }); ``` Customize the Lambda function. ```js title="sst.config.ts" api.addDataSource({ name: "lambdaDS", lambda: { handler: "src/lambda.handler", timeout: "60 seconds" } }); ``` Add a data source with an existing Lambda function. ```js title="sst.config.ts" api.addDataSource({ name: "lambdaDS", lambda: "arn:aws:lambda:us-east-1:123456789012:function:my-function" }); ``` Add a DynamoDB table as a data source. ```js title="sst.config.ts" api.addDataSource({ name: "dynamoDS", dynamodb: "arn:aws:dynamodb:us-east-1:123456789012:table/my-table" }); ``` ### addFunction ```ts addFunction(args) ``` #### Parameters - `args` [`AppSyncFunctionArgs`](#appsyncfunctionargs) Configure the function. **Returns** [`AppSyncFunction`](/docs/component/aws/app-sync-function) Add a function to this AppSync API. Add a function using a Lambda data source. ```js title="sst.config.ts" api.addFunction({ name: "myFunction", dataSource: "lambdaDS", }); ``` Add a function using a DynamoDB data source. ```js title="sst.config.ts" api.addFunction({ name: "myFunction", dataSource: "dynamoDS", requestMappingTemplate: `{ "version": "2017-02-28", "operation": "Scan", }`, responseMappingTemplate: `{ "users": $utils.toJson($context.result.items) }`, }); ``` ### addResolver ```ts addResolver(operation, args) ``` #### Parameters - `operation` `string` The type and name of the operation. - `args` [`AppSyncResolverArgs`](#appsyncresolverargs) Configure the resolver. **Returns** [`AppSyncResolver`](/docs/component/aws/app-sync-resolver) Add a resolver to this AppSync API. Add a resolver using a Lambda data source. ```js title="sst.config.ts" api.addResolver("Query user", { dataSource: "lambdaDS", }); ``` Add a resolver using a DynamoDB data source. 
```js title="sst.config.ts" api.addResolver("Query user", { dataSource: "dynamoDS", requestTemplate: `{ "version": "2017-02-28", "operation": "Scan", }`, responseTemplate: `{ "users": $utils.toJson($context.result.items) }`, }); ``` Add a pipeline resolver. ```js title="sst.config.ts" api.addResolver("Query user", { kind: "pipeline", functions: [ "MyFunction1", "MyFunction2" ], code: ` export function request(ctx) { return {}; } export function response(ctx) { return ctx.result; } `, }); ``` ## AppSyncDataSourceArgs ### dynamodb? **Type** `Input<string>` The ARN for the DynamoDB table. ```js { dynamodb: "arn:aws:dynamodb:us-east-1:123456789012:table/my-table" } ``` ### elasticSearch? **Type** `Input<string>` The ARN for the Elasticsearch domain. ```js { elasticSearch: "arn:aws:es:us-east-1:123456789012:domain/my-domain" } ``` ### eventBridge? **Type** `Input<string>` The ARN for the EventBridge event bus. ```js { eventBridge: "arn:aws:events:us-east-1:123456789012:event-bus/my-event-bus" } ``` ### http? **Type** `Input<string>` The URL for the HTTP endpoint. ```js { http: "https://api.example.com" } ``` ### lambda? **Type** `Input<string | FunctionArgs>` The handler for the Lambda function. ```js { lambda: "src/lambda.handler" } ``` You can pass in the full function props. ```js { lambda: { handler: "src/lambda.handler", timeout: "60 seconds" } } ``` You can also pass in the function ARN. ```js { lambda: "arn:aws:lambda:us-east-1:123456789012:function:my-function" } ``` ### name **Type** `string` The name of the data source. ```js { name: "lambdaDS" } ``` ### openSearch? **Type** `Input<string>` The ARN for the OpenSearch domain. ```js { openSearch: "arn:aws:opensearch:us-east-1:123456789012:domain/my-domain" } ``` ### rds? **Type** `Input<Object>` - [`cluster`](#rds-cluster) - [`credentials`](#rds-credentials) Configure the RDS data source. 
```js { rds: { cluster: "arn:aws:rds:us-east-1:123456789012:cluster:my-cluster", credentials: "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret" } } ``` cluster **Type** `Input<string>` The ARN for the RDS cluster. credentials **Type** `Input<string>` The ARN for the credentials secret store. ### transform? **Type** `Object` - [`dataSource?`](#transform-datasource) - [`serviceRole?`](#transform-servicerole) [Transform](/docs/components#transform) how this component creates its underlying resources. dataSource? **Type** [`DataSourceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/datasource/#inputs)` | (args: `[`DataSourceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/datasource/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AppSync DataSource resource. serviceRole? **Type** [`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)` | (args: `[`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AppSync DataSource service role resource. ## AppSyncFunctionArgs ### code? **Type** `Input<string>` The function code that contains the request and response functions. ```js { code: fs.readFileSync("functions.js") } ``` ### dataSource **Type** `Input<string>` The data source this resolver is using. ```js { dataSource: "lambdaDS" } ``` ### name **Type** `string` The name of the AppSync function. ```js { name: "myFunction" } ``` ### requestMappingTemplate? **Type** `Input<string>` The function request mapping template. ```js { requestMappingTemplate: `{ "version": "2018-05-29", "operation": "Scan", }`, } ``` ### responseMappingTemplate? **Type** `Input<string>` The function response mapping template. ```js { responseMappingTemplate: `{ "users": $utils.toJson($context.result.items) }`, } ``` ### transform? 
**Type** `Object` - [`function?`](#transform-function) [Transform](/docs/components#transform) how this component creates its underlying resources. function? **Type** [`FunctionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/function/#inputs)` | (args: `[`FunctionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/function/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AppSync Function resource. ## AppSyncResolverArgs ### code? **Type** `Input<string>` The function code that contains the request and response functions. ```js { code: fs.readFileSync("functions.js") } ``` ### dataSource? **Type** `Input<string>` The data source this resolver is using. This only applies for `unit` resolvers. ```js { dataSource: "lambdaDS" } ``` ### functions? **Type** `Input<Input<string>[]>` The functions this resolver is using. This only applies for `pipeline` resolvers. ```js { functions: ["myFunction1", "myFunction2"] } ``` ### kind? **Type** `Input<"unit" | "pipeline">` **Default** `"unit"` The type of the resolver. ```js { kind: "pipeline" } ``` ### requestTemplate? **Type** `Input<string>` For `unit` resolvers, this is the request mapping template. And for `pipeline` resolvers, this is the before mapping template. ```js { requestTemplate: `{ "version": "2017-02-28", "operation": "Scan" }` } ``` ### responseTemplate? **Type** `Input<string>` For `unit` resolvers, this is the response mapping template. And for `pipeline` resolvers, this is the after mapping template. ```js { responseTemplate: `{ "users": $utils.toJson($context.result.items) }` } ``` ### transform? **Type** `Object` - [`resolver?`](#transform-resolver) [Transform](/docs/components#transform) how this component creates its underlying resources. resolver? 
**Type** [`ResolverArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/resolver/#inputs)` | (args: `[`ResolverArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appsync/resolver/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AppSync Resolver resource. --- ## Astro Reference doc for the `sst.aws.Astro` component. https://sst.dev/docs/component/aws/astro The `Astro` component lets you deploy an [Astro](https://astro.build) site to AWS. #### Minimal example Deploy the Astro site that's in the project root. ```js title="sst.config.ts" new sst.aws.Astro("MyWeb"); ``` #### Change the path Deploys the Astro site in the `my-astro-app/` directory. ```js {2} title="sst.config.ts" new sst.aws.Astro("MyWeb", { path: "my-astro-app/" }); ``` #### Add a custom domain Set a custom domain for your Astro site. ```js {2} title="sst.config.ts" new sst.aws.Astro("MyWeb", { domain: "my-app.com" }); ``` #### Redirect www to apex domain Redirect `www.my-app.com` to `my-app.com`. ```js {4} title="sst.config.ts" new sst.aws.Astro("MyWeb", { domain: { name: "my-app.com", redirects: ["www.my-app.com"] } }); ``` #### Link resources [Link resources](/docs/linking/) to your Astro site. This will grant permissions to the resources and allow you to access them in your site. ```ts {4} title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.Astro("MyWeb", { link: [bucket] }); ``` You can use the [SDK](/docs/reference/sdk/) to access the linked resources in your Astro site. ```astro title="src/pages/index.astro" --- import { Resource } from "sst"; console.log(Resource.MyBucket.name); --- ``` --- ## Constructor ```ts new Astro(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`AstroArgs`](#astroargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## AstroArgs ### assets? 
**Type** `Input<Object>` - [`fileOptions?`](#assets-fileoptions) `Input<Object[]>` - [`cacheControl?`](#assets-fileoptions-cachecontrol) - [`contentType?`](#assets-fileoptions-contenttype) - [`files`](#assets-fileoptions-files) - [`ignore?`](#assets-fileoptions-ignore) - [`nonVersionedFilesCacheHeader?`](#assets-nonversionedfilescacheheader) - [`purge?`](#assets-purge) - [`textEncoding?`](#assets-textencoding) - [`versionedFilesCacheHeader?`](#assets-versionedfilescacheheader) Configure how the Astro site assets are uploaded to S3. By default, this is set to the following. Read more about these options below. ```js { assets: { textEncoding: "utf-8", versionedFilesCacheHeader: "public,max-age=31536000,immutable", nonVersionedFilesCacheHeader: "public,max-age=0,s-maxage=86400,stale-while-revalidate=8640" } } ``` fileOptions? **Type** `Input<Object[]>` Specify the `Content-Type` and `Cache-Control` headers for specific files. This allows you to override the default behavior for specific files using glob patterns. Apply `Cache-Control` and `Content-Type` to all zip files. ```js { assets: { fileOptions: [ { files: "**/*.zip", contentType: "application/zip", cacheControl: "private,no-cache,no-store,must-revalidate" } ] } } ``` Apply `Cache-Control` to all CSS and JS files except for CSS files with `index-` prefix in the `main/` directory. ```js { assets: { fileOptions: [ { files: ["**/*.css", "**/*.js"], ignore: "main/index-*.css", cacheControl: "private,no-cache,no-store,must-revalidate" } ] } } ``` cacheControl? **Type** `string` The `Cache-Control` header to apply to the matched files. contentType? **Type** `string` The `Content-Type` header to apply to the matched files. files **Type** `string | string[]` A glob pattern or array of glob patterns of files to apply these options to. ignore? **Type** `string | string[]` A glob pattern or array of glob patterns of files to exclude from the ones matched by the `files` glob pattern. nonVersionedFilesCacheHeader? 
**Type** `Input<string>` **Default** `"public,max-age=0,s-maxage=86400,stale-while-revalidate=8640"` The `Cache-Control` header used for non-versioned files, like `index.html`. This is used by both CloudFront and the browser cache. The default is set to not cache on browsers, and cache for 1 day on CloudFront. ```js { assets: { nonVersionedFilesCacheHeader: "public,max-age=0,no-cache" } } ``` purge? **Type** `Input<boolean>` **Default** `false` Configure if files from previous deployments should be purged from the bucket. ```js { assets: { purge: false } } ``` textEncoding? **Type** `Input<"utf-8" | "iso-8859-1" | "windows-1252" | "ascii" | "none">` **Default** `"utf-8"` Character encoding for text based assets, like HTML, CSS, JS. This is used to set the `Content-Type` header when these files are served out. If set to `"none"`, then no charset will be returned in the header. ```js { assets: { textEncoding: "iso-8859-1" } } ``` versionedFilesCacheHeader? **Type** `Input<string>` **Default** `"public,max-age=31536000,immutable"` The `Cache-Control` header used for versioned files, like `main-1234.css`. This is used by both CloudFront and the browser cache. The default `max-age` is set to 1 year. ```js { assets: { versionedFilesCacheHeader: "public,max-age=31536000,immutable" } } ``` ### buildCommand? **Type** `Input<string>` **Default** `"npm run build"` The command used internally to build your Astro site. If you want to use a different build command. ```js { buildCommand: "yarn build" } ``` ### cachePolicy? **Type** `Input<string>` **Default** A new cache policy is created Configure the Astro site to use an existing CloudFront cache policy. :::note CloudFront has a limit of 20 cache policies per account, though you can request a limit increase. ::: By default, a new cache policy is created for it. This allows you to reuse an existing policy instead of creating a new one. ```js { cachePolicy: "658327ea-f89d-4fab-a63d-7e88639e58f6" } ``` ### dev? 
**Type** `false | Object` - [`autostart?`](#dev-autostart) - [`command?`](#dev-command) - [`directory?`](#dev-directory) - [`title?`](#dev-title) - [`url?`](#dev-url) Configure how this component works in `sst dev`. :::note In `sst dev` your Astro site is run in dev mode; it's not deployed. ::: Instead of deploying your Astro site, this starts it in dev mode. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev). To disable dev mode, pass in `false`. autostart? **Type** `Input` **Default** `true` Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later. command? **Type** `Input` **Default** `"npm run dev"` The command that `sst dev` runs to start this in dev mode. directory? **Type** `Input` **Default** Uses the `path` Change the directory from where the `command` is run. title? **Type** `Input` The title of the tab in the multiplexer. url? **Type** `Input` **Default** `"http://url-unavailable-in-dev.mode"` The `url` when this is running in dev mode. Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`. ### domain? **Type** `Input` - [`aliases?`](#domain-aliases) - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name`](#domain-name) - [`redirects?`](#domain-redirects) Set a custom domain for your Astro site. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers. ::: By default this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. 
```js
{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}
```

Specify a `www.` version of the custom domain.

```js
{
  domain: {
    name: "domain.com",
    redirects: ["www.domain.com"]
  }
}
```

aliases?

**Type** `Input`

Alias domains that should be used. Unlike the `redirects` option, this keeps your visitors on this alias domain. So if your users visit `app2.domain.com`, they will stay on `app2.domain.com` in their browser.

```js {4}
{
  domain: {
    name: "app1.domain.com",
    aliases: ["app2.domain.com"]
  }
}
```

cert?

**Type** `Input`

The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. The certificate will be created in the `us-east-1` region as required by AWS CloudFront. If you are creating your own certificate, you must also create it in `us-east-1`.

:::tip
You need to pass in a `cert` for domains that are not hosted on supported `dns` providers.
:::

To manually set up a domain on an unsupported provider, you'll need to:

1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner.
2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`.
3. Add the DNS records in your provider to point to the CloudFront distribution URL.

```js
{
  domain: {
    name: "domain.com",
    dns: false,
    cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63"
  }
}
```

dns?

**Type** `Input`

**Default** `sst.aws.dns`

The DNS provider to use for the domain. Defaults to AWS Route 53. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters.
For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare, needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel, needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` redirects? **Type** `Input` Alternate domains to be used. Visitors to the alternate domains will be redirected to the main `name`. :::note Unlike the `aliases` option, this will redirect visitors back to the main `name`. ::: Use this to create a `www.` version of your domain and redirect visitors to the apex domain. ```js {4} { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` ### edge? **Type** `Input` - [`viewerRequest?`](#edge-viewerrequest) `Input` - [`injection`](#edge-viewerrequest-injection) - [`kvStore?`](#edge-viewerrequest-kvstore) - [`viewerResponse?`](#edge-viewerresponse) `Input` - [`injection`](#edge-viewerresponse-injection) - [`kvStore?`](#edge-viewerresponse-kvstore) Configure CloudFront Functions to customize the behavior of HTTP requests and responses at the edge. viewerRequest? **Type** `Input` Configure the viewer request function. The viewer request function can be used to modify incoming requests before they reach your origin server. For example, you can redirect users, rewrite URLs, or add headers. injection **Type** `Input` The code to inject into the viewer request function. 
By default, a viewer request function is created to: - Disable CloudFront default URL if custom domain is set - Add the `x-forwarded-host` header - Route assets requests to S3 (static files stored in the bucket) - Route server requests to server functions (dynamic rendering) The function manages routing by: 1. First checking if the requested path exists in S3 (with variations like adding index.html) 2. Serving a custom 404 page from S3 if configured and the path isn't found 3. Routing image optimization requests to the image optimizer function 4. Routing all other requests to the nearest server function The given code will be injected at the beginning of this function. ```js async function handler(event) { // User injected code // Default behavior code return event.request; } ``` To add a custom header to all requests. ```js { edge: { viewerRequest: { injection: `event.request.headers["x-foo"] = { value: "bar" };` } } } ``` You can use this to add basic auth, [check out an example](/docs/examples/#aws-nextjs-basic-auth). kvStore? **Type** `Input` The KV store to associate with the viewer request function. ```js { edge: { viewerRequest: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` viewerResponse? **Type** `Input` Configure the viewer response function. The viewer response function can be used to modify outgoing responses before they are sent to the client. For example, you can add security headers or change the response status code. By default, no viewer response function is set. A new function will be created with the provided code. injection **Type** `Input` The code to inject into the viewer response function. ```js async function handler(event) { // User injected code return event.response; } ``` To add a custom header to all responses. ```js { edge: { viewerResponse: { injection: `event.response.headers["x-foo"] = { value: "bar" };` } } } ``` kvStore? 
**Type** `Input`

The KV store to associate with the viewer response function.

```js
{
  edge: {
    viewerResponse: {
      kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store"
    }
  }
}
```

### environment?

**Type** `Input>>`

Set [environment variables](https://docs.astro.build/en/guides/environment-variables/) in your Astro site.

:::tip
You can also `link` resources to your Astro site and access them in a type-safe way with the [SDK](/docs/reference/sdk/). We recommend linking since it's more secure.
:::

Recall that in Astro, you need to prefix your environment variables with `PUBLIC_` to access them on the client-side. [Read more here](https://docs.astro.build/en/guides/environment-variables/).

```js
{
  environment: {
    API_URL: api.url,
    // Accessible on the client-side
    PUBLIC_STRIPE_PUBLISHABLE_KEY: "pk_test_123"
  }
}
```

### invalidation?

**Type** `Input`

- [`paths?`](#invalidation-paths)
- [`wait?`](#invalidation-wait)

**Default** `{paths: "all", wait: false}`

Configure how the CloudFront cache invalidations are handled. This is run after your Astro site has been deployed.

:::tip
You get 1000 free invalidations per month. After that you pay $0.005 per invalidation path. [Read more here](https://aws.amazon.com/cloudfront/pricing/).
:::

Wait for all paths to be invalidated.

```js
{
  invalidation: {
    paths: "all",
    wait: true
  }
}
```

paths?

**Type** `Input`

**Default** `"all"`

The paths to invalidate. You can either pass in an array of glob patterns to invalidate specific files. Or you can use one of these built-in options:

- `all`: All files will be invalidated when any file changes
- `versioned`: Only versioned files will be invalidated when versioned files change

:::note
Each glob pattern counts as a single invalidation path. Invalidating `/*`, on the other hand, counts as just one invalidation.
:::

Invalidate the `index.html` and all files under the `products/` route.

```js
{
  invalidation: {
    paths: ["/index.html", "/products/*"]
  }
}
```

This counts as two invalidations.

wait?
**Type** `Input` **Default** `false` Configure if `sst deploy` should wait for the CloudFront cache invalidation to finish. :::tip For non-prod environments it might make sense to pass in `false`. ::: Waiting for this process to finish ensures that new content will be available after the deploy finishes. However, this process can sometimes take more than 5 mins. ```js { invalidation: { wait: true } } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your Astro site. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access it in your site using the [SDK](/docs/reference/sdk/). Takes a list of resources to link to the function. ```js { link: [bucket, stripeKey] } ``` ### path? **Type** `string` **Default** `"."` Path to the directory where your Astro site is located. This path is relative to your `sst.config.ts`. By default it assumes your Astro site is in the root of your SST app. If your Astro site is in a package in your monorepo. ```js { path: "packages/web" } ``` ### permissions? **Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that the [server function](#nodes-server) in your Astro site needs to access. These permissions are used to create the function's IAM role. :::tip If you `link` the function to a resource, the permissions to access it are automatically added. ::: Allow reading and writing to an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] } ] } ``` Perform all actions on an S3 bucket called `my-bucket`. 
```js
{
  permissions: [
    {
      actions: ["s3:*"],
      resources: ["arn:aws:s3:::my-bucket/*"]
    }
  ]
}
```

Grant permissions to access all resources.

```js
{
  permissions: [
    {
      actions: ["*"],
      resources: ["*"]
    }
  ]
}
```

actions

**Type** `string[]`

The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed.

```js
{
  actions: ["s3:*"]
}
```

conditions?

**Type** `Input[]>`

Configure specific conditions for when the policy is in effect.

```js
{
  conditions: [
    {
      test: "StringEquals",
      variable: "s3:x-amz-server-side-encryption",
      values: ["AES256"]
    },
    {
      test: "IpAddress",
      variable: "aws:SourceIp",
      values: ["10.0.0.0/16"]
    }
  ]
}
```

test

**Type** `Input`

Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate.

values

**Type** `Input[]>`

The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation.

variable

**Type** `Input`

Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name.

effect?

**Type** `"allow" | "deny"`

**Default** `"allow"`

Configures whether the permission is allowed or denied.

```ts
{
  effect: "deny"
}
```

resources

**Type** `Input[]>`

The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html).

```js
{
  resources: ["arn:aws:s3:::my-bucket/*"]
}
```

### protection?
**Type** `Input<"none" | "oac" | "oac-with-edge-signing" | Object>` - [`edgeFunction?`](#protection-edgefunction) `Object` - [`arn?`](#protection-edgefunction-arn) - [`memory?`](#protection-edgefunction-memory) - [`timeout?`](#protection-edgefunction-timeout) - [`mode`](#protection-mode) **Default** `"none"` The available options are: - `"none"`: Lambda URLs are publicly accessible. - `"oac"`: Lambda URLs protected by CloudFront Origin Access Control. Requires manual `x-amz-content-sha256` header for POST requests. Use when you control all POST requests. - `"oac-with-edge-signing"`: Full protection with automatic header signing via Lambda@Edge. Works with external webhooks and callbacks. Higher cost and latency but works out of the box. :::note When using `"oac-with-edge-signing"`, request bodies are limited to 1MB due to Lambda@Edge payload limits. For file uploads larger than 1MB, consider using presigned S3 URLs or the `"oac"` mode with manual header signing. ::: :::note When removing a stage that uses `"oac-with-edge-signing"`, deletion may take 5-10 minutes while AWS removes the Lambda@Edge replicated functions from all edge locations. ::: Configure Lambda function protection through CloudFront. ```js // No protection (default) { protection: "none" } ``` ```js // OAC protection, manual header signing required { protection: "oac" } ``` ```js // Full protection with automatic Lambda@Edge { protection: "oac-with-edge-signing" } ``` ```js // Custom Lambda@Edge configuration { protection: { mode: "oac-with-edge-signing", edgeFunction: { memory: "256 MB", timeout: "10 seconds" } } } ``` ```js // Use existing Lambda@Edge function { protection: { mode: "oac-with-edge-signing", edgeFunction: { arn: "arn:aws:lambda:us-east-1:123456789012:function:my-signing-function:1" } } } ``` edgeFunction? **Type** `Object` arn? **Type** `Input<"arn:aws:lambda:${string}">` Custom Lambda@Edge function ARN to use for request signing. 
If provided, this function will be used instead of creating a new one. Must be a qualified ARN (with a version) and deployed in `us-east-1`.

memory?

**Type** `Input<"$\{number\} MB" | "$\{number\} GB">`

**Default** `"128 MB"`

Memory size for the auto-created Lambda@Edge function. Only used when `arn` is not provided.

timeout?

**Type** `Input<"$\{number\} second" | "$\{number\} seconds">`

**Default** `"5 seconds"`

Timeout for the auto-created Lambda@Edge function. Only used when `arn` is not provided.

mode

**Type** `"oac-with-edge-signing"`

### regions?

**Type** `Input`

**Default** The default region of the SST app

Regions that the server function will be deployed to. By default, the server function is deployed to a single region: the default region of your SST app.

:::note
This does not use Lambda@Edge; it deploys multiple Lambda functions instead.
:::

To deploy it to multiple regions, you can pass in a list of regions. Any requests made will be routed to the nearest region based on the user's location.

```js
{
  regions: ["us-east-1", "eu-west-1"]
}
```

### router?

**Type** `Object`

- [`connectionAttempts?`](#router-connectionattempts)
- [`connectionTimeout?`](#router-connectiontimeout)
- [`domain?`](#router-domain)
- [`instance`](#router-instance)
- [`keepAliveTimeout?`](#router-keepalivetimeout)
- [`path?`](#router-path)
- [`readTimeout?`](#router-readtimeout)
- [`rewrite?`](#router-rewrite) `Input`
  - [`regex`](#router-rewrite-regex)
  - [`to`](#router-rewrite-to)

Serve your Astro site through a `Router` instead of a standalone CloudFront distribution. By default, this component creates a new CloudFront distribution. But you might want to serve it through the distribution of your `Router` as:

- A path like `/docs`
- A subdomain like `docs.example.com`
- Or a combined pattern like `dev.example.com/docs`

To serve your Astro site **from a path**, you'll need to configure the root domain in your `Router` component.
```ts title="sst.config.ts" {2}
const router = new sst.aws.Router("Router", {
  domain: "example.com"
});
```

Now set the `router` and the `path`.

```ts {3,4}
{
  router: {
    instance: router,
    path: "/docs"
  }
}
```

You also need to set the [`base`](https://docs.astro.build/en/reference/configuration-reference/#base) in your `astro.config.mjs`.

:::caution
If routing to a path, you need to set that as the base path in your Astro site as well.
:::

```js title="astro.config.mjs" {3}
export default defineConfig({
  adapter: sst(),
  base: "/docs"
});
```

To serve your Astro site **from a subdomain**, you'll need to configure the domain in your `Router` component to match both the root and the subdomain.

```ts title="sst.config.ts" {3,4}
const router = new sst.aws.Router("Router", {
  domain: {
    name: "example.com",
    aliases: ["*.example.com"]
  }
});
```

Now set the `domain` in the `router` prop.

```ts {4}
{
  router: {
    instance: router,
    domain: "docs.example.com"
  }
}
```

Finally, to serve your Astro site **from a combined pattern** like `dev.example.com/docs`, you'll need to configure the domain in your `Router` to match the subdomain.

```ts title="sst.config.ts" {3,4}
const router = new sst.aws.Router("Router", {
  domain: {
    name: "example.com",
    aliases: ["*.example.com"]
  }
});
```

And set the `domain` and the `path`.

```ts {4,5}
{
  router: {
    instance: router,
    domain: "dev.example.com",
    path: "/docs"
  }
}
```

Also, make sure to set this as the `base` in your `astro.config.mjs`, like above.

connectionAttempts?

**Type** `Input`

**Default** `3`

The number of times that CloudFront attempts to connect to the origin. Must be between 1 and 3.

```js
router: {
  instance: router,
  connectionAttempts: 1
}
```

connectionTimeout?

**Type** `Input<"$\{number\} second" | "$\{number\} seconds">`

**Default** `"10 seconds"`

The number of seconds that CloudFront waits before timing out and closing the connection to the origin. Must be between 1 and 10 seconds.

```js
router: {
  instance: router,
  connectionTimeout: "3 seconds"
}
```

domain?
**Type** `Input`

Route requests matching a specific domain pattern.

You can serve your resource from a subdomain. For example, if you want to make it available at `https://dev.example.com`, set the `Router` to match the domain or a wildcard.

```ts {2} title="sst.config.ts"
const router = new sst.aws.Router("MyRouter", {
  domain: "*.example.com"
});
```

Then set the domain pattern.

```ts {3}
router: {
  instance: router,
  domain: "dev.example.com"
}
```

While `dev.example.com` matches `*.example.com`, something like `docs.dev.example.com` will not match `*.example.com`.

:::tip
Nested wildcard domain patterns are not supported.
:::

You'll need to add `*.dev.example.com` as an alias.

instance

**Type** `Input<`[`Router`](/docs/component/aws/router)`>`

The `Router` component to use for routing requests.

Let's say you have a Router component.

```ts title="sst.config.ts"
const router = new sst.aws.Router("MyRouter", {
  domain: "example.com"
});
```

You can attach it to the Router, instead of creating a standalone CloudFront distribution.

```ts
router: {
  instance: router
}
```

keepAliveTimeout?

**Type** `Input<"$\{number\} second" | "$\{number\} seconds">`

**Default** `"5 seconds"`

The number of seconds that CloudFront should try to maintain the connection to the destination after receiving the last packet of the response. Must be between 1 and 60 seconds.

```js
router: {
  instance: router,
  keepAliveTimeout: "10 seconds"
}
```

path?

**Type** `Input`

**Default** `"/"`

Route requests matching a specific path prefix.

```ts {3}
router: {
  instance: router,
  path: "/docs"
}
```

readTimeout?

**Type** `Input<"$\{number\} second" | "$\{number\} seconds">`

**Default** `"20 seconds"`

The number of seconds that CloudFront waits for a response after routing a request to the destination. Must be between 1 and 60 seconds. When compared to the `connectionTimeout`, this is the total time for the request.

```js
router: {
  instance: router,
  readTimeout: "60 seconds"
}
```

rewrite?
**Type** `Input` Rewrite the request path. If the route path is `/api/*` and a request comes in for `/api/users/profile`, the request path the destination sees is `/api/users/profile`. If you want to serve the route from the root, you can rewrite the request path to `/users/profile`. ```js router: { instance: router, path: "/api", rewrite: { regex: "^/api/(.*)$", to: "/$1" } } ``` regex **Type** `Input` The regex to match the request path. to **Type** `Input` The replacement for the matched path. ### server? **Type** `Object` - [`architecture?`](#server-architecture) - [`install?`](#server-install) - [`layers?`](#server-layers) - [`loader?`](#server-loader) - [`memory?`](#server-memory) - [`runtime?`](#server-runtime) - [`timeout?`](#server-timeout) **Default** `{architecture: "x86_64", memory: "1024 MB"}` Configure the Lambda function used for server. architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The [architecture](https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html) of the server function. ```js { server: { architecture: "arm64" } } ``` install? **Type** `Input` Dependencies that need to be excluded from the server function package. Certain npm packages cannot be bundled using esbuild. This allows you to exclude them from the bundle. Instead they'll be moved into a `node_modules/` directory in the function package. :::tip If esbuild is giving you an error about a package, try adding it to the `install` list. ::: This will allow your functions to be able to use these dependencies when deployed. They just won't be tree shaken. You however still need to have them in your `package.json`. :::caution Packages listed here still need to be in your `package.json`. ::: Esbuild will ignore them while traversing the imports in your code. So these are the **package names as seen in the imports**. It also works on packages that are not directly imported by your code. ```js { server: { install: ["sharp"] } } ``` layers? 
**Type** `Input[]>` A list of Lambda layer ARNs to add to the server function. ```js { server: { layers: ["arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1"] } } ``` loader? **Type** `Input>` Configure additional esbuild loaders for other file extensions. This is useful when your code is importing non-JS files like `.png`, `.css`, etc. ```js { server: { loader: { ".png": "file" } } } ``` memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"1024 MB"` The amount of memory allocated to the server function. Takes values between 128 MB and 10240 MB in 1 MB increments. ```js { server: { memory: "2048 MB" } } ``` runtime? **Type** `Input<"nodejs18.x" | "nodejs20.x" | "nodejs22.x" | "nodejs24.x">` **Default** `"nodejs24.x"` The runtime environment for the server function. ```js { server: { runtime: "nodejs24.x" } } ``` timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The maximum amount of time the server function can run. While Lambda supports timeouts up to 900 seconds, your requests are served through AWS CloudFront. And it has a default limit of 60 seconds. :::tip If you need a timeout longer than 60 seconds, you'll need to request a limit increase. ::: You can increase this to 180 seconds for your account by contacting AWS Support and [requesting a limit increase](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). ```js { server: { timeout: "50 seconds" } } ``` If you need a timeout longer than what CloudFront supports, we recommend using a separate Lambda `Function` with the `url` enabled instead. ### transform? **Type** `Object` - [`assets?`](#transform-assets) - [`cdn?`](#transform-cdn) - [`imageOptimizer?`](#transform-imageoptimizer) - [`server?`](#transform-server) [Transform](/docs/components#transform) how this component creates its underlying resources. assets? 
**Type** [`BucketArgs`](/docs/component/aws/bucket#bucketargs)` | (args: `[`BucketArgs`](/docs/component/aws/bucket#bucketargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Bucket resource used for uploading the assets. cdn? **Type** [`CdnArgs`](/docs/component/aws/cdn#cdnargs)` | (args: `[`CdnArgs`](/docs/component/aws/cdn#cdnargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudFront CDN resource. imageOptimizer? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the image optimizer Function resource. server? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the server Function resource. ### vpc? **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`privateSubnets`](#vpc-privatesubnets) - [`securityGroups`](#vpc-securitygroups) Configure the server function to connect to private subnets in a virtual private cloud or VPC. This allows it to access private resources. Create a `Vpc` component. ```js title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` Or reference an existing VPC. ```js title="sst.config.ts" const myVpc = sst.aws.Vpc.get("MyVpc", "vpc-12345678901234567"); ``` And pass it in. ```js { vpc: myVpc } ``` privateSubnets **Type** `Input[]>` A list of VPC subnet IDs. securityGroups **Type** `Input[]>` A list of VPC security group IDs. ### warm? **Type** `Input` **Default** `0` The number of instances of the [server function](#nodes-server) to keep warm. 
This is useful for cases where you are experiencing long cold starts. The default is to not keep any instances warm.

This works by starting a serverless cron job to make _n_ concurrent requests to the server function every few minutes, where _n_ is the number of instances to keep warm.

## Properties

### nodes

**Type** `Object`

- [`assets`](#nodes-assets)
- [`cdn`](#nodes-cdn)
- [`server`](#nodes-server)

The underlying [resources](/docs/components/#nodes) this component creates.

assets

**Type** `undefined | `[`Bucket`](/docs/component/aws/bucket)

The Amazon S3 Bucket that stores the assets.

cdn

**Type** `undefined | `[`Cdn`](/docs/component/aws/cdn)

The Amazon CloudFront CDN that serves the site.

server

**Type** `undefined | Output<`[`Function`](/docs/component/aws/function)`>`

The AWS Lambda server function that renders the site.

### url

**Type** `Output`

The URL of the Astro site. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL.

## SDK

Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure.

---

### Links

This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links).

- `url` `string`

  The URL of the Astro site. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL.

---

## Aurora

Reference doc for the `sst.aws.Aurora` component.

https://sst.dev/docs/component/aws/aurora

The `Aurora` component lets you add an Aurora Postgres or MySQL cluster to your app using [Amazon Aurora Serverless v2](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.html).
#### Create an Aurora Postgres cluster ```js title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const database = new sst.aws.Aurora("MyDatabase", { engine: "postgres", vpc }); ``` #### Create an Aurora MySQL cluster ```js title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const database = new sst.aws.Aurora("MyDatabase", { engine: "mysql", vpc }); ``` #### Change the scaling config ```js title="sst.config.ts" new sst.aws.Aurora("MyDatabase", { engine: "postgres", scaling: { min: "2 ACU", max: "128 ACU" }, vpc }); ``` #### Link to a resource You can link your database to other resources, like a function or your Next.js app. ```ts title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { link: [database], vpc }); ``` Once linked, you can connect to it from your function code. ```ts title="app/page.tsx" {1,5-9} const sql = postgres({ username: Resource.MyDatabase.username, password: Resource.MyDatabase.password, database: Resource.MyDatabase.database, host: Resource.MyDatabase.host, port: Resource.MyDatabase.port }); ``` #### Enable the RDS Data API ```ts title="sst.config.ts" new sst.aws.Aurora("MyDatabase", { engine: "postgres", dataApi: true, vpc }); ``` When using the Data API, connecting to the database does not require a persistent connection, and works over HTTP. You also don't need the `sst tunnel` or a VPN to connect to it from your local machine. ```ts title="app/page.tsx" {1,6,7,8} drizzle(new RDSDataClient({}), { database: Resource.MyDatabase.database, secretArn: Resource.MyDatabase.secretArn, resourceArn: Resource.MyDatabase.clusterArn }); ``` #### Running locally By default, your Aurora database is deployed in `sst dev`. But let's say you are running Postgres locally. ```bash docker run \ --rm \ -p 5432:5432 \ -v $(pwd)/.sst/storage/postgres:/var/lib/postgresql/data \ -e POSTGRES_USER=postgres \ -e POSTGRES_PASSWORD=password \ -e POSTGRES_DB=local \ postgres:17 ``` You can connect to it in `sst dev` by configuring the `dev` prop. 
```ts title="sst.config.ts" {4-9}
new sst.aws.Aurora("MyDatabase", {
  engine: "postgres",
  vpc,
  dev: {
    username: "postgres",
    password: "password",
    database: "local",
    port: 5432
  }
});
```

This will skip deploying the database and link to the locally running Postgres database instead. [Check out the full example](/docs/examples/#aws-aurora-local).

---

### Cost

This component has one DB instance that is used for both writes and reads. The instance can scale from the minimum number of ACUs to the maximum number of ACUs. By default, this uses a `min` of 0 ACUs and a `max` of 4 ACUs. When the database is paused, you are not charged for the ACUs.

Each ACU costs $0.12 per hour for both the `postgres` and `mysql` engines. The storage costs $0.01 per GB per month for standard storage.

So if your database is constantly using 1GB of memory or 0.5 ACUs, then you are charged $0.12 x 0.5 x 24 x 30 or **$43 per month**. And add the storage costs to this as well.

The above are rough estimates for _us-east-1_, check out the [Amazon Aurora pricing](https://aws.amazon.com/rds/aurora/pricing) for more details.

#### RDS Proxy

If you enable the `proxy`, it uses _Aurora Capacity Units_ with a minimum of 8 ACUs at $0.015 per ACU hour. That works out to an **additional** $0.015 x 8 x 24 x 30 or **$86 per month**. Adjust this if you end up using more than 8 ACUs.

The above are rough estimates for _us-east-1_, check out the [RDS Proxy pricing](https://aws.amazon.com/rds/proxy/pricing/) for more details.

#### RDS Data API

If you enable `dataApi`, you get charged an **additional** $0.35 per million requests for the first billion requests. After that, it's $0.20 per million requests. Check out the [RDS Data API pricing](https://aws.amazon.com/rds/aurora/pricing/#Data_API_costs) for more details.

---

## Constructor

```ts
new Aurora(name, args, opts?)
```

#### Parameters

- `name` `string`
- `args` [`AuroraArgs`](#auroraargs)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## AuroraArgs

### dataApi?

**Type** `Input`

**Default** `false`

Enable [RDS Data API](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html) for the database.

The RDS Data API provides a secure HTTP endpoint and does not need a persistent connection. You also don't need the `sst tunnel` or a VPN to connect to it from your local machine.

RDS Data API is [billed per request](#cost). Check out the [RDS Data API pricing](https://aws.amazon.com/rds/aurora/pricing/#Data_API_costs) for more details.

```js
{ dataApi: true }
```

### database?

**Type** `Input`

**Default** Based on the name of the current app

Name of a database that is automatically created inside the cluster.

The name must begin with a letter and contain only lowercase letters, numbers, or underscores. By default, it takes the name of the app, and replaces the hyphens with underscores.

:::danger
Changing the database name will cause the database to be destroyed and recreated.
:::

```js
{ database: "acme" }
```

### dev?

**Type** `Object`

- [`database?`](#dev-database)
- [`host?`](#dev-host)
- [`password?`](#dev-password)
- [`port?`](#dev-port)
- [`username?`](#dev-username)

Configure how this component works in `sst dev`.

By default, your Aurora database is deployed in `sst dev`. But if you want to instead connect to a locally running database, you can configure the `dev` prop.

This will skip deploying an Aurora database and link to the locally running database instead. Setting the `dev` prop also means that any linked resources will connect to the right database both in `sst dev` and `sst deploy`.

```ts
{
  dev: {
    username: "postgres",
    password: "password",
    database: "postgres",
    host: "localhost",
    port: 5432
  }
}
```

database?

**Type** `Input`

**Default** Inherit from the top-level [`database`](#database).
The database of the local database to connect to when running in dev. host? **Type** `Input` **Default** `"localhost"` The host of the local database to connect to when running in dev. password? **Type** `Input` **Default** Inherit from the top-level [`password`](#password). The password of the local database to connect to when running in dev. port? **Type** `Input` **Default** `5432` The port of the local database to connect to when running in dev. username? **Type** `Input` **Default** Inherit from the top-level [`username`](#username). The username of the local database to connect to when running in dev. ### engine **Type** `Input<"postgres" | "mysql">` The Aurora engine to use. :::danger Changing the engine will cause the database to be destroyed and recreated. ::: ```js { engine: "postgres" } ``` ### password? **Type** `Input` **Default** A random password is generated. The password of the master user. ```js { password: "Passw0rd!" } ``` You can use a [`Secret`](/docs/component/secret) to manage the password. ```js { password: (new sst.Secret("MyDBPassword")).value } ``` ### proxy? **Type** `Input` - [`credentials?`](#proxy-credentials) `Input[]>` - [`password`](#proxy-credentials-password) - [`username`](#proxy-credentials-username) **Default** `false` Enable [RDS Proxy](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html) for the database. Amazon RDS Proxy sits between your application and the database and manages connections to it. It's useful for serverless applications, or Lambda functions where each invocation might create a new connection. There's an [extra cost](#cost) attached to enabling this. Check out the [RDS Proxy pricing](https://aws.amazon.com/rds/proxy/pricing/) for more details. ```js { proxy: true } ``` credentials? **Type** `Input[]>` Add extra credentials the proxy can use to connect to the database. Your app will use the master `username` and `password`. So you don't need to specify them here. 
These credentials are for any other services that need to connect to your database directly. :::tip You need to create these credentials manually in the database. ::: These credentials are not automatically created. You'll need to create these credentials manually in the database. ```js { credentials: [ { username: "metabase", password: "Passw0rd!" } ] } ``` You can use a [`Secret`](/docs/component/secret) to manage the password. ```js { credentials: [ { username: "metabase", password: (new sst.Secret("MyDBPassword")).value } ] } ``` password **Type** `Input` The password of the user. username **Type** `Input` The username of the user. ### replicas? **Type** `Input` **Default** `0` The number of read-only Aurora replicas to create. By default, the cluster has one primary DB instance that is used for both writes and reads. You can add up to 15 read-only replicas to offload the read traffic from the primary instance. ```js { replicas: 2 } ``` ### scaling? **Type** `Input` - [`max?`](#scaling-max) - [`min?`](#scaling-min) - [`pauseAfter?`](#scaling-pauseafter) **Default** `{min: "0 ACU", max: "4 ACU"}` The Aurora Serverless v2 scaling config. By default, the cluster has one DB instance that is used for both writes and reads. The instance can scale from a minimum number of ACUs to the maximum number of ACUs. :::tip Pick the `min` and `max` ACUs based on the baseline and peak memory usage of your app. ::: An ACU or _Aurora Capacity Unit_ is roughly equivalent to 2 GB of memory and a corresponding amount of CPU and network resources. So pick the minimum and maximum based on the baseline and peak memory usage of your app. If you set a `min` of 0 ACUs, the database will be paused when there are no active connections in the `pauseAfter` specified time period. This is useful for dev environments since you are not charged when the database is paused. But it's not recommended for production environments because it takes around 15 seconds for the database to resume. max? 
**Type** `Input<"$\{number\} ACU">`

**Default** `4 ACU`

The maximum number of ACUs or _Aurora Capacity Units_. Ranges from 1 to 128, in increments of 0.5. Where each ACU is roughly equivalent to 2 GB of memory.

```js
{
  scaling: {
    max: "128 ACU"
  }
}
```

min?

**Type** `Input<"$\{number\} ACU">`

**Default** `0 ACU`

The minimum number of ACUs or _Aurora Capacity Units_. Ranges from 0 to 256, in increments of 0.5. Where each ACU is roughly equivalent to 2 GB of memory.

If you set this to 0 ACUs, the database will be paused when there are no active connections in the `pauseAfter` specified time period.

:::note
If you set a `min` ACU to 0, the database will be paused after the `pauseAfter` time period.
:::

On the next database connection, the database will resume. It takes about 15 seconds for the database to resume.

:::tip
Avoid setting a low number of `min` ACUs for production workloads.
:::

For your production workloads, setting a low minimum like 0.5 ACUs might not be a great idea because:

1. It takes longer to scale from a low number of ACUs to a much higher number.
2. Query performance depends on the buffer cache. So if frequently accessed data cannot fit into the buffer cache, you might see uneven performance.
3. The max connections for a 0.5 ACU instance is capped at 2000.

You can [read more here](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.setting-capacity.html#aurora-serverless-v2.setting-capacity.incompatible_parameters).

```js
{
  scaling: {
    min: "2 ACU"
  }
}
```

pauseAfter?

**Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} hour" | "$\{number\} hours" | "$\{number\} second" | "$\{number\} seconds">`

**Default** `"5 minutes"`

The amount of time before the database is paused when there are no active connections. Only applies when the `min` is set to 0 ACUs.

:::note
This only applies when the `min` is set to 0 ACUs.
:::

Must be between `"5 minutes"` and `"60 minutes"` or `"1 hour"`.
So if the `min` is set to 0 ACUs, by default, the database will be auto-paused after `"5 minutes"`. When the database is paused, you are not charged for the ACUs. On the next database connection, the database will resume. It takes about 15 seconds for the database to resume. :::tip Auto-pause is not recommended for production environments. ::: Auto-pause is useful for minimizing costs in the development environments where the database is not used frequently. It's not recommended for production environments. ```js { scaling: { pauseAfter: "20 minutes" } } ``` ### transform? **Type** `Object` - [`cluster?`](#transform-cluster) - [`clusterParameterGroup?`](#transform-clusterparametergroup) - [`instance?`](#transform-instance) - [`instanceParameterGroup?`](#transform-instanceparametergroup) - [`proxy?`](#transform-proxy) - [`subnetGroup?`](#transform-subnetgroup) [Transform](/docs/components#transform) how this component creates its underlying resources. cluster? **Type** [`ClusterArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/cluster/#inputs)` | (args: `[`ClusterArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/cluster/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the RDS Cluster. clusterParameterGroup? **Type** [`ClusterParameterGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/clusterparametergroup/#inputs)` | (args: `[`ClusterParameterGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/clusterparametergroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the RDS cluster parameter group. instance? 
**Type** [`ClusterInstanceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/clusterinstance/#inputs)` | (args: `[`ClusterInstanceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/clusterinstance/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the database instance in the RDS Cluster. instanceParameterGroup? **Type** [`ParameterGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/parametergroup/#inputs)` | (args: `[`ParameterGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/parametergroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the RDS instance parameter group. proxy? **Type** [`ProxyArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/proxy/#inputs)` | (args: `[`ProxyArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/proxy/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the RDS Proxy. subnetGroup? **Type** [`SubnetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/subnetgroup/#inputs)` | (args: `[`SubnetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/subnetgroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the RDS subnet group. ### username? **Type** `Input` **Default** `"postgres"` for Postgres, `"root"` for MySQL The username of the master user. :::danger Changing the username will cause the database to be destroyed and recreated. ::: ```js { username: "admin" } ``` ### version? **Type** `Input` **Default** `"17"` for Postgres, `"3.08.0"` for MySQL The version of the Aurora engine. The default is `"17"` for Postgres and `"3.08.0"` for MySQL. 
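For example, to pin a MySQL cluster to its default engine version explicitly:

```js
{
  engine: "mysql",
  version: "3.08.0"
}
```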
Check out the [available Postgres versions](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.Aurora_Fea_Regions_DB-eng.Feature.ServerlessV2.html#Concepts.Aurora_Fea_Regions_DB-eng.Feature.ServerlessV2.apg) and [available MySQL versions](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.Aurora_Fea_Regions_DB-eng.Feature.ServerlessV2.html#Concepts.Aurora_Fea_Regions_DB-eng.Feature.ServerlessV2.amy) in your region.

:::tip
Not all versions support scaling to 0 with auto-pause and resume.
:::

Auto-pause and resume is only supported in the following versions:

- Aurora PostgreSQL 16.3 and higher
- Aurora PostgreSQL 15.7 and higher
- Aurora PostgreSQL 14.12 and higher
- Aurora PostgreSQL 13.15 and higher
- Aurora MySQL 3.08.0 and higher

:::caution
Changing the version will cause the database to restart on the next `sst deploy`, causing downtime. Learn more about [upgrading databases](/docs/upgrade-aws-databases/).
:::

```js
{ version: "17.3" }
```

### vpc

**Type** [`Vpc`](/docs/component/aws/vpc)` | Input`

- [`securityGroups`](#vpc-securitygroups)
- [`subnets`](#vpc-subnets)

The VPC to use for the database cluster.

Create a VPC component.

```js
const myVpc = new sst.aws.Vpc("MyVpc");
```

And pass it in.

```js
{ vpc: myVpc }
```

Or pass in a custom VPC configuration.

```js
{
  vpc: {
    subnets: ["subnet-0db7376a7ad4db5fd", "subnet-06fc7ee8319b2c0ce"],
    securityGroups: ["sg-0399348378a4c256c"]
  }
}
```

securityGroups

**Type** `Input[]>`

A list of VPC security group IDs.

subnets

**Type** `Input[]>`

A list of subnet IDs in the VPC to deploy the Aurora cluster in.

## Properties

### clusterArn

**Type** `Output`

The ARN of the RDS Cluster.

### database

**Type** `Output`

The name of the database.

### host

**Type** `Output`

The host of the database.

### id

**Type** `Output`

The ID of the RDS Cluster.
### nodes

**Type** `Object`

- [`cluster`](#nodes-cluster)
- [`instance`](#nodes-instance)

cluster

**Type** `undefined | `[`Cluster`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/cluster/)

instance

**Type** `undefined | `[`ClusterInstance`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/clusterinstance/)

### password

**Type** `Output`

The password of the master user.

### port

**Type** `Output`

The port of the database.

### reader

**Type** `Output`

The reader endpoint of the database.

### secretArn

**Type** `Output`

The ARN of the master user secret.

### username

**Type** `Output`

The username of the master user.

## SDK

Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure.

---

### Links

This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links).

- `clusterArn` `string` The ARN of the RDS Cluster.
- `database` `string` The name of the database.
- `host` `string` The host of the database.
- `password` `string` The password of the master user.
- `port` `number` The port of the database.
- `reader` `undefined | string` The reader endpoint of the database.
- `secretArn` `string` The ARN of the master user secret.
- `username` `string` The username of the master user.

## Methods

### static get

```ts
Aurora.get(name, id, opts?)
```

#### Parameters

- `name` `string` The name of the component.
- `id` `Input` The ID of the existing Aurora cluster.
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

**Returns** [`Aurora`](.)

Reference an existing Aurora cluster with its RDS cluster ID. This is useful when you create an Aurora cluster in one stage and want to share it in another. It avoids having to create a new Aurora cluster in the other stage.

:::tip
You can use the `static get` method to share Aurora clusters across stages.
:::

Imagine you create a cluster in the `dev` stage.
And in your personal stage `frank`, instead of creating a new cluster, you want to share the same cluster from `dev`. ```ts title="sst.config.ts" const database = $app.stage === "frank" ? sst.aws.Aurora.get("MyDatabase", "app-dev-mydatabase") : new sst.aws.Aurora("MyDatabase"); ``` Here `app-dev-mydatabase` is the ID of the cluster created in the `dev` stage. You can find this by outputting the cluster ID in the `dev` stage. ```ts title="sst.config.ts" return database.id; ``` --- ## Auth Reference doc for the `sst.aws.Auth` component. https://sst.dev/docs/component/aws/auth The `Auth` component lets you create centralized auth servers on AWS. It deploys [OpenAuth](https://openauth.js.org) to [AWS Lambda](https://aws.amazon.com/lambda/) and uses [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) for storage. :::note `Auth` and OpenAuth are currently in beta. ::: #### Create an OpenAuth server ```ts title="sst.config.ts" const auth = new sst.aws.Auth("MyAuth", { issuer: "src/auth.handler" }); ``` Where the `issuer` function might look like this. ```ts title="src/auth.ts" const app = issuer({ subjects, providers: { code: CodeProvider() }, success: async (ctx, value) => {} }); ``` This `Auth` component will always use the [`DynamoStorage`](https://openauth.js.org/docs/storage/dynamo/) storage provider. Learn more on the [OpenAuth docs](https://openauth.js.org/docs/issuer/) on how to configure the `issuer` function. #### Add a custom domain Set a custom domain for your auth server. ```js {3} title="sst.config.ts" new sst.aws.Auth("MyAuth", { issuer: "src/auth.handler", domain: "auth.example.com" }); ``` #### Link to a resource You can link the auth server to other resources, like a function or your Next.js app, that needs authentication. ```ts title="sst.config.ts" {2} new sst.aws.Nextjs("MyWeb", { link: [auth] }); ``` Once linked, you can now use it to create an [OpenAuth client](https://openauth.js.org/docs/client/). 
```ts title="app/page.tsx" {1,6}
import { Resource } from "sst";
import { createClient } from "@openauthjs/openauth/client";

const client = createClient({
  clientID: "nextjs",
  issuer: Resource.MyAuth.url
});
```

---

## Constructor

```ts
new Auth(name, args, opts?)
```

#### Parameters

- `name` `string`
- `args` [`AuthArgs`](#authargs)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## AuthArgs

### domain?

**Type** `Input`

- [`aliases?`](#domain-aliases)
- [`cert?`](#domain-cert)
- [`dns?`](#domain-dns)
- [`name`](#domain-name)
- [`redirects?`](#domain-redirects)

Set a custom domain for your Auth server.

Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records.

:::tip
Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers.
:::

By default this assumes the domain is hosted on Route 53.

```js
{ domain: "auth.example.com" }
```

For domains hosted on Cloudflare.

```js
{
  domain: {
    name: "auth.example.com",
    dns: sst.cloudflare.dns()
  }
}
```

aliases?

**Type** `Input`

Alias domains that should be used.

Unlike the `redirect` option, this keeps your visitors on this alias domain. So if your users visit `app2.domain.com`, they will stay on `app2.domain.com` in their browser.

```js {4}
{
  domain: {
    name: "app1.domain.com",
    aliases: ["app2.domain.com"]
  }
}
```

cert?

**Type** `Input`

The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically.

The certificate will be created in the `us-east-1` region as required by AWS CloudFront. If you are creating your own certificate, you must also create it in `us-east-1`.

:::tip
You need to pass in a `cert` for domains that are not hosted on supported `dns` providers.
:::

To manually set up a domain on an unsupported provider, you'll need to:

1.
[Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner.
2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`.
3. Add the DNS records in your provider to point to the CloudFront distribution URL.

```js
{
  domain: {
    name: "domain.com",
    dns: false,
    cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63"
  }
}
```

dns?

**Type** `Input`

**Default** `sst.aws.dns`

The DNS provider to use for the domain. Defaults to AWS.

Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing.

Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`.

Specify the hosted zone ID for the Route 53 domain.

```js
{
  domain: {
    name: "example.com",
    dns: sst.aws.dns({
      zone: "Z2FDTNDATAQYW2"
    })
  }
}
```

Use a domain hosted on Cloudflare, needs the Cloudflare provider.

```js
{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}
```

Use a domain hosted on Vercel, needs the Vercel provider.

```js
{
  domain: {
    name: "example.com",
    dns: sst.vercel.dns()
  }
}
```

name

**Type** `Input`

The custom domain you want to use.

```js
{
  domain: {
    name: "example.com"
  }
}
```

Can also include subdomains based on the current stage.

```js
{
  domain: {
    name: `${$app.stage}.example.com`
  }
}
```

redirects?

**Type** `Input`

Alternate domains to be used. Visitors to the alternate domains will be redirected to the main `name`.

:::note
Unlike the `aliases` option, this will redirect visitors back to the main `name`.
:::

Use this to create a `www.` version of your domain and redirect visitors to the apex domain.
```js {4}
{
  domain: {
    name: "domain.com",
    redirects: ["www.domain.com"]
  }
}
```

### issuer?

**Type** `Input`

The function that's running your OpenAuth server.

```js
{ issuer: "src/auth.handler" }
```

You can also pass in the full `FunctionArgs`.

```js
{
  issuer: {
    handler: "src/auth.handler",
    link: [table]
  }
}
```

Since the `issuer` function is a Hono app, you want to export it with the Lambda adapter.

```ts title="src/auth.ts"
import { handle } from "hono/aws-lambda";

const app = issuer({
  // ...
});

export const handler = handle(app);
```

This `Auth` component will always use the [`DynamoStorage`](https://openauth.js.org/docs/storage/dynamo/) storage provider.

:::note
This will always use the `DynamoStorage` storage provider.
:::

Learn more on the [OpenAuth docs](https://openauth.js.org/docs/issuer/) on how to configure the `issuer` function.

### transform?

**Type** `Object`

- [`router?`](#transform-router)

[Transform](/docs/components#transform) how this component creates its underlying resources.

router?

**Type** [`RouterArgs`](/docs/component/aws/router#routerargs)` | (args: `[`RouterArgs`](/docs/component/aws/router#routerargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the Router resource created for the custom domain.

Attach a WAF to the CloudFront distribution.

```ts
new sst.aws.Auth("MyAuth", {
  issuer: "src/auth.handler",
  domain: "auth.example.com",
  transform: {
    router: (args) => {
      args.transform = {
        cdn: {
          transform: {
            distribution: {
              webAclId: "arn:aws:wafv2:...",
            },
          },
        },
      };
    },
  },
});
```

## Properties

### nodes

**Type** `Object`

- [`issuer`](#nodes-issuer)
- [`router`](#nodes-router)
- [`table`](#nodes-table)

The underlying [resources](/docs/components/#nodes) this component creates.

issuer

**Type** `Output<`[`Function`](/docs/component/aws/function)`>`

The Function component for the issuer.

router

**Type** `undefined | `[`Router`](/docs/component/aws/router)

The Router component for the custom domain.
table **Type** [`Dynamo`](/docs/component/aws/dynamo) The DynamoDB component. ### url **Type** `Output` The URL of the Auth component. If the `domain` is set, this is the URL of the Router created for the custom domain. If the `issuer` function is linked to a custom domain, this is the URL of the issuer. Otherwise, it's the auto-generated function URL for the issuer. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `string` The URL of the Auth component. If the `domain` is set, this is the URL of the Router created for the custom domain. If the `issuer` function is linked to a custom domain, this is the URL of the issuer. Otherwise, it's the auto-generated function URL for the issuer. --- ## BucketNotification Reference doc for the `sst.aws.BucketNotification` component. https://sst.dev/docs/component/aws/bucket-notification The `BucketNotification` component is internally used by the `Bucket` component to add bucket notifications to [AWS S3 Bucket](https://aws.amazon.com/s3/). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `notify` method of the `Bucket` component. --- ## Constructor ```ts new BucketNotification(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`Args`](#args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`notification`](#nodes-notification) - [`functions`](#nodes-functions) The underlying [resources](/docs/components/#nodes) this component creates. notification **Type** [`BucketNotification`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketnotification/) The notification resource that's created. functions **Type** `Output<`[`Function`](/docs/component/aws/function)`[]>` The functions that will be notified. 
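For reference, this is how the component gets created by the `Bucket` component's `notify` method in your `sst.config.ts`; the return value is the `BucketNotification` documented here, and the handler path is illustrative.

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");

// `notify` creates and returns the BucketNotification component
const notification = bucket.notify({
  notifications: [
    {
      name: "MySubscriber",
      function: "src/subscriber.handler"
    }
  ]
});
```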
## Args ### bucket **Type** `Input` - [`arn`](#bucket-arn) - [`name`](#bucket-name) The bucket to use. arn **Type** `Input` The ARN of the bucket. name **Type** `Input` The name of the bucket. ### notifications **Type** `Input[]>` - [`events?`](#notifications-events) - [`filterPrefix?`](#notifications-filterprefix) - [`filterSuffix?`](#notifications-filtersuffix) - [`function?`](#notifications-function) - [`name`](#notifications-name) - [`queue?`](#notifications-queue) - [`topic?`](#notifications-topic) A list of subscribers that'll be notified when events happen in the bucket. events? **Type** `Input[]>` **Default** All S3 events A list of S3 event types that'll trigger a notification. ```js { events: ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"] } ``` filterPrefix? **Type** `Input` An S3 object key prefix that will trigger a notification. To be notified for all the objects in the `images/` folder. ```js { filterPrefix: "images/" } ``` filterSuffix? **Type** `Input` An S3 object key suffix that will trigger the notification. To be notified for all the objects with the `.jpg` suffix. ```js { filterSuffix: ".jpg" } ``` function? **Type** `Input` The function that'll be notified. ```js { name: "MySubscriber", function: "src/subscriber.handler" } ``` Customize the subscriber function. The `link` ensures the subscriber can access the bucket through the [SDK](/docs/reference/sdk/). ```js { name: "MySubscriber", function: { handler: "src/subscriber.handler", timeout: "60 seconds", link: [bucket] } } ``` Or pass in the ARN of an existing Lambda function. ```js { name: "MySubscriber", function: "arn:aws:lambda:us-east-1:123456789012:function:my-function" } ``` name **Type** `Input` The name of the subscriber. queue? **Type** `Input` The Queue that'll be notified. For example, let's say you have a queue. ```js title="sst.config.ts" const myQueue = new sst.aws.Queue("MyQueue"); ``` You can subscribe to this bucket with it. 
```js { name: "MySubscriber", queue: myQueue } ``` Or pass in the ARN of an existing SQS queue. ```js { name: "MySubscriber", queue: "arn:aws:sqs:us-east-1:123456789012:my-queue" } ``` topic? **Type** `Input` The SNS topic that'll be notified. For example, let's say you have a topic. ```js title="sst.config.ts" const myTopic = new sst.aws.SnsTopic("MyTopic"); ``` You can subscribe to this bucket with it. ```js { name: "MySubscriber", topic: myTopic } ``` Or pass in the ARN of an existing SNS topic. ```js { name: "MySubscriber", topic: "arn:aws:sns:us-east-1:123456789012:my-topic" } ``` ### transform? **Type** `Object` - [`notification?`](#transform-notification) [Transform](/docs/components#transform) how this notification creates its underlying resources. notification? **Type** [`BucketNotificationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketnotification/#inputs)` | (args: `[`BucketNotificationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketnotification/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the S3 Bucket Notification resource. --- ## Bucket Reference doc for the `sst.aws.Bucket` component. https://sst.dev/docs/component/aws/bucket The `Bucket` component lets you add an [AWS S3 Bucket](https://aws.amazon.com/s3/) to your app. #### Minimal example ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); ``` #### Public read access Enable `public` read access for all the files in the bucket. Useful for hosting public files. ```ts title="sst.config.ts" new sst.aws.Bucket("MyBucket", { access: "public" }); ``` #### Add a subscriber ```ts title="sst.config.ts" bucket.notify({ notifications: [ { name: "MySubscriber", function: "src/subscriber.handler" } ] }); ``` #### Link the bucket to a resource You can link the bucket to other resources, like a function or your Next.js app. 
```ts title="sst.config.ts"
new sst.aws.Nextjs("MyWeb", {
  link: [bucket]
});
```

Once linked, you can generate a pre-signed URL to upload files in your app.

```ts title="app/page.tsx" {1,7}
import { Resource } from "sst";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const command = new PutObjectCommand({
  Key: "file.txt",
  Bucket: Resource.MyBucket.name
});
await getSignedUrl(new S3Client({}), command);
```

---

## Constructor

```ts
new Bucket(name, args?, opts?)
```

#### Parameters

- `name` `string`
- `args?` [`BucketArgs`](#bucketargs)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## BucketArgs

### access?

**Type** `Input<"public" | "cloudfront">`

Enable public read access for all the files in the bucket. By default, no access is granted.

:::tip
If you are using the `Router` to serve files from this bucket, you need to allow `cloudfront` access to the bucket.
:::

This adds a statement to the bucket policy that either allows `public` access or just `cloudfront` access.

```js
{ access: "public" }
```

### cors?

**Type** `Input`

- [`allowHeaders?`](#cors-allowheaders)
- [`allowMethods?`](#cors-allowmethods)
- [`allowOrigins?`](#cors-alloworigins)
- [`exposeHeaders?`](#cors-exposeheaders)
- [`maxAge?`](#cors-maxage)

**Default** `true`

The CORS configuration for the bucket. Defaults to `true`, which is the same as:

```js
{
  cors: {
    allowHeaders: ["*"],
    allowOrigins: ["*"],
    allowMethods: ["DELETE", "GET", "HEAD", "POST", "PUT"],
    exposeHeaders: ["ETag"],
    maxAge: "0 seconds"
  }
}
```

allowHeaders?

**Type** `Input[]>`

**Default** `["*"]`

The HTTP headers that origins can include in requests to the bucket.

```js
{
  cors: {
    allowHeaders: ["date", "keep-alive", "x-custom-header"]
  }
}
```

allowMethods?

**Type** `Input[]>`

**Default** `["DELETE" | "GET" | "HEAD" | "POST" | "PUT"]`

The HTTP methods that are allowed when calling the bucket.

```js
{
  cors: {
    allowMethods: ["GET", "POST", "DELETE"]
  }
}
```

allowOrigins?

**Type** `Input[]>`

**Default** `["*"]`

The origins that can access the bucket.
```js
{
  cors: {
    allowOrigins: ["https://www.example.com", "http://localhost:60905"]
  }
}
```

Or the wildcard for all origins.

```js
{
  cors: {
    allowOrigins: ["*"]
  }
}
```

exposeHeaders?

**Type** `Input[]>`

**Default** `["ETag"]`

The HTTP headers you want to expose to an origin that calls the bucket.

```js
{
  cors: {
    exposeHeaders: ["date", "keep-alive", "x-custom-header"]
  }
}
```

maxAge?

**Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} hour" | "$\{number\} hours" | "$\{number\} second" | "$\{number\} seconds" | "$\{number\} day" | "$\{number\} days">`

**Default** `"0 seconds"`

The maximum amount of time the browser can cache results of a preflight request. By default the browser doesn't cache the results. The maximum value is `86400 seconds` or `1 day`.

```js
{
  cors: {
    maxAge: "1 day"
  }
}
```

### enforceHttps?

**Type** `Input`

**Default** `true`

Enforce HTTPS for all requests to the bucket. By default, the bucket policy will automatically block any HTTP requests. This is done using the `aws:SecureTransport` condition key.

```js
{ enforceHttps: false }
```

### lifecycle?

**Type** `Input[]>`

- [`enabled?`](#lifecycle-enabled)
- [`expiresAt?`](#lifecycle-expiresat)
- [`expiresIn?`](#lifecycle-expiresin)
- [`id?`](#lifecycle-id)
- [`prefix?`](#lifecycle-prefix)

The lifecycle configuration for the bucket.

Delete objects in the "/tmp" directory after 30 days.

```js
{
  lifecycle: [
    {
      prefix: "/tmp",
      expiresIn: "30 days"
    }
  ]
}
```

Use stable IDs to preserve rule identity when reordering.

```js
{
  lifecycle: [
    {
      id: "expire-tmp-files",
      prefix: "/tmp",
      expiresIn: "7 days"
    },
    {
      id: "archive-old-logs",
      prefix: "/logs",
      expiresIn: "90 days"
    }
  ]
}
```

enabled?

**Type** `Input`

**Default** `true`

Whether the lifecycle rule is enabled.

```js
{ enabled: true }
```

expiresAt?

**Type** `Input`

Date after which the objects in the bucket should expire. Defaults to midnight UTC time.

```js
{ expiresAt: "2023-08-22" }
```

expiresIn?
**Type** `Input<"${number} day" | "${number} days">`

Days after which the objects in the bucket should expire.

```js
{
  expiresIn: "30 days"
}
```

id?
**Type** `Input<string>`

The unique identifier for the lifecycle rule. This ID must be unique across all lifecycle rules in the bucket and cannot exceed 255 characters. Whitespace-only values are not allowed. If not provided, SST will generate a unique ID based on the bucket component name and rule index. Use stable IDs to ensure rule identity is preserved when reordering rules.

```js
{
  id: "expire-tmp-files",
  prefix: "/tmp",
  expiresIn: "7 days"
}
```

prefix?
**Type** `Input<string>`

An S3 object key prefix that the lifecycle rule applies to.

Applies to all the objects in the `images/` folder.

```js
{
  prefix: "images/"
}
```

### policy?
**Type** `Input<Input<Object>[]>`
- [`actions`](#policy-actions)
- [`conditions?`](#policy-conditions) `Input<Input<Object>[]>`
  - [`test`](#policy-conditions-test)
  - [`values`](#policy-conditions-values)
  - [`variable`](#policy-conditions-variable)
- [`effect?`](#policy-effect)
- [`paths?`](#policy-paths)
- [`principals`](#policy-principals) `Input<"*" | Input<Object>[]>`
  - [`identifiers`](#policy-principals-identifiers)
  - [`type`](#policy-principals-type)

Configure the policy for the bucket.

Restrict access to specific IP addresses.

```js
{
  policy: [{
    actions: ["s3:*"],
    principals: "*",
    conditions: [
      {
        test: "IpAddress",
        variable: "aws:SourceIp",
        values: ["10.0.0.0/16"]
      }
    ]
  }]
}
```

Allow a specific IAM user access.

```js
{
  policy: [{
    actions: ["s3:*"],
    principals: [{
      type: "aws",
      identifiers: ["arn:aws:iam::123456789012:user/specific-user"]
    }]
  }]
}
```

Allow cross-account access.

```js
{
  policy: [{
    actions: ["s3:GetObject", "s3:ListBucket"],
    principals: [{
      type: "aws",
      identifiers: ["123456789012"]
    }]
  }]
}
```

actions
**Type** `Input<Input<string>[]>`

The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed.
```js
{
  actions: ["s3:*"]
}
```

conditions?
**Type** `Input<Input<Object>[]>`

Configure specific conditions for when the policy is in effect.

```js
{
  conditions: [
    {
      test: "StringEquals",
      variable: "s3:x-amz-server-side-encryption",
      values: ["AES256"]
    }
  ]
}
```

test
**Type** `Input<string>`

Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate.

values
**Type** `Input<Input<string>[]>`

The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation.

variable
**Type** `Input<string>`

Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name.

effect?
**Type** `Input<"allow" | "deny">`
**Default** `"allow"`

Configures whether the permission is allowed or denied.

```js
{
  effect: "deny"
}
```

paths?
**Type** `Input<Input<string>[]>`
**Default** `["", "*"]`

The S3 file paths that the policy is applied to. The paths are specified using the [S3 path format](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-prefixes.html). The bucket ARN will be prepended to the paths when constructing the policy.

Apply the policy to the bucket itself.

```js
{
  paths: [""]
}
```

Apply to all files in the bucket.

```js
{
  paths: ["*"]
}
```

Apply to all files in the `images/` folder.

```js
{
  paths: ["images/*"]
}
```

principals
**Type** `Input<"*" | Input<Object>[]>`

The principals that can perform the actions.

Allow anyone to perform the actions.

```js
{
  principals: "*"
}
```

Allow anyone within an AWS account.

```js
{
  principals: [{
    type: "aws",
    identifiers: ["123456789012"]
  }]
}
```

Allow specific IAM roles.
```js
{
  principals: [{
    type: "aws",
    identifiers: [
      "arn:aws:iam::123456789012:role/MyRole",
      "arn:aws:iam::123456789012:role/MyOtherRole"
    ]
  }]
}
```

Allow AWS CloudFront.

```js
{
  principals: [{
    type: "service",
    identifiers: ["cloudfront.amazonaws.com"]
  }]
}
```

Allow OIDC federated users.

```js
{
  principals: [{
    type: "federated",
    identifiers: ["accounts.google.com"]
  }]
}
```

Allow SAML federated users.

```js
{
  principals: [{
    type: "federated",
    identifiers: ["arn:aws:iam::123456789012:saml-provider/provider-name"]
  }]
}
```

Allow Canonical User IDs.

```js
{
  principals: [{
    type: "canonical",
    identifiers: ["79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"]
  }]
}
```

Similarly, you can allow specific IAM users by listing their user ARNs as `aws` principals.

identifiers
**Type** `Input<Input<string>[]>`

type
**Type** `Input<"aws" | "service" | "federated" | "canonical">`

### transform?
**Type** `Object`
- [`bucket?`](#transform-bucket)
- [`cors?`](#transform-cors)
- [`lifecycle?`](#transform-lifecycle)
- [`policy?`](#transform-policy)
- [`publicAccessBlock?`](#transform-publicaccessblock)
- [`versioning?`](#transform-versioning)

[Transform](/docs/components#transform) how this component creates its underlying resources.

bucket?
**Type** [`BucketArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucket/#inputs)` | (args: `[`BucketArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucket/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the S3 Bucket resource.

cors?
**Type** [`BucketCorsConfigurationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketcorsconfiguration/#inputs)` | (args: `[`BucketCorsConfigurationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketcorsconfiguration/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the S3 Bucket CORS configuration resource.

lifecycle?
**Type** [`BucketLifecycleConfigurationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketlifecycleconfiguration/#inputs)` | (args: `[`BucketLifecycleConfigurationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketlifecycleconfiguration/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the S3 Bucket lifecycle resource.

policy?
**Type** [`BucketPolicyArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketpolicy/#inputs)` | (args: `[`BucketPolicyArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketpolicy/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the S3 Bucket Policy resource.

publicAccessBlock?
**Type** `false | `[`BucketPublicAccessBlockArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketpublicaccessblock/#inputs)` | (args: `[`BucketPublicAccessBlockArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketpublicaccessblock/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the public access block resource that's attached to the Bucket. Set this to `false` if the public access block resource should not be created.

versioning?
**Type** [`BucketVersioningArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketversioning/#inputs)` | (args: `[`BucketVersioningArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketversioning/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the S3 Bucket versioning resource.

### versioning?
**Type** `Input<boolean>`
**Default** `false`

Enable versioning for the bucket. Bucket versioning enables you to store multiple versions of an object, protecting against accidental deletion or overwriting.
```js
{
  versioning: true
}
```

## Properties

### arn
**Type** `Output<string>`

The ARN of the S3 Bucket.

### domain
**Type** `Output<string>`

The domain name of the bucket. Has the format `${bucketName}.s3.amazonaws.com`.

### name
**Type** `Output<string>`

The generated name of the S3 Bucket.

### nodes
**Type** `Object`
- [`bucket`](#nodes-bucket)

The underlying [resources](/docs/components/#nodes) this component creates.

bucket
**Type** `Output<`[`Bucket`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucket/)`>`

The Amazon S3 bucket.

## SDK

Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure.

---

### Links

This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links).

- `name` `string` The generated name of the S3 Bucket.

## Methods

### notify

```ts
notify(args)
```

#### Parameters
- `args` [`BucketNotificationsArgs`](#bucketnotificationsargs) The config for the event notifications.

**Returns** [`BucketNotification`](/docs/component/aws/bucket-notification)

Subscribe to event notifications from this bucket. You can subscribe to these notifications with a function, a queue, or a topic.

For example, to notify a function:

```js title="sst.config.ts" {5}
bucket.notify({
  notifications: [
    {
      name: "MySubscriber",
      function: "src/subscriber.handler"
    }
  ]
});
```

Or let's say you have a queue.

```js title="sst.config.ts"
const myQueue = new sst.aws.Queue("MyQueue");
```

You can notify it by passing in the queue.

```js title="sst.config.ts" {5}
bucket.notify({
  notifications: [
    {
      name: "MySubscriber",
      queue: myQueue
    }
  ]
});
```

Or let's say you have a topic.

```js title="sst.config.ts"
const myTopic = new sst.aws.SnsTopic("MyTopic");
```

You can notify it by passing in the topic.

```js title="sst.config.ts" {5}
bucket.notify({
  notifications: [
    {
      name: "MySubscriber",
      topic: myTopic
    }
  ]
});
```

You can also set it to only send notifications for specific S3 events.
```js {6} bucket.notify({ notifications: [ { name: "MySubscriber", function: "src/subscriber.handler", events: ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"] } ] }); ``` And you can add filters to be only notified from specific files in the bucket. ```js {6} bucket.notify({ notifications: [ { name: "MySubscriber", function: "src/subscriber.handler", filterPrefix: "images/" } ] }); ``` ### static get ```ts Bucket.get(name, bucketName, opts?) ``` #### Parameters - `name` `string` The name of the component. - `bucketName` `string` The name of the existing S3 Bucket. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Bucket`](.) Reference an existing bucket with the given bucket name. This is useful when you create a bucket in one stage and want to share it in another stage. It avoids having to create a new bucket in the other stage. :::tip You can use the `static get` method to share buckets across stages. ::: Imagine you create a bucket in the `dev` stage. And in your personal stage `frank`, instead of creating a new bucket, you want to share the bucket from `dev`. ```ts title="sst.config.ts" const bucket = $app.stage === "frank" ? sst.aws.Bucket.get("MyBucket", "app-dev-mybucket-12345678") : new sst.aws.Bucket("MyBucket"); ``` Here `app-dev-mybucket-12345678` is the auto-generated bucket name for the bucket created in the `dev` stage. You can find this by outputting the bucket name in the `dev` stage. ```ts title="sst.config.ts" return { bucket: bucket.name }; ``` ## BucketNotificationsArgs ### notifications **Type** `Input[]>` - [`events?`](#notifications-events) - [`filterPrefix?`](#notifications-filterprefix) - [`filterSuffix?`](#notifications-filtersuffix) - [`function?`](#notifications-function) - [`name`](#notifications-name) - [`queue?`](#notifications-queue) - [`topic?`](#notifications-topic) A list of subscribers that'll be notified when events happen in the bucket. events? 
**Type** `Input[]>` **Default** All S3 events A list of S3 event types that'll trigger a notification. ```js { events: ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"] } ``` filterPrefix? **Type** `Input` An S3 object key prefix that will trigger a notification. To be notified for all the objects in the `images/` folder. ```js { filterPrefix: "images/" } ``` filterSuffix? **Type** `Input` An S3 object key suffix that will trigger the notification. To be notified for all the objects with the `.jpg` suffix. ```js { filterSuffix: ".jpg" } ``` function? **Type** `Input` The function that'll be notified. ```js { name: "MySubscriber", function: "src/subscriber.handler" } ``` Customize the subscriber function. The `link` ensures the subscriber can access the bucket through the [SDK](/docs/reference/sdk/). ```js { name: "MySubscriber", function: { handler: "src/subscriber.handler", timeout: "60 seconds", link: [bucket] } } ``` Or pass in the ARN of an existing Lambda function. ```js { name: "MySubscriber", function: "arn:aws:lambda:us-east-1:123456789012:function:my-function" } ``` name **Type** `Input` The name of the subscriber. queue? **Type** `Input` The Queue that'll be notified. For example, let's say you have a queue. ```js title="sst.config.ts" const myQueue = new sst.aws.Queue("MyQueue"); ``` You can subscribe to this bucket with it. ```js { name: "MySubscriber", queue: myQueue } ``` Or pass in the ARN of an existing SQS queue. ```js { name: "MySubscriber", queue: "arn:aws:sqs:us-east-1:123456789012:my-queue" } ``` topic? **Type** `Input` The SNS topic that'll be notified. For example, let's say you have a topic. ```js title="sst.config.ts" const myTopic = new sst.aws.SnsTopic("MyTopic"); ``` You can subscribe to this bucket with it. ```js { name: "MySubscriber", topic: myTopic } ``` Or pass in the ARN of an existing SNS topic. ```js { name: "MySubscriber", topic: "arn:aws:sns:us-east-1:123456789012:my-topic" } ``` ### transform? 
**Type** `Object` - [`notification?`](#transform-notification) [Transform](/docs/components#transform) how this notification creates its underlying resources. notification? **Type** [`BucketNotificationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketnotification/#inputs)` | (args: `[`BucketNotificationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketnotification/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the S3 Bucket Notification resource. --- ## BusLambdaSubscriber Reference doc for the `sst.aws.BusLambdaSubscriber` component. https://sst.dev/docs/component/aws/bus-lambda-subscriber The `BusLambdaSubscriber` component is internally used by the `Bus` component to add subscriptions to [Amazon EventBridge Event Bus](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-bus.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `subscribe` method of the `Bus` component. --- ## Constructor ```ts new BusLambdaSubscriber(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`Args`](#args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`permission`](#nodes-permission) - [`rule`](#nodes-rule) - [`target`](#nodes-target) - [`function`](#nodes-function) The underlying [resources](/docs/components/#nodes) this component creates. permission **Type** [`Permission`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/permission/) The Lambda permission. rule **Type** [`EventRule`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventrule/) The EventBus rule. target **Type** [`EventTarget`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventtarget/) The EventBus target. 
function **Type** `Output<`[`Function`](/docs/component/aws/function)`>` The Lambda function that'll be notified. ## Args ### bus **Type** `Input` - [`arn`](#bus-arn) - [`name`](#bus-name) The bus to use. arn **Type** `Input` The ARN of the bus. name **Type** `Input` The name of the bus. ### pattern? **Type** `Input` - [`detail?`](#pattern-detail) - [`detailType?`](#pattern-detailtype) - [`source?`](#pattern-source) Filter the messages that'll be processed by the subscriber. If any single property in the pattern doesn't match an attribute assigned to the message, then the pattern rejects the message. :::tip Learn more about [event patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html). ::: For example, if your EventBus message contains this in a JSON format. ```js { source: "my.source", detail: { price_usd: 210.75 }, "detail-type": "orderPlaced" } ``` Then this pattern accepts the message. ```js { pattern: { source: ["my.source", "my.source2"] } } ``` detail? **Type** `Record` An object of `detail` values to match against, where the key is the name and the value is the pattern to match. The `detail` contains the actual data associated with the event. ```js { pattern: { detail: { price_usd: [{numeric: [">=", 100]}] } } } ``` detailType? **Type** `any[]` A list of `detail-type` values to match against. The `detail-type` typically defines the kind of event that is emitted. ```js { pattern: { detailType: ["orderPlaced"] } } ``` source? **Type** `any[]` A list of `source` values to match against. The `source` indicates where the event originated. ```js { pattern: { source: ["my.source", "my.source2"] } } ``` ### subscriber **Type** `Input` The subscriber function. ### transform? **Type** `Object` - [`rule?`](#transform-rule) - [`target?`](#transform-target) [Transform](/docs/components#transform) how this subscription creates its underlying resources. rule? 
**Type** [`EventRuleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventrule/#inputs)` | (args: `[`EventRuleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventrule/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EventBus rule resource. target? **Type** [`EventTargetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventtarget/#inputs)` | (args: `[`EventTargetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventtarget/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EventBus target resource. --- ## BusQueueSubscriber Reference doc for the `sst.aws.BusQueueSubscriber` component. https://sst.dev/docs/component/aws/bus-queue-subscriber The `BusQueueSubscriber` component is internally used by the `Bus` component to add subscriptions to [Amazon EventBridge Event Bus](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-bus.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `subscribeQueue` method of the `Bus` component. --- ## Constructor ```ts new BusQueueSubscriber(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`Args`](#args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`policy`](#nodes-policy) - [`rule`](#nodes-rule) - [`target`](#nodes-target) The underlying [resources](/docs/components/#nodes) this component creates. policy **Type** [`QueuePolicy`](https://www.pulumi.com/registry/packages/aws/api-docs/sqs/queuepolicy/) The SQS Queue policy. rule **Type** [`EventRule`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventrule/) The EventBus rule. 
target **Type** [`EventTarget`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventtarget/) The EventBus target. ## Args ### bus **Type** `Input` - [`arn`](#bus-arn) - [`name`](#bus-name) The bus to use. arn **Type** `Input` The ARN of the bus. name **Type** `Input` The name of the bus. ### pattern? **Type** `Input` - [`detail?`](#pattern-detail) - [`detailType?`](#pattern-detailtype) - [`source?`](#pattern-source) Filter the messages that'll be processed by the subscriber. If any single property in the pattern doesn't match an attribute assigned to the message, then the pattern rejects the message. :::tip Learn more about [event patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html). ::: For example, if your EventBus message contains this in a JSON format. ```js { source: "my.source", detail: { price_usd: 210.75 }, "detail-type": "orderPlaced" } ``` Then this pattern accepts the message. ```js { pattern: { source: ["my.source", "my.source2"] } } ``` detail? **Type** `Record` An object of `detail` values to match against, where the key is the name and the value is the pattern to match. The `detail` contains the actual data associated with the event. ```js { pattern: { detail: { price_usd: [{numeric: [">=", 100]}] } } } ``` detailType? **Type** `any[]` A list of `detail-type` values to match against. The `detail-type` typically defines the kind of event that is emitted. ```js { pattern: { detailType: ["orderPlaced"] } } ``` source? **Type** `any[]` A list of `source` values to match against. The `source` indicates where the event originated. ```js { pattern: { source: ["my.source", "my.source2"] } } ``` ### queue **Type** `Input` The ARN of the SQS Queue. ### transform? **Type** `Object` - [`rule?`](#transform-rule) - [`target?`](#transform-target) [Transform](/docs/components#transform) how this subscription creates its underlying resources. rule? 
**Type** [`EventRuleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventrule/#inputs)` | (args: `[`EventRuleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventrule/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EventBus rule resource. target? **Type** [`EventTargetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventtarget/#inputs)` | (args: `[`EventTargetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventtarget/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EventBus target resource. --- ## Bus Reference doc for the `sst.aws.Bus` component. https://sst.dev/docs/component/aws/bus The `Bus` component lets you add an [Amazon EventBridge Event Bus](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-bus.html) to your app. #### Create a bus ```ts const bus = new sst.aws.Bus("MyBus"); ``` #### Add a subscriber ```ts bus.subscribe("MySubscriber", "src/subscriber.handler"); ``` #### Customize the subscriber ```ts bus.subscribe("MySubscriber", { handler: "src/subscriber.handler", timeout: "60 seconds" }); ``` #### Link the bus to a resource You can link the bus to other resources, like a function or your Next.js app. ```ts new sst.aws.Nextjs("MyWeb", { link: [bus] }); ``` Once linked, you can publish messages to the bus from your app. ```ts title="app/page.tsx" {1,9} const eb = new EventBridgeClient({}); await eb.send(new PutEventsCommand({ Entries: [ { EventBusName: Resource.MyBus.name, Source: "my.source", Detail: JSON.stringify({ foo: "bar" }) } ] })); ``` --- ## Constructor ```ts new Bus(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`BusArgs`](#busargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## BusArgs ### logging? 
**Type** `Input` - [`detail?`](#logging-detail) - [`level`](#logging-level) Configure logging for the EventBus. ```js new sst.aws.Bus("MyBus", { logging: { level: "error", detail: true, }, }); ``` detail? **Type** `Input` **Default** `false` Whether to include the event detail in the log. level **Type** `Input<"error" | "info" | "trace">` The level of logging. ### transform? **Type** `Object` - [`bus?`](#transform-bus) [Transform](/docs/components#transform) how this component creates its underlying resources. bus? **Type** [`EventBusArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventbus/#inputs)` | (args: `[`EventBusArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventbus/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EventBus resource. ## Properties ### arn **Type** `Output` The ARN of the EventBus. ### name **Type** `Output` The name of the EventBus. ### nodes **Type** `Object` - [`bus`](#nodes-bus) The underlying [resources](/docs/components/#nodes) this component creates. bus **Type** [`EventBus`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventbus/) The Amazon EventBus resource. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `arn` `string` The ARN of the EventBus. - `name` `string` The name of the EventBus. ## Methods ### subscribe ```ts subscribe(name, subscriber, args?) ``` #### Parameters - `name` `string` The name of the subscription. - `subscriber` `Input` The function that'll be notified. - `args?` [`BusSubscriberArgs`](#bussubscriberargs) Configure the subscription. **Returns** `Output<`[`BusLambdaSubscriber`](/docs/component/aws/bus-lambda-subscriber)`>` Subscribe to this EventBus with a function. 
```js title="sst.config.ts"
bus.subscribe("MySubscription", "src/subscriber.handler");
```

You can add a pattern to the subscription.

```js
bus.subscribe("MySubscription", "src/subscriber.handler", {
  pattern: {
    source: ["my.source", "my.source2"],
    detail: {
      price_usd: [{numeric: [">=", 100]}]
    }
  }
});
```

To customize the subscriber function:

```js
bus.subscribe("MySubscription", {
  handler: "src/subscriber.handler",
  timeout: "60 seconds"
});
```

Or pass in the ARN of an existing Lambda function.

```js title="sst.config.ts"
bus.subscribe("MySubscription", "arn:aws:lambda:us-east-1:123456789012:function:my-function");
```

### subscribeQueue

```ts
subscribeQueue(name, queue, args?)
```

#### Parameters
- `name` `string` The name of the subscription.
- `queue` `Input` The queue that'll be notified.
- `args?` [`BusSubscriberArgs`](#bussubscriberargs) Configure the subscription.

**Returns** `Output<`[`BusQueueSubscriber`](/docs/component/aws/bus-queue-subscriber)`>`

Subscribe to this EventBus with an SQS Queue.

For example, let's say you have a queue.

```js title="sst.config.ts"
const queue = new sst.aws.Queue("MyQueue");
```

You can subscribe to this bus with it.

```js title="sst.config.ts"
bus.subscribeQueue("MySubscription", queue);
```

You can also add a pattern to filter the subscription.

```js
bus.subscribeQueue("MySubscription", queue, {
  pattern: {
    detail: {
      price_usd: [{numeric: [">=", 100]}]
    }
  }
});
```

Or pass in the ARN of an existing SQS queue.

```js
bus.subscribeQueue("MySubscription", "arn:aws:sqs:us-east-1:123456789012:my-queue");
```

### static get

```ts
Bus.get(name, busName, opts?)
```

#### Parameters
- `name` `string` The name of the component.
- `busName` `Input<string>` The name of the existing EventBus.
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

**Returns** [`Bus`](.)

Reference an existing EventBus with the given name. This is useful when you create a bus in one stage and want to share it in another stage.
It avoids having to create a new bus in the other stage.

:::tip
You can use the `static get` method to share an EventBus across stages.
:::

Imagine you create a bus in the `dev` stage. And in your personal stage `frank`, instead of creating a new bus, you want to share the bus from `dev`.

```ts title="sst.config.ts"
const bus = $app.stage === "frank"
  ? sst.aws.Bus.get("MyBus", "app-dev-MyBus")
  : new sst.aws.Bus("MyBus");
```

Here `app-dev-MyBus` is the name of the bus created in the `dev` stage. You can find this by outputting the bus name in the `dev` stage.

```ts title="sst.config.ts"
return bus.name;
```

### static subscribe

```ts
Bus.subscribe(name, busArn, subscriber, args?)
```

#### Parameters
- `name` `string` The name of the subscription.
- `busArn` `Input<string>` The ARN of the EventBus to subscribe to.
- `subscriber` `Input` The function that'll be notified.
- `args?` [`BusSubscriberArgs`](#bussubscriberargs) Configure the subscription.

**Returns** `Output<`[`BusLambdaSubscriber`](/docs/component/aws/bus-lambda-subscriber)`>`

Subscribe to an EventBus that was not created in your app with a function.

For example, let's say you have an existing EventBus with the following ARN.

```js title="sst.config.ts"
const busArn = "arn:aws:events:us-east-1:123456789012:event-bus/my-bus";
```

You can subscribe to it by passing in the ARN.

```js title="sst.config.ts"
sst.aws.Bus.subscribe("MySubscription", busArn, "src/subscriber.handler");
```

To add a pattern to the subscription.

```js
sst.aws.Bus.subscribe("MySubscription", busArn, "src/subscriber.handler", {
  pattern: {
    detail: {
      price_usd: [{numeric: [">=", 100]}]
    }
  }
});
```

Or customize the subscriber function.

```js
sst.aws.Bus.subscribe("MySubscription", busArn, {
  handler: "src/subscriber.handler",
  timeout: "60 seconds"
});
```

### static subscribeQueue

```ts
Bus.subscribeQueue(name, busArn, queue, args?)
```

#### Parameters
- `name` `string` The name of the subscription.
- `busArn` `Input<string>` The ARN of the EventBus to subscribe to.
- `queue` `Input` The queue that'll be notified.
- `args?` [`BusSubscriberArgs`](#bussubscriberargs) Configure the subscription.

**Returns** `Output<`[`BusQueueSubscriber`](/docs/component/aws/bus-queue-subscriber)`>`

Subscribe to an existing EventBus with an SQS Queue.

For example, let's say you have an existing EventBus and an SQS Queue.

```js title="sst.config.ts"
const busArn = "arn:aws:events:us-east-1:123456789012:event-bus/MyBus";
const queue = new sst.aws.Queue("MyQueue");
```

You can subscribe to the bus with the queue.

```js title="sst.config.ts"
sst.aws.Bus.subscribeQueue("MySubscription", busArn, queue);
```

Add a pattern to filter the subscription.

```js title="sst.config.ts"
sst.aws.Bus.subscribeQueue("MySubscription", busArn, queue, {
  pattern: {
    detail: {
      price_usd: [{numeric: [">=", 100]}]
    }
  }
});
```

Or pass in the ARN of an existing SQS queue.

```js
sst.aws.Bus.subscribeQueue("MySubscription", busArn, "arn:aws:sqs:us-east-1:123456789012:my-queue");
```

## BusSubscriberArgs

### pattern?
**Type** `Input`
- [`detail?`](#pattern-detail)
- [`detailType?`](#pattern-detailtype)
- [`source?`](#pattern-source)

Filter the messages that'll be processed by the subscriber. If any single property in the pattern doesn't match an attribute assigned to the message, then the pattern rejects the message.

:::tip
Learn more about [event patterns](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-patterns.html).
:::

For example, if your EventBus message contains this in a JSON format.

```js
{
  source: "my.source",
  detail: {
    price_usd: 210.75
  },
  "detail-type": "orderPlaced"
}
```

Then this pattern accepts the message.

```js
{
  pattern: {
    source: ["my.source", "my.source2"]
  }
}
```

detail?
**Type** `Record<string, any>`

An object of `detail` values to match against, where the key is the name and the value is the pattern to match. The `detail` contains the actual data associated with the event.

```js
{
  pattern: {
    detail: {
      price_usd: [{numeric: [">=", 100]}]
    }
  }
}
```

detailType?
**Type** `any[]` A list of `detail-type` values to match against. The `detail-type` typically defines the kind of event that is emitted. ```js { pattern: { detailType: ["orderPlaced"] } } ``` source? **Type** `any[]` A list of `source` values to match against. The `source` indicates where the event originated. ```js { pattern: { source: ["my.source", "my.source2"] } } ``` ### transform? **Type** `Object` - [`rule?`](#transform-rule) - [`target?`](#transform-target) [Transform](/docs/components#transform) how this subscription creates its underlying resources. rule? **Type** [`EventRuleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventrule/#inputs)` | (args: `[`EventRuleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventrule/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EventBus rule resource. target? **Type** [`EventTargetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventtarget/#inputs)` | (args: `[`EventTargetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventtarget/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EventBus target resource. --- ## Cdn Reference doc for the `sst.aws.Cdn` component. https://sst.dev/docs/component/aws/cdn The `Cdn` component is internally used by other components to deploy a CDN to AWS. It uses [Amazon CloudFront](https://aws.amazon.com/cloudfront/) and [Amazon Route 53](https://aws.amazon.com/route53/) to manage custom domains. :::note This component is not intended to be created directly. ::: You'll find this component exposed in the `transform` of other components. And you can customize the args listed here. 
For example: ```ts title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { transform: { cdn: (args) => { args.wait = false; } } }); ``` --- ## Constructor ```ts new Cdn(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`CdnArgs`](#cdnargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## CdnArgs ### comment? **Type** `Input` A comment to describe the distribution. It cannot be longer than 128 characters. ### customErrorResponses? **Type** `Input[]>` One or more custom error responses. ### defaultCacheBehavior **Type** `Input<`[`DistributionDefaultCacheBehavior`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudfront/distribution/#distributiondefaultcachebehavior)`>` The default cache behavior for this distribution. ### defaultRootObject? **Type** `Input` An object you want CloudFront to return when a user requests the root URL. For example, the `index.html`. ### domain? **Type** `Input` - [`aliases?`](#domain-aliases) - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name`](#domain-name) - [`redirects?`](#domain-redirects) Set a custom domain for your distribution. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers. ::: By default this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Specify a `www.` version of the custom domain. ```js { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` aliases? **Type** `Input` Alias domains that should be used. Unlike the `redirect` option, this keeps your visitors on this alias domain. 
So if your users visit `app2.domain.com`, they will stay on `app2.domain.com` in their browser. ```js {4} { domain: { name: "app1.domain.com", aliases: ["app2.domain.com"] } } ``` cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. The certificate will be created in the `us-east-1` region as required by AWS CloudFront. If you are creating your own certificate, you must also create it in `us-east-1`. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the CloudFront distribution URL. ```js { domain: { name: "domain.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to the AWS DNS provider. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare, needs the Cloudflare provider. 
```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel, needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` redirects? **Type** `Input` Alternate domains to be used. Visitors to the alternate domains will be redirected to the main `name`. :::note Unlike the `aliases` option, this will redirect visitors back to the main `name`. ::: Use this to create a `www.` version of your domain and redirect visitors to the apex domain. ```js {4} { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` ### orderedCacheBehaviors? **Type** `Input[]>` An ordered list of cache behaviors for this distribution. Listed in order of precedence. The first cache behavior will have precedence 0. ### originGroups? **Type** `Input[]>` One or more origin groups for this distribution. ### origins **Type** `Input[]>` One or more origins for this distribution. ### tags? **Type** `Input>>` Tags to apply to the distribution. ### transform? **Type** `Object` - [`distribution`](#transform-distribution) [Transform](/docs/components#transform) how this component creates its underlying resources. distribution **Type** [`DistributionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudfront/distribution/#inputs)` | (args: `[`DistributionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudfront/distribution/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudFront distribution resource. ### wait? **Type** `Input` **Default** `true` Whether to wait for the CloudFront distribution to be deployed before completing the deployment of the app. 
This is necessary if you need to use the distribution URL in other resources. ### webAclArn? **Type** `Input` The ARN of a WAF WebACL to associate with the CloudFront distribution. ```ts { webAclArn: "arn:aws:wafv2:us-east-1:123456789012:global/webacl/my-acl/abc123" } ``` ## Properties ### domainUrl **Type** `Output` If the custom domain is enabled, this is the URL of the distribution with the custom domain. ### nodes **Type** `Object` - [`distribution`](#nodes-distribution) The underlying [resources](/docs/components/#nodes) this component creates. distribution **Type** `Output<`[`Distribution`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudfront/distribution/)`>` The Amazon CloudFront distribution. ### url **Type** `Output` The CloudFront URL of the distribution. ## Methods ### static get ```ts Cdn.get(name, distributionID, opts?) ``` #### Parameters - `name` `string` The name of the component. - `distributionID` `Input` The ID of the existing CDN distribution. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Cdn`](.) Reference an existing CDN with the given distribution ID. This is useful when you create a CDN in one stage and want to share it in another. It avoids having to create a new CDN in the other stage. :::tip You can use the `static get` method to share CDNs across stages. ::: ## CdnDomainArgs ### aliases? **Type** `Input` Alias domains that should be used. Unlike the `redirect` option, this keeps your visitors on this alias domain. So if your users visit `app2.domain.com`, they will stay on `app2.domain.com` in their browser. ```js {4} { domain: { name: "app1.domain.com", aliases: ["app2.domain.com"] } } ``` ### cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. The certificate will be created in the `us-east-1` region as required by AWS CloudFront. 
If you are creating your own certificate, you must also create it in `us-east-1`. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the CloudFront distribution URL. ```js { domain: { name: "domain.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` ### dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to the AWS DNS provider. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare, needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel, needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` ### name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` ### redirects? **Type** `Input` Alternate domains to be used. 
Visitors to the alternate domains will be redirected to the main `name`. :::note Unlike the `aliases` option, this will redirect visitors back to the main `name`. ::: Use this to create a `www.` version of your domain and redirect visitors to the apex domain. ```js {4} { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` --- ## Cluster.v1 Reference doc for the `sst.aws.Cluster.v1` component. https://sst.dev/docs/component/aws/cluster-v1 The `Cluster` component lets you create a cluster of containers and add services to it. It uses [Amazon ECS](https://aws.amazon.com/ecs/) on [AWS Fargate](https://aws.amazon.com/fargate/). For existing usage, rename `sst.aws.Cluster` to `sst.aws.Cluster.v1`. For new Clusters, use the latest [`Cluster`](/docs/component/aws/cluster) component instead. :::caution This component has been deprecated. ::: #### Create a Cluster ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Cluster.v1("MyCluster", { vpc }); ``` #### Add a service ```ts title="sst.config.ts" cluster.addService("MyService"); ``` #### Add a public custom domain ```ts title="sst.config.ts" cluster.addService("MyService", { public: { domain: "example.com", ports: [ { listen: "80/http" }, { listen: "443/https", forward: "80/http" }, ] } }); ``` #### Enable auto-scaling ```ts title="sst.config.ts" cluster.addService("MyService", { scaling: { min: 4, max: 16, cpuUtilization: 50, memoryUtilization: 50, } }); ``` #### Link resources [Link resources](/docs/linking/) to your service. This will grant permissions to the resources and allow you to access them in your app. ```ts {4} title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); cluster.addService("MyService", { link: [bucket], }); ``` If your service is written in Node.js, you can use the [SDK](/docs/reference/sdk/) to access the linked resources. 
```ts title="app.ts" console.log(Resource.MyBucket.name); ``` --- ## Constructor ```ts new Cluster.v1(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`ClusterArgs`](#clusterargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## ClusterArgs ### transform? **Type** `Object` - [`cluster?`](#transform-cluster) [Transform](/docs/components#transform) how this component creates its underlying resources. cluster? **Type** [`ClusterArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/cluster/#inputs)` | (args: `[`ClusterArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/cluster/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Cluster resource. ### vpc **Type** `Input` - [`id`](#vpc-id) - [`privateSubnets`](#vpc-privatesubnets) - [`publicSubnets`](#vpc-publicsubnets) - [`securityGroups`](#vpc-securitygroups) The VPC to use for the cluster. ```js { vpc: { id: "vpc-0d19d2b8ca2b268a1", publicSubnets: ["subnet-0b6a2b73896dc8c4c", "subnet-021389ebee680c2f0"], privateSubnets: ["subnet-0db7376a7ad4db5fd", "subnet-06fc7ee8319b2c0ce"], securityGroups: ["sg-0399348378a4c256c"], } } ``` Or create a `Vpc` component. ```js title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` And pass it in. ```js { vpc: myVpc } ``` id **Type** `Input` The ID of the VPC. privateSubnets **Type** `Input[]>` A list of private subnet IDs in the VPC. The service will be placed in the private subnets. publicSubnets **Type** `Input[]>` A list of public subnet IDs in the VPC. If a service has public ports configured, its load balancer will be placed in the public subnets. securityGroups **Type** `Input[]>` A list of VPC security group IDs for the service. ## Properties ### nodes **Type** `Object` - [`cluster`](#nodes-cluster) The underlying [resources](/docs/components/#nodes) this component creates. 
cluster **Type** [`Cluster`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/cluster/) The Amazon ECS Cluster. ## Methods ### addService ```ts addService(name, args?) ``` #### Parameters - `name` `string` Name of the service. - `args?` [`ClusterServiceArgs`](#clusterserviceargs) Configure the service. **Returns** [`Service`](/docs/component/aws/service-v1) Add a service to the cluster. ```ts title="sst.config.ts" cluster.addService("MyService"); ``` Set a custom domain for the service. ```js {2} title="sst.config.ts" cluster.addService("MyService", { domain: "example.com" }); ``` #### Enable auto-scaling ```ts title="sst.config.ts" cluster.addService("MyService", { scaling: { min: 4, max: 16, cpuUtilization: 50, memoryUtilization: 50, } }); ``` ## ClusterServiceArgs ### architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The CPU architecture of the container in this service. ```js { architecture: "arm64" } ``` ### cpu? **Type** `"0.25 vCPU" | "0.5 vCPU" | "1 vCPU" | "2 vCPU" | "4 vCPU" | "8 vCPU" | "16 vCPU"` **Default** `"0.25 vCPU"` The amount of CPU allocated to the container in this service. :::note [View the valid combinations](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-tasks-services.html#fargate-tasks-size) of CPU and memory. ::: ```js { cpu: "1 vCPU" } ``` ### dev? **Type** `Object` - [`autostart?`](#dev-autostart) - [`command?`](#dev-command) - [`directory?`](#dev-directory) - [`url?`](#dev-url) Configure how this component works in `sst dev`. :::note In `sst dev` your service is run locally; it's not deployed. ::: Instead of deploying your service, this starts it locally. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev). autostart? **Type** `Input` **Default** `true` Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later. command? 
**Type** `Input` The command that `sst dev` runs to start this in dev mode. This is the command you run when you want to run your service locally. directory? **Type** `Input` **Default** Uses the `image.dockerfile` path Change the directory from where the `command` is run. url? **Type** `Input` **Default** `"http://url-unavailable-in-dev.mode"` The `url` when this is running in dev mode. Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`. ### environment? **Type** `Input>>` Key-value pairs of values that are set as [container environment variables](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html). The keys need to: - Start with a letter - Be at least 2 characters long - Contain only letters, numbers, or underscores ```js { environment: { DEBUG: "true" } } ``` ### image? **Type** `Input` - [`args?`](#image-args) - [`context?`](#image-context) - [`dockerfile?`](#image-dockerfile) **Default** `{}` Configure the docker build command for building the image. Prior to building the image, SST will automatically add the `.sst` directory to the `.dockerignore` if not already present. ```js { image: { context: "./app", dockerfile: "Dockerfile", args: { MY_VAR: "value" } } } ``` args? **Type** `Input>>` Key-value pairs of [build args](https://docs.docker.com/build/guide/build-args/) to pass to the docker build command. ```js { args: { MY_VAR: "value" } } ``` context? **Type** `Input` **Default** `"."` The path to the [Docker build context](https://docs.docker.com/build/building/context/#local-context). The path is relative to your project's `sst.config.ts`. To change where the docker build context is located. ```js { context: "./app" } ``` dockerfile? 
**Type** `Input` **Default** `"Dockerfile"` The path to the [Dockerfile](https://docs.docker.com/reference/cli/docker/image/build/#file). The path is relative to the build `context`. To use a different Dockerfile. ```js { dockerfile: "Dockerfile.prod" } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your service. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access them in your app using the [SDK](/docs/reference/sdk/). Takes a list of components to link to the service. ```js { link: [bucket, stripeKey] } ``` ### logging? **Type** `Input` - [`retention?`](#logging-retention) **Default** `{ retention: "1 month" }` Configure the service's logs in CloudWatch. ```js { logging: { retention: "forever" } } ``` retention? **Type** `Input<"1 day" | "3 days" | "5 days" | "1 week" | "2 weeks" | "1 month" | "2 months" | "3 months" | "4 months" | "5 months" | "6 months" | "1 year" | "13 months" | "18 months" | "2 years" | "3 years" | "5 years" | "6 years" | "7 years" | "8 years" | "9 years" | "10 years" | "forever">` **Default** `"1 month"` The duration the logs are kept in CloudWatch. ### memory? **Type** `"${number} GB"` **Default** `"0.5 GB"` The amount of memory allocated to the container in this service. :::note [View the valid combinations](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-tasks-services.html#fargate-tasks-size) of CPU and memory. ::: ```js { memory: "2 GB" } ``` ### permissions? **Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that the service needs to access. 
These permissions are used to create the service's [task role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html). :::tip If you `link` the service to a resource, the permissions to access it are automatically added. ::: Allow the service to read and write to an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Allow the service to perform all actions on an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:*"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Granting the service permissions to access all resources. ```js { permissions: [ { actions: ["*"], resources: ["*"] }, ] } ``` actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. ```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. 
Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```ts { effect: "deny" } ``` resources **Type** `Input[]>` The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html). ```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### public? **Type** `Input` - [`domain?`](#public-domain) `Input` - [`cert?`](#public-domain-cert) - [`dns?`](#public-domain-dns) - [`name`](#public-domain-name) - [`ports`](#public-ports) `Input` - [`forward?`](#public-ports-forward) - [`listen`](#public-ports-listen) Configure a public endpoint for the service. When configured, a load balancer will be created to route traffic to the containers. By default, the endpoint is an auto-generated load balancer URL. You can also add a custom domain for the public endpoint. ```js { public: { domain: "example.com", ports: [ { listen: "80/http" }, { listen: "443/https", forward: "80/http" } ] } } ``` domain? **Type** `Input` Set a custom domain for your public endpoint. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers. ::: By default this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. 
::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the load balancer endpoint. ```js { domain: { name: "example.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to the AWS DNS provider. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare, needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel, needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` ports **Type** `Input` Configure the mapping for the ports the public endpoint listens to and forwards to the service. This supports two types of protocols: 1. Application Layer Protocols: `http` and `https`. 
This'll create an [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html). 2. Network Layer Protocols: `tcp`, `udp`, `tcp_udp`, and `tls`. This'll create a [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html). :::note If you are listening on `https` or `tls`, you need to specify a custom `public.domain`. ::: You can **not** configure both application and network layer protocols for the same service. Here we are listening on port `80` and forwarding it to the service on port `8080`. ```js { public: { ports: [ { listen: "80/http", forward: "8080/http" } ] } } ``` The `forward` port and protocol default to the `listen` port and protocol. So in this case both are `80/http`. ```js { public: { ports: [ { listen: "80/http" } ] } } ``` forward? **Type** `Input<"${number}/https" | "${number}/http" | "${number}/tcp" | "${number}/udp" | "${number}/tcp_udp" | "${number}/tls">` **Default** The same port and protocol as `listen`. The port and protocol of the container the service forwards the traffic to. Uses the format `{port}/{protocol}`. listen **Type** `Input<"${number}/https" | "${number}/http" | "${number}/tcp" | "${number}/udp" | "${number}/tcp_udp" | "${number}/tls">` The port and protocol the service listens on. Uses the format `{port}/{protocol}`. ### scaling? **Type** `Input` - [`cpuUtilization?`](#scaling-cpuutilization) - [`max?`](#scaling-max) - [`memoryUtilization?`](#scaling-memoryutilization) - [`min?`](#scaling-min) **Default** `{ min: 1, max: 1 }` Configure the service to automatically scale up or down based on the CPU or memory utilization of a container. By default, scaling is disabled and the service will run in a single container. ```js { scaling: { min: 4, max: 16, cpuUtilization: 50, memoryUtilization: 50 } } ``` cpuUtilization? 
**Type** `Input` **Default** `70` The target CPU utilization percentage to scale up or down. It'll scale up when the CPU utilization is above the target and scale down when it's below the target. ```js { scaling: { cpuUtilization: 50 } } ``` max? **Type** `Input` **Default** `1` The maximum number of containers to scale up to. ```js { scaling: { max: 16 } } ``` memoryUtilization? **Type** `Input` **Default** `70` The target memory utilization percentage to scale up or down. It'll scale up when the memory utilization is above the target and scale down when it's below the target. ```js { scaling: { memoryUtilization: 50 } } ``` min? **Type** `Input` **Default** `1` The minimum number of containers to scale down to. ```js { scaling: { min: 4 } } ``` ### storage? **Type** `"${number} GB"` **Default** `"21 GB"` The amount of ephemeral storage (in GB) allocated to a container in this service. ```js { storage: "100 GB" } ``` ### transform? **Type** `Object` - [`image?`](#transform-image) - [`listener?`](#transform-listener) - [`loadBalancer?`](#transform-loadbalancer) - [`loadBalancerSecurityGroup?`](#transform-loadbalancersecuritygroup) - [`logGroup?`](#transform-loggroup) - [`service?`](#transform-service) - [`target?`](#transform-target) - [`taskDefinition?`](#transform-taskdefinition) - [`taskRole?`](#transform-taskrole) [Transform](/docs/components#transform) how this component creates its underlying resources. image? **Type** [`ImageArgs`](https://www.pulumi.com/registry/packages/docker-build/api-docs/image/#inputs)` | (args: `[`ImageArgs`](https://www.pulumi.com/registry/packages/docker-build/api-docs/image/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Docker Image resource. listener? 
**Type** [`ListenerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/listener/#inputs)` | (args: `[`ListenerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/listener/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Load Balancer listener resource. loadBalancer? **Type** [`LoadBalancerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/loadbalancer/#inputs)` | (args: `[`LoadBalancerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/loadbalancer/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Load Balancer resource. loadBalancerSecurityGroup? **Type** [`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)` | (args: `[`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Security Group resource for the Load Balancer. logGroup? **Type** [`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)` | (args: `[`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudWatch log group resource. service? **Type** [`ServiceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/service/#inputs)` | (args: `[`ServiceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/service/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Service resource. target? 
**Type** [`TargetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/targetgroup/#inputs)` | (args: `[`TargetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/targetgroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Load Balancer target group resource. taskDefinition? **Type** [`TaskDefinitionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/taskdefinition/#inputs)` | (args: `[`TaskDefinitionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/taskdefinition/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Task Definition resource. taskRole? **Type** [`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)` | (args: `[`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Task IAM Role resource. --- ## Cluster Reference doc for the `sst.aws.Cluster` component. https://sst.dev/docs/component/aws/cluster The `Cluster` component lets you create an [ECS cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html) for your app. Add `Service` and `Task` components to it. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); ``` Once created, you can add the following to it: 1. `Service`: These are containers that are always running, like web or application servers. They automatically restart if they fail. 2. `Task`: These are containers that are used for long-running asynchronous work, like data processing. --- ## Constructor ```ts new Cluster(name, args, opts?) 
``` #### Parameters - `name` `string` - `args` [`ClusterArgs`](#clusterargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## ClusterArgs ### transform? **Type** `Object` - [`cluster?`](#transform-cluster) [Transform](/docs/components#transform) how this component creates its underlying resources. cluster? **Type** [`ClusterArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/cluster/#inputs)` | (args: `[`ClusterArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/cluster/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Cluster resource. ### vpc **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`cloudmapNamespaceId?`](#vpc-cloudmapnamespaceid) - [`cloudmapNamespaceName?`](#vpc-cloudmapnamespacename) - [`containerSubnets?`](#vpc-containersubnets) - [`id`](#vpc-id) - [`loadBalancerSubnets`](#vpc-loadbalancersubnets) - [`publicSubnets?`](#vpc-publicsubnets) - [`securityGroups`](#vpc-securitygroups) The VPC to use for the cluster. Create a `Vpc` component. ```js title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` Or reference an existing VPC. ```js title="sst.config.ts" const myVpc = sst.aws.Vpc.get("MyVpc", "vpc-12345678901234567"); ``` And pass it in. ```js { vpc: myVpc } ``` By default, both the load balancer and the services are deployed in public subnets. The above is equivalent to: ```js { vpc: { id: myVpc.id, securityGroups: myVpc.securityGroups, containerSubnets: myVpc.publicSubnets, loadBalancerSubnets: myVpc.publicSubnets, cloudmapNamespaceId: myVpc.nodes.cloudmapNamespace.id, cloudmapNamespaceName: myVpc.nodes.cloudmapNamespace.name } } ``` cloudmapNamespaceId? **Type** `Input` The ID of the Cloud Map namespace to use for the service. cloudmapNamespaceName? **Type** `Input` The name of the Cloud Map namespace to use for the service. containerSubnets? 
**Type** `Input[]>` A list of subnet IDs in the VPC to place the containers in. id **Type** `Input` The ID of the VPC. loadBalancerSubnets **Type** `Input[]>` A list of subnet IDs in the VPC to place the load balancer in. publicSubnets? **Type** `Input[]>` A list of public subnet IDs in the VPC. securityGroups **Type** `Input[]>` A list of VPC security group IDs for the service. ## Properties ### id **Type** `Output` The cluster ID. ### nodes **Type** `Object` - [`cluster`](#nodes-cluster) The underlying [resources](/docs/components/#nodes) this component creates. cluster **Type** `Output<`[`Cluster`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/cluster/)`>` The Amazon ECS Cluster. ## Methods ### static get ```ts Cluster.get(name, args, opts?) ``` #### Parameters - `name` `string` The name of the component. - `args` [`ClusterGetArgs`](#clustergetargs) The arguments to get the cluster. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Cluster`](.) Reference an existing ECS Cluster with the given ID. This is useful when you create a cluster in one stage and want to share it in another. It avoids having to create a new cluster in the other stage. :::tip You can use the `static get` method to share clusters across stages. ::: Imagine you create a cluster in the `dev` stage. And in your personal stage `frank`, instead of creating a new cluster, you want to share the same cluster from `dev`. ```ts title="sst.config.ts" const cluster = $app.stage === "frank" ? sst.aws.Cluster.get("MyCluster", { id: "arn:aws:ecs:us-east-1:123456789012:cluster/app-dev-MyCluster", vpc, }) : new sst.aws.Cluster("MyCluster", { vpc }); ``` Here `arn:aws:ecs:us-east-1:123456789012:cluster/app-dev-MyCluster` is the ID of the cluster created in the `dev` stage. You can find this by outputting the cluster ID in the `dev` stage.
```ts title="sst.config.ts" return { id: cluster.id, }; ``` ## ClusterGetArgs ### id **Type** `Input` The ID of the cluster. ### vpc **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`cloudmapNamespaceId?`](#vpc-cloudmapnamespaceid-1) - [`cloudmapNamespaceName?`](#vpc-cloudmapnamespacename-1) - [`containerSubnets?`](#vpc-containersubnets-1) - [`id`](#vpc-id-1) - [`loadBalancerSubnets`](#vpc-loadbalancersubnets-1) - [`publicSubnets?`](#vpc-publicsubnets-1) - [`securityGroups`](#vpc-securitygroups-1) The VPC used for the cluster. cloudmapNamespaceId? **Type** `Input` The ID of the Cloud Map namespace to use for the service. cloudmapNamespaceName? **Type** `Input` The name of the Cloud Map namespace to use for the service. containerSubnets? **Type** `Input[]>` A list of subnet IDs in the VPC to place the containers in. id **Type** `Input` The ID of the VPC. loadBalancerSubnets **Type** `Input[]>` A list of subnet IDs in the VPC to place the load balancer in. publicSubnets? **Type** `Input[]>` A list of public subnet IDs in the VPC. securityGroups **Type** `Input[]>` A list of VPC security group IDs for the service. --- ## CognitoIdentityPool Reference doc for the `sst.aws.CognitoIdentityPool` component. https://sst.dev/docs/component/aws/cognito-identity-pool The `CognitoIdentityPool` component lets you add an [Amazon Cognito identity pool](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-identity.html) to your app.
#### Create the identity pool ```ts title="sst.config.ts" new sst.aws.CognitoIdentityPool("MyIdentityPool", { userPools: [ { userPool: "us-east-1_QY6Ly46JH", client: "6va5jg3cgtrd170sgokikjm5m6" } ] }); ``` #### Configure permissions for authenticated users ```ts title="sst.config.ts" new sst.aws.CognitoIdentityPool("MyIdentityPool", { userPools: [ { userPool: "us-east-1_QY6Ly46JH", client: "6va5jg3cgtrd170sgokikjm5m6" } ], permissions: { authenticated: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] } ] } }); ``` --- ## Constructor ```ts new CognitoIdentityPool(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`CognitoIdentityPoolArgs`](#cognitoidentitypoolargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## CognitoIdentityPoolArgs ### permissions? **Type** `Input` - [`authenticated?`](#permissions-authenticated) `Input` - [`actions`](#permissions-authenticated-actions) - [`conditions?`](#permissions-authenticated-conditions) `Input[]>` - [`test`](#permissions-authenticated-conditions-test) - [`values`](#permissions-authenticated-conditions-values) - [`variable`](#permissions-authenticated-conditions-variable) - [`effect?`](#permissions-authenticated-effect) - [`resources`](#permissions-authenticated-resources) - [`unauthenticated?`](#permissions-unauthenticated) `Input` - [`actions`](#permissions-unauthenticated-actions) - [`conditions?`](#permissions-unauthenticated-conditions) `Input[]>` - [`test`](#permissions-unauthenticated-conditions-test) - [`values`](#permissions-unauthenticated-conditions-values) - [`variable`](#permissions-unauthenticated-conditions-variable) - [`effect?`](#permissions-unauthenticated-effect) - [`resources`](#permissions-unauthenticated-resources) The permissions to attach to the authenticated and unauthenticated roles. This allows the authenticated and unauthenticated users to access other AWS resources. 
```js { permissions: { authenticated: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] } ], unauthenticated: [ { actions: ["s3:GetObject"], resources: ["arn:aws:s3:::my-bucket/*"] } ] } } ``` authenticated? **Type** `Input` Attaches the given list of permissions to the authenticated users. actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. ```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```ts { effect: "deny" } ``` resources **Type** `Input[]>` The resources, specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html). ```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` unauthenticated?
**Type** `Input` Attaches the given list of permissions to the unauthenticated users. actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. ```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```ts { effect: "deny" } ``` resources **Type** `Input[]>` The resources, specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html). ```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### transform? **Type** `Object` - [`authenticatedRole?`](#transform-authenticatedrole) - [`identityPool?`](#transform-identitypool) - [`unauthenticatedRole?`](#transform-unauthenticatedrole) [Transform](/docs/components#transform) how this component creates its underlying resources.
authenticatedRole? **Type** [`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)` | (args: `[`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the authenticated IAM role resource. identityPool? **Type** [`IdentityPoolArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/identitypool/#inputs)` | (args: `[`IdentityPoolArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/identitypool/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Cognito identity pool resource. unauthenticatedRole? **Type** [`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)` | (args: `[`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the unauthenticated IAM role resource. ### userPools? **Type** `Input[]>` - [`client`](#userpools-client) - [`userPool`](#userpools-userpool) Configure Cognito User Pools as identity providers to your identity pool. ```ts { userPools: [ { userPool: "us-east-1_QY6Ly46JH", client: "6va5jg3cgtrd170sgokikjm5m6" } ] } ``` client **Type** `Input` The Cognito User Pool client ID. userPool **Type** `Input` The Cognito user pool ID. ## Properties ### id **Type** `Output` The Cognito identity pool ID. ### nodes **Type** `Object` - [`authenticatedRole`](#nodes-authenticatedrole) - [`identityPool`](#nodes-identitypool) - [`unauthenticatedRole`](#nodes-unauthenticatedrole) The underlying [resources](/docs/components/#nodes) this component creates. authenticatedRole **Type** [`Role`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/) The authenticated IAM role. 
identityPool **Type** [`IdentityPool`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/identitypool/) The Amazon Cognito identity pool. unauthenticatedRole **Type** [`Role`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/) The unauthenticated IAM role. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `id` `string` The Cognito identity pool ID. ## Methods ### static get ```ts CognitoIdentityPool.get(name, identityPoolID, opts?) ``` #### Parameters - `name` `string` The name of the component. - `identityPoolID` `Input` The ID of the existing Identity Pool. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`CognitoIdentityPool`](.) Reference an existing Identity Pool with the given ID. This is useful when you create an Identity Pool in one stage and want to share it in another. It avoids having to create a new Identity Pool in the other stage. :::tip You can use the `static get` method to share Identity Pools across stages. ::: Imagine you create an Identity Pool in the `dev` stage. And in your personal stage `frank`, instead of creating a new pool, you want to share the same pool from `dev`. ```ts title="sst.config.ts" const identityPool = $app.stage === "frank" ? sst.aws.CognitoIdentityPool.get("MyIdentityPool", "us-east-1:02facf30-e2f3-49ec-9e79-c55187415cf8") : new sst.aws.CognitoIdentityPool("MyIdentityPool"); ``` Here `us-east-1:02facf30-e2f3-49ec-9e79-c55187415cf8` is the ID of the Identity Pool created in the `dev` stage. You can find this by outputting the Identity Pool ID in the `dev` stage. ```ts title="sst.config.ts" return { identityPool: identityPool.id }; ``` --- ## CognitoIdentityProvider Reference doc for the `sst.aws.CognitoIdentityProvider` component.
https://sst.dev/docs/component/aws/cognito-identity-provider The `CognitoIdentityProvider` component is internally used by the `CognitoUserPool` component to add identity providers to your [Amazon Cognito user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `addIdentityProvider` method of the `CognitoUserPool` component. --- ## Constructor ```ts new CognitoIdentityProvider(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`Args`](#args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`identityProvider`](#nodes-identityprovider) The underlying [resources](/docs/components/#nodes) this component creates. identityProvider **Type** [`IdentityProvider`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/identityprovider/) The Cognito identity provider. ### providerName **Type** `Output` The Cognito identity provider name. ## Args ### attributes? **Type** `Input>>` Define a mapping between identity provider attributes and user pool attributes. ```ts { email: "email", username: "sub" } ``` ### details **Type** `Input>>` Configure the identity provider details, including the scopes, URLs, and identifiers. ```ts { authorize_scopes: "email profile", client_id: "your-client-id", client_secret: "your-client-secret" } ``` ### transform? **Type** `Object` - [`identityProvider?`](#transform-identityprovider) [Transform](/docs/components#transform) how this component creates its underlying resources. identityProvider? 
**Type** [`IdentityProviderArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/identityprovider/#inputs)` | (args: `[`IdentityProviderArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/identityprovider/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Cognito identity provider resource. ### type **Type** `Input<"oidc" | "saml" | "google" | "facebook" | "apple" | "amazon">` The type of identity provider. ### userPool **Type** `Input` The Cognito user pool ID. --- ## CognitoUserPoolClient Reference doc for the `sst.aws.CognitoUserPoolClient` component. https://sst.dev/docs/component/aws/cognito-user-pool-client The `CognitoUserPoolClient` component is internally used by the `CognitoUserPool` component to add clients to your [Amazon Cognito user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `addClient` method of the `CognitoUserPool` component. --- ## Constructor ```ts new CognitoUserPoolClient(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`Args`](#args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### id **Type** `Output` The Cognito User Pool client ID. ### nodes **Type** `Object` - [`client`](#nodes-client) The underlying [resources](/docs/components/#nodes) this component creates. client **Type** [`UserPoolClient`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/userpoolclient/) The Cognito User Pool client. ### secret **Type** `Output` The Cognito User Pool client secret. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). 
- `id` `string` The Cognito User Pool client ID. - `secret` `string` The Cognito User Pool client secret. ## Args ### callbackUrls? **Type** `Input[]>` List of allowed callback URLs for the identity providers. ### providers? **Type** `Input[]>` **Default** `["COGNITO"]` A list of identity providers that are supported for this client. :::tip Reference federated identity providers using their `providerName` property. ::: Say you are using a federated identity provider. ```js title="sst.config.ts" const provider = userPool.addIdentityProvider("MyProvider", { type: "oidc", details: { authorize_scopes: "email profile", client_id: "your-client-id", client_secret: "your-client-secret" }, }); ``` Make sure to pass in `provider.providerName` instead of hardcoding it to `"MyProvider"`. ```ts title="sst.config.ts" {2} userPool.addClient("Web", { providers: [provider.providerName] }); ``` This ensures the client is created after the provider. ### transform? **Type** `Object` - [`client?`](#transform-client) [Transform](/docs/components#transform) how this component creates its underlying resources. client? **Type** [`UserPoolClientArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/userpoolclient/#inputs)` | (args: `[`UserPoolClientArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/userpoolclient/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Cognito User Pool client resource. ### userPool **Type** `Input` The Cognito user pool ID. --- ## CognitoUserPool Reference doc for the `sst.aws.CognitoUserPool` component. https://sst.dev/docs/component/aws/cognito-user-pool The `CognitoUserPool` component lets you add an [Amazon Cognito User Pool](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html) to your app.
#### Create the user pool ```ts title="sst.config.ts" const userPool = new sst.aws.CognitoUserPool("MyUserPool"); ``` #### Login using email ```ts title="sst.config.ts" new sst.aws.CognitoUserPool("MyUserPool", { usernames: ["email"] }); ``` #### Add a hosted UI domain Use a Cognito prefix domain for the hosted UI. ```ts title="sst.config.ts" new sst.aws.CognitoUserPool("MyUserPool", { domain: { prefix: "my-app-dev" } }); ``` Or use your own custom domain. ```ts title="sst.config.ts" new sst.aws.CognitoUserPool("MyUserPool", { domain: "auth.example.com" }); ``` #### Configure triggers ```ts title="sst.config.ts" new sst.aws.CognitoUserPool("MyUserPool", { triggers: { preAuthentication: "src/preAuthentication.handler", postAuthentication: "src/postAuthentication.handler", }, }); ``` #### Add Google identity provider ```ts title="sst.config.ts" const GoogleClientId = new sst.Secret("GOOGLE_CLIENT_ID"); const GoogleClientSecret = new sst.Secret("GOOGLE_CLIENT_SECRET"); userPool.addIdentityProvider("Google", { type: "google", details: { authorize_scopes: "email profile", client_id: GoogleClientId.value, client_secret: GoogleClientSecret.value, }, attributes: { email: "email", name: "name", username: "sub", }, }); ``` #### Add a client ```ts title="sst.config.ts" userPool.addClient("Web"); ``` --- ## Constructor ```ts new CognitoUserPool(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`CognitoUserPoolArgs`](#cognitouserpoolargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## CognitoUserPoolArgs ### advancedSecurity? **Type** `Input<"audit" | "enforced">` **Default** Advanced security is disabled. Enable advanced security features. Learn more about [advanced security](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-advanced-security.html). ```ts { advancedSecurity: "enforced" } ``` ### aliases? **Type** `Input[]>` **Default** User can only sign in with their username.
Configure the different ways a user can sign in besides using their username. :::note You cannot change the aliases property once the User Pool has been created. Learn more about [aliases](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.html#user-pool-settings-aliases). ::: ```ts { aliases: ["email"] } ``` ### domain? **Type** `Input` - [`prefix`](#domain-prefix) - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name`](#domain-name) Configure a domain for the User Pool's hosted UI. You can use either a Cognito-provided prefix domain or your own custom domain. Add a Cognito prefix domain. ```ts { domain: { prefix: "my-app-dev" } } ``` This creates a domain at `my-app-dev.auth.{region}.amazoncognito.com`. Add a custom domain. By default, creates an ACM certificate and configures DNS records using Route 53. ```ts { domain: "auth.example.com" } ``` Use a domain hosted on Cloudflare. ```ts { domain: { name: "auth.example.com", dns: sst.cloudflare.dns() } } ``` prefix **Type** `Input` Use an Amazon Cognito prefix domain. Creates a domain at `{prefix}.auth.{region}.amazoncognito.com`. Cannot contain "aws", "amazon", or "cognito". cert? **Type** `Input` ARN of an existing ACM certificate in `us-east-1`. By default, a certificate is created and validated automatically. dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider for automatic certificate validation and record creation. Set to `false` for manual DNS setup. name **Type** `Input` The custom domain name. Must be a subdomain (e.g., `auth.example.com`). ### mfa? **Type** `Input<"on" | "optional">` **Default** MFA is disabled. Configure the multi-factor authentication (MFA) settings for the User Pool. If you enable MFA using `on` or `optional`, you need to configure either `sms` or `softwareToken` as well. ```ts { mfa: "on" } ``` ### sms? 
**Type** `Input` - [`externalId`](#sms-externalid) - [`snsCallerArn`](#sms-snscallerarn) - [`snsRegion?`](#sms-snsregion) **Default** No SMS settings. Configure the SMS settings for the User Pool. ```ts { sms: { externalId: "1234567890", snsCallerArn: "arn:aws:iam::1234567890:role/CognitoSnsCaller", snsRegion: "us-east-1", } } ``` externalId **Type** `Input` The external ID used in IAM role trust relationships. Learn more about [external IDs](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html). snsCallerArn **Type** `Input` The ARN of the IAM role that Amazon Cognito can assume to access Amazon SNS. snsRegion? **Type** `Input` The AWS Region that Amazon Cognito uses to send SMS messages. ### smsAuthenticationMessage? **Type** `Input` **Default** The default message template. The message template for SMS messages sent to users who are being authenticated. The template must include the `{####}` placeholder, which will be replaced with the verification code. ```ts { smsAuthenticationMessage: "Your authentication code is {####}" } ``` ### softwareToken? **Type** `Input` **Default** `false` Enable software token MFA for the User Pool. ```ts { softwareToken: true } ``` ### transform? **Type** `Object` - [`domain?`](#transform-domain) - [`userPool?`](#transform-userpool) [Transform](/docs/components#transform) how this component creates its underlying resources. domain? **Type** [`UserPoolDomainArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/userpooldomain/#inputs)` | (args: `[`UserPoolDomainArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/userpooldomain/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Cognito User Pool domain resource. userPool?
**Type** [`UserPoolArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/userpool/#inputs)` | (args: `[`UserPoolArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/userpool/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Cognito User Pool resource. ### triggers? **Type** `Input` - [`createAuthChallenge?`](#triggers-createauthchallenge) - [`customEmailSender?`](#triggers-customemailsender) - [`customMessage?`](#triggers-custommessage) - [`customSmsSender?`](#triggers-customsmssender) - [`defineAuthChallenge?`](#triggers-defineauthchallenge) - [`kmsKey?`](#triggers-kmskey) - [`postAuthentication?`](#triggers-postauthentication) - [`postConfirmation?`](#triggers-postconfirmation) - [`preAuthentication?`](#triggers-preauthentication) - [`preSignUp?`](#triggers-presignup) - [`preTokenGeneration?`](#triggers-pretokengeneration) - [`preTokenGenerationVersion?`](#triggers-pretokengenerationversion) - [`userMigration?`](#triggers-usermigration) - [`verifyAuthChallengeResponse?`](#triggers-verifyauthchallengeresponse) **Default** No triggers Configure triggers for this User Pool ```js { triggers: { preAuthentication: "src/preAuthentication.handler", postAuthentication: "src/postAuthentication.handler" } } ``` createAuthChallenge? **Type** `Input` Triggered after the user successfully responds to the previous challenge, and a new challenge needs to be created. Takes the handler path, the function args, or a function ARN. customEmailSender? **Type** `Input` Triggered during events like user sign-up, password recovery, email/phone number verification, and when an admin creates a user. Use this trigger to customize the email provider. Takes the handler path, the function args, or a function ARN. customMessage? **Type** `Input` Triggered during events like user sign-up, password recovery, email/phone number verification, and when an admin creates a user. 
Use this trigger to customize the message that is sent to your users. Takes the handler path, the function args, or a function ARN. customSmsSender? **Type** `Input` Triggered when an SMS message needs to be sent, such as for MFA or verification codes. Use this trigger to customize the SMS provider. Takes the handler path, the function args, or a function ARN. defineAuthChallenge? **Type** `Input` Triggered after each challenge response to determine the next action. Evaluates whether the user has completed the authentication process or if additional challenges are needed. Takes the handler path, the function args, or a function ARN. kmsKey? **Type** `Input` The ARN of the AWS KMS key used for encryption. When `customEmailSender` or `customSmsSender` are configured, Cognito encrypts the verification code and temporary passwords before sending them to your Lambda functions. postAuthentication? **Type** `Input` Triggered after a successful authentication event. Use this to perform custom actions, such as logging or modifying user attributes, after the user is authenticated. Takes the handler path, the function args, or a function ARN. postConfirmation? **Type** `Input` Triggered after a user is successfully confirmed, following sign-up or email/phone number verification. Use this to perform additional actions, like sending a welcome email or initializing user data, after user confirmation. Takes the handler path, the function args, or a function ARN. preAuthentication? **Type** `Input` Triggered before the authentication process begins. Use this to implement custom validation or checks (like checking if the user is banned) before continuing authentication. Takes the handler path, the function args, or a function ARN. preSignUp? **Type** `Input` Triggered before the user sign-up process completes. Use this to perform custom validation, auto-confirm users, or auto-verify attributes based on custom logic.
Takes the handler path, the function args, or a function ARN. preTokenGeneration? **Type** `Input` Triggered before tokens are generated in the authentication process. Use this to customize or add claims to the tokens that will be generated and returned to the user. Takes the handler path, the function args, or a function ARN. preTokenGenerationVersion? **Type** `"v2" | "v1"` **Default** `"v1"` The version of the preTokenGeneration trigger to use. Higher versions have access to more information to support new features. userMigration? **Type** `Input` Triggered when a user attempts to sign in but does not exist in the current user pool. Use this to import and validate users from an existing user directory into the Cognito User Pool during sign-in. Takes the handler path, the function args, or a function ARN. verifyAuthChallengeResponse? **Type** `Input` Triggered after the user responds to a custom authentication challenge. Use this to verify the user's response to the challenge and determine whether to continue authenticating the user. Takes the handler path, the function args, or a function ARN. ### usernames? **Type** `Input[]>` **Default** User can only sign in with their username. Allow users to sign up and sign in with an email address or phone number as their username. :::note You cannot change the usernames property once the User Pool has been created. Learn more about [aliases](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.html#user-pool-settings-aliases). ::: ```ts { usernames: ["email"] } ``` ### verify? **Type** `Input` - [`emailMessage?`](#verify-emailmessage) - [`emailSubject?`](#verify-emailsubject) - [`smsMessage?`](#verify-smsmessage) Configure the verification message sent to users who are being authenticated. emailMessage? **Type** `Input` **Default** `"The verification code to your new account is {####}"` The template for email messages sent to users who are being authenticated.
The template must include the `{####}` placeholder, which will be replaced with the verification code. ```ts { verify: { emailMessage: "The verification code to your new Awesome account is {####}" } } ``` emailSubject? **Type** `Input` **Default** `"Verify your new account"` The subject line for email messages sent to users who are being authenticated. ```ts { verify: { emailSubject: "Verify your new Awesome account" } } ``` smsMessage? **Type** `Input` **Default** `"The verification code to your new account is {####}"` The template for SMS messages sent to users who are being authenticated. The template must include the `{####}` placeholder, which will be replaced with the verification code. ```ts { verify: { smsMessage: "The verification code to your new Awesome account is {####}" } } ``` ## Properties ### arn **Type** `Output` The Cognito User Pool ARN. ### domainUrl **Type** `undefined | Output` If a `domain` is configured, this is the full URL of the hosted UI. ### id **Type** `Output` The Cognito User Pool ID. ### nodes **Type** `Object` - [`userPool`](#nodes-userpool) The underlying [resources](/docs/components/#nodes) this component creates. userPool **Type** `Output<`[`UserPool`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/userpool/)`>` The Amazon Cognito User Pool. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `id` `string` The Cognito User Pool ID. ## Methods ### addClient ```ts addClient(name, args?) ``` #### Parameters - `name` `string` Name of the client. - `args?` [`CognitoUserPoolClientArgs`](#cognitouserpoolclientargs) Configure the client. **Returns** [`CognitoUserPoolClient`](/docs/component/aws/cognito-user-pool-client) Add a client to the User Pool.
```ts userPool.addClient("Web"); ``` ### addIdentityProvider ```ts addIdentityProvider(name, args) ``` #### Parameters - `name` `string` Name of the identity provider. - `args` [`CognitoIdentityProviderArgs`](#cognitoidentityproviderargs) Configure the identity provider. **Returns** [`CognitoIdentityProvider`](/docs/component/aws/cognito-identity-provider) Add a federated identity provider to the User Pool. For example, add a GitHub (OIDC) identity provider. ```ts title="sst.config.ts" const GithubClientId = new sst.Secret("GITHUB_CLIENT_ID"); const GithubClientSecret = new sst.Secret("GITHUB_CLIENT_SECRET"); userPool.addIdentityProvider("GitHub", { type: "oidc", details: { authorize_scopes: "read:user user:email", client_id: GithubClientId.value, client_secret: GithubClientSecret.value, oidc_issuer: "https://github.com/", }, attributes: { email: "email", username: "sub", }, }); ``` Or add a Google identity provider. ```ts title="sst.config.ts" const GoogleClientId = new sst.Secret("GOOGLE_CLIENT_ID"); const GoogleClientSecret = new sst.Secret("GOOGLE_CLIENT_SECRET"); userPool.addIdentityProvider("Google", { type: "google", details: { authorize_scopes: "email profile", client_id: GoogleClientId.value, client_secret: GoogleClientSecret.value, }, attributes: { email: "email", name: "name", username: "sub", }, }); ``` ### static get ```ts CognitoUserPool.get(name, userPoolID, opts?) ``` #### Parameters - `name` `string` The name of the component. - `userPoolID` `Input` The ID of the existing User Pool. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`CognitoUserPool`](.) Reference an existing User Pool with the given ID. This is useful when you create a User Pool in one stage and want to share it in another. It avoids having to create a new User Pool in the other stage. :::tip You can use the `static get` method to share User Pools across stages. ::: Imagine you create a User Pool in the `dev` stage. 
And in your personal stage `frank`, instead of creating a new pool, you want to share the same pool from `dev`. ```ts title="sst.config.ts" const userPool = $app.stage === "frank" ? sst.aws.CognitoUserPool.get("MyUserPool", "us-east-1_gcF5PjhQK") : new sst.aws.CognitoUserPool("MyUserPool"); ``` Here `us-east-1_gcF5PjhQK` is the ID of the User Pool created in the `dev` stage. You can find this by outputting the User Pool ID in the `dev` stage. ```ts title="sst.config.ts" return { userPool: userPool.id }; ``` ## CognitoIdentityProviderArgs ### attributes? **Type** `Input>>` Define a mapping between identity provider attributes and user pool attributes. ```ts { email: "email", username: "sub" } ``` ### details **Type** `Input>>` Configure the identity provider details, including the scopes, URLs, and identifiers. ```ts { authorize_scopes: "email profile", client_id: "your-client-id", client_secret: "your-client-secret" } ``` ### transform? **Type** `Object` - [`identityProvider?`](#transform-identityprovider) [Transform](/docs/components#transform) how this component creates its underlying resources. identityProvider? **Type** [`IdentityProviderArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/identityprovider/#inputs)` | (args: `[`IdentityProviderArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/identityprovider/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Cognito identity provider resource. ### type **Type** `Input<"oidc" | "saml" | "google" | "facebook" | "apple" | "amazon">` The type of identity provider. ## CognitoUserPoolClientArgs ### callbackUrls? **Type** `Input[]>` List of allowed callback URLs for the identity providers. ### providers? **Type** `Input[]>` **Default** `["COGNITO"]` A list of identity providers that are supported for this client. :::tip Reference federated identity providers using their `providerName` property. 
::: For example, say you are using a federated identity provider. ```js title="sst.config.ts" const provider = userPool.addIdentityProvider("MyProvider", { type: "oidc", details: { authorize_scopes: "email profile", client_id: "your-client-id", client_secret: "your-client-secret" }, }); ``` Make sure to pass in `provider.providerName` instead of hardcoding it to `"MyProvider"`. ```ts title="sst.config.ts" {2} userPool.addClient("Web", { providers: [provider.providerName] }); ``` This ensures the client is created after the provider. ### transform? **Type** `Object` - [`client?`](#transform-client) [Transform](/docs/components#transform) how this component creates its underlying resources. client? **Type** [`UserPoolClientArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/userpoolclient/#inputs)` | (args: `[`UserPoolClientArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/userpoolclient/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Cognito User Pool client resource. --- ## CronV2 Reference doc for the `sst.aws.CronV2` component. https://sst.dev/docs/component/aws/cron-v2 The `CronV2` component lets you add cron jobs to your app using [Amazon EventBridge Scheduler](https://docs.aws.amazon.com/scheduler/latest/UserGuide/what-is-scheduler.html). The cron job can invoke a `Function` or a container `Task`. #### Cron job function Pass in a `schedule` and a `function` that'll be executed. ```ts title="sst.config.ts" new sst.aws.CronV2("MyCronJob", { function: "src/cron.handler", schedule: "rate(1 minute)" }); ``` #### Cron job container task Create a container task and pass in a `schedule` and a `task` that'll be executed.
```ts title="sst.config.ts" {5} const cluster = new sst.aws.Cluster("MyCluster"); const task = new sst.aws.Task("MyTask", { cluster }); new sst.aws.CronV2("MyCronJob", { task, schedule: "rate(1 day)" }); ``` #### Set a timezone ```ts title="sst.config.ts" new sst.aws.CronV2("MyCronJob", { function: "src/cron.handler", schedule: "cron(15 10 * * ? *)", timezone: "America/New_York" }); ``` #### Configure retries ```ts title="sst.config.ts" new sst.aws.CronV2("MyCronJob", { function: "src/cron.handler", schedule: "rate(1 minute)", retries: 3 }); ``` #### One-time schedule ```ts title="sst.config.ts" new sst.aws.CronV2("MyCronJob", { function: "src/cron.handler", schedule: "at(2025-06-01T10:00:00)" }); ``` #### Customize the function ```js title="sst.config.ts" new sst.aws.CronV2("MyCronJob", { schedule: "rate(1 minute)", function: { handler: "src/cron.handler", timeout: "60 seconds" } }); ``` --- ## Constructor ```ts new CronV2(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`CronV2Args`](#cronv2args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## CronV2Args ### dlq? **Type** `Input` The ARN of an SQS queue to use as a dead-letter queue. When all retry attempts are exhausted, failed events are sent to this queue. ```ts { dlq: myQueue.arn } ``` ### enabled? **Type** `Input` **Default** true Configures whether the cron job is enabled. When disabled, the cron job won't run. ```ts { enabled: false } ``` ### event? **Type** `Input` The event that'll be passed to the function or task. ```ts { event: { foo: "bar", } } ``` For Lambda functions, the event will be passed to the function as an event. ```ts function handler(event) { console.log(event.foo); } ``` For ECS Fargate tasks, the event will be passed to the task as the `SST_EVENT` environment variable. ```ts const event = JSON.parse(process.env.SST_EVENT); console.log(event.foo); ``` ### function? 
**Type** `Input` The function that'll be executed when the cron job runs. ```ts { function: "src/cron.handler" } ``` You can pass in the full function props. ```ts { function: { handler: "src/cron.handler", timeout: "60 seconds" } } ``` You can also pass in a function ARN. ```ts { function: "arn:aws:lambda:us-east-1:000000000000:function:my-sst-app-jayair-MyFunction", } ``` ### retries? **Type** `Input` **Default** `0` The number of retry attempts for failed invocations. Between 0 and 185. ```ts { retries: 3 } ``` ### schedule **Type** `Input<"rate($\{string\})" | "cron($\{string\})" | "at($\{string\})">` The schedule for the cron job. :::note The cron job continues to run even after you exit `sst dev`. ::: You can use a [rate expression](https://docs.aws.amazon.com/lambda/latest/dg/services-cloudwatchevents-expressions.html). ```ts { schedule: "rate(5 minutes)" // schedule: "rate(1 minute)" // schedule: "rate(5 minutes)" // schedule: "rate(1 hour)" // schedule: "rate(5 hours)" // schedule: "rate(1 day)" // schedule: "rate(5 days)" } ``` Or a [cron expression](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule-schedule.html#eb-cron-expressions). ```ts { schedule: "cron(15 10 * * ? *)", // 10:15 AM (UTC) every day } ``` Or an [at expression](https://docs.aws.amazon.com/scheduler/latest/UserGuide/schedule-types.html#one-time) for a one-time schedule. ```ts { schedule: "at(2025-06-01T10:00:00)", } ``` ### task? **Type** [`Task`](/docs/component/aws/task) The task that'll be executed when the cron job runs. For example, let's say you have a task. ```js title="sst.config.ts" const cluster = new sst.aws.Cluster("MyCluster"); const task = new sst.aws.Task("MyTask", { cluster }); ``` You can then pass in the task to the cron job. ```js title="sst.config.ts" new sst.aws.CronV2("MyCronJob", { task, schedule: "rate(1 minute)" }); ``` ### timezone? **Type** `Input` **Default** `"UTC"` The IANA timezone for the cron schedule. 
When set, the cron expression is evaluated in this timezone, with automatic DST handling. ```ts { timezone: "America/New_York" } ``` ### transform? **Type** `Object` - [`role?`](#transform-role) - [`schedule?`](#transform-schedule) [Transform](/docs/components#transform) how this component creates its underlying resources. role? **Type** [`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)` | (args: `[`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the IAM Role resource. schedule? **Type** [`ScheduleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/scheduler/schedule/#inputs)` | (args: `[`ScheduleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/scheduler/schedule/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EventBridge Scheduler Schedule resource. ## Properties ### nodes **Type** `Object` - [`role`](#nodes-role) - [`schedule`](#nodes-schedule) - [`function`](#nodes-function) - [`job`](#nodes-job) The underlying [resources](/docs/components/#nodes) this component creates. role **Type** [`Role`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/) The IAM Role resource. schedule **Type** [`Schedule`](https://www.pulumi.com/registry/packages/aws/api-docs/scheduler/schedule/) The EventBridge Scheduler Schedule resource. function **Type** `Output<`[`Function`](/docs/component/aws/function)`>` The AWS Lambda Function that'll be invoked when the cron job runs. job **Type** `Output<`[`Function`](/docs/component/aws/function)`>` The AWS Lambda Function that'll be invoked when the cron job runs. --- ## Cron Reference doc for the `sst.aws.Cron` component. https://sst.dev/docs/component/aws/cron The `Cron` component has been deprecated. 
Use [`CronV2`](https://sst.dev/docs/component/aws/cron-v2) instead. :::caution This component has been deprecated. ::: The `Cron` component lets you add cron jobs to your app using [Amazon Event Bus](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-bus.html). The cron job can invoke a `Function` or a container `Task`. #### Cron job function Pass in a `schedule` and a `function` that'll be executed. ```ts title="sst.config.ts" new sst.aws.Cron("MyCronJob", { function: "src/cron.handler", schedule: "rate(1 minute)" }); ``` #### Cron job container task Create a container task and pass in a `schedule` and a `task` that'll be executed. ```ts title="sst.config.ts" {5} const myCluster = new sst.aws.Cluster("MyCluster"); const myTask = new sst.aws.Task("MyTask", { cluster: myCluster }); new sst.aws.Cron("MyCronJob", { task: myTask, schedule: "rate(1 day)" }); ``` #### Customize the function ```js title="sst.config.ts" new sst.aws.Cron("MyCronJob", { schedule: "rate(1 minute)", function: { handler: "src/cron.handler", timeout: "60 seconds" } }); ``` --- ## Constructor ```ts new Cron(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`CronArgs`](#cronargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## CronArgs ### enabled? **Type** `Input` **Default** true Configures whether the cron job is enabled. When disabled, the cron job won't run. ```ts { enabled: false } ``` ### event? **Type** `Input` The event that'll be passed to the function or task. ```ts { event: { foo: "bar", } } ``` For Lambda functions, the event will be passed to the function as an event. ```ts function handler(event) { console.log(event.foo); } ``` For ECS Fargate tasks, the event will be passed to the task as the `SST_EVENT` environment variable. ```ts const event = JSON.parse(process.env.SST_EVENT); console.log(event.foo); ``` ### function? **Type** `Input` The function that'll be executed when the cron job runs. 
```ts { function: "src/cron.handler" } ``` You can pass in the full function props. ```ts { function: { handler: "src/cron.handler", timeout: "60 seconds" } } ``` You can also pass in a function ARN. ```ts { function: "arn:aws:lambda:us-east-1:000000000000:function:my-sst-app-jayair-MyFunction", } ``` ### schedule **Type** `Input<"rate($\{string\})" | "cron($\{string\})">` The schedule for the cron job. :::note The cron job continues to run even after you exit `sst dev`. ::: You can use a [rate expression](https://docs.aws.amazon.com/lambda/latest/dg/services-cloudwatchevents-expressions.html). ```ts { schedule: "rate(5 minutes)" // schedule: "rate(1 minute)" // schedule: "rate(5 minutes)" // schedule: "rate(1 hour)" // schedule: "rate(5 hours)" // schedule: "rate(1 day)" // schedule: "rate(5 days)" } ``` Or a [cron expression](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule-schedule.html#eb-cron-expressions). ```ts { schedule: "cron(15 10 * * ? *)", // 10:15 AM (UTC) every day } ``` ### task? **Type** [`Task`](/docs/component/aws/task) The task that'll be executed when the cron job runs. For example, let's say you have a task. ```js title="sst.config.ts" const myCluster = new sst.aws.Cluster("MyCluster"); const myTask = new sst.aws.Task("MyTask", { cluster: myCluster }); ``` You can then pass in the task to the cron job. ```js title="sst.config.ts" new sst.aws.Cron("MyCronJob", { task: myTask, schedule: "rate(1 minute)" }); ``` ### transform? **Type** `Object` - [`rule?`](#transform-rule) - [`target?`](#transform-target) [Transform](/docs/components#transform) how this component creates its underlying resources. rule? 
**Type** [`EventRuleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventrule/#inputs)` | (args: `[`EventRuleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventrule/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EventBridge Rule resource. target? **Type** [`EventTargetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventtarget/#inputs)` | (args: `[`EventTargetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventtarget/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EventBridge Target resource. ## Properties ### nodes **Type** `Object` - [`rule`](#nodes-rule) - [`target`](#nodes-target) - [`function`](#nodes-function) - [`job`](#nodes-job) The underlying [resources](/docs/components/#nodes) this component creates. rule **Type** [`EventRule`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventrule/) The EventBridge Rule resource. target **Type** [`EventTarget`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/eventtarget/) The EventBridge Target resource. function **Type** `Output<`[`Function`](/docs/component/aws/function)`>` The AWS Lambda Function that'll be invoked when the cron job runs. job **Type** `Output<`[`Function`](/docs/component/aws/function)`>` The AWS Lambda Function that'll be invoked when the cron job runs. --- ## AWS DNS Adapter Reference doc for the `sst.aws.dns` adapter. https://sst.dev/docs/component/aws/dns The AWS DNS Adapter is used to create DNS records to manage domains hosted on [Route 53](https://aws.amazon.com/route53/). This adapter is passed in as `domain.dns` when setting a custom domain. 
```ts { domain: { name: "example.com", dns: sst.aws.dns() } } ``` You can also specify a hosted zone ID if you have multiple hosted zones with the same domain. ```ts { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` --- ## Functions ### dns ```ts dns(args?) ``` #### Parameters - `args?` [`DnsArgs`](#dnsargs) **Returns** `Object` ## DnsArgs ### override? **Type** `Input` **Default** `false` Set to `true` if you want to let the new DNS records replace the existing ones. :::tip Use this to migrate over your domain without any downtime. ::: This is useful if your domain is currently used by another app and you want to switch it to your current app. Without this, you'd first have to remove the existing DNS records and then add the new ones, which can cause downtime. Setting this to `true` replaces the existing records without any downtime. Just make sure that when you remove your old app, you don't remove its DNS records. ```js { override: true } ``` ### transform? **Type** `Object` - [`record?`](#transform-record) [Transform](/docs/components#transform) how this component creates its underlying resources. record? **Type** [`RecordArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/route53/record/#inputs)` | (args: `[`RecordArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/route53/record/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Route 53 record resource. ### zone? **Type** `Input` Set the hosted zone ID if you have multiple hosted zones that have the same domain in Route 53. This is the 14-letter ID of the [Route 53 hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-working-with.html) that contains the `domainName`. You can find the hosted zone ID in the Route 53 part of the AWS Console.
```js { zone: "Z2FDTNDATAQYW2" } ``` --- ## Dsql Reference doc for the `sst.aws.Dsql` component. https://sst.dev/docs/component/aws/dsql The `Dsql` component lets you add an [Amazon Aurora DSQL](https://aws.amazon.com/rds/aurora/dsql/) cluster to your app. #### Single-region cluster ```ts title="sst.config.ts" const cluster = new sst.aws.Dsql("MyCluster"); ``` Once linked, you can connect to it from your function code. ```ts title="src/lambda.ts" const client = new AuroraDSQLClient({ host: Resource.MyCluster.endpoint, user: "admin", }); await client.connect(); const result = await client.query("SELECT NOW() as now"); await client.end(); ``` #### Multi-region cluster ```ts title="sst.config.ts" const cluster = new sst.aws.Dsql("MyCluster", { regions: { witness: "us-west-2", peer: "us-east-2" } }); ``` [Check out the full example](/docs/examples/#aws-dsql-multiregion). #### With private VPC endpoints ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Dsql("MyCluster", { vpc: { instance: vpc, endpoints: { connection: true } } }); ``` [Check out the full example](/docs/examples/#aws-dsql-vpc). #### With backups ```ts title="sst.config.ts" const cluster = new sst.aws.Dsql("MyCluster", { backup: true }); ``` #### Link to a function ```ts title="sst.config.ts" new sst.aws.Function("MyFunction", { handler: "src/lambda.handler", link: [cluster] }); ``` You can also use Drizzle ORM to query your DSQL cluster. [Check out the Drizzle example](/docs/examples/#aws-dsql-drizzle). --- ### Cost Aurora DSQL is serverless and uses a pay-per-use pricing model. You are charged for database activity measured in _Distributed Processing Units_ (DPUs) at $8 per million DPUs, and storage at $0.33 per GB-month. When idle, usage scales to zero and you incur no DPU charges. There is a free tier of 100,000 DPUs and 1 GB of storage per month. 
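To make the arithmetic concrete, the rates quoted above can be wrapped in a small helper. This is a back-of-the-envelope sketch using $8 per million DPUs and $0.33 per GB-month, and it ignores the free tier for simplicity.

```ts
// Rough monthly cost estimate for a single-region Aurora DSQL cluster,
// using the published rates and ignoring the free tier.
function monthlyCostUSD(dpuMillions: number, storageGb: number): number {
  const DPU_RATE = 8;        // USD per million DPUs
  const STORAGE_RATE = 0.33; // USD per GB-month
  return dpuMillions * DPU_RATE + storageGb * STORAGE_RATE;
}

// monthlyCostUSD(1.3, 15) comes out to roughly 15.35
```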
For example, a single-region cluster averaging 1.3M DPUs per month with 15 GB of storage costs roughly 1.3 x $8 + 15 x $0.33 or **$15 per month**. Check out the [Aurora DSQL pricing](https://aws.amazon.com/rds/aurora/dsql/pricing/) for more details. --- ## Constructor ```ts new Dsql(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`DsqlArgs`](#dsqlargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## DsqlArgs ### backup? **Type** `boolean | Object` - [`retention?`](#backup-retention) - [`schedule?`](#backup-schedule) Configure automatic backups for the cluster using AWS Backup. Set to `true` to use the defaults, or pass an object to customize the schedule and retention. :::tip If multi-region is enabled, backups are scheduled in the current region and copied to the peer region. ::: Omit or set to `false` to skip backup creation entirely. Enable with defaults (daily at 5 AM UTC, 7-day retention). ```ts title="sst.config.ts" const cluster = new sst.aws.Dsql("MyCluster", { backup: true }); ``` Custom schedule and retention. ```ts title="sst.config.ts" const cluster = new sst.aws.Dsql("MyCluster", { backup: { schedule: "cron(0 2 ? * * *)", retention: "90 days" } }); ``` retention? **Type** `Input<"$\{number\} day" | "$\{number\} days">` **Default** `"7 days"` How long to retain backups. Use a day duration like `"7 days"`. schedule? **Type** `Input` **Default** `"cron(0 5 ? * * *)"` The schedule for the backups as an [AWS Backup cron expression](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_BackupRule.html). This uses the same 6-field `cron(...)` format as EventBridge and is evaluated in UTC. Back up every day at midnight UTC. ```ts schedule: "cron(0 0 ? * * *)" ``` Back up every Monday at 3 AM UTC. ```ts schedule: "cron(0 3 ? * MON *)" ``` ### regions? **Type** `Object` - [`peer`](#regions-peer) - [`witness`](#regions-witness) Configure multi-region cluster peering. 
Creates a cluster in the current region and a peer cluster in another region, linked via a witness region. The witness must differ from both cluster regions. Learn more about [AWS DSQL regions](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/what-is-aurora-dsql.html#region-availability). ```ts const cluster = new sst.aws.Dsql("MyCluster", { regions: { witness: "us-west-2", peer: "us-east-2" } }); ``` peer **Type** `Input` The AWS region for the peer cluster. witness **Type** `Input` The witness region. Must differ from both cluster regions. ### transform? **Type** `Object` - [`backupPlan?`](#transform-backupplan) - [`backupSelection?`](#transform-backupselection) - [`backupVault?`](#transform-backupvault) - [`cluster?`](#transform-cluster) - [`connectionEndpoint?`](#transform-connectionendpoint) - [`endpointSecurityGroup?`](#transform-endpointsecuritygroup) - [`managementEndpoint?`](#transform-managementendpoint) - [`peerCluster?`](#transform-peercluster) [Transform](/docs/components#transform) how this component creates its underlying resources. backupPlan? **Type** [`PlanArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/backup/plan/#inputs)` | (args: `[`PlanArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/backup/plan/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Backup plan resource. backupSelection? **Type** [`SelectionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/backup/selection/#inputs)` | (args: `[`SelectionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/backup/selection/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Backup selection resource. backupVault? 
**Type** [`VaultArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/backup/vault/#inputs)` | (args: `[`VaultArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/backup/vault/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Backup vault resource. cluster? **Type** [`ClusterArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/dsql/cluster/#inputs)` | (args: `[`ClusterArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/dsql/cluster/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the DSQL cluster resource. connectionEndpoint? **Type** [`VpcEndpointArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/vpcendpoint/#inputs)` | (args: `[`VpcEndpointArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/vpcendpoint/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 VPC endpoint resource for DSQL connections. endpointSecurityGroup? **Type** [`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)` | (args: `[`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 security group resource for the DSQL VPC endpoints. managementEndpoint? **Type** [`VpcEndpointArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/vpcendpoint/#inputs)` | (args: `[`VpcEndpointArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/vpcendpoint/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 VPC endpoint resource for DSQL management operations. peerCluster? 
**Type** [`ClusterArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/dsql/cluster/#inputs)` | (args: `[`ClusterArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/dsql/cluster/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the peer DSQL cluster resource. ### vpc? **Type** [`Vpc`](/docs/component/aws/vpc)` | Object` - [`endpoints?`](#vpc-endpoints) `Object` - [`connection?`](#vpc-endpoints-connection) - [`management?`](#vpc-endpoints-management) - [`instance`](#vpc-instance) Create AWS PrivateLink interface endpoints in a VPC for private connectivity. This allows Lambda functions placed inside a VPC without NAT gateways to connect to the DSQL cluster. :::note Currently, VPC endpoints are only supported for single-region clusters. ::: ```ts title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Dsql("MyCluster", { vpc: myVpc }); ``` #### Customize VPC endpoints ```ts title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Dsql("MyCluster", { vpc: { instance: myVpc, endpoints: { management: true, connection: true, } } }); ``` endpoints? **Type** `Object` connection? **Type** `boolean` **Default** `true` Endpoint for PostgreSQL client connections. management? **Type** `boolean` **Default** `false` Endpoint for control plane ops (create, get, update, delete clusters). instance **Type** [`Vpc`](/docs/component/aws/vpc) ## Properties ### endpoint **Type** `Output` The endpoint of the cluster. ### nodes **Type** `Object` - [`cluster`](#nodes-cluster) - [`peerCluster`](#nodes-peercluster) The underlying [resources](/docs/components/#nodes) this component creates. cluster **Type** [`Cluster`](https://www.pulumi.com/registry/packages/aws/api-docs/dsql/cluster/) The DSQL cluster. peerCluster **Type** `undefined | `[`Cluster`](https://www.pulumi.com/registry/packages/aws/api-docs/dsql/cluster/) The peer DSQL cluster (multi-region only).
### peer **Type** `Object` - [`endpoint`](#peer-endpoint) - [`region`](#peer-region) The peer cluster info. Only available for multi-region clusters. endpoint **Type** `Output` The endpoint of the peer cluster. region **Type** `Output` The region of the peer cluster. ### region **Type** `Output` The region of the cluster. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `endpoint` `string` The endpoint of the cluster. - `peer` `undefined | Object` The peer cluster info. Only available for multi-region clusters. - `region` `string` The region of the cluster. ## Methods ### static get ```ts Dsql.get(name, args, opts?) ``` #### Parameters - `name` `string` - `args` `Object` - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Dsql`](.) Reference an existing DSQL cluster by identifier. Useful for sharing a cluster across stages without creating a new one. :::tip You can use the `static get` method to share a cluster across stages. ::: #### Single-region cluster ```ts title="sst.config.ts" const cluster = $app.stage === "frank" ? sst.aws.Dsql.get("MyCluster", { id: "kzttrvbdg4k2o5ze2m2rrwdj7u" }) : new sst.aws.Dsql("MyCluster"); ``` #### Multi-region cluster ```ts title="sst.config.ts" const cluster = sst.aws.Dsql.get("MyCluster", { id: "app-dev-mycluster", peer: { id: "kzttrvbdg4k2o5ze2m2rrwdj7u", region: "us-east-2", } }); ``` --- ## DynamoLambdaSubscriber Reference doc for the `sst.aws.DynamoLambdaSubscriber` component. https://sst.dev/docs/component/aws/dynamo-lambda-subscriber The `DynamoLambdaSubscriber` component is internally used by the `Dynamo` component to add stream subscriptions to [Amazon DynamoDB](https://aws.amazon.com/dynamodb/). :::note This component is not intended to be created directly. 
::: You'll find this component returned by the `subscribe` method of the `Dynamo` component. --- ## Constructor ```ts new DynamoLambdaSubscriber(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`Args`](#args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`eventSourceMapping`](#nodes-eventsourcemapping) - [`function`](#nodes-function) The underlying [resources](/docs/components/#nodes) this component creates. eventSourceMapping **Type** [`EventSourceMapping`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/) The Lambda event source mapping. function **Type** `Output<`[`Function`](/docs/component/aws/function)`>` The Lambda function that'll be notified. ## Args ### dynamo **Type** `Input` - [`streamArn`](#dynamo-streamarn) The DynamoDB table to use. streamArn **Type** `Input` The ARN of the stream. ### filters? **Type** `Input>[]>` Filter the records processed by the `subscriber` function. :::tip You can pass in up to 5 different filters. ::: You can pass in up to 5 different filter policies. These will be logically ORed together, meaning that if any single policy matches, the record will be processed. :::tip Learn more about the [filter rule syntax](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html#filtering-syntax). ::: For example, if your DynamoDB table's stream contains the following record. ```js { eventID: "1", eventVersion: "1.0", dynamodb: { ApproximateCreationDateTime: "1678831218.0", Keys: { CustomerName: { "S": "AnyCompany Industries" } }, NewImage: { AccountManager: { S: "Pat Candella" }, PaymentTerms: { S: "60 days" }, CustomerName: { S: "AnyCompany Industries" } }, SequenceNumber: "111", SizeBytes: 26, StreamViewType: "NEW_IMAGE" } } ``` To process only those records where the `CustomerName` is `AnyCompany Industries`.
```js { filters: [ { dynamodb: { Keys: { CustomerName: { S: ["AnyCompany Industries"] } } } } ] } ``` ### subscriber **Type** `Input` The subscriber function. ### transform? **Type** `Object` - [`eventSourceMapping?`](#transform-eventsourcemapping) [Transform](/docs/components#transform) how this subscription creates its underlying resources. eventSourceMapping? **Type** [`EventSourceMappingArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/#inputs)` | (args: `[`EventSourceMappingArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Lambda Event Source Mapping resource. --- ## Dynamo Reference doc for the `sst.aws.Dynamo` component. https://sst.dev/docs/component/aws/dynamo The `Dynamo` component lets you add an [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) table to your app. #### Minimal example ```ts title="sst.config.ts" const table = new sst.aws.Dynamo("MyTable", { fields: { userId: "string", noteId: "string" }, primaryIndex: { hashKey: "userId", rangeKey: "noteId" } }); ``` #### Add a global index Optionally add a global index to the table. ```ts {8-10} title="sst.config.ts" new sst.aws.Dynamo("MyTable", { fields: { userId: "string", noteId: "string", createdAt: "number", }, primaryIndex: { hashKey: "userId", rangeKey: "noteId" }, globalIndexes: { CreatedAtIndex: { hashKey: "userId", rangeKey: "createdAt" } } }); ``` #### Add a composite key global index Use multi-attribute composite keys in a global index. This is useful when you want to combine multiple attributes into a single partition or sort key. 
```ts {8-12} title="sst.config.ts" new sst.aws.Dynamo("MyTable", { fields: { region: "string", category: "string", createdAt: "number", }, primaryIndex: { hashKey: "region", rangeKey: "createdAt" }, globalIndexes: { RegionCategoryIndex: { hashKey: ["region", "category"], rangeKey: "createdAt" } } }); ``` #### Add a local index Optionally add a local index to the table. ```ts {8-10} title="sst.config.ts" new sst.aws.Dynamo("MyTable", { fields: { userId: "string", noteId: "string", createdAt: "number", }, primaryIndex: { hashKey: "userId", rangeKey: "noteId" }, localIndexes: { CreatedAtIndex: { rangeKey: "createdAt" } } }); ``` #### Subscribe to a DynamoDB Stream To subscribe to a [DynamoDB Stream](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html), start by enabling it. ```ts {7} title="sst.config.ts" const table = new sst.aws.Dynamo("MyTable", { fields: { userId: "string", noteId: "string" }, primaryIndex: { hashKey: "userId", rangeKey: "noteId" }, stream: "new-and-old-images" }); ``` Then, subscribe to it. ```ts title="sst.config.ts" table.subscribe("MySubscriber", "src/subscriber.handler"); ``` #### Link the table to a resource You can link the table to other resources, like a function or your Next.js app. ```ts title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { link: [table] }); ``` Once linked, you can query the table through your app. ```ts title="app/page.tsx" {1,8} const client = new DynamoDBClient(); await client.send(new QueryCommand({ TableName: Resource.MyTable.name, KeyConditionExpression: "userId = :userId", ExpressionAttributeValues: { ":userId": "my-user-id" } })); ``` --- ## Constructor ```ts new Dynamo(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`DynamoArgs`](#dynamoargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## DynamoArgs ### deletionProtection? **Type** `Input` Enable deletion protection for the table. When enabled, the table cannot be deleted.
```js { deletionProtection: true, } ``` ### fields **Type** `Input>` An object defining the fields of the table that'll be used to create indexes. The key is the name of the field and the value is the type. :::note You don't need to define all your fields here, just the ones you want to use for indexes. ::: While your table's items can contain fields of other types, only `string`, `number`, and `binary` fields can be used for indexes. :::caution Field types cannot be changed after table creation. Any changes to field types will be ignored. ::: ```js { fields: { userId: "string", noteId: "string" } } ``` ### globalIndexes? **Type** `Input>>` - [`hashKey`](#globalindexes-hashkey) - [`projection?`](#globalindexes-projection) - [`rangeKey?`](#globalindexes-rangekey) Configure the table's global secondary indexes. You can have up to 20 global secondary indexes per table. And each global secondary index should have a unique name. ```js { globalIndexes: { CreatedAtIndex: { hashKey: "userId", rangeKey: "createdAt" } } } ``` Use an array to create a composite key with multiple attributes. ```js { globalIndexes: { RegionCategoryIndex: { hashKey: ["region", "category"], rangeKey: "createdAt" } } } ``` hashKey **Type** `Input` The hash key field of the index. This field needs to be defined in the `fields`. You can also pass in an array of field names to create a composite key with up to 4 attributes using the [multi-attribute keys](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.DesignPattern.MultiAttributeKeys.html) pattern. ```js { hashKey: ["region", "category"] } ``` projection? **Type** `Input[] | "all" | "keys-only">` **Default** `"all"` The fields to project into the index. Project only the key fields: `userId` and `createdAt`. ```js { hashKey: "userId", rangeKey: "createdAt", projection: "keys-only" } ``` Project the `noteId` field in addition to the key fields.
```js { hashKey: "userId", rangeKey: "createdAt", projection: ["noteId"] } ``` rangeKey? **Type** `Input` The range key field of the index. This field needs to be defined in the `fields`. You can also pass in an array of field names to create a composite key with up to 4 attributes using the [multi-attribute keys](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.DesignPattern.MultiAttributeKeys.html) pattern. ```js { rangeKey: ["createdAt", "status"] } ``` ### localIndexes? **Type** `Input>>` - [`projection?`](#localindexes-projection) - [`rangeKey`](#localindexes-rangekey) Configure the table's local secondary indexes. Unlike global indexes, local indexes use the same `hashKey` as the `primaryIndex` of the table. You can have up to 5 local secondary indexes per table. And each local secondary index should have a unique name. ```js { localIndexes: { CreatedAtIndex: { rangeKey: "createdAt" } } } ``` projection? **Type** `Input[] | "all" | "keys-only">` **Default** `"all"` The fields to project into the index. Project only the key field: `createdAt`. ```js { rangeKey: "createdAt", projection: "keys-only" } ``` Project the `noteId` field in addition to the key field. ```js { rangeKey: "createdAt", projection: ["noteId"] } ``` rangeKey **Type** `Input` The range key field of the index. This field needs to be defined in the `fields`. ### primaryIndex **Type** `Input` - [`hashKey`](#primaryindex-hashkey) - [`rangeKey?`](#primaryindex-rangekey) Define the table's primary index. You can only have one primary index. ```js { primaryIndex: { hashKey: "userId", rangeKey: "noteId" } } ``` hashKey **Type** `Input` The hash key field of the index. This field needs to be defined in the `fields`. rangeKey? **Type** `Input` The range key field of the index. This field needs to be defined in the `fields`. ### stream? 
**Type** `Input<"keys-only" | "new-image" | "old-image" | "new-and-old-images">` **Default** Disabled Enable [DynamoDB Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html) for the table. :::note Streams are not enabled by default since there's a cost attached to storing them. ::: When an item in the table is modified, the stream captures the information and sends it to your subscriber function. :::tip The `new-and-old-images` stream type is a good default option since it has both the new and old items. ::: You can configure what will be written to the stream: - `new-image`: The entire item after it was modified. - `old-image`: The entire item before it was modified. - `new-and-old-images`: Both the new and the old items. A good default to use since it contains all the data. - `keys-only`: Only the key fields of the modified items. If you are worried about cost, you can use this since it stores the least amount of data. ```js { stream: "new-and-old-images" } ``` ### transform? **Type** `Object` - [`table?`](#transform-table) [Transform](/docs/components#transform) how this component creates its underlying resources. table? **Type** [`TableArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/dynamodb/table/#inputs)` | (args: `[`TableArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/dynamodb/table/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the DynamoDB Table resource. ### ttl? **Type** `Input` The field in the table to store the _Time to Live_ or TTL timestamp in. This field should be of type `number`. When the TTL timestamp is reached, the item will be deleted. Read more about [Time to Live](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html). Here the TTL field in our table is called `expireAt`.
```js { ttl: "expireAt" } ``` ## Properties ### arn **Type** `Output` The ARN of the DynamoDB Table. ### name **Type** `Output` The name of the DynamoDB Table. ### nodes **Type** `Object` - [`table`](#nodes-table) The underlying [resources](/docs/components/#nodes) this component creates. table **Type** `Output<`[`Table`](https://www.pulumi.com/registry/packages/aws/api-docs/dynamodb/table/)`>` The Amazon DynamoDB Table. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `name` `string` The name of the DynamoDB Table. ## Methods ### subscribe ```ts subscribe(name, subscriber, args?) ``` #### Parameters - `name` `string` The name of the subscriber. - `subscriber` `Input` The function that'll be notified. - `args?` [`DynamoSubscriberArgs`](#dynamosubscriberargs) Configure the subscription. **Returns** `Output<`[`DynamoLambdaSubscriber`](/docs/component/aws/dynamo-lambda-subscriber)`>` Subscribe to the DynamoDB Stream of this table. :::note You'll first need to enable the `stream` before subscribing to it. ::: ```js title="sst.config.ts" table.subscribe("MySubscriber", "src/subscriber.handler"); ``` Add a filter to the subscription. ```js title="sst.config.ts" table.subscribe("MySubscriber", "src/subscriber.handler", { filters: [ { dynamodb: { Keys: { CustomerName: { S: ["AnyCompany Industries"] } } } } ] }); ``` Customize the subscriber function. ```js title="sst.config.ts" table.subscribe("MySubscriber", { handler: "src/subscriber.handler", timeout: "60 seconds" }); ``` Or pass in the ARN of an existing Lambda function. ```js title="sst.config.ts" table.subscribe("MySubscriber", "arn:aws:lambda:us-east-1:123456789012:function:my-function"); ``` ### static get ```ts Dynamo.get(name, tableName, opts?) ``` #### Parameters - `name` `string` The name of the component. - `tableName` `Input` The name of the DynamoDB Table. 
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Dynamo`](.) Reference an existing DynamoDB Table with the given table name. This is useful when you create a table in one stage and want to share it in another stage. It avoids having to create a new table in the other stage. :::tip You can use the `static get` method to share a table across stages. ::: Imagine you create a table in the `dev` stage. And in your personal stage `frank`, instead of creating a new table, you want to share the table from `dev`. ```ts title="sst.config.ts" const table = $app.stage === "frank" ? sst.aws.Dynamo.get("MyTable", "app-dev-mytable") : new sst.aws.Dynamo("MyTable"); ``` Here `app-dev-mytable` is the name of the DynamoDB Table created in the `dev` stage. You can find this by outputting the table name in the `dev` stage. ```ts title="sst.config.ts" return { table: table.name }; ``` ### static subscribe ```ts Dynamo.subscribe(name, streamArn, subscriber, args?) ``` #### Parameters - `name` `string` The name of the subscriber. - `streamArn` `Input` The ARN of the DynamoDB Stream to subscribe to. - `subscriber` `Input` The function that'll be notified. - `args?` [`DynamoSubscriberArgs`](#dynamosubscriberargs) Configure the subscription. **Returns** `Output<`[`DynamoLambdaSubscriber`](/docs/component/aws/dynamo-lambda-subscriber)`>` Subscribe to the DynamoDB stream of a table that was not created in your app. For example, let's say you have a DynamoDB stream ARN of an existing table. ```js title="sst.config.ts" const streamArn = "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable/stream/2024-02-25T23:17:55.264"; ``` You can subscribe to it by passing in the ARN. ```js title="sst.config.ts" sst.aws.Dynamo.subscribe("MySubscriber", streamArn, "src/subscriber.handler"); ``` Add a filter to the subscription.
```js title="sst.config.ts" sst.aws.Dynamo.subscribe("MySubscriber", streamArn, "src/subscriber.handler", { filters: [ { dynamodb: { Keys: { CustomerName: { S: ["AnyCompany Industries"] } } } } ] }); ``` Customize the subscriber function. ```js title="sst.config.ts" sst.aws.Dynamo.subscribe("MySubscriber", streamArn, { handler: "src/subscriber.handler", timeout: "60 seconds" }); ``` ## DynamoSubscriberArgs ### filters? **Type** `Input>[]>` Filter the records processed by the `subscriber` function. :::tip You can pass in up to 5 different filters. ::: You can pass in up to 5 different filter policies. These will be logically ORed together, meaning that if any single policy matches, the record will be processed. :::tip Learn more about the [filter rule syntax](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html#filtering-syntax). ::: For example, if your DynamoDB table's stream contains the following record. ```js { eventID: "1", eventVersion: "1.0", dynamodb: { ApproximateCreationDateTime: "1678831218.0", Keys: { CustomerName: { "S": "AnyCompany Industries" } }, NewImage: { AccountManager: { S: "Pat Candella" }, PaymentTerms: { S: "60 days" }, CustomerName: { S: "AnyCompany Industries" } }, SequenceNumber: "111", SizeBytes: 26, StreamViewType: "NEW_IMAGE" } } ``` To process only those records where the `CustomerName` is `AnyCompany Industries`. ```js { filters: [ { dynamodb: { Keys: { CustomerName: { S: ["AnyCompany Industries"] } } } } ] } ``` ### transform? **Type** `Object` - [`eventSourceMapping?`](#transform-eventsourcemapping) [Transform](/docs/components#transform) how this subscription creates its underlying resources. eventSourceMapping?
**Type** [`EventSourceMappingArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/#inputs)` | (args: `[`EventSourceMappingArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Lambda Event Source Mapping resource. --- ## Efs Reference doc for the `sst.aws.Efs` component. https://sst.dev/docs/component/aws/efs The `Efs` component lets you add [Amazon Elastic File System (EFS)](https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html) to your app. #### Create the file system ```js title="sst.config.ts" {2} const vpc = new sst.aws.Vpc("MyVpc"); const efs = new sst.aws.Efs("MyEfs", { vpc }); ``` This needs a VPC. #### Attach it to a Lambda function ```ts title="sst.config.ts" {4} new sst.aws.Function("MyFunction", { vpc, handler: "lambda.handler", volume: { efs, path: "/mnt/efs" } }); ``` This is now mounted at `/mnt/efs` in the Lambda function. #### Attach it to a container ```ts title="sst.config.ts" {7} const cluster = new sst.aws.Cluster("MyCluster", { vpc }); new sst.aws.Service("MyService", { cluster, public: { ports: [{ listen: "80/http" }], }, volumes: [ { efs, path: "/mnt/efs" } ] }); ``` Mounted at `/mnt/efs` in the container. --- ### Cost By default this component uses _Regional (Multi-AZ) with Elastic Throughput_. The pricing is pay-per-use. - For storage: $0.30 per GB per month - For reads: $0.03 per GB per month - For writes: $0.06 per GB per month The above are rough estimates for _us-east-1_, check out the [EFS pricing](https://aws.amazon.com/efs/pricing/) for more details. --- ## Constructor ```ts new Efs(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`EfsArgs`](#efsargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## EfsArgs ### performance? 
**Type** `Input<"general-purpose" | "max-io">` **Default** `"general-purpose"` The performance mode for the EFS file system. The `max-io` mode can support higher throughput, but with slightly higher latency. It's recommended for larger workloads like data analysis or media processing. Both modes are priced the same, but `general-purpose` is recommended for most use cases. ```ts { performance: "max-io" } ``` ### throughput? **Type** `Input<"provisioned" | "bursting" | "elastic">` **Default** `"elastic"` The throughput mode for the EFS file system. The default `elastic` mode scales up or down based on the workload. However, if you know your access patterns, you can use `provisioned` to have a fixed throughput. Or you can use `bursting` to scale with the amount of storage you're using. It also supports bursting to higher levels for up to 12 hours per day. ```ts { throughput: "bursting" } ``` ### transform? **Type** `Object` - [`accessPoint?`](#transform-accesspoint) - [`fileSystem?`](#transform-filesystem) - [`securityGroup?`](#transform-securitygroup) [Transform](/docs/components#transform) how this component creates its underlying resources. accessPoint? **Type** [`AccessPointArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/efs/accesspoint/#inputs)` | (args: `[`AccessPointArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/efs/accesspoint/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EFS access point. fileSystem? **Type** [`FileSystemArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/efs/filesystem/#inputs)` | (args: `[`FileSystemArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/efs/filesystem/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EFS file system. securityGroup?
**Type** [`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)` | (args: `[`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the security group for the EFS mount targets. ### vpc **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`id`](#vpc-id) - [`subnets`](#vpc-subnets) The VPC to use for the EFS file system. Create a VPC component. ```js const myVpc = new sst.aws.Vpc("MyVpc"); ``` And pass it in. ```js { vpc: myVpc } ``` Or pass in a custom VPC configuration. ```js { vpc: { subnets: ["subnet-0db7376a7ad4db5fd", "subnet-06fc7ee8319b2c0ce"] } } ``` id **Type** `Input` The ID of the VPC. subnets **Type** `Input[]>` A list of subnet IDs in the VPC to create the EFS mount targets in. ## Properties ### accessPoint **Type** `Output` The ID of the EFS access point. ### id **Type** `Output` The ID of the EFS file system. ### nodes **Type** `Object` - [`accessPoint`](#nodes-accesspoint) - [`fileSystem`](#nodes-filesystem) The underlying [resources](/docs/components/#nodes) this component creates. accessPoint **Type** `Output<`[`AccessPoint`](https://www.pulumi.com/registry/packages/aws/api-docs/efs/accesspoint/)`>` The Amazon EFS access point. fileSystem **Type** `Output<`[`FileSystem`](https://www.pulumi.com/registry/packages/aws/api-docs/efs/filesystem/)`>` The Amazon EFS file system. ## Methods ### static get ```ts Efs.get(name, fileSystemID, opts?) ``` #### Parameters - `name` `string` The name of the component. - `fileSystemID` `Input` The ID of the existing EFS file system. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Efs`](.) Reference an existing EFS file system with the given file system ID. This is useful when you create an EFS file system in one stage and want to share it in another.
It avoids having to create a new EFS file system in the other stage. :::tip You can use the `static get` method to share EFS file systems across stages. ::: Imagine you create an EFS file system in the `dev` stage. And in your personal stage `frank`, instead of creating a new file system, you want to share the same file system from `dev`. ```ts title="sst.config.ts" const efs = $app.stage === "frank" ? sst.aws.Efs.get("MyEfs", "app-dev-myefs") : new sst.aws.Efs("MyEfs", { vpc }); ``` Here `app-dev-myefs` is the ID of the file system created in the `dev` stage. You can find this by outputting the file system ID in the `dev` stage. ```ts title="sst.config.ts" return { id: efs.id }; ``` --- ## Email Reference doc for the `sst.aws.Email` component. https://sst.dev/docs/component/aws/email The `Email` component lets you send emails in your app. It uses [Amazon Simple Email Service](https://aws.amazon.com/ses/). You can configure it to send emails from a specific email address or from any email addresses in a domain. :::tip New AWS SES accounts are in _sandbox mode_ and need to [request production access](https://docs.aws.amazon.com/ses/latest/dg/request-production-access.html). ::: By default, new AWS SES accounts are in the _sandbox mode_ and can only send email to verified email addresses and domains. It also limits your account to a sending quota. To remove these restrictions, you need to [request production access](https://docs.aws.amazon.com/ses/latest/dg/request-production-access.html). #### Sending from an email address For using an email address as the sender, you need to verify the email address. ```ts title="sst.config.ts" const email = new sst.aws.Email("MyEmail", { sender: "spongebob@example.com", }); ``` #### Sending from a domain When you use a domain as the sender, you'll need to verify that you own the domain.
```ts title="sst.config.ts" new sst.aws.Email("MyEmail", { sender: "example.com" }); ``` #### Configuring DMARC ```ts title="sst.config.ts" new sst.aws.Email("MyEmail", { sender: "example.com", dmarc: "v=DMARC1; p=quarantine; adkim=s; aspf=s;" }); ``` #### Link to a resource You can link it to a function or your Next.js app to send emails. ```ts {3} title="sst.config.ts" new sst.aws.Function("MyApi", { handler: "sender.handler", link: [email] }); ``` Now in your function you can use the AWS SES SDK to send emails. ```ts title="sender.ts" {1, 8} const client = new SESv2Client(); await client.send( new SendEmailCommand({ FromEmailAddress: Resource.MyEmail.sender, Destination: { ToAddresses: ["patrick@example.com"] }, Content: { Simple: { Subject: { Data: "Hello World!" }, Body: { Text: { Data: "Sent from my SST app." } } } } }) ); ``` --- ## Constructor ```ts new Email(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`EmailArgs`](#emailargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## EmailArgs ### dmarc? **Type** `Input` **Default** `"v=DMARC1; p=none;"` The DMARC policy for the domain. This'll create a DNS record with the given DMARC policy. Only specify this if you are using a domain name as the `sender`. ```js { dmarc: "v=DMARC1; p=quarantine; adkim=s; aspf=s;" } ``` ### dns? **Type** `Input` **Default** `sst.aws.dns` The DNS adapter you want to use for managing DNS records. Only specify this if you are using a domain name as the `sender`. :::note If `dns` is set to `false`, you have to add the DNS records manually to verify the domain. ::: Specify the hosted zone ID for the domain. ```js { dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } ``` Domain is hosted on Cloudflare. ```js { dns: sst.cloudflare.dns() } ``` ### events? 
**Type** `Input` - [`bus?`](#events-bus) - [`name`](#events-name) - [`topic?`](#events-topic) - [`types`](#events-types) **Default** No event notifications Configure event notifications for this Email component. ```js { events: { name: "OnBounce", types: ["bounce"], topic: "arn:aws:sns:us-east-1:123456789012:MyTopic" } } ``` bus? **Type** `Input` The ARN of the EventBridge bus to send events to. name **Type** `Input` The name of the event. topic? **Type** `Input` The ARN of the SNS topic to send events to. types **Type** `Input[]>` The types of events to send. ### sender **Type** `Input` The email address or domain name that you want to send emails from. :::note You'll need to verify the email address or domain you are using. ::: Using an email address as the sender. You'll need to verify the email address. When you deploy your app, you will receive an email from AWS SES with a link to verify the email address. ```ts { sender: "john.smith@gmail.com" } ``` Using a domain name as the sender. You'll need to verify that you own the domain. Once verified, you can send emails from any email address in the domain. :::tip SST can automatically verify the domain for the `dns` adapter that's specified. ::: To verify the domain, you need to add the verification records to your domain's DNS. This can be done automatically for the supported `dns` adapters. ```ts { sender: "example.com" } ``` If the domain is hosted on Cloudflare. ```ts { sender: "example.com", dns: sst.cloudflare.dns() } ``` ### transform? **Type** `Object` - [`configurationSet?`](#transform-configurationset) - [`identity?`](#transform-identity) [Transform](/docs/components#transform) how this component creates its underlying resources. configurationSet?
**Type** [`ConfigurationSetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sesv2/configurationset/#inputs)` | (args: `[`ConfigurationSetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sesv2/configurationset/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the SES configuration set resource. identity? **Type** [`EmailIdentityArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sesv2/emailidentity/#inputs)` | (args: `[`EmailIdentityArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sesv2/emailidentity/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the SES identity resource. ## Properties ### configSet **Type** `Output` The name of the configuration set. ### nodes **Type** `Object` - [`configurationSet`](#nodes-configurationset) - [`identity`](#nodes-identity) The underlying [resources](/docs/components/#nodes) this component creates. configurationSet **Type** [`ConfigurationSet`](https://www.pulumi.com/registry/packages/aws/api-docs/sesv2/configurationset/) The Amazon SES configuration set. identity **Type** [`EmailIdentity`](https://www.pulumi.com/registry/packages/aws/api-docs/sesv2/emailidentity/) The Amazon SES identity. ### sender **Type** `Output` The sender email address or domain name. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `configSet` `string` The name of the configuration set. - `sender` `string` The sender email address or domain name. ## Methods ### static get ```ts Email.get(name, sender, opts?) ``` #### Parameters - `name` `string` The name of the component. - `sender` `Input` The email address or domain name of the existing SES identity. 
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Email`](.) Reference an existing Email component with the given Amazon SES identity. This is useful when you create an SES identity in one stage and want to share it in another stage. It avoids having to create a new Email component in the other stage. Imagine you create an Email component in the `dev` stage. And in your personal stage `frank`, instead of creating a new component, you want to share the one from `dev`. ```ts title="sst.config.ts" const email = $app.stage === "frank" ? sst.aws.Email.get("MyEmail", "spongebob@example.com") : new sst.aws.Email("MyEmail", { sender: "spongebob@example.com", }); ``` --- ## Function Reference doc for the `sst.aws.Function` component. https://sst.dev/docs/component/aws/function The `Function` component lets you add serverless functions to your app. It uses [AWS Lambda](https://aws.amazon.com/lambda/). #### Supported runtimes Currently supports **Node.js** and **Golang** functions. **Python** and **Rust** are community supported. Other runtimes are on the roadmap. #### Minimal example **Node** Pass in the path to your handler function. ```ts title="sst.config.ts" new sst.aws.Function("MyFunction", { handler: "src/lambda.handler" }); ``` [Learn more below](#handler). **Python** Pass in the path to your handler function. ```ts title="sst.config.ts" new sst.aws.Function("MyFunction", { runtime: "python3.11", handler: "functions/src/functions/api.handler" }); ``` You need to have uv installed and your handler function needs to be in a uv workspace. [Learn more below](#handler). **Go** Pass in the directory to your Go module. ```ts title="sst.config.ts" new sst.aws.Function("MyFunction", { runtime: "go", handler: "./src" }); ``` [Learn more below](#handler). **Rust** Pass in the directory where your Cargo.toml lives. 
```ts title="sst.config.ts" new sst.aws.Function("MyFunction", { runtime: "rust", handler: "./crates/api/" }); ``` [Learn more below](#handler). #### Set additional config Pass in additional Lambda config. ```ts {3,4} title="sst.config.ts" new sst.aws.Function("MyFunction", { handler: "src/lambda.handler", timeout: "3 minutes", memory: "1024 MB" }); ``` #### Link resources [Link resources](/docs/linking/) to the function. This will grant permissions to the resources and allow you to access them in your handler. ```ts {5} title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.Function("MyFunction", { handler: "src/lambda.handler", link: [bucket] }); ``` You can use the [SDK](/docs/reference/sdk/) to access the linked resources in your handler. **Node** ```ts title="src/lambda.ts" import { Resource } from "sst"; console.log(Resource.MyBucket.name); ``` **Python** ```python title="functions/src/functions/api.py" from sst import Resource def handler(event, context): print(Resource.MyBucket.name) ``` Where the `sst` package can be added to your `pyproject.toml`. ```toml title="functions/pyproject.toml" [tool.uv.sources] sst = { git = "https://github.com/sst/sst.git", subdirectory = "sdk/python", branch = "dev" } ``` **Go** ```go title="src/main.go" import ( "github.com/sst/sst/v3/sdk/golang/resource" ) resource.Get("MyBucket", "name") ``` **Rust** ```rust title="src/main.rs" use sst_sdk::Resource; #[derive(serde::Deserialize, Debug)] struct Bucket { name: String, } let resource = Resource::init().unwrap(); let Bucket { name } = resource.get("MyBucket").unwrap(); ``` #### Set environment variables Set environment variables that you can read in your function. For example, using `process.env` in your Node.js functions. ```ts {4} title="sst.config.ts" new sst.aws.Function("MyFunction", { handler: "src/lambda.handler", environment: { DEBUG: "true" } }); ``` #### Enable function URLs Enable function URLs to invoke the function over HTTP.
```ts {3} title="sst.config.ts" new sst.aws.Function("MyFunction", { handler: "src/lambda.handler", url: true }); ``` #### Bundling Customize how SST uses [esbuild](https://esbuild.github.io/) to bundle your Node.js functions with the `nodejs` property. ```ts title="sst.config.ts" {3-5} new sst.aws.Function("MyFunction", { handler: "src/lambda.handler", nodejs: { install: ["pg"] } }); ``` Or override it entirely by passing in your own function `bundle`. --- ## Constructor ```ts new Function(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`FunctionArgs`](#functionargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## FunctionArgs ### architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The [architecture](https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html) of the Lambda function. ```js { architecture: "arm64" } ``` ### bundle? **Type** `Input` Path to the source code directory for the function. By default, the handler is bundled with [esbuild](https://esbuild.github.io/). Use `bundle` to skip bundling. :::caution Use `bundle` only when you want to bundle the function yourself. ::: If the `bundle` option is specified, the `handler` needs to be in the root of the bundle. Here, the entire `packages/functions/src` directory is zipped. And the handler is in the `src` directory. ```js { bundle: "packages/functions/src", handler: "index.handler" } ``` ### concurrency? **Type** `Input` - [`provisioned?`](#concurrency-provisioned) - [`reserved?`](#concurrency-reserved) **Default** No concurrency settings set Configure the concurrency settings for the function. ```js { concurrency: { provisioned: 10, reserved: 50 } } ``` provisioned? **Type** `Input` **Default** No provisioned concurrency Provisioned concurrency ensures a specific number of Lambda instances are always ready to handle requests, reducing cold start times. Enabling this will incur extra charges. 
:::note Enabling provisioned concurrency will incur extra charges. ::: Note that `versioning` needs to be enabled for provisioned concurrency. ```js { concurrency: { provisioned: 10 } } ``` reserved? **Type** `Input` **Default** No reserved concurrency Reserved concurrency limits the maximum number of concurrent executions for a function, ensuring critical functions always have capacity. It does not incur extra charges. :::note Setting this to `0` will disable the function from being triggered. ::: ```js { concurrency: { reserved: 50 } } ``` ### copyFiles? **Type** `Input` - [`from`](#copyfiles-from) - [`to?`](#copyfiles-to) Add additional files to copy into the function package. Takes a list of objects with `from` and `to` paths. These will be copied over before the function package is zipped up. Copying over a single file from the `src` directory to the `src/` directory of the function package. ```js { copyFiles: [{ from: "src/index.js" }] } ``` Copying over a single file from the `src` directory to the `core/src` directory in the function package. ```js { copyFiles: [{ from: "src/index.js", to: "core/src/index.js" }] } ``` Copying over a couple of files. ```js { copyFiles: [ { from: "src/this.js", to: "core/src/this.js" }, { from: "src/that.js", to: "core/src/that.js" } ] } ``` from **Type** `Input` Source path relative to the `sst.config.ts`. to? **Type** `Input` **Default** The `from` path in the function package Destination path relative to function root in the package. By default, it creates the same directory structure as the `from` path and copies the file. ### description? **Type** `Input` A description for the function. This is displayed in the AWS Console. ```js { description: "Handler function for my nightly cron job." } ``` ### dev? **Type** `Input` **Default** `true` Disable running this function [_Live_](/docs/live/) in `sst dev`. By default, the functions in your app are run locally in `sst dev`. 
To do this, a _stub_ version of your function is deployed, instead of the real function. :::note In `sst dev` a _stub_ version of your function is deployed. ::: This shows under the **Functions** tab in the multiplexer sidebar where your invocations are logged. You can turn this off by setting `dev` to `false`. Read more about [Live](/docs/live/) and [`sst dev`](/docs/reference/cli/#dev). ```js { dev: false } ``` ### durable? **Type** `boolean | Object` - [`retention?`](#durable-retention) - [`timeout?`](#durable-timeout) Configure the Lambda function as an [AWS durable function](https://docs.aws.amazon.com/lambda/latest/dg/durable-functions.html). :::caution This property is meant to be used internally by [Workflow](/docs/component/aws/workflow). Prefer that component if you want to use the [SDK](/docs/component/aws/workflow#sdk) or if you are not familiar with the limitations of durable functions. ::: retention? **Type** `Input<"$\{number\} day" | "$\{number\} days">` **Default** `30 days` Number of days to retain the function's execution state. timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} hour" | "$\{number\} hours" | "$\{number\} second" | "$\{number\} seconds" | "$\{number\} day" | "$\{number\} days">` **Default** `14 days` Maximum execution time for the durable function. ### environment? **Type** `Input>>` Key-value pairs that are set as [Lambda environment variables](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html). The keys need to: - Start with a letter - Be at least 2 characters long - Contain only letters, numbers, or underscores They can be accessed in your function using `process.env`. :::note The total size of the environment variables cannot exceed 4 KB. ::: ```js { environment: { DEBUG: "true" } } ``` ### handler **Type** `Input` Path to the handler for the function. - For Node.js this is in the format `{path}/{file}.{method}`. - For Python this is also `{path}/{file}.{method}`.
- For Golang this is `{path}` to the Go module. - For Rust this is `{path}` to the Rust crate. ##### Node.js For example with Node.js you might have. ```js { handler: "packages/functions/src/main.handler" } ``` Where `packages/functions/src` is the path. And `main` is the file, where you might have a `main.ts` or `main.js`. And `handler` is the method exported in that file. :::note You don't need to specify the file extension. ::: If `bundle` is specified, the handler needs to be in the root of the bundle directory. ```js { bundle: "packages/functions/src", handler: "index.handler" } ``` ##### Python For Python, [uv](https://docs.astral.sh/uv/) is used to package the function. You need to have it installed. :::note You need uv installed for Python functions. ::: The functions need to be in a [uv workspace](https://docs.astral.sh/uv/concepts/projects/workspaces/#workspace-sources). ```js { handler: "functions/src/functions/api.handler" } ``` The project structure might look something like this. Where there is a `pyproject.toml` file in the root and the `functions/` directory is a uv workspace with its own `pyproject.toml`. ```txt ├── sst.config.ts ├── pyproject.toml └── functions ├── pyproject.toml └── src └── functions ├── __init__.py └── api.py ``` To make sure that the right runtime is used in `sst dev`, set the version of Python in your `pyproject.toml` to match the runtime you are using. ```toml title="functions/pyproject.toml" requires-python = "==3.11.*" ``` You can refer to [this example of deploying a Python function](/docs/examples/#aws-lambda-python). ##### Golang For Golang the handler looks like. ```js { handler: "packages/functions/go/some_module" } ``` Where `packages/functions/go/some_module` is the path to the Go module. This includes the name of the module in your `go.mod`. So in this case your `go.mod` might be in `packages/functions/go` and `some_module` is the name of the module.
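To make the mapping concrete, a sketch of the `go.mod` for the example above: it would sit at `packages/functions/go` and declare `some_module` as the module name, matching the last segment of the handler path. The Go version line is only illustrative.

```txt title="packages/functions/go/go.mod"
module some_module

go 1.21
```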
You can refer to [this example of deploying a Go function](/docs/examples/#aws-lambda-go). ##### Rust For Rust, the handler looks like. ```js { handler: "crates/api" } ``` Where `crates/api` is the path to the Rust crate. This means there is a `Cargo.toml` file in `crates/api`, and the `main()` function handles the Lambda. ### hook? **Type** `Object` - [`postbuild`](#hook-postbuild) Hook into the Lambda function build process. postbuild ```ts postbuild(dir) ``` **Parameters** - `dir` `string` The directory where the function code is generated. **Returns** `Promise` Specify a callback that'll be run after the Lambda function is built. :::note This is not called in `sst dev`. ::: Useful for modifying the generated Lambda function code before it's deployed to AWS. It can also be used for uploading the generated sourcemaps to a service like Sentry. ### layers? **Type** `Input[]>` A list of Lambda layer ARNs to add to the function. :::note Layers are only added when the function is deployed. ::: These are only added when the function is deployed. In `sst dev`, your functions are run locally, so the layers are not used. Instead you should use a local version of what's in the layer. ```js { layers: ["arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1"] } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your function. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access them in your function using the [SDK](/docs/reference/sdk/). Takes a list of components to link to the function. ```js { link: [bucket, stripeKey] } ``` ### logging? **Type** `Input` - [`format?`](#logging-format) - [`logGroup?`](#logging-loggroup) - [`retention?`](#logging-retention) **Default** `{retention: "1 month", format: "text"}` Configure the function logs in CloudWatch. Or pass in `false` to disable writing logs. ```js { logging: false } ``` When set to `false`, the function is not given permissions to write to CloudWatch Logs. format?
**Type** `Input<"json" | "text">` **Default** `"text"` The [log format](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs-advanced.html) of the Lambda function. ```js { logging: { format: "json" } } ``` logGroup? **Type** `Input` **Default** Creates a log group Assigns the given CloudWatch log group name to the function. This allows you to pass in a previously created log group. By default, the function creates a new log group when it's created. ```js { logging: { logGroup: "/existing/log-group" } } ``` retention? **Type** `Input<"1 day" | "3 days" | "5 days" | "1 week" | "2 weeks" | "1 month" | "2 months" | "3 months" | "4 months" | "5 months" | "6 months" | "1 year" | "13 months" | "18 months" | "2 years" | "3 years" | "5 years" | "6 years" | "7 years" | "8 years" | "9 years" | "10 years" | "forever">` **Default** `1 month` The duration the function logs are kept in CloudWatch. Not applicable when an existing log group is provided. ```js { logging: { retention: "forever" } } ``` ### memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"1024 MB"` The amount of memory allocated for the function. Takes values between 128 MB and 10240 MB in 1 MB increments. The amount of memory affects the amount of virtual CPU available to the function. :::tip While functions with less memory are cheaper, larger functions can process faster. And might end up being more [cost effective](https://docs.aws.amazon.com/lambda/latest/operatorguide/computing-power.html). ::: ```js { memory: "10240 MB" } ``` ### name? **Type** `Input` The name for the function. By default, the name is generated from the app name, stage name, and component name. This is displayed in the AWS Console for this function. :::caution To keep the name from thrashing, make sure that it includes the app and stage name. ::: If you are going to set the name, you need to make sure: 1. It's unique across your app. 2.
Uses the app and stage name, so it doesn't thrash when you deploy to different stages. Also, changing the name after you've deployed it once will create a new function and delete the old one. ```js { name: `${$app.name}-${$app.stage}-my-function` } ``` ### nodejs? **Type** `Input` - [`banner?`](#nodejs-banner) - [`esbuild?`](#nodejs-esbuild) - [`format?`](#nodejs-format) - [`install?`](#nodejs-install) - [`loader?`](#nodejs-loader) - [`minify?`](#nodejs-minify) - [`sourcemap?`](#nodejs-sourcemap) - [`splitting?`](#nodejs-splitting) Configure how your function is bundled. By default, SST will bundle your function code using [esbuild](https://esbuild.github.io/). This tree shakes your code to only include what's used, reducing the size of your function package and improving cold starts. banner? **Type** `Input` Use this to insert a string at the beginning of the generated JS file. ```js { nodejs: { banner: "console.log('Function starting')" } } ``` esbuild? **Type** `Input` This allows you to customize the esbuild config that is used. :::tip Check out the _JS tab_ in the code snippets in the esbuild docs for the [`BuildOptions`](https://esbuild.github.io/api/#build). ::: format? **Type** `Input<"cjs" | "esm">` **Default** `"esm"` Configure the format of the generated JS code; ESM or CommonJS. ```js { nodejs: { format: "cjs" } } ``` install? **Type** `Input>` Dependencies that need to be excluded from the function package. Certain npm packages cannot be bundled using esbuild. This allows you to exclude them from the bundle. Instead they'll be moved into a `node_modules/` directory in the function package. :::tip If esbuild is giving you an error about a package, try adding it to `install`. ::: This will allow your functions to be able to use these dependencies when deployed. They just won't be tree shaken. :::caution If you don't specify a version, the package still needs to be in your `package.json`.
::: Esbuild will ignore them while traversing the imports in your code. So these are the **package names as seen in the imports**. It also works on packages that are not directly imported by your code. ```js { nodejs: { install: { pg: "8.13.1" } } } ``` Passing `["packageName"]` is the same as passing `{ packageName: "*" }`. loader? **Type** `Input>` Configure additional esbuild loaders for other file extensions. This is useful when your code is importing non-JS files like `.png`, `.css`, etc. ```js { nodejs: { loader: { ".png": "file" } } } ``` minify? **Type** `Input` **Default** `true` Configure if the function code is minified when bundled. Set to `false` to disable minification. ```js { nodejs: { minify: false } } ``` sourcemap? **Type** `Input` **Default** `false` Configure if source maps are added to the function bundle when **deployed**. Since they increase payload size and potentially cold starts, they are not added by default. However, they are always generated during `sst dev`. :::tip[SST Console] For the [Console](/docs/console/), source maps are always generated and uploaded to your bootstrap bucket. These are then downloaded and used to display Issues in the console. ::: ```js { nodejs: { sourcemap: true } } ``` splitting? **Type** `Input` **Default** `false` If enabled, modules that are dynamically imported will be bundled in their own files with common dependencies placed in shared chunks. This can help reduce cold starts as your function grows in size. ```js { nodejs: { splitting: true } } ``` ### permissions? **Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that the function needs to access. These permissions are used to create the function's IAM role.
:::tip If you `link` the function to a resource, the permissions to access it are automatically added. ::: Allow the function to read and write to an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] } ] } ``` Allow the function to perform all actions on an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:*"], resources: ["arn:aws:s3:::my-bucket/*"] } ] } ``` Granting the function permissions to access all resources. ```js { permissions: [ { actions: ["*"], resources: ["*"] } ] } ``` actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. ```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. 
```ts { effect: "deny" } ``` resources **Type** `Input[]>` The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html). ```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### policies? **Type** `Input` Policies to attach to the function. These policies will be added to the function's IAM role. Attaching policies lets you grant a set of predefined permissions to the function without having to specify the permissions in the `permissions` prop. For example, allow the function to have read-only access to all resources. ```js { policies: ["arn:aws:iam::aws:policy/ReadOnlyAccess"] } ``` ### python? **Type** `Input` - [`container?`](#python-container) `Input` - [`cache?`](#python-container-cache) Configure how your Python function is packaged. container? **Type** `Input` **Default** `false` Set this to `true` if you want to deploy this function as a container image. There are a few reasons why you might want to do this. 1. The Lambda package size has an unzipped limit of 250MB. Whereas the container image size has a limit of 10GB. 2. Even if you are below the 250MB limit, larger Lambda function packages have longer cold starts when compared to container images. 3. You might want to use a custom Dockerfile to handle complex builds. ```ts { python: { container: true } } ``` When you run `sst deploy`, it uses a built-in Dockerfile. It also needs the Docker daemon to be running. :::note This needs the Docker daemon to be running. ::: To use a custom Dockerfile, add one to the root of the uv workspace of the function. ```txt {5} ├── sst.config.ts ├── pyproject.toml └── function ├── pyproject.toml ├── Dockerfile └── src └── function └── api.py ``` You can refer to [this example of using a container image](/docs/examples/#aws-lambda-python-container). cache? **Type** `Input` **Default** `true` Controls whether Docker build cache is enabled.
Disable Docker build caching, useful for environments like Localstack where ECR cache export is not supported. ```js { python: { container: { cache: false } } } ``` ### retries? **Type** `Input` **Default** `2` Configure the maximum number of retry attempts for this function when invoked asynchronously. This only affects asynchronous invocations of the function, i.e. when subscribed to Topics, EventBuses, or Buckets. And not when directly invoking the function. Valid values are between 0 and 2. ```js { retries: 0 } ``` ### role? **Type** `Input` **Default** Creates a new role Assigns the given IAM role ARN to the function. This allows you to pass in a previously created role. :::caution When you pass in a role, the function will not update it if you add `permissions` or `link` resources. ::: By default, the function creates a new IAM role when it's created. It'll update this role if you add `permissions` or `link` resources. However, if you pass in a role, you'll need to update it manually if you add `permissions` or `link` resources. ```js { role: "arn:aws:iam::123456789012:role/my-role" } ``` ### runtime? **Type** `Input<"nodejs18.x" | "nodejs20.x" | "nodejs22.x" | "nodejs24.x" | "go" | "rust" | "provided.al2" | "provided.al2023" | "python3.9" | "python3.10" | "python3.11" | "python3.12" | "python3.13">` **Default** `"nodejs24.x"` The language runtime for the function. Node.js and Golang are officially supported, while Python and Rust are community supported. Support for other runtimes is on the roadmap. ```js { runtime: "nodejs24.x" } ``` ### storage? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"512 MB"` The amount of ephemeral storage allocated for the function. This sets the ephemeral storage of the Lambda function (`/tmp`). Must be between "512 MB" and "10240 MB" ("10 GB") in 1 MB increments. ```js { storage: "5 GB" } ``` ### streaming? **Type** `Input` **Default** `false` Enable streaming for the function.
Streaming is supported with both Function URLs and API Gateway REST API (V1). It is not supported with API Gateway HTTP API (V2). You'll also need to [wrap your handler](https://docs.aws.amazon.com/lambda/latest/dg/configuration-response-streaming.html) with `awslambda.streamifyResponse` to enable streaming. Check out the [AWS Lambda streaming example](/docs/examples/#aws-lambda-streaming) for more details. ```js { streaming: true } ``` ### tags? **Type** `Input>>` A list of tags to add to the function. ```js { tags: { "my-tag": "my-value" } } ``` ### timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The maximum amount of time the function can run. The minimum timeout is 1 second and the maximum is 900 seconds or 15 minutes. :::note If a function is connected to another service, the request will time out based on the service's limits. ::: While the maximum timeout is 15 minutes, if a function is connected to other services, it'll time out based on those limits. - API Gateway has a timeout of 30 seconds. So even if the function has a timeout of 15 minutes, the API request will time out after 30 seconds. - CloudFront has a default timeout of 60 seconds. You can have this limit increased by [contacting AWS Support](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). ```js { timeout: "900 seconds" } ``` ### transform? **Type** `Object` - [`eventInvokeConfig?`](#transform-eventinvokeconfig) - [`function?`](#transform-function) - [`logGroup?`](#transform-loggroup) - [`role?`](#transform-role) [Transform](/docs/components#transform) how this component creates its underlying resources. eventInvokeConfig? 
**Type** [`FunctionEventInvokeConfigArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/functioneventinvokeconfig/#inputs)` | (args: `[`FunctionEventInvokeConfigArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/functioneventinvokeconfig/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Function Event Invoke Config resource. This is only created when the `retries` property is set. function? **Type** [`FunctionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/function/#inputs)` | (args: `[`FunctionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/function/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Lambda Function resource. logGroup? **Type** [`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)` | (args: `[`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudWatch LogGroup resource. role? **Type** [`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)` | (args: `[`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the IAM Role resource. ### url? 
**Type** `Input` - [`authorization?`](#url-authorization) - [`cors?`](#url-cors) `Input` - [`allowCredentials?`](#url-cors-allowcredentials) - [`allowHeaders?`](#url-cors-allowheaders) - [`allowMethods?`](#url-cors-allowmethods) - [`allowOrigins?`](#url-cors-alloworigins) - [`exposeHeaders?`](#url-cors-exposeheaders) - [`maxAge?`](#url-cors-maxage) - [`router?`](#url-router) `Object` - [`connectionAttempts?`](#url-router-connectionattempts) - [`connectionTimeout?`](#url-router-connectiontimeout) - [`domain?`](#url-router-domain) - [`instance`](#url-router-instance) - [`keepAliveTimeout?`](#url-router-keepalivetimeout) - [`path?`](#url-router-path) - [`readTimeout?`](#url-router-readtimeout) - [`rewrite?`](#url-router-rewrite) `Input` - [`regex`](#url-router-rewrite-regex) - [`to`](#url-router-rewrite-to) **Default** `false` Enable [Lambda function URLs](https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html). These are dedicated endpoints for your Lambda functions. Enable it with the default options. ```js { url: true } ``` Configure the authorization and CORS settings for the endpoint. ```js { url: { authorization: "iam", cors: { allowOrigins: ['https://example.com'] } } } ``` authorization? **Type** `Input<"none" | "iam">` **Default** `"none"` The authorization used for the function URL. Supports [IAM authorization](https://docs.aws.amazon.com/lambda/latest/dg/urls-auth.html). ```js { url: { authorization: "iam" } } ``` cors? **Type** `Input` **Default** `true` Customize the CORS (Cross-origin resource sharing) settings for the function URL. Disable CORS. ```js { url: { cors: false } } ``` Only enable the `GET` and `POST` methods for `https://example.com`. ```js { url: { cors: { allowMethods: ["GET", "POST"], allowOrigins: ["https://example.com"] } } } ``` allowCredentials? **Type** `Input` **Default** `false` Allow cookies or other credentials in requests to the function URL. ```js { url: { cors: { allowCredentials: true } } } ``` allowHeaders? 
**Type** `Input[]>` **Default** `["*"]` The HTTP headers that origins can include in requests to the function URL. ```js { url: { cors: { allowHeaders: ["date", "keep-alive", "x-custom-header"] } } } ``` allowMethods? **Type** `Input[]>` **Default** `["*"]` The HTTP methods that are allowed when calling the function URL. ```js { url: { cors: { allowMethods: ["GET", "POST", "DELETE"] } } } ``` Or the wildcard for all methods. ```js { url: { cors: { allowMethods: ["*"] } } } ``` allowOrigins? **Type** `Input[]>` **Default** `["*"]` The origins that can access the function URL. ```js { url: { cors: { allowOrigins: ["https://www.example.com", "http://localhost:60905"] } } } ``` Or the wildcard for all origins. ```js { url: { cors: { allowOrigins: ["*"] } } } ``` exposeHeaders? **Type** `Input[]>` **Default** `[]` The HTTP headers you want to expose in your function to an origin that calls the function URL. ```js { url: { cors: { exposeHeaders: ["date", "keep-alive", "x-custom-header"] } } } ``` maxAge? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} hour" | "$\{number\} hours" | "$\{number\} second" | "$\{number\} seconds" | "$\{number\} day" | "$\{number\} days">` **Default** `"0 seconds"` The maximum amount of time the browser can cache results of a preflight request. By default the browser doesn't cache the results. The maximum value is `86400 seconds` or `1 day`. ```js { url: { cors: { maxAge: "1 day" } } } ``` router? **Type** `Object` Serve your function URL through a `Router` instead of a standalone Function URL. By default, this component creates a direct function URL endpoint. But you might want to serve it through the distribution of your `Router` as a: - A path like `/api/users` - A subdomain like `api.example.com` - Or a combined pattern like `dev.example.com/api` To serve your function **from a path**, you'll need to configure the root domain in your `Router` component. 
```ts title="sst.config.ts" {2} const router = new sst.aws.Router("Router", { domain: "example.com" }); ``` Now set the `router` and the `path` in the `url` prop. ```ts {4,5} { url: { router: { instance: router, path: "/api/users" } } } ``` To serve your function **from a subdomain**, you'll need to configure the domain in your `Router` component to match both the root and the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` Now set the `domain` in the `router` prop. ```ts {5} { url: { router: { instance: router, domain: "api.example.com" } } } ``` Finally, to serve your function **from a combined pattern** like `dev.example.com/api`, you'll need to configure the domain in your `Router` to match the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` And set the `domain` and the `path`. ```ts {5,6} { url: { router: { instance: router, domain: "dev.example.com", path: "/api/users" } } } ``` connectionAttempts? **Type** `Input` **Default** `3` The number of times that CloudFront attempts to connect to the origin. Must be between 1 and 3. ```js router: { instance: router, connectionAttempts: 1 } ``` connectionTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"10 seconds"` The number of seconds that CloudFront waits before timing out and closing the connection to the origin. Must be between 1 and 10 seconds. ```js router: { instance: router, connectionTimeout: "3 seconds" } ``` domain? **Type** `Input` Route requests matching a specific domain pattern. You can serve your resource from a subdomain. For example, if you want to make it available at `https://dev.example.com`, set the `Router` to match the domain or a wildcard. 
```ts {2} title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "*.example.com" }); ``` Then set the domain pattern. ```ts {3} router: { instance: router, domain: "dev.example.com" } ``` While `dev.example.com` matches `*.example.com`, something like `docs.dev.example.com` will not match `*.example.com`. :::tip Nested wildcard domain patterns are not supported. ::: You'll need to add `*.dev.example.com` as an alias. instance **Type** `Input<`[`Router`](/docs/component/aws/router)`>` The `Router` component to use for routing requests. Let's say you have a Router component. ```ts title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "example.com" }); ``` You can attach it to the Router, instead of creating a standalone CloudFront distribution. ```ts router: { instance: router } ``` keepAliveTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The number of seconds that CloudFront should try to maintain the connection to the destination after receiving the last packet of the response. Must be between 1 and 60 seconds. ```js router: { instance: router, keepAliveTimeout: "10 seconds" } ``` path? **Type** `Input` **Default** `"/"` Route requests matching a specific path prefix. ```ts {3} router: { instance: router, path: "/docs" } ``` readTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The number of seconds that CloudFront waits for a response after routing a request to the destination. Must be between 1 and 60 seconds. When compared to the `connectionTimeout`, this is the total time for the request. ```js router: { instance: router, readTimeout: "60 seconds" } ``` rewrite? **Type** `Input` Rewrite the request path. If the route path is `/api/*` and a request comes in for `/api/users/profile`, the request path the destination sees is `/api/users/profile`.
If you want to serve the route from the root, you can rewrite the request path to `/users/profile`. ```js router: { instance: router, path: "/api", rewrite: { regex: "^/api/(.*)$", to: "/$1" } } ``` regex **Type** `Input` The regex to match the request path. to **Type** `Input` The replacement for the matched path. ### versioning? **Type** `Input` **Default** `false` Enable versioning for the function. :::note Durable functions enable this by default. ::: ```js { versioning: true } ``` ### volume? **Type** `Input` - [`efs`](#volume-efs) - [`path?`](#volume-path) Mount an EFS file system to the function. Create an EFS file system. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const fileSystem = new sst.aws.Efs("MyFileSystem", { vpc }); ``` And pass it in. ```js { volume: { efs: fileSystem } } ``` By default, the file system will be mounted to `/mnt/efs`. You can change this by passing in the `path` property. ```js { volume: { efs: fileSystem, path: "/mnt/my-files" } } ``` To use an existing EFS, you can pass in an EFS access point ARN. ```js { volume: { efs: "arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-12345678", } } ``` efs **Type** `Input` The EFS file system to mount. Or an EFS access point ARN. path? **Type** `Input` **Default** `"/mnt/efs"` The path to mount the volume. ### vpc? **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`privateSubnets`](#vpc-privatesubnets) - [`securityGroups`](#vpc-securitygroups) Configure the function to connect to private subnets in a virtual private cloud or VPC. This allows your function to access private resources. Create a `Vpc` component. ```js title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` Or reference an existing VPC. ```js title="sst.config.ts" const myVpc = sst.aws.Vpc.get("MyVpc", "vpc-12345678901234567"); ``` And pass it in. ```js { vpc: myVpc } ``` privateSubnets **Type** `Input[]>` A list of VPC subnet IDs. 
securityGroups **Type** `Input[]>` A list of VPC security group IDs. ## Properties ### arn **Type** `Output` The ARN of the Lambda function. ### name **Type** `Output` The name of the Lambda function. ### nodes **Type** `Object` - [`eventInvokeConfig`](#nodes-eventinvokeconfig) - [`function`](#nodes-function) - [`logGroup`](#nodes-loggroup) - [`role`](#nodes-role) The underlying [resources](/docs/components/#nodes) this component creates. eventInvokeConfig **Type** `undefined | `[`FunctionEventInvokeConfig`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/functioneventinvokeconfig/) The Function Event Invoke Config resource if retries are configured. function **Type** `Output<`[`Function`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/function/)`>` The AWS Lambda function. logGroup **Type** `Output` The CloudWatch Log Group where the function logs are stored. role **Type** [`Role`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/) The IAM Role the function will use. ### url **Type** `Output` The Lambda function URL if `url` is enabled. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `name` `string` The name of the Lambda function. - `qualifier?` `undefined | string` - `url` `undefined | string` The Lambda function URL if `url` is enabled. ## Methods ### addEnvironment ```ts addEnvironment(environment) ``` #### Parameters - `environment` `Input>>` The environment variables to add to the function. **Returns** [`FunctionEnvironmentUpdate`](/docs/component/aws/providers/function-environment-update) Add environment variables lazily to the function after the function is created. This is useful for adding values that are only available after the function is created, like the function URL. Add the function URL as an environment variable.
```ts title="sst.config.ts" const fn = new sst.aws.Function("MyFunction", { handler: "src/handler.handler", url: true, }); fn.addEnvironment({ URL: fn.url, }); ``` --- ## KinesisStreamLambdaSubscriber Reference doc for the `sst.aws.KinesisStreamLambdaSubscriber` component. https://sst.dev/docs/component/aws/kinesis-stream-lambda-subscriber The `KinesisStreamLambdaSubscriber` component is internally used by the `KinesisStream` component to add a consumer to [Amazon Kinesis Data Streams](https://docs.aws.amazon.com/streams/latest/dev/introduction.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `subscribe` method of the `KinesisStream` component. --- ## Constructor ```ts new KinesisStreamLambdaSubscriber(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`Args`](#args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`eventSourceMapping`](#nodes-eventsourcemapping) - [`function`](#nodes-function) The underlying [resources](/docs/components/#nodes) this component creates. eventSourceMapping **Type** [`EventSourceMapping`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/) The Lambda event source mapping. function **Type** `Output<`[`Function`](/docs/component/aws/function)`>` The Lambda function that'll be notified. ## Args ### filters? **Type** `Input>[]>` Filter the events that'll be processed by the `subscriber` function. :::tip You can pass in up to 5 different filters. ::: You can pass in up to 5 different filter policies. These will be logically ORed together, meaning that if any single policy matches, the record will be processed. Learn more about the [filter rule syntax](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html#filtering-syntax). For example, if your Kinesis stream contains events in this JSON format.
```js { record: 12345, order: { type: "buy", stock: "ANYCO", quantity: 1000 } } ``` To process only those events where the `type` is `buy`. ```js { filters: [ { data: { order: { type: ["buy"], }, }, }, ], } ``` ### stream **Type** `Input` - [`arn`](#stream-arn) The Kinesis stream to use. arn **Type** `Input` The ARN of the stream. ### subscriber **Type** `Input` The subscriber function. ### transform? **Type** `Object` - [`eventSourceMapping?`](#transform-eventsourcemapping) [Transform](/docs/components#transform) how this component creates its underlying resources. eventSourceMapping? **Type** [`EventSourceMappingArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/#inputs)` | (args: `[`EventSourceMappingArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Lambda Event Source Mapping resource. --- ## KinesisStream Reference doc for the `sst.aws.KinesisStream` component. https://sst.dev/docs/component/aws/kinesis-stream The `KinesisStream` component lets you add an [Amazon Kinesis Data Streams](https://docs.aws.amazon.com/streams/latest/dev/introduction.html) to your app. #### Minimal example ```ts title="sst.config.ts" const stream = new sst.aws.KinesisStream("MyStream"); ``` #### Subscribe to a stream ```ts title="sst.config.ts" stream.subscribe("MySubscriber", "src/subscriber.handler"); ``` #### Link the stream to a resource You can link the stream to other resources, like a function or your Next.js app. ```ts {2} title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { link: [stream] }); ``` Once linked, you can write to the stream from your function code. 
```ts title="app/page.tsx" {1,7} import { Resource } from "sst"; import { KinesisClient, PutRecordCommand } from "@aws-sdk/client-kinesis"; const client = new KinesisClient(); await client.send(new PutRecordCommand({ StreamName: Resource.MyStream.name, Data: JSON.stringify({ foo: "bar" }), PartitionKey: "myKey", })); ``` --- ## Constructor ```ts new KinesisStream(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`KinesisStreamArgs`](#kinesisstreamargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## KinesisStreamArgs ### transform? **Type** `Object` - [`stream?`](#transform-stream) [Transform](/docs/components#transform) how this component creates its underlying resources. stream? **Type** [`StreamArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/kinesis/stream/#inputs)` | (args: `[`StreamArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/kinesis/stream/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Kinesis stream resource. ## Properties ### arn **Type** `Output` ### name **Type** `Output` ### nodes **Type** `Object` - [`stream`](#nodes-stream) The underlying [resources](/docs/components/#nodes) this component creates. stream **Type** [`Stream`](https://www.pulumi.com/registry/packages/aws/api-docs/kinesis/stream/) The Amazon Kinesis Data Stream. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `name` `string` ## Methods ### subscribe ```ts subscribe(name, subscriber, args?) ``` #### Parameters - `name` `string` The name of the subscriber. - `subscriber` `Input` The function that'll be notified. - `args?` [`KinesisStreamLambdaSubscriberArgs`](#kinesisstreamlambdasubscriberargs) Configure the subscription. **Returns** `Output<`[`KinesisStreamLambdaSubscriber`](/docs/component/aws/kinesis-stream-lambda-subscriber)`>` Subscribe to the Kinesis stream.
```js title="sst.config.ts" stream.subscribe("MySubscriber", "src/subscriber.handler"); ``` Add a filter to the subscription. ```js title="sst.config.ts" stream.subscribe("MySubscriber", "src/subscriber.handler", { filters: [ { data: { order: { type: ["buy"], }, }, }, ], }); ``` Customize the subscriber function. ```js title="sst.config.ts" stream.subscribe("MySubscriber", { handler: "src/subscriber.handler", timeout: "60 seconds" }); ``` Or pass in the ARN of an existing Lambda function. ```js title="sst.config.ts" stream.subscribe("MySubscriber", "arn:aws:lambda:us-east-1:123456789012:function:my-function"); ``` ### static subscribe ```ts KinesisStream.subscribe(name, streamArn, subscriber, args?) ``` #### Parameters - `name` `string` The name of the subscriber. - `streamArn` `Input` The ARN of the Kinesis Stream to subscribe to. - `subscriber` `Input` The function that'll be notified. - `args?` [`KinesisStreamLambdaSubscriberArgs`](#kinesisstreamlambdasubscriberargs) Configure the subscription. **Returns** `Output<`[`KinesisStreamLambdaSubscriber`](/docs/component/aws/kinesis-stream-lambda-subscriber)`>` Subscribe to a Kinesis stream that was not created in your app. For example, let's say you have the ARN of an existing Kinesis stream. ```js title="sst.config.ts" const streamArn = "arn:aws:kinesis:us-east-1:123456789012:stream/MyStream"; ``` You can subscribe to it by passing in the ARN. ```js title="sst.config.ts" sst.aws.KinesisStream.subscribe("MySubscriber", streamArn, "src/subscriber.handler"); ``` Add a filter to the subscription. ```js title="sst.config.ts" sst.aws.KinesisStream.subscribe("MySubscriber", streamArn, "src/subscriber.handler", { filters: [ { data: { order: { type: ["buy"], }, }, }, ], }); ``` Customize the subscriber function. ```js title="sst.config.ts" sst.aws.KinesisStream.subscribe("MySubscriber", streamArn, { handler: "src/subscriber.handler", timeout: "60 seconds" }); ``` ## KinesisStreamLambdaSubscriberArgs ### filters?
**Type** `Input>[]>` Filter the events that'll be processed by the `subscriber` function. :::tip You can pass in up to 5 different filters. ::: You can pass in up to 5 different filter policies. These will be logically ORed together, meaning that if any single policy matches, the record will be processed. Learn more about the [filter rule syntax](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html#filtering-syntax). For example, if your Kinesis stream contains events in this JSON format. ```js { record: 12345, order: { type: "buy", stock: "ANYCO", quantity: 1000 } } ``` To process only those events where the `type` is `buy`. ```js { filters: [ { data: { order: { type: ["buy"], }, }, }, ], } ``` ### transform? **Type** `Object` - [`eventSourceMapping?`](#transform-eventsourcemapping) [Transform](/docs/components#transform) how this component creates its underlying resources. eventSourceMapping? **Type** [`EventSourceMappingArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/#inputs)` | (args: `[`EventSourceMappingArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Lambda Event Source Mapping resource. --- ## Mysql Reference doc for the `sst.aws.Mysql` component. https://sst.dev/docs/component/aws/mysql The `Mysql` component lets you add a MySQL database to your app using [Amazon RDS MySQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MySQL.html). #### Create the database ```js title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const database = new sst.aws.Mysql("MyDatabase", { vpc }); ``` #### Link to a resource You can link your database to other resources, like a function or your Next.js app.
```ts title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { link: [database], vpc }); ``` Once linked, you can connect to it from your function code. ```ts title="app/page.tsx" {1,5-9} import { Resource } from "sst"; import mysql from "mysql2/promise"; const connection = await mysql.createConnection({ user: Resource.MyDatabase.username, password: Resource.MyDatabase.password, database: Resource.MyDatabase.database, host: Resource.MyDatabase.host, port: Resource.MyDatabase.port, }); await connection.execute("SELECT NOW()"); ``` #### Running locally By default, your RDS MySQL database is deployed in `sst dev`. But let's say you are running MySQL locally. ```bash docker run \ --rm \ -p 3306:3306 \ -v $(pwd)/.sst/storage/mysql:/var/lib/mysql/data \ -e MYSQL_DATABASE=local \ -e MYSQL_ROOT_PASSWORD=password \ mysql:8.0 ``` You can connect to it in `sst dev` by configuring the `dev` prop. ```ts title="sst.config.ts" {3-8} const mysql = new sst.aws.Mysql("MyMysql", { vpc, dev: { username: "root", password: "password", database: "local", port: 3306 } }); ``` This will skip deploying an RDS database and link to the locally running MySQL database instead. --- ### Cost By default this component uses a _Single-AZ Deployment_, _On-Demand DB Instances_ of a `db.t4g.micro` at $0.016 per hour. And 20GB of _General Purpose gp3 Storage_ at $0.115 per GB per month. That works out to $0.016 x 24 x 30 + $0.115 x 20 or **$14 per month**. Adjust this for the `instance` type and the `storage` you are using. The above are rough estimates for _us-east-1_, check out the [RDS for MySQL pricing](https://aws.amazon.com/rds/mysql/pricing/#On-Demand_DB_Instances_costs) for more details. #### RDS Proxy If you enable the `proxy`, it uses _Provisioned instances_ with 2 vCPUs at $0.015 per hour. That works out to an **additional** $0.015 x 2 x 24 x 30 or **$22 per month**. This is a rough estimate for _us-east-1_, check out the [RDS Proxy pricing](https://aws.amazon.com/rds/proxy/pricing/) for more details. --- ## Constructor ```ts new Mysql(name, args, opts?
``` #### Parameters - `name` `string` - `args` [`MysqlArgs`](#mysqlargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## MysqlArgs ### blueGreen? **Type** `Input` **Default** `false` Enable [Blue/Green deployments](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments.html) for version, instance type, and parameter group upgrades. Learn more about [upgrading databases](/docs/upgrade-aws-databases/). When enabled, a staging (green) instance is created, updated, verified, then promoted to replace the production (blue) instance. This minimizes downtime during upgrades. ```js { blueGreen: true } ``` ### database? **Type** `Input` **Default** Based on the name of the current app Name of a database that is automatically created. The name must begin with a letter and contain only lowercase letters, numbers, or underscores. By default, it takes the name of the app, and replaces the hyphens with underscores. :::danger Changing the database name will cause the database to be destroyed and recreated. ::: ```js { database: "acme" } ``` ### dev? **Type** `Object` - [`database?`](#dev-database) - [`host?`](#dev-host) - [`password?`](#dev-password) - [`port?`](#dev-port) - [`username?`](#dev-username) Configure how this component works in `sst dev`. By default, your MySQL database is deployed in `sst dev`. But if you want to instead connect to a locally running MySQL database, you can configure the `dev` prop. :::note This will not create an RDS database in `sst dev`. ::: This will skip deploying an RDS database and link to the locally running MySQL database instead. Setting the `dev` prop also means that any linked resources will connect to the right database both in `sst dev` and `sst deploy`. ```ts { dev: { username: "root", password: "password", database: "mysql", host: "localhost", port: 3306 } } ``` database? **Type** `Input` **Default** Inherit from the top-level [`database`](#database). 
The database of the local MySQL to connect to when running in dev. host? **Type** `Input` **Default** `"localhost"` The host of the local MySQL to connect to when running in dev. password? **Type** `Input` **Default** Inherit from the top-level [`password`](#password). The password of the local MySQL to connect to when running in dev. port? **Type** `Input` **Default** `3306` The port of the local MySQL to connect to when running in dev. username? **Type** `Input` **Default** Inherit from the top-level [`username`](#username). The username of the local MySQL to connect to when running in dev. ### instance? **Type** `Input` **Default** `"t4g.micro"` The type of instance to use for the database. Check out the [supported instance types](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.Types.html). :::caution Changing the instance type will cause the database to restart on the next `sst deploy`, causing downtime. Learn more about [upgrading databases](/docs/upgrade-aws-databases/). ::: ```js { instance: "m7g.xlarge" } ``` ### multiAz? **Type** `Input` **Default** `false` Enable [Multi-AZ](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html) deployment for the database. This creates a standby replica for the database in another availability zone (AZ). The standby database provides automatic failover in case the primary database fails. However, when the primary database is healthy, the standby database is not used for serving read traffic. :::caution Using Multi-AZ will approximately double the cost of the database since it will be deployed in two AZs. ::: ```js { multiAz: true } ``` ### password? **Type** `Input` **Default** A random password is generated. The password of the master user. ```js { password: "Passw0rd!" } ``` You can use a `Secret` to manage the password. ```js { password: new sst.Secret("MyDBPassword").value } ``` ### proxy? 
**Type** `Input` - [`credentials?`](#proxy-credentials) `Input[]>` - [`password`](#proxy-credentials-password) - [`username`](#proxy-credentials-username) **Default** `false` Enable [RDS Proxy](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html) for the database. ```js { proxy: true } ``` credentials? **Type** `Input[]>` Additional credentials the proxy can use to connect to the database. You don't need to specify the master user credentials as they are always added by default. :::note This component will not create the MySQL users listed here. You need to create them manually in the database. ::: ```js { credentials: [ { username: "metabase", password: "Passw0rd!" } ] } ``` You can use a `Secret` to manage the password. ```js { credentials: [ { username: "metabase", password: new sst.Secret("MyDBPassword").value } ] } ``` password **Type** `Input` The password of the user. username **Type** `Input` The username of the user. ### storage? **Type** `Input<"$\{number\} GB" | "$\{number\} TB">` **Default** `"20 GB"` The maximum storage limit for the database. RDS will autoscale your storage to match your usage up to the given limit. You are not billed for the maximum storage limit, only for the storage you use. :::note You are only billed for the storage you use, not the maximum limit. ::: By default, [gp3 storage volumes](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#Concepts.Storage.GeneralSSD) are used without additional provisioned IOPS. This provides good baseline performance for most use cases. The minimum storage size is 20 GB and the maximum is 64 TB. ```js { storage: "100 GB" } ``` ### transform? **Type** `Object` - [`instance?`](#transform-instance) - [`parameterGroup?`](#transform-parametergroup) - [`proxy?`](#transform-proxy) - [`subnetGroup?`](#transform-subnetgroup) [Transform](/docs/components#transform) how this component creates its underlying resources. instance?
**Type** [`InstanceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/instance/#inputs)` | (args: `[`InstanceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/instance/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the database instance in the RDS Cluster. parameterGroup? **Type** [`ParameterGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/parametergroup/#inputs)` | (args: `[`ParameterGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/parametergroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the RDS parameter group. proxy? **Type** [`ProxyArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/proxy/#inputs)` | (args: `[`ProxyArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/proxy/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the RDS Proxy. subnetGroup? **Type** [`SubnetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/subnetgroup/#inputs)` | (args: `[`SubnetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/subnetgroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the RDS subnet group. ### username? **Type** `Input` **Default** `"root"` The username of the master user. :::danger Changing the username will cause the database to be destroyed and recreated. ::: ```js { username: "admin" } ``` ### version? **Type** `Input` **Default** `"8.0.40"` The MySQL engine version. Check out the [available versions in your region](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Concepts.VersionMgmt.html). 
:::caution Changing the version will cause the database to restart on the next `sst deploy`, causing downtime. Learn more about [upgrading databases](/docs/upgrade-aws-databases/). ::: ```js { version: "8.4.4" } ``` ### vpc **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`subnets`](#vpc-subnets) The VPC subnets to use for the database. ```js { vpc: { subnets: ["subnet-0db7376a7ad4db5fd", "subnet-06fc7ee8319b2c0ce"] } } ``` Or create a `Vpc` component. ```ts title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` And pass it in. The database will be placed in the private subnets. ```js { vpc: myVpc } ``` subnets **Type** `Input[]>` A list of subnet IDs in the VPC. ## Properties ### database **Type** `Output` The name of the database. ### host **Type** `Output` The host of the database. ### id **Type** `Output` The identifier of the MySQL instance. ### nodes **Type** `Object` - [`instance`](#nodes-instance) instance **Type** `undefined | `[`Instance`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/instance/) ### password **Type** `Output` The password of the master user. ### port **Type** `Output` The port of the database. ### proxyId **Type** `Output` The ID of the MySQL proxy. ### username **Type** `Output` The username of the master user. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `database` `string` The name of the database. - `host` `string` The host of the database. - `password` `string` The password of the master user. - `port` `number` The port of the database. - `username` `string` The username of the master user. ## Methods ### static get ```ts Mysql.get(name, args, opts?) ``` #### Parameters - `name` `string` The name of the component. - `args` [`MysqlGetArgs`](#mysqlgetargs) The arguments to get the MySQL database.
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Mysql`](.) Reference an existing MySQL database with the given name. This is useful when you create a MySQL database in one stage and want to share it in another. It avoids having to create a new MySQL database in the other stage. :::tip You can use the `static get` method to share MySQL databases across stages. ::: Imagine you create a database in the `dev` stage. And in your personal stage `frank`, instead of creating a new database, you want to share the same database from `dev`. ```ts title="sst.config.ts" const database = $app.stage === "frank" ? sst.aws.Mysql.get("MyDatabase", { id: "app-dev-mydatabase", proxyId: "app-dev-mydatabase-proxy" }) : new sst.aws.Mysql("MyDatabase", { proxy: true }); ``` Here `app-dev-mydatabase` is the ID of the database, and `app-dev-mydatabase-proxy` is the ID of the proxy created in the `dev` stage. You can find these by outputting the database ID and proxy ID in the `dev` stage. ```ts title="sst.config.ts" return { id: database.id, proxyId: database.proxyId }; ``` ## MysqlGetArgs ### id **Type** `Input` The ID of the database. ### proxyId? **Type** `Input` The ID of the proxy. --- ## Nextjs Reference doc for the `sst.aws.Nextjs` component. https://sst.dev/docs/component/aws/nextjs The `Nextjs` component lets you deploy [Next.js](https://nextjs.org) apps on AWS. It uses [OpenNext](https://open-next.js.org) to build your Next.js app, and transforms the build output to a format that can be deployed to AWS. #### Minimal example Deploy the Next.js app that's in the project root. ```js title="sst.config.ts" new sst.aws.Nextjs("MyWeb"); ``` #### Change the path Deploys a Next.js app in the `my-next-app/` directory. ```js {2} title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { path: "my-next-app/" }); ``` #### Add a custom domain Set a custom domain for your Next.js app. 
```js {2} title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { domain: "my-app.com" }); ``` #### Redirect www to apex domain Redirect `www.my-app.com` to `my-app.com`. ```js {4} title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { domain: { name: "my-app.com", redirects: ["www.my-app.com"] } }); ``` #### Link resources [Link resources](/docs/linking/) to your Next.js app. This will grant permissions to the resources and allow you to access them in your app. ```ts {4} title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.Nextjs("MyWeb", { link: [bucket] }); ``` You can use the [SDK](/docs/reference/sdk/) to access the linked resources in your Next.js app. ```ts title="app/page.tsx" import { Resource } from "sst"; console.log(Resource.MyBucket.name); ``` --- ## Constructor ```ts new Nextjs(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`NextjsArgs`](#nextjsargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## NextjsArgs ### assets? **Type** `Input` - [`fileOptions?`](#assets-fileoptions) `Input` - [`cacheControl?`](#assets-fileoptions-cachecontrol) - [`contentType?`](#assets-fileoptions-contenttype) - [`files`](#assets-fileoptions-files) - [`ignore?`](#assets-fileoptions-ignore) - [`nonVersionedFilesCacheHeader?`](#assets-nonversionedfilescacheheader) - [`purge?`](#assets-purge) - [`textEncoding?`](#assets-textencoding) - [`versionedFilesCacheHeader?`](#assets-versionedfilescacheheader) **Default** `Object` Configure how the Next.js app assets are uploaded to S3. By default, this is set to the following. ```js { assets: { textEncoding: "utf-8", versionedFilesCacheHeader: "public,max-age=31536000,immutable", nonVersionedFilesCacheHeader: "public,max-age=0,s-maxage=86400,stale-while-revalidate=8640" } } ``` Read more about these options below. fileOptions? **Type** `Input` Specify the `Content-Type` and `Cache-Control` headers for specific files.
This allows you to override the default behavior for specific files using glob patterns. Apply `Cache-Control` and `Content-Type` to all zip files. ```js { assets: { fileOptions: [ { files: "**/*.zip", contentType: "application/zip", cacheControl: "private,no-cache,no-store,must-revalidate" } ] } } ``` Apply `Cache-Control` to all CSS and JS files except for CSS files with `index-` prefix in the `main/` directory. ```js { assets: { fileOptions: [ { files: ["**/*.css", "**/*.js"], ignore: "main/index-*.css", cacheControl: "private,no-cache,no-store,must-revalidate" } ] } } ``` cacheControl? **Type** `string` The `Cache-Control` header to apply to the matched files. contentType? **Type** `string` The `Content-Type` header to apply to the matched files. files **Type** `string | string[]` A glob pattern or array of glob patterns of files to apply these options to. ignore? **Type** `string | string[]` A glob pattern or array of glob patterns of files to exclude from the ones matched by the `files` glob pattern. nonVersionedFilesCacheHeader? **Type** `Input` **Default** `"public,max-age=0,s-maxage=86400,stale-while-revalidate=8640"` The `Cache-Control` header used for non-versioned files, like `index.html`. This is used by both CloudFront and the browser cache. The default is set to not cache on browsers, and cache for 1 day on CloudFront. ```js { assets: { nonVersionedFilesCacheHeader: "public,max-age=0,no-cache" } } ``` purge? **Type** `Input` **Default** `false` Configure if files from previous deployments should be purged from the bucket. ```js { assets: { purge: false } } ``` textEncoding? **Type** `Input<"utf-8" | "iso-8859-1" | "windows-1252" | "ascii" | "none">` **Default** `"utf-8"` Character encoding for text-based assets, like HTML, CSS, JS. This is used to set the `Content-Type` header when these files are served out. If set to `"none"`, then no charset will be returned in the header. ```js { assets: { textEncoding: "iso-8859-1" } } ``` versionedFilesCacheHeader?
**Type** `Input` **Default** `"public,max-age=31536000,immutable"` The `Cache-Control` header used for versioned files, like `main-1234.css`. This is used by both CloudFront and the browser cache. The default `max-age` is set to 1 year. ```js { assets: { versionedFilesCacheHeader: "public,max-age=31536000,immutable" } } ``` ### buildCommand? **Type** `Input` **Default** `"npx --yes open-next@OPEN_NEXT_VERSION build"` The command used internally to build your Next.js app. It uses OpenNext with the `openNextVersion`. By default, OpenNext uses the `build` script in your `package.json` to build the Next.js app. Set this to a custom script if you have a custom build process or want to configure OpenNext differently. ```js { buildCommand: "npm run build:open-next" } ``` ### cachePolicy? **Type** `Input` **Default** A new cache policy is created Configure the Next.js app to use an existing CloudFront cache policy. :::note CloudFront has a limit of 20 cache policies per account, though you can request a limit increase. ::: By default, a new cache policy is created. Set this to reuse an existing policy instead of creating a new one. ```js { cachePolicy: "658327ea-f89d-4fab-a63d-7e88639e58f6" } ``` ### dev? **Type** `false | Object` - [`autostart?`](#dev-autostart) - [`command?`](#dev-command) - [`directory?`](#dev-directory) - [`title?`](#dev-title) - [`url?`](#dev-url) Configure how this component works in `sst dev`. :::note In `sst dev` your Next.js app is run in dev mode; it's not deployed. ::: Instead of deploying your Next.js app, this starts it in dev mode. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev). To disable dev mode, pass in `false`. autostart? **Type** `Input` **Default** `true` Configure if you want to automatically start this when `sst dev` starts. 
You can still start it manually later. command? **Type** `Input` **Default** `"npm run dev"` The command that `sst dev` runs to start this in dev mode. directory? **Type** `Input` **Default** Uses the `path` Change the directory from where the `command` is run. title? **Type** `Input` The title of the tab in the multiplexer. url? **Type** `Input` **Default** `"http://url-unavailable-in-dev.mode"` The `url` when this is running in dev mode. Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`. ### domain? **Type** `Input` - [`aliases?`](#domain-aliases) - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name`](#domain-name) - [`redirects?`](#domain-redirects) Set a custom domain for your Next.js app. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers. ::: By default this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Specify a `www.` version of the custom domain. ```js { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` aliases? **Type** `Input` Alias domains that should be used. Unlike the `redirects` option, this keeps your visitors on this alias domain. So if your users visit `app2.domain.com`, they will stay on `app2.domain.com` in their browser. ```js {4} { domain: { name: "app1.domain.com", aliases: ["app2.domain.com"] } } ``` cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. 
By default, a certificate is created and validated automatically. The certificate will be created in the `us-east-1` region as required by AWS CloudFront. If you are creating your own certificate, you must also create it in `us-east-1`. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the CloudFront distribution URL. ```js { domain: { name: "domain.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to AWS Route 53. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare, needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel, needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. 
```js { domain: { name: `${$app.stage}.example.com` } } ``` redirects? **Type** `Input` Alternate domains to be used. Visitors to the alternate domains will be redirected to the main `name`. :::note Unlike the `aliases` option, this will redirect visitors back to the main `name`. ::: Use this to create a `www.` version of your domain and redirect visitors to the apex domain. ```js {4} { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` ### edge? **Type** `Input` - [`viewerRequest?`](#edge-viewerrequest) `Input` - [`injection`](#edge-viewerrequest-injection) - [`kvStore?`](#edge-viewerrequest-kvstore) - [`viewerResponse?`](#edge-viewerresponse) `Input` - [`injection`](#edge-viewerresponse-injection) - [`kvStore?`](#edge-viewerresponse-kvstore) Configure CloudFront Functions to customize the behavior of HTTP requests and responses at the edge. viewerRequest? **Type** `Input` Configure the viewer request function. The viewer request function can be used to modify incoming requests before they reach your origin server. For example, you can redirect users, rewrite URLs, or add headers. injection **Type** `Input` The code to inject into the viewer request function. By default, a viewer request function is created to: - Disable CloudFront default URL if custom domain is set - Add the `x-forwarded-host` header - Route assets requests to S3 (static files stored in the bucket) - Route server requests to server functions (dynamic rendering) The function manages routing by: 1. First checking if the requested path exists in S3 (with variations like adding index.html) 2. Serving a custom 404 page from S3 if configured and the path isn't found 3. Routing image optimization requests to the image optimizer function 4. Routing all other requests to the nearest server function The given code will be injected at the beginning of this function. 
```js async function handler(event) { // User injected code // Default behavior code return event.request; } ``` To add a custom header to all requests. ```js { edge: { viewerRequest: { injection: `event.request.headers["x-foo"] = { value: "bar" };` } } } ``` You can use this to add basic auth, [check out an example](/docs/examples/#aws-nextjs-basic-auth). kvStore? **Type** `Input` The KV store to associate with the viewer request function. ```js { edge: { viewerRequest: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` viewerResponse? **Type** `Input` Configure the viewer response function. The viewer response function can be used to modify outgoing responses before they are sent to the client. For example, you can add security headers or change the response status code. By default, no viewer response function is set. A new function will be created with the provided code. injection **Type** `Input` The code to inject into the viewer response function. ```js async function handler(event) { // User injected code return event.response; } ``` To add a custom header to all responses. ```js { edge: { viewerResponse: { injection: `event.response.headers["x-foo"] = { value: "bar" };` } } } ``` kvStore? **Type** `Input` The KV store to associate with the viewer response function. ```js { edge: { viewerResponse: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` ### environment? **Type** `Input>>` Set [environment variables](https://nextjs.org/docs/pages/building-your-application/configuring/environment-variables) in your Next.js app. These are made available: 1. In `next build`, they are loaded into `process.env`. 2. Locally while running through `sst dev`. :::tip You can also `link` resources to your Next.js app and access them in a type-safe way with the [SDK](/docs/reference/sdk/). We recommend linking since it's more secure. 
::: Recall that in Next.js, you need to prefix your environment variables with `NEXT_PUBLIC_` to access these in the browser. [Read more here](https://nextjs.org/docs/pages/building-your-application/configuring/environment-variables#bundling-environment-variables-for-the-browser). ```js { environment: { API_URL: api.url, // Accessible in the browser NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY: "pk_test_123" } } ``` ### imageOptimization? **Type** `Object` - [`memory?`](#imageoptimization-memory) - [`staticEtag?`](#imageoptimization-staticetag) **Default** `{memory: "1536 MB"}` Configure the Lambda function used for image optimization. memory? **Type** `"$\{number\} MB" | "$\{number\} GB"` **Default** `"1536 MB"` The amount of memory allocated to the image optimization function. Takes values between 128 MB and 10240 MB in 1 MB increments. ```js { imageOptimization: { memory: "512 MB" } } ``` staticEtag? **Type** `boolean` **Default** `false` If set to `true`, a previously computed image will return _304 Not Modified_. This means that the image needs to be **immutable**. The etag will be computed based on the image href, format, and width, and the Next.js `BUILD_ID`. ```js { imageOptimization: { staticEtag: true, } } ``` ### invalidation? **Type** `Input` - [`paths?`](#invalidation-paths) - [`wait?`](#invalidation-wait) **Default** `{paths: "all", wait: false}` Configure how the CloudFront cache invalidations are handled. This is run after your Next.js app has been deployed. :::tip You get 1000 free invalidations per month. After that you pay $0.005 per invalidation path. [Read more here](https://aws.amazon.com/cloudfront/pricing/). ::: Turn off invalidations. ```js { invalidation: false } ``` Wait for all paths to be invalidated. ```js { invalidation: { paths: "all", wait: true } } ``` paths? **Type** `Input` **Default** `"all"` The paths to invalidate. You can either pass in an array of glob patterns to invalidate specific files. 
Or you can use one of these built-in options: - `all`: All files will be invalidated when any file changes - `versioned`: Only versioned files will be invalidated when versioned files change :::note Each glob pattern counts as a single invalidation. However, invalidating `/*` also counts as just one invalidation. ::: Invalidate the `index.html` and all files under the `products/` route. ```js { invalidation: { paths: ["/index.html", "/products/*"] } } ``` This counts as two invalidations. wait? **Type** `Input` **Default** `false` Configure if `sst deploy` should wait for the CloudFront cache invalidation to finish. :::tip For non-prod environments it might make sense to pass in `false`. ::: Waiting for this process to finish ensures that new content will be available after the deploy finishes. However, this process can sometimes take more than 5 mins. ```js { invalidation: { wait: true } } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your Next.js app. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access them in your site using the [SDK](/docs/reference/sdk/). Takes a list of resources to link to the function. ```js { link: [bucket, stripeKey] } ``` ### openNextVersion? **Type** `Input` **Default** Auto-detected based on your Next.js version. Configure the [OpenNext](https://opennext.js.org) version used to build the Next.js app. :::note The default OpenNext version is auto-detected based on your Next.js version and pinned to the version of SST you have. ::: By default, SST auto-detects the Next.js version from your `package.json` and picks a compatible OpenNext version. For Next.js 15+, it uses `3.9.14`. For Next.js 14, it uses `3.6.6` since newer versions of OpenNext dropped Next.js 14 support. If set, this overrides the auto-detection. You can [find the defaults in the source](https://github.com/sst/sst/blob/dev/platform/src/components/aws/nextjs.ts#L30) under `DEFAULT_OPEN_NEXT_VERSION`. 
OpenNext changed its package name from `open-next` to `@opennextjs/aws` in version `3.1.4`. SST will choose the correct one based on the version you provide. ```js { openNextVersion: "3.4.1" } ``` ### path? **Type** `string` **Default** `"."` Path to the directory where your Next.js app is located. This path is relative to your `sst.config.ts`. By default this assumes your Next.js app is in the root of your SST app. If your Next.js app is in a package in your monorepo, set this to the package path. ```js { path: "packages/web" } ``` ### permissions? **Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that the [server function](#nodes-server) in your Next.js app needs to access. These permissions are used to create the function's IAM role. :::tip If you `link` the function to a resource, the permissions to access it are automatically added. ::: Allow reading and writing to an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Perform all actions on an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:*"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Grant permissions to access all resources. ```js { permissions: [ { actions: ["*"], resources: ["*"] }, ] } ``` actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. 
```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```ts { effect: "deny" } ``` resources **Type** `Input[]>` The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html). ```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### protection? **Type** `Input<"none" | "oac" | "oac-with-edge-signing" | Object>` - [`edgeFunction?`](#protection-edgefunction) `Object` - [`arn?`](#protection-edgefunction-arn) - [`memory?`](#protection-edgefunction-memory) - [`timeout?`](#protection-edgefunction-timeout) - [`mode`](#protection-mode) **Default** `"none"` The available options are: - `"none"`: Lambda URLs are publicly accessible. - `"oac"`: Lambda URLs protected by CloudFront Origin Access Control. Requires manual `x-amz-content-sha256` header for POST requests. Use when you control all POST requests. - `"oac-with-edge-signing"`: Full protection with automatic header signing via Lambda@Edge. Works with external webhooks and callbacks. 
Higher cost and latency but works out of the box. :::note When using `"oac-with-edge-signing"`, request bodies are limited to 1MB due to Lambda@Edge payload limits. For file uploads larger than 1MB, consider using presigned S3 URLs or the `"oac"` mode with manual header signing. ::: :::note When removing a stage that uses `"oac-with-edge-signing"`, deletion may take 5-10 minutes while AWS removes the Lambda@Edge replicated functions from all edge locations. ::: Configure Lambda function protection through CloudFront. ```js // No protection (default) { protection: "none" } ``` ```js // OAC protection, manual header signing required { protection: "oac" } ``` ```js // Full protection with automatic Lambda@Edge { protection: "oac-with-edge-signing" } ``` ```js // Custom Lambda@Edge configuration { protection: { mode: "oac-with-edge-signing", edgeFunction: { memory: "256 MB", timeout: "10 seconds" } } } ``` ```js // Use existing Lambda@Edge function { protection: { mode: "oac-with-edge-signing", edgeFunction: { arn: "arn:aws:lambda:us-east-1:123456789012:function:my-signing-function:1" } } } ``` edgeFunction? **Type** `Object` arn? **Type** `Input<"arn:aws:lambda:${string}">` Custom Lambda@Edge function ARN to use for request signing. If provided, this function will be used instead of creating a new one. Must be a qualified ARN (with version) and deployed in us-east-1. memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"128 MB"` Memory size for the auto-created Lambda@Edge function. Only used when arn is not provided. timeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` Timeout for the auto-created Lambda@Edge function. Only used when arn is not provided. mode **Type** `"oac-with-edge-signing"` ### regions? **Type** `Input` **Default** The default region of the SST app Regions that the server function will be deployed to. 
By default, the server function is deployed to a single region: the default region of your SST app. :::note This does not use Lambda@Edge, it deploys multiple Lambda functions instead. ::: To deploy it to multiple regions, you can pass in a list of regions. And any requests made will be routed to the nearest region based on the user's location. ```js { regions: ["us-east-1", "eu-west-1"] } ``` ### router? **Type** `Object` - [`connectionAttempts?`](#router-connectionattempts) - [`connectionTimeout?`](#router-connectiontimeout) - [`domain?`](#router-domain) - [`instance`](#router-instance) - [`keepAliveTimeout?`](#router-keepalivetimeout) - [`path?`](#router-path) - [`readTimeout?`](#router-readtimeout) - [`rewrite?`](#router-rewrite) `Input` - [`regex`](#router-rewrite-regex) - [`to`](#router-rewrite-to) Serve your Next.js app through a `Router` instead of a standalone CloudFront distribution. By default, this component creates a new CloudFront distribution. But you might want to serve it through the distribution of your `Router` as: - A path like `/docs` - A subdomain like `docs.example.com` - Or a combined pattern like `dev.example.com/docs` To serve your Next.js app **from a path**, you'll need to configure the root domain in your `Router` component. ```ts title="sst.config.ts" {2} const router = new sst.aws.Router("Router", { domain: "example.com" }); ``` Now set the `router` and the `path`. ```ts {3,4} { router: { instance: router, path: "/docs" } } ``` You also need to set the [`basePath`](https://nextjs.org/docs/app/api-reference/config/next-config-js/basePath) in your `next.config.js`. :::caution If routing to a path, you need to set that as the base path in your Next.js app as well. ::: ```js title="next.config.js" {2} export default { basePath: "/docs" }; ``` To serve your Next.js app **from a subdomain**, you'll need to configure the domain in your `Router` component to match both the root and the subdomain. 
```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` Now set the `domain` in the `router` prop. ```ts {4} { router: { instance: router, domain: "docs.example.com" } } ``` Finally, to serve your Next.js app **from a combined pattern** like `dev.example.com/docs`, you'll need to configure the domain in your `Router` to match the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` And set the `domain` and the `path`. ```ts {4,5} { router: { instance: router, domain: "dev.example.com", path: "/docs" } } ``` Also, make sure to set this as the `basePath` in your `next.config.js`, like above. connectionAttempts? **Type** `Input` **Default** `3` The number of times that CloudFront attempts to connect to the origin. Must be between 1 and 3. ```js router: { instance: router, connectionAttempts: 1 } ``` connectionTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"10 seconds"` The number of seconds that CloudFront waits before timing out and closing the connection to the origin. Must be between 1 and 10 seconds. ```js router: { instance: router, connectionTimeout: "3 seconds" } ``` domain? **Type** `Input` Route requests matching a specific domain pattern. You can serve your resource from a subdomain. For example, if you want to make it available at `https://dev.example.com`, set the `Router` to match the domain or a wildcard. ```ts {2} title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "*.example.com" }); ``` Then set the domain pattern. ```ts {3} router: { instance: router, domain: "dev.example.com" } ``` While `dev.example.com` matches `*.example.com`, something like `docs.dev.example.com` will not. :::tip Nested wildcard domain patterns are not supported. 
::: You'll need to add `*.dev.example.com` as an alias. instance **Type** `Input<`[`Router`](/docs/component/aws/router)`>` The `Router` component to use for routing requests. Let's say you have a Router component. ```ts title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "example.com" }); ``` You can attach your Next.js app to it, instead of creating a standalone CloudFront distribution. ```ts router: { instance: router } ``` keepAliveTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The number of seconds that CloudFront should try to maintain the connection to the destination after receiving the last packet of the response. Must be between 1 and 60 seconds. ```js router: { instance: router, keepAliveTimeout: "10 seconds" } ``` path? **Type** `Input` **Default** `"/"` Route requests matching a specific path prefix. ```ts {3} router: { instance: router, path: "/docs" } ``` readTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The number of seconds that CloudFront waits for a response after routing a request to the destination. Must be between 1 and 60 seconds. When compared to the `connectionTimeout`, this is the total time for the request. ```js router: { instance: router, readTimeout: "60 seconds" } ``` rewrite? **Type** `Input` Rewrite the request path. If the route path is `/api/*` and a request comes in for `/api/users/profile`, the request path the destination sees is `/api/users/profile`. If you want to serve the route from the root, you can rewrite the request path to `/users/profile`. ```js router: { instance: router, path: "/api", rewrite: { regex: "^/api/(.*)$", to: "/$1" } } ``` regex **Type** `Input` The regex to match the request path. to **Type** `Input` The replacement for the matched path. ### server? 
**Type** `Object` - [`architecture?`](#server-architecture) - [`install?`](#server-install) - [`layers?`](#server-layers) - [`loader?`](#server-loader) - [`memory?`](#server-memory) - [`runtime?`](#server-runtime) - [`timeout?`](#server-timeout) **Default** `{architecture: "x86_64", memory: "1024 MB"}` Configure the Lambda function used for the server. architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The [architecture](https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html) of the server function. ```js { server: { architecture: "arm64" } } ``` install? **Type** `Input` Dependencies that need to be excluded from the server function package. Certain npm packages cannot be bundled using esbuild. This allows you to exclude them from the bundle. Instead they'll be moved into a `node_modules/` directory in the function package. :::tip If esbuild is giving you an error about a package, try adding it to the `install` list. ::: This will allow your functions to use these dependencies when deployed. They just won't be tree shaken. :::caution Packages listed here still need to be in your `package.json`. ::: Esbuild will ignore them while traversing the imports in your code. So these are the **package names as seen in the imports**. It also works on packages that are not directly imported by your code. ```js { server: { install: ["sharp"] } } ``` layers? **Type** `Input[]>` A list of Lambda layer ARNs to add to the server function. ```js { server: { layers: ["arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1"] } } ``` loader? **Type** `Input>` Configure additional esbuild loaders for other file extensions. This is useful when your code is importing non-JS files like `.png`, `.css`, etc. ```js { server: { loader: { ".png": "file" } } } ``` memory? 
**Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"1024 MB"` The amount of memory allocated to the server function. Takes values between 128 MB and 10240 MB in 1 MB increments. ```js { server: { memory: "2048 MB" } } ``` runtime? **Type** `Input<"nodejs18.x" | "nodejs20.x" | "nodejs22.x" | "nodejs24.x">` **Default** `"nodejs24.x"` The runtime environment for the server function. ```js { server: { runtime: "nodejs24.x" } } ``` timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The maximum amount of time the server function can run. While Lambda supports timeouts up to 900 seconds, your requests are served through AWS CloudFront. And it has a default limit of 60 seconds. :::tip If you need a timeout longer than 60 seconds, you'll need to request a limit increase. ::: You can increase this to 180 seconds for your account by contacting AWS Support and [requesting a limit increase](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). ```js { server: { timeout: "50 seconds" } } ``` If you need a timeout longer than what CloudFront supports, we recommend using a separate Lambda `Function` with the `url` enabled instead. ### transform? **Type** `Object` - [`assets?`](#transform-assets) - [`cdn?`](#transform-cdn) - [`imageOptimizer?`](#transform-imageoptimizer) - [`revalidationEventsSubscriber?`](#transform-revalidationeventssubscriber) - [`revalidationSeeder?`](#transform-revalidationseeder) - [`server?`](#transform-server) [Transform](/docs/components#transform) how this component creates its underlying resources. assets? **Type** [`BucketArgs`](/docs/component/aws/bucket#bucketargs)` | (args: `[`BucketArgs`](/docs/component/aws/bucket#bucketargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Bucket resource used for uploading the assets. cdn? 
**Type** [`CdnArgs`](/docs/component/aws/cdn#cdnargs)` | (args: `[`CdnArgs`](/docs/component/aws/cdn#cdnargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudFront CDN resource. imageOptimizer? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the image optimizer Function resource. revalidationEventsSubscriber? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the revalidation events subscriber Function resource used for ISR. revalidationSeeder? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the revalidation seeder Function resource used for ISR. server? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the server Function resource. ### vpc? **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`privateSubnets`](#vpc-privatesubnets) - [`securityGroups`](#vpc-securitygroups) Configure the server function to connect to private subnets in a virtual private cloud or VPC. This allows it to access private resources. Create a `Vpc` component. ```js title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` Or reference an existing VPC. 
```js title="sst.config.ts" const myVpc = sst.aws.Vpc.get("MyVpc", "vpc-12345678901234567"); ``` And pass it in. ```js { vpc: myVpc } ``` privateSubnets **Type** `Input[]>` A list of VPC subnet IDs. securityGroups **Type** `Input[]>` A list of VPC security group IDs. ### warm? **Type** `Input` **Default** `0` The number of instances of the [server function](#nodes-server) to keep warm. This is useful for cases where you are experiencing long cold starts. The default is to not keep any instances warm. This works by starting a serverless cron job to make _n_ concurrent requests to the server function every few minutes. Where _n_ is the number of instances to keep warm. ## Properties ### nodes **Type** `Object` - [`assets`](#nodes-assets) - [`cdn`](#nodes-cdn) - [`revalidationFunction`](#nodes-revalidationfunction) - [`revalidationQueue`](#nodes-revalidationqueue) - [`revalidationTable`](#nodes-revalidationtable) - [`server`](#nodes-server) The underlying [resources](/docs/components/#nodes) this component creates. assets **Type** `undefined | `[`Bucket`](/docs/component/aws/bucket) The Amazon S3 Bucket that stores the assets. cdn **Type** `undefined | `[`Cdn`](/docs/component/aws/cdn) The Amazon CloudFront CDN that serves the site. revalidationFunction **Type** `undefined | Output` The Lambda function that processes the ISR revalidation. revalidationQueue **Type** `undefined | Output` The Amazon SQS queue that triggers the ISR revalidator. revalidationTable **Type** `undefined | Output` The Amazon DynamoDB table that stores the ISR revalidation data. server **Type** `undefined | Output<`[`Function`](/docs/component/aws/function)`>` The AWS Lambda server function that renders the site. ### url **Type** `Output` The URL of the Next.js app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. 
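For example, another function that links to this app can read its URL in a type-safe way. A minimal sketch, assuming the component is named `MyWeb` in your `sst.config.ts` and is passed to that function's `link`:

```ts
import { Resource } from "sst";

// Hypothetical handler in a function created with `link: [web]`,
// where `web` is the sst.aws.Nextjs component named "MyWeb".
export const handler = async () => ({
  statusCode: 200,
  // Resolves to the custom domain if set, else the CloudFront URL
  body: Resource.MyWeb.url,
});
```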
--- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `string` The URL of the Next.js app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. --- ## Nuxt Reference doc for the `sst.aws.Nuxt` component. https://sst.dev/docs/component/aws/nuxt The `Nuxt` component lets you deploy a [Nuxt](https://nuxt.com) app to AWS. #### Minimal example Deploy a Nuxt app that's in the project root. ```js title="sst.config.ts" new sst.aws.Nuxt("MyWeb"); ``` #### Change the path Deploys the Nuxt app in the `my-nuxt-app/` directory. ```js {2} title="sst.config.ts" new sst.aws.Nuxt("MyWeb", { path: "my-nuxt-app/" }); ``` #### Add a custom domain Set a custom domain for your Nuxt app. ```js {2} title="sst.config.ts" new sst.aws.Nuxt("MyWeb", { domain: "my-app.com" }); ``` #### Redirect www to apex domain Redirect `www.my-app.com` to `my-app.com`. ```js {4} title="sst.config.ts" new sst.aws.Nuxt("MyWeb", { domain: { name: "my-app.com", redirects: ["www.my-app.com"] } }); ``` #### Link resources [Link resources](/docs/linking/) to your Nuxt app. This will grant permissions to the resources and allow you to access it in your app. ```ts {4} title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.Nuxt("MyWeb", { link: [bucket] }); ``` You can use the [SDK](/docs/reference/sdk/) to access the linked resources in your Nuxt app. ```ts title="server/api/index.ts" console.log(Resource.MyBucket.name); ``` --- ## Constructor ```ts new Nuxt(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`NuxtArgs`](#nuxtargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## NuxtArgs ### assets? 
**Type** `Input` - [`fileOptions?`](#assets-fileoptions) `Input` - [`cacheControl?`](#assets-fileoptions-cachecontrol) - [`contentType?`](#assets-fileoptions-contenttype) - [`files`](#assets-fileoptions-files) - [`ignore?`](#assets-fileoptions-ignore) - [`nonVersionedFilesCacheHeader?`](#assets-nonversionedfilescacheheader) - [`purge?`](#assets-purge) - [`textEncoding?`](#assets-textencoding) - [`versionedFilesCacheHeader?`](#assets-versionedfilescacheheader) Configure how the Nuxt app assets are uploaded to S3. By default, this is set to the following. Read more about these options below. ```js { assets: { textEncoding: "utf-8", versionedFilesCacheHeader: "public,max-age=31536000,immutable", nonVersionedFilesCacheHeader: "public,max-age=0,s-maxage=86400,stale-while-revalidate=8640" } } ``` fileOptions? **Type** `Input` Specify the `Content-Type` and `Cache-Control` headers for specific files. This allows you to override the default behavior for specific files using glob patterns. Apply `Cache-Control` and `Content-Type` to all zip files. ```js { assets: { fileOptions: [ { files: "**/*.zip", contentType: "application/zip", cacheControl: "private,no-cache,no-store,must-revalidate" } ] } } ``` Apply `Cache-Control` to all CSS and JS files except for CSS files with `index-` prefix in the `main/` directory. ```js { assets: { fileOptions: [ { files: ["**/*.css", "**/*.js"], ignore: "main/index-*.css", cacheControl: "private,no-cache,no-store,must-revalidate" } ] } } ``` cacheControl? **Type** `string` The `Cache-Control` header to apply to the matched files. contentType? **Type** `string` The `Content-Type` header to apply to the matched files. files **Type** `string | string[]` A glob pattern or array of glob patterns of files to apply these options to. ignore? **Type** `string | string[]` A glob pattern or array of glob patterns of files to exclude from the ones matched by the `files` glob pattern. nonVersionedFilesCacheHeader? 
**Type** `Input` **Default** `"public,max-age=0,s-maxage=86400,stale-while-revalidate=8640"` The `Cache-Control` header used for non-versioned files, like `index.html`. This is used by both CloudFront and the browser cache. The default is set to not cache on browsers, and cache for 1 day on CloudFront. ```js { assets: { nonVersionedFilesCacheHeader: "public,max-age=0,no-cache" } } ``` purge? **Type** `Input` **Default** `false` Configure if files from previous deployments should be purged from the bucket. ```js { assets: { purge: false } } ``` textEncoding? **Type** `Input<"utf-8" | "iso-8859-1" | "windows-1252" | "ascii" | "none">` **Default** `"utf-8"` Character encoding for text-based assets, like HTML, CSS, JS. This is used to set the `Content-Type` header when these files are served out. If set to `"none"`, then no charset will be returned in the header. ```js { assets: { textEncoding: "iso-8859-1" } } ``` versionedFilesCacheHeader? **Type** `Input` **Default** `"public,max-age=31536000,immutable"` The `Cache-Control` header used for versioned files, like `main-1234.css`. This is used by both CloudFront and the browser cache. The default `max-age` is set to 1 year. ```js { assets: { versionedFilesCacheHeader: "public,max-age=31536000,immutable" } } ``` ### buildCommand? **Type** `Input` **Default** `"npm run build"` The command used internally to build your Nuxt app. Use this if you want to use a different build command. ```js { buildCommand: "yarn build" } ``` ### cachePolicy? **Type** `Input` **Default** A new cache policy is created Configure the Nuxt app to use an existing CloudFront cache policy. :::note CloudFront has a limit of 20 cache policies per account, though you can request a limit increase. ::: By default, a new cache policy is created. This prop allows you to reuse an existing policy instead of creating a new one. ```js { cachePolicy: "658327ea-f89d-4fab-a63d-7e88639e58f6" } ``` ### dev?
**Type** `false | Object` - [`autostart?`](#dev-autostart) - [`command?`](#dev-command) - [`directory?`](#dev-directory) - [`title?`](#dev-title) - [`url?`](#dev-url) Configure how this component works in `sst dev`. :::note In `sst dev` your Nuxt app is run in dev mode; it's not deployed. ::: Instead of deploying your Nuxt app, this starts it in dev mode. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev). To disable dev mode, pass in `false`. autostart? **Type** `Input` **Default** `true` Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later. command? **Type** `Input` **Default** `"npm run dev"` The command that `sst dev` runs to start this in dev mode. directory? **Type** `Input` **Default** Uses the `path` Change the directory from where the `command` is run. title? **Type** `Input` The title of the tab in the multiplexer. url? **Type** `Input` **Default** `"http://url-unavailable-in-dev.mode"` The `url` when this is running in dev mode. Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`. ### domain? **Type** `Input` - [`aliases?`](#domain-aliases) - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name`](#domain-name) - [`redirects?`](#domain-redirects) Set a custom domain for your Nuxt app. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers. ::: By default this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. 
```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Specify a `www.` version of the custom domain. ```js { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` aliases? **Type** `Input` Alias domains that should be used. Unlike the `redirects` option, this keeps your visitors on the alias domain. So if your users visit `app2.domain.com`, they will stay on `app2.domain.com` in their browser. ```js {4} { domain: { name: "app1.domain.com", aliases: ["app2.domain.com"] } } ``` cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. The certificate will be created in the `us-east-1` region as required by AWS CloudFront. If you are creating your own certificate, you must also create it in `us-east-1`. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the CloudFront distribution URL. ```js { domain: { name: "domain.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to AWS Route 53. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters.
For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare, needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel, needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` redirects? **Type** `Input` Alternate domains to be used. Visitors to the alternate domains will be redirected to the main `name`. :::note Unlike the `aliases` option, this will redirect visitors back to the main `name`. ::: Use this to create a `www.` version of your domain and redirect visitors to the apex domain. ```js {4} { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` ### edge? **Type** `Input` - [`viewerRequest?`](#edge-viewerrequest) `Input` - [`injection`](#edge-viewerrequest-injection) - [`kvStore?`](#edge-viewerrequest-kvstore) - [`viewerResponse?`](#edge-viewerresponse) `Input` - [`injection`](#edge-viewerresponse-injection) - [`kvStore?`](#edge-viewerresponse-kvstore) Configure CloudFront Functions to customize the behavior of HTTP requests and responses at the edge. viewerRequest? **Type** `Input` Configure the viewer request function. The viewer request function can be used to modify incoming requests before they reach your origin server. For example, you can redirect users, rewrite URLs, or add headers. injection **Type** `Input` The code to inject into the viewer request function. 
By default, a viewer request function is created to: - Disable CloudFront default URL if custom domain is set - Add the `x-forwarded-host` header - Route assets requests to S3 (static files stored in the bucket) - Route server requests to server functions (dynamic rendering) The function manages routing by: 1. First checking if the requested path exists in S3 (with variations like adding index.html) 2. Serving a custom 404 page from S3 if configured and the path isn't found 3. Routing image optimization requests to the image optimizer function 4. Routing all other requests to the nearest server function The given code will be injected at the beginning of this function. ```js async function handler(event) { // User injected code // Default behavior code return event.request; } ``` To add a custom header to all requests. ```js { edge: { viewerRequest: { injection: `event.request.headers["x-foo"] = { value: "bar" };` } } } ``` You can use this to add basic auth, [check out an example](/docs/examples/#aws-nextjs-basic-auth). kvStore? **Type** `Input` The KV store to associate with the viewer request function. ```js { edge: { viewerRequest: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` viewerResponse? **Type** `Input` Configure the viewer response function. The viewer response function can be used to modify outgoing responses before they are sent to the client. For example, you can add security headers or change the response status code. By default, no viewer response function is set. A new function will be created with the provided code. injection **Type** `Input` The code to inject into the viewer response function. ```js async function handler(event) { // User injected code return event.response; } ``` To add a custom header to all responses. ```js { edge: { viewerResponse: { injection: `event.response.headers["x-foo"] = { value: "bar" };` } } } ``` kvStore? 
**Type** `Input` The KV store to associate with the viewer response function. ```js { edge: { viewerResponse: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` ### environment? **Type** `Input>>` Set [environment variables](https://cli.vuejs.org/guide/mode-and-env.html) in your Nuxt app. These are made available: 1. In `nuxt build`, they are loaded into `process.env`. 2. Locally while running through `sst dev`. :::tip You can also `link` resources to your Nuxt app and access them in a type-safe way with the [SDK](/docs/reference/sdk/). We recommend linking since it's more secure. ::: Recall that in Vue, you need to prefix your environment variables with `VUE_APP_` to access these in the browser. [Read more here](https://cli.vuejs.org/guide/mode-and-env.html#using-env-variables-in-client-side-code). ```js { environment: { API_URL: api.url, // Accessible in the browser VUE_APP_STRIPE_PUBLISHABLE_KEY: "pk_test_123" } } ``` ### invalidation? **Type** `Input` - [`paths?`](#invalidation-paths) - [`wait?`](#invalidation-wait) **Default** `{paths: "all", wait: false}` Configure how the CloudFront cache invalidations are handled. This is run after your Nuxt app has been deployed. :::tip You get 1000 free invalidations per month. After that you pay $0.005 per invalidation path. [Read more here](https://aws.amazon.com/cloudfront/pricing/). ::: Wait for all paths to be invalidated. ```js { invalidation: { paths: "all", wait: true } } ``` paths? **Type** `Input` **Default** `"all"` The paths to invalidate. You can either pass in an array of glob patterns to invalidate specific files, or use one of these built-in options: - `all`: All files will be invalidated when any file changes - `versioned`: Only versioned files will be invalidated when versioned files change :::note Each glob pattern counts as a single invalidation. Invalidating `/*`, on the other hand, still counts as just one invalidation.
::: Invalidate the `index.html` and all files under the `products/` route. ```js { invalidation: { paths: ["/index.html", "/products/*"] } } ``` This counts as two invalidations. wait? **Type** `Input` **Default** `false` Configure if `sst deploy` should wait for the CloudFront cache invalidation to finish. :::tip For non-prod environments it might make sense to pass in `false`. ::: Waiting for this process to finish ensures that new content will be available after the deploy finishes. However, this process can sometimes take more than 5 mins. ```js { invalidation: { wait: true } } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your Nuxt app. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access it in your site using the [SDK](/docs/reference/sdk/). Takes a list of resources to link to the function. ```js { link: [bucket, stripeKey] } ``` ### path? **Type** `string` **Default** `"."` Path to the directory where your Nuxt app is located. This path is relative to your `sst.config.ts`. By default it assumes your Nuxt app is in the root of your SST app. If your Nuxt app is in a package in your monorepo. ```js { path: "packages/web" } ``` ### permissions? **Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that the [server function](#nodes-server) in your Nuxt app needs to access. These permissions are used to create the function's IAM role. :::tip If you `link` the function to a resource, the permissions to access it are automatically added. ::: Allow reading and writing to an S3 bucket called `my-bucket`. 
```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Perform all actions on an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:*"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Grant permissions to access all resources. ```js { permissions: [ { actions: ["*"], resources: ["*"] }, ] } ``` actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. ```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```ts { effect: "deny" } ``` resources **Type** `Input[]>` The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html).
```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### protection? **Type** `Input<"none" | "oac" | "oac-with-edge-signing" | Object>` - [`edgeFunction?`](#protection-edgefunction) `Object` - [`arn?`](#protection-edgefunction-arn) - [`memory?`](#protection-edgefunction-memory) - [`timeout?`](#protection-edgefunction-timeout) - [`mode`](#protection-mode) **Default** `"none"` The available options are: - `"none"`: Lambda URLs are publicly accessible. - `"oac"`: Lambda URLs protected by CloudFront Origin Access Control. Requires manual `x-amz-content-sha256` header for POST requests. Use when you control all POST requests. - `"oac-with-edge-signing"`: Full protection with automatic header signing via Lambda@Edge. Works with external webhooks and callbacks. Higher cost and latency but works out of the box. :::note When using `"oac-with-edge-signing"`, request bodies are limited to 1MB due to Lambda@Edge payload limits. For file uploads larger than 1MB, consider using presigned S3 URLs or the `"oac"` mode with manual header signing. ::: :::note When removing a stage that uses `"oac-with-edge-signing"`, deletion may take 5-10 minutes while AWS removes the Lambda@Edge replicated functions from all edge locations. ::: Configure Lambda function protection through CloudFront. ```js // No protection (default) { protection: "none" } ``` ```js // OAC protection, manual header signing required { protection: "oac" } ``` ```js // Full protection with automatic Lambda@Edge { protection: "oac-with-edge-signing" } ``` ```js // Custom Lambda@Edge configuration { protection: { mode: "oac-with-edge-signing", edgeFunction: { memory: "256 MB", timeout: "10 seconds" } } } ``` ```js // Use existing Lambda@Edge function { protection: { mode: "oac-with-edge-signing", edgeFunction: { arn: "arn:aws:lambda:us-east-1:123456789012:function:my-signing-function:1" } } } ``` edgeFunction? **Type** `Object` arn? 
**Type** `Input<"arn:aws:lambda:${string}">` Custom Lambda@Edge function ARN to use for request signing. If provided, this function will be used instead of creating a new one. Must be a qualified ARN (with version) and deployed in us-east-1. memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"128 MB"` Memory size for the auto-created Lambda@Edge function. Only used when arn is not provided. timeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` Timeout for the auto-created Lambda@Edge function. Only used when arn is not provided. mode **Type** `"oac-with-edge-signing"` ### regions? **Type** `Input` **Default** The default region of the SST app Regions that the server function will be deployed to. By default, the server function is deployed to a single region, this is the default region of your SST app. :::note This does not use Lambda@Edge, it deploys multiple Lambda functions instead. ::: To deploy it to multiple regions, you can pass in a list of regions. And any requests made will be routed to the nearest region based on the user's location. ```js { regions: ["us-east-1", "eu-west-1"] } ``` ### router? **Type** `Object` - [`connectionAttempts?`](#router-connectionattempts) - [`connectionTimeout?`](#router-connectiontimeout) - [`domain?`](#router-domain) - [`instance`](#router-instance) - [`keepAliveTimeout?`](#router-keepalivetimeout) - [`path?`](#router-path) - [`readTimeout?`](#router-readtimeout) - [`rewrite?`](#router-rewrite) `Input` - [`regex`](#router-rewrite-regex) - [`to`](#router-rewrite-to) Serve your Nuxt app through a `Router` instead of a standalone CloudFront distribution. By default, this component creates a new CloudFront distribution. 
But you might want to serve it through the distribution of your `Router` as: - A path like `/docs` - A subdomain like `docs.example.com` - A combined pattern like `dev.example.com/docs` To serve your Nuxt app **from a path**, you'll need to configure the root domain in your `Router` component. ```ts title="sst.config.ts" {2} const router = new sst.aws.Router("Router", { domain: "example.com" }); ``` Now set the `router` and the `path`. ```ts {3,4} { router: { instance: router, path: "/docs" } } ``` You also need to set the [`baseURL`](https://nuxt.com/docs/api/nuxt-config#baseurl) in your `nuxt.config.ts`. :::caution If routing to a path, you need to set that as the base path in your Nuxt app as well. ::: ```js title="nuxt.config.ts" {3} export default defineNuxtConfig({ app: { baseURL: "/docs" } }); ``` To serve your Nuxt app **from a subdomain**, you'll need to configure the domain in your `Router` component to match both the root and the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` Now set the `domain` in the `router` prop. ```ts {4} { router: { instance: router, domain: "docs.example.com" } } ``` Finally, to serve your Nuxt app **from a combined pattern** like `dev.example.com/docs`, you'll need to configure the domain in your `Router` to match the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` And set the `domain` and the `path`. ```ts {4,5} { router: { instance: router, domain: "dev.example.com", path: "/docs" } } ``` Also, make sure to set this as the `baseURL` in your `nuxt.config.ts`, like above. connectionAttempts? **Type** `Input` **Default** `3` The number of times that CloudFront attempts to connect to the origin. Must be between 1 and 3. ```js router: { instance: router, connectionAttempts: 1 } ``` connectionTimeout?
**Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"10 seconds"` The number of seconds that CloudFront waits before timing out and closing the connection to the origin. Must be between 1 and 10 seconds. ```js router: { instance: router, connectionTimeout: "3 seconds" } ``` domain? **Type** `Input` Route requests matching a specific domain pattern. You can serve your resource from a subdomain. For example, if you want to make it available at `https://dev.example.com`, set the `Router` to match the domain or a wildcard. ```ts {2} title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "*.example.com" }); ``` Then set the domain pattern. ```ts {3} router: { instance: router, domain: "dev.example.com" } ``` While `dev.example.com` matches `*.example.com`, something like `docs.dev.example.com` will not. :::tip Nested wildcard domain patterns are not supported. ::: You'll need to add `*.dev.example.com` as an alias. instance **Type** `Input<`[`Router`](/docs/component/aws/router)`>` The `Router` component to use for routing requests. Let's say you have a Router component. ```ts title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "example.com" }); ``` You can then attach your app to it, instead of creating a standalone CloudFront distribution. ```ts router: { instance: router } ``` keepAliveTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The number of seconds that CloudFront should try to maintain the connection to the destination after receiving the last packet of the response. Must be between 1 and 60 seconds. ```js router: { instance: router, keepAliveTimeout: "10 seconds" } ``` path? **Type** `Input` **Default** `"/"` Route requests matching a specific path prefix. ```ts {3} router: { instance: router, path: "/docs" } ``` readTimeout?
**Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The number of seconds that CloudFront waits for a response after routing a request to the destination. Must be between 1 and 60 seconds. When compared to the `connectionTimeout`, this is the total time for the request. ```js router: { instance: router, readTimeout: "60 seconds" } ``` rewrite? **Type** `Input` Rewrite the request path. If the route path is `/api/*` and a request comes in for `/api/users/profile`, the request path the destination sees is `/api/users/profile`. If you want to serve the route from the root, you can rewrite the request path to `/users/profile`. ```js router: { instance: router, path: "/api", rewrite: { regex: "^/api/(.*)$", to: "/$1" } } ``` regex **Type** `Input` The regex to match the request path. to **Type** `Input` The replacement for the matched path. ### server? **Type** `Object` - [`architecture?`](#server-architecture) - [`install?`](#server-install) - [`layers?`](#server-layers) - [`loader?`](#server-loader) - [`memory?`](#server-memory) - [`runtime?`](#server-runtime) - [`timeout?`](#server-timeout) **Default** `{architecture: "x86_64", memory: "1024 MB"}` Configure the Lambda function used for server. architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The [architecture](https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html) of the server function. ```js { server: { architecture: "arm64" } } ``` install? **Type** `Input` Dependencies that need to be excluded from the server function package. Certain npm packages cannot be bundled using esbuild. This allows you to exclude them from the bundle. Instead they'll be moved into a `node_modules/` directory in the function package. :::tip If esbuild is giving you an error about a package, try adding it to the `install` list. ::: This will allow your functions to be able to use these dependencies when deployed. They just won't be tree shaken. 
However, you still need to have them in your `package.json`. :::caution Packages listed here still need to be in your `package.json`. ::: Esbuild will ignore them while traversing the imports in your code. So these are the **package names as seen in the imports**. It also works on packages that are not directly imported by your code. ```js { server: { install: ["sharp"] } } ``` layers? **Type** `Input[]>` A list of Lambda layer ARNs to add to the server function. ```js { server: { layers: ["arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1"] } } ``` loader? **Type** `Input>` Configure additional esbuild loaders for other file extensions. This is useful when your code is importing non-JS files like `.png`, `.css`, etc. ```js { server: { loader: { ".png": "file" } } } ``` memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"1024 MB"` The amount of memory allocated to the server function. Takes values between 128 MB and 10240 MB in 1 MB increments. ```js { server: { memory: "2048 MB" } } ``` runtime? **Type** `Input<"nodejs18.x" | "nodejs20.x" | "nodejs22.x" | "nodejs24.x">` **Default** `"nodejs24.x"` The runtime environment for the server function. ```js { server: { runtime: "nodejs24.x" } } ``` timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The maximum amount of time the server function can run. While Lambda supports timeouts up to 900 seconds, your requests are served through AWS CloudFront, which has a default limit of 60 seconds. :::tip If you need a timeout longer than 60 seconds, you'll need to request a limit increase. ::: You can increase this to 180 seconds for your account by contacting AWS Support and [requesting a limit increase](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase).
```js { server: { timeout: "50 seconds" } } ``` If you need a timeout longer than what CloudFront supports, we recommend using a separate Lambda `Function` with the `url` enabled instead. ### transform? **Type** `Object` - [`assets?`](#transform-assets) - [`cdn?`](#transform-cdn) - [`imageOptimizer?`](#transform-imageoptimizer) - [`server?`](#transform-server) [Transform](/docs/components#transform) how this component creates its underlying resources. assets? **Type** [`BucketArgs`](/docs/component/aws/bucket#bucketargs)` | (args: `[`BucketArgs`](/docs/component/aws/bucket#bucketargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Bucket resource used for uploading the assets. cdn? **Type** [`CdnArgs`](/docs/component/aws/cdn#cdnargs)` | (args: `[`CdnArgs`](/docs/component/aws/cdn#cdnargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudFront CDN resource. imageOptimizer? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the image optimizer Function resource. server? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the server Function resource. ### vpc? **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`privateSubnets`](#vpc-privatesubnets) - [`securityGroups`](#vpc-securitygroups) Configure the server function to connect to private subnets in a virtual private cloud or VPC. This allows it to access private resources. Create a `Vpc` component. 
```js title="sst.config.ts"
const myVpc = new sst.aws.Vpc("MyVpc");
```

Or reference an existing VPC.

```js title="sst.config.ts"
const myVpc = sst.aws.Vpc.get("MyVpc", "vpc-12345678901234567");
```

And pass it in.

```js
{
  vpc: myVpc
}
```

privateSubnets

**Type** `Input<Input<string>[]>`

A list of VPC subnet IDs.

securityGroups

**Type** `Input<Input<string>[]>`

A list of VPC security group IDs.

### warm?

**Type** `Input<number>`

**Default** `0`

The number of instances of the [server function](#nodes-server) to keep warm. This is useful for cases where you are experiencing long cold starts. The default is to not keep any instances warm. This works by starting a serverless cron job to make _n_ concurrent requests to the server function every few minutes. Where _n_ is the number of instances to keep warm.

## Properties

### nodes

**Type** `Object`
- [`assets`](#nodes-assets)
- [`cdn`](#nodes-cdn)
- [`server`](#nodes-server)

The underlying [resources](/docs/components/#nodes) this component creates.

assets

**Type** `undefined | `[`Bucket`](/docs/component/aws/bucket)

The Amazon S3 Bucket that stores the assets.

cdn

**Type** `undefined | `[`Cdn`](/docs/component/aws/cdn)

The Amazon CloudFront CDN that serves the site.

server

**Type** `undefined | Output<`[`Function`](/docs/component/aws/function)`>`

The AWS Lambda server function that renders the site.

### url

**Type** `Output<string>`

The URL of the Nuxt app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL.

## SDK

Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure.

---

### Links

This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links).

- `url` `string` The URL of the Nuxt app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL.

---

## OpenSearch

Reference doc for the `sst.aws.OpenSearch` component.
https://sst.dev/docs/component/aws/open-search

The `OpenSearch` component lets you add a deployed instance of OpenSearch, or an OpenSearch _domain_, to your app using [Amazon OpenSearch Service](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html).

#### Create the instance

```js title="sst.config.ts"
const search = new sst.aws.OpenSearch("MySearch");
```

#### Link to a resource

You can link your instance to other resources, like a function or your Next.js app.

```ts title="sst.config.ts"
new sst.aws.Nextjs("MyWeb", {
  link: [search]
});
```

Once linked, you can connect to it from your function code.

```ts title="app/page.tsx" {1-2,4-10}
import { Resource } from "sst";
import { Client } from "@opensearch-project/opensearch";

const client = new Client({
  node: Resource.MySearch.url,
  auth: {
    username: Resource.MySearch.username,
    password: Resource.MySearch.password
  }
});

// Add a document
await client.index({
  index: "my-index",
  body: { message: "Hello world!" }
});

// Search for documents
const result = await client.search({
  index: "my-index",
  body: {
    query: {
      match: { message: "world" }
    }
  }
});
```

#### Running locally

By default, your OpenSearch domain is deployed in `sst dev`. But let's say you are running OpenSearch locally.

```bash
docker run \
  --rm \
  -p 9200:9200 \
  -v $(pwd)/.sst/storage/opensearch:/usr/share/opensearch/data \
  -e discovery.type=single-node \
  -e plugins.security.disabled=true \
  -e OPENSEARCH_INITIAL_ADMIN_PASSWORD=^Passw0rd^ \
  opensearchproject/opensearch:2.17.0
```

You can connect to it in `sst dev` by configuring the `dev` prop.

```ts title="sst.config.ts" {3-5}
const opensearch = new sst.aws.OpenSearch("MyOpenSearch", {
  dev: {
    url: "http://localhost:9200",
    username: "admin",
    password: "^Passw0rd^"
  }
});
```

This will skip deploying an OpenSearch domain and link to the locally running OpenSearch process instead.

---

### Cost

By default this component uses a _Single-AZ Deployment_, _On-Demand Instances_ of a `t3.small.search` at $0.036 per hour.
And 10GB of _General Purpose gp3 Storage_ at $0.122 per GB per month. That works out to $0.036 x 24 x 30 + $0.122 x 10 or **$27 per month**. Adjust this for the `instance` type and the `storage` you are using. The above are rough estimates for _us-east-1_, check out the [OpenSearch Service pricing](https://aws.amazon.com/opensearch-service/pricing/) for more details. --- ## Constructor ```ts new OpenSearch(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`OpenSearchArgs`](#opensearchargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## OpenSearchArgs ### dev? **Type** `Object` - [`password?`](#dev-password) - [`url?`](#dev-url) - [`username?`](#dev-username) Configure how this component works in `sst dev`. By default, your OpenSearch domain is deployed in `sst dev`. But if you want to instead connect to a locally running OpenSearch, you can configure the `dev` prop. :::note By default, this creates a new OpenSearch domain even in `sst dev`. ::: This will skip deploying an OpenSearch domain and link to the locally running OpenSearch process instead. Setting the `dev` prop also means that any linked resources will connect to the right instance both in `sst dev` and `sst deploy`. ```ts { dev: { username: "admin", password: "Passw0rd!", url: "http://localhost:9200" } } ``` password? **Type** `Input` **Default** Inherit from the top-level [`password`](#password). The password of the local OpenSearch to connect to when running in dev. url? **Type** `Input` **Default** `"http://localhost:9200"` The URL of the local OpenSearch to connect to when running in dev. username? **Type** `Input` **Default** Inherit from the top-level [`username`](#username). The username of the local OpenSearch to connect to when running in dev. ### instance? **Type** `Input` **Default** `"t3.small"` The type of instance to use for the domain. 
Check out the [supported instance types](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/supported-instance-types.html). :::caution Changing the instance type will cause the domain to restart on the next `sst deploy`, causing downtime. Learn more about [upgrading databases](/docs/upgrade-aws-databases/). ::: ```js { instance: "m6g.large" } ``` ### password? **Type** `Input` **Default** A random password is generated. The password of the master user. ```js { password: "^Passw0rd^" } ``` Use [Secrets](/docs/component/secret) to manage the password. ```js { password: new sst.Secret("MyDomainPassword").value } ``` ### storage? **Type** `Input<"$\{number\} GB" | "$\{number\} TB">` **Default** `"10 GB"` The storage limit for the domain. ```js { storage: "100 GB" } ``` ### transform? **Type** `Object` - [`domain?`](#transform-domain) - [`policy?`](#transform-policy) [Transform](/docs/components#transform) how this component creates its underlying resources. domain? **Type** [`DomainArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/opensearch/domain/#inputs)` | (args: `[`DomainArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/opensearch/domain/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the OpenSearch domain. policy? **Type** [`DomainPolicyArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/opensearch/domainpolicy/#inputs)` | (args: `[`DomainPolicyArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/opensearch/domainpolicy/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the OpenSearch domain policy. ### username? **Type** `Input` **Default** `"admin"` The username of the master user. :::danger Changing the username will cause the domain to be destroyed and recreated. ::: ```js { username: "admin" } ``` ### version? 
**Type** `Input` **Default** `"OpenSearch_2.17"` The OpenSearch engine version. Check out the [available versions](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html#choosing-version). :::caution Changing the version will cause the domain to restart on the next `sst deploy`, causing downtime. Learn more about [upgrading databases](/docs/upgrade-aws-databases/). ::: ```js { version: "OpenSearch_2.5" } ``` ## Properties ### id **Type** `Output` The ID of the OpenSearch component. ### nodes **Type** `Object` - [`domain`](#nodes-domain) domain **Type** `undefined | `[`Domain`](https://www.pulumi.com/registry/packages/aws/api-docs/opensearch/domain/) ### password **Type** `Output` The password of the master user. ### url **Type** `Output` The endpoint of the domain. ### username **Type** `Output` The username of the master user. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `password` `string` The password of the master user. - `url` `string` The endpoint of the domain. - `username` `string` The username of the master user. ## Methods ### static get ```ts OpenSearch.get(name, id, opts?) ``` #### Parameters - `name` `string` The name of the component. - `id` `Input` The ID of the existing OpenSearch component. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`OpenSearch`](.) Reference an existing OpenSearch domain with the given name. This is useful when you create a domain in one stage and want to share it in another. It avoids having to create a new domain in the other stage. :::tip You can use the `static get` method to share OpenSearch domains across stages. ::: Imagine you create a domain in the `dev` stage. And in your personal stage `frank`, instead of creating a new domain, you want to share the same domain from `dev`. 
```ts title="sst.config.ts"
const search = $app.stage === "frank"
  ? sst.aws.OpenSearch.get("MyOpenSearch", "app-dev-myopensearch-efsmkrbt")
  : new sst.aws.OpenSearch("MyOpenSearch");
```

Here `app-dev-myopensearch-efsmkrbt` is the ID of the OpenSearch component created in the `dev` stage. You can find this by outputting the ID in the `dev` stage.

```ts title="sst.config.ts"
return {
  id: search.id
};
```

---

## OpenControl

Reference doc for the `sst.aws.OpenControl` component.

https://sst.dev/docs/component/aws/opencontrol

:::caution
This component has been deprecated and should not be used for new projects.
:::

The `OpenControl` component lets you deploy your [OpenControl](https://opencontrol.ai) server to [AWS Lambda](https://aws.amazon.com/lambda/).

#### Create an OpenControl server

```ts title="sst.config.ts"
const server = new sst.aws.OpenControl("MyServer", {
  server: "src/server.handler"
});
```

#### Link your AI API keys

```ts title="sst.config.ts" {6}
const anthropicKey = new sst.Secret("AnthropicKey");

const server = new sst.aws.OpenControl("MyServer", {
  server: {
    handler: "src/server.handler",
    link: [anthropicKey]
  }
});
```

#### Link your resources

If your tools need access to specific resources, you can link them to the OpenControl server.

```ts title="sst.config.ts" {6}
const bucket = new sst.aws.Bucket("MyBucket");

new sst.aws.OpenControl("MyServer", {
  server: {
    handler: "src/server.handler",
    link: [bucket]
  }
});
```

#### Give AWS permissions

If you are using the AWS tool within OpenControl, you will need to give your OpenControl server permissions to access your AWS account.

```ts title="sst.config.ts" {4-6}
new sst.aws.OpenControl("OpenControl", {
  server: {
    handler: "src/server.handler",
    policies: $dev
      ? ["arn:aws:iam::aws:policy/AdministratorAccess"]
      : ["arn:aws:iam::aws:policy/ReadOnlyAccess"]
  }
});
```

Here we are giving it admin access in dev but read-only access in prod.
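Once deployed, you'll need the server's URL and password to point a client at it. As a minimal sketch, you could return them as outputs from your `run` function, using the `server` from the examples above:

```ts title="sst.config.ts"
// Sketch: expose the OpenControl server's endpoint and password
// as stack outputs, so you can configure a client against it.
return {
  url: server.url,
  password: server.password
};
```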
#### Define your server

Your `server` function might look like this.

```ts title="src/server.ts"
import { create } from "opencontrol";
import { tool } from "opencontrol/tool";
import { handle } from "hono/aws-lambda";
import { createAnthropic } from "@ai-sdk/anthropic";
import { Resource } from "sst";

const myTool = tool({
  name: "my_tool",
  description: "Get the most popular greeting",
  async run() {
    return "Hello, world!";
  }
});

const app = create({
  model: createAnthropic({
    apiKey: Resource.AnthropicKey.value,
  })("claude-3-7-sonnet-20250219"),
  tools: [myTool],
});

export const handler = handle(app);
```

Learn more in the [OpenControl docs](https://opencontrol.ai) on how to configure the `server` function.

---

## Constructor

```ts
new OpenControl(name, args, opts?)
```

#### Parameters

- `name` `string`
- `args` [`OpenControlArgs`](#opencontrolargs)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## OpenControlArgs

### server

**Type** `Input<string | `[`FunctionArgs`](/docs/component/aws/function#functionargs)`>`

The function that's running your OpenControl server.

```js
{
  server: "src/server.handler"
}
```

You can also pass in the full `FunctionArgs`.

```js
{
  server: {
    handler: "src/server.handler",
    link: [table]
  }
}
```

Since the `server` function is a Hono app, you want to export it with the Lambda adapter.

```ts title="src/server.ts"
import { handle } from "hono/aws-lambda";
import { create } from "opencontrol";

const app = create({
  // ...
});

export const handler = handle(app);
```

Learn more in the [OpenControl docs](https://opencontrol.ai) on how to configure the `server` function.

## Properties

### nodes

**Type** `Object`
- [`server`](#nodes-server)

The underlying [resources](/docs/components/#nodes) this component creates.

server

**Type** `Output<`[`Function`](/docs/component/aws/function)`>`

The Function component for the server.

### password

**Type** `Output<string>`

The password for the OpenControl server.

### url

**Type** `Output<string>`

The URL of the OpenControl server.

---

## AWS Linkable helper

Reference doc for the `sst.aws.permission` helper.

https://sst.dev/docs/component/aws/permission

The AWS Permission Linkable helper is used to define the AWS permissions included with the [`sst.Linkable`](/docs/component/linkable/) component.
```ts
sst.aws.permission({
  actions: ["lambda:InvokeFunction"],
  resources: ["*"]
})
```

---

## Functions

### permission

```ts
permission(input)
```

#### Parameters

- `input` [`InputArgs`](#inputargs)

**Returns** `Object`

The AWS Permission Linkable helper is used to define the AWS permissions included with the [`sst.Linkable`](/docs/component/linkable/) component.

```ts
sst.aws.permission({
  actions: ["lambda:InvokeFunction"],
  resources: ["*"]
})
```

## InputArgs

### actions

**Type** `string[]`

The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed.

```js
{
  actions: ["s3:*"]
}
```

### conditions?

**Type** `Input<Input<Object>[]>`
- [`test`](#conditions-test)
- [`values`](#conditions-values)
- [`variable`](#conditions-variable)

Configure specific conditions for when the policy is in effect.

```js
{
  conditions: [
    {
      test: "StringEquals",
      variable: "s3:x-amz-server-side-encryption",
      values: ["AES256"]
    },
    {
      test: "IpAddress",
      variable: "aws:SourceIp",
      values: ["10.0.0.0/16"]
    }
  ]
}
```

test

**Type** `Input<string>`

Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate.

values

**Type** `Input<Input<string>[]>`

The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation.

variable

**Type** `Input<string>`

Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name.

### effect?

**Type** `"allow" | "deny"`

**Default** `"allow"`

Configures whether the permission is allowed or denied.
```ts
{
  effect: "deny"
}
```

### resources

**Type** `Input<Input<string>[]>`

The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html).

```js
{
  resources: ["arn:aws:s3:::my-bucket/*"]
}
```

---

## Postgres.v1

Reference doc for the `sst.aws.Postgres.v1` component.

https://sst.dev/docs/component/aws/postgres-v1

The `Postgres` component lets you add a Postgres database to your app using [Amazon Aurora Serverless v2](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.html).

For existing usage, rename `sst.aws.Postgres` to `sst.aws.Postgres.v1`. For new databases, use the latest [`Postgres`](/docs/component/aws/postgres) component instead.

:::caution
This component has been deprecated.
:::

What changed:

- In this version, the database used AWS RDS Aurora Serverless v2, which supported the RDS Data API. This allowed your machine to connect to the database during `sst dev` without the need for a VPN.
- In the new version, the database uses AWS RDS Postgres. The `sst.aws.Vpc` component has been enhanced to set up a secure tunnel, enabling seamless connections to the database. RDS Postgres provides greater flexibility and wider feature support while being cheaper to run.

:::note
Data API for Aurora Postgres Serverless v2 is still being [rolled out in all regions](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.Aurora_Fea_Regions_DB-eng.Feature.ServerlessV2.html#Concepts.Aurora_Fea_Regions_DB-eng.Feature.ServerlessV2.apg).
:::

To connect to your database from your Lambda functions, you can use the [AWS Data API](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html). It does not need a persistent connection, and works over HTTP. You also don't need a VPN to connect to it locally.
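As a minimal sketch of what a Data API call looks like with the AWS SDK, assuming a linked database named `MyDatabase`:

```ts
import { Resource } from "sst";
import {
  RDSDataClient,
  ExecuteStatementCommand,
} from "@aws-sdk/client-rds-data";

const client = new RDSDataClient({});

// Runs the query over HTTP, no persistent connection or VPN needed
const result = await client.send(
  new ExecuteStatementCommand({
    resourceArn: Resource.MyDatabase.clusterArn,
    secretArn: Resource.MyDatabase.secretArn,
    database: Resource.MyDatabase.database,
    sql: "SELECT now()",
  })
);
```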
#### Create the database

```js title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc");

const database = new sst.aws.Postgres.v1("MyDatabase", { vpc });
```

#### Change the scaling config

```js title="sst.config.ts"
new sst.aws.Postgres.v1("MyDatabase", {
  scaling: {
    min: "2 ACU",
    max: "128 ACU"
  },
  vpc
});
```

#### Link to a resource

You can link your database to other resources, like a function or your Next.js app.

```ts title="sst.config.ts"
new sst.aws.Nextjs("MyWeb", {
  link: [database],
  vpc
});
```

Once linked, you can connect to it from your function code.

```ts title="app/page.tsx" {1,6,7,8}
import { Resource } from "sst";
import { drizzle } from "drizzle-orm/aws-data-api/pg";
import { RDSDataClient } from "@aws-sdk/client-rds-data";

drizzle(new RDSDataClient({}), {
  database: Resource.MyDatabase.database,
  secretArn: Resource.MyDatabase.secretArn,
  resourceArn: Resource.MyDatabase.clusterArn
});
```

---

## Constructor

```ts
new Postgres.v1(name, args, opts?)
```

#### Parameters

- `name` `string`
- `args` [`PostgresArgs`](#postgresargs)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## PostgresArgs

### databaseName?

**Type** `Input<string>`

**Default** Based on the name of the current app

Name of a database that is automatically created inside the cluster. The name must begin with a letter and contain only lowercase letters, numbers, or underscores. By default, it takes the name of the app, and replaces the hyphens with underscores.

```js
{
  databaseName: "acme"
}
```

### scaling?

**Type** `Input<Object>`
- [`max?`](#scaling-max)
- [`min?`](#scaling-min)

**Default** `{min: "0.5 ACU", max: "4 ACU"}`

The Aurora Serverless v2 scaling config. By default, the cluster has one DB instance that is used for both writes and reads. The instance can scale from the minimum number of ACUs to the maximum number of ACUs.

:::caution
Aurora Serverless v2 does not scale down to 0. The minimum cost of a Postgres cluster per month is roughly `0.5 * $0.12 per ACU hour * 24 hrs * 30 days = $43.20`.
:::

An ACU or Aurora Capacity Unit is a combination of CPU and RAM.
The cost of an Aurora Serverless v2 cluster is based on the ACU hours used. Additionally, you are billed for I/O and storage used by the cluster. [Read more here](https://aws.amazon.com/rds/aurora/pricing/). Each ACU is roughly equivalent to 2 GB of memory. So pick the minimum and maximum based on the baseline and peak memory usage of your app.

max?

**Type** `Input<"$\{number\} ACU">`

**Default** `4 ACU`

The maximum number of ACUs. Ranges from 1 to 128, in increments of 0.5.

```js
{
  scaling: {
    max: "128 ACU"
  }
}
```

min?

**Type** `Input<"$\{number\} ACU">`

**Default** `0.5 ACU`

The minimum number of ACUs. Ranges from 0.5 to 128, in increments of 0.5. For your production workloads, setting a minimum of 0.5 ACUs might not be a great idea for the following reasons. You can also [read more here](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.setting-capacity.html#aurora-serverless-v2.setting-capacity.incompatible_parameters).

- It takes longer to scale from a low number of ACUs to a much higher number.
- Query performance depends on the buffer cache. So if frequently accessed data cannot fit into the buffer cache, you might see uneven performance.
- The max connections for a 0.5 ACU Postgres instance is capped at 2000.

```js
{
  scaling: {
    min: "2 ACU"
  }
}
```

### transform?

**Type** `Object`
- [`cluster?`](#transform-cluster)
- [`instance?`](#transform-instance)
- [`subnetGroup?`](#transform-subnetgroup)

[Transform](/docs/components#transform) how this component creates its underlying resources.

cluster?

**Type** [`ClusterArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/cluster/#inputs)` | (args: `[`ClusterArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/cluster/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the RDS Cluster.

instance?
**Type** [`ClusterInstanceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/clusterinstance/#inputs)` | (args: `[`ClusterInstanceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/clusterinstance/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the database instance in the RDS Cluster.

subnetGroup?

**Type** [`SubnetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/subnetgroup/#inputs)` | (args: `[`SubnetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/subnetgroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the RDS subnet group.

### version?

**Type** `Input<string>`

**Default** `"17"`

The Postgres engine version. Check out the [available versions in your region](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.Aurora_Fea_Regions_DB-eng.Feature.ServerlessV2.html#Concepts.Aurora_Fea_Regions_DB-eng.Feature.ServerlessV2.apg).

:::caution
Changing the version will cause the database to restart on the next `sst deploy`, causing downtime. Learn more about [upgrading databases](/docs/upgrade-aws-databases/).
:::

```js
{
  version: "15.5"
}
```

### vpc

**Type** `"default" | Input<Object>`
- [`privateSubnets`](#vpc-privatesubnets)
- [`securityGroups`](#vpc-securitygroups)

The VPC to use for the database cluster. Each AWS account has a default VPC. If `default` is specified, the default VPC is used.

:::note
The default VPC does not have private subnets and is not recommended for production use.
:::

```js
{
  vpc: {
    privateSubnets: ["subnet-0db7376a7ad4db5fd", "subnet-06fc7ee8319b2c0ce"],
    securityGroups: ["sg-0399348378a4c256c"],
  }
}
```

Or create a `Vpc` component.

```js
const myVpc = new sst.aws.Vpc("MyVpc");
```

And pass it in.

```js
{
  vpc: myVpc
}
```

privateSubnets

**Type** `Input<Input<string>[]>`

A list of private subnet IDs in the VPC.
The database will be placed in the private subnets.

securityGroups

**Type** `Input<Input<string>[]>`

A list of VPC security group IDs.

## Properties

### clusterArn

**Type** `Output<string>`

The ARN of the RDS Cluster.

### clusterID

**Type** `Output<string>`

The ID of the RDS Cluster.

### database

**Type** `Output<string>`

The name of the database.

### host

**Type** `Output<string>`

The host of the database.

### nodes

**Type** `Object`
- [`cluster`](#nodes-cluster)
- [`instance`](#nodes-instance)

cluster

**Type** [`Cluster`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/cluster/)

instance

**Type** [`ClusterInstance`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/clusterinstance/)

### password

**Type** `Output<string>`

The password of the master user.

### port

**Type** `Output<number>`

The port of the database.

### secretArn

**Type** `Output<string>`

The ARN of the master user secret.

### username

**Type** `Output<string>`

The username of the master user.

## SDK

Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure.

---

### Links

This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links).

- `clusterArn` `string` The ARN of the RDS Cluster.
- `database` `string` The name of the database.
- `host` `string` The host of the database.
- `password` `string` The password of the master user.
- `port` `number` The port of the database.
- `secretArn` `string` The ARN of the master user secret.
- `username` `string` The username of the master user.

## Methods

### static get

```ts
Postgres.get(name, clusterID)
```

#### Parameters

- `name` `string` The name of the component.
- `clusterID` `Input<string>` The ID of the existing Postgres cluster.

**Returns** [`Postgres`](.)

Reference an existing Postgres cluster with the given cluster name. This is useful when you create a Postgres cluster in one stage and want to share it in another. It avoids having to create a new Postgres cluster in the other stage.
:::tip
You can use the `static get` method to share Postgres clusters across stages.
:::

Imagine you create a cluster in the `dev` stage. And in your personal stage `frank`, instead of creating a new cluster, you want to share the same cluster from `dev`.

```ts title="sst.config.ts"
const database = $app.stage === "frank"
  ? sst.aws.Postgres.v1.get("MyDatabase", "app-dev-mydatabase")
  : new sst.aws.Postgres.v1("MyDatabase");
```

Here `app-dev-mydatabase` is the ID of the cluster created in the `dev` stage. You can find this by outputting the cluster ID in the `dev` stage.

```ts title="sst.config.ts"
return {
  cluster: database.clusterID
};
```

---

## Postgres

Reference doc for the `sst.aws.Postgres` component.

https://sst.dev/docs/component/aws/postgres

The `Postgres` component lets you add a Postgres database to your app using [Amazon RDS Postgres](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html).

#### Create the database

```js title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc");

const database = new sst.aws.Postgres("MyDatabase", { vpc });
```

#### Link to a resource

You can link your database to other resources, like a function or your Next.js app.

```ts title="sst.config.ts"
new sst.aws.Nextjs("MyWeb", {
  link: [database],
  vpc
});
```

Once linked, you can connect to it from your function code.

```ts title="app/page.tsx" {1,5-9}
import { Resource } from "sst";
import { Pool } from "pg";

const client = new Pool({
  user: Resource.MyDatabase.username,
  password: Resource.MyDatabase.password,
  database: Resource.MyDatabase.database,
  host: Resource.MyDatabase.host,
  port: Resource.MyDatabase.port,
});

await client.connect();
```

#### Running locally

By default, your RDS Postgres database is deployed in `sst dev`. But let's say you are running Postgres locally.
```bash docker run \ --rm \ -p 5432:5432 \ -v $(pwd)/.sst/storage/postgres:/var/lib/postgresql/data \ -e POSTGRES_USER=postgres \ -e POSTGRES_PASSWORD=password \ -e POSTGRES_DB=local \ postgres:18 ``` You can connect to it in `sst dev` by configuring the `dev` prop. ```ts title="sst.config.ts" {3-8} const postgres = new sst.aws.Postgres("MyPostgres", { vpc, dev: { username: "postgres", password: "password", database: "local", port: 5432 } }); ``` This will skip deploying an RDS database and link to the locally running Postgres database instead. [Check out the full example](/docs/examples/#aws-postgres-local). --- ### Cost By default this component uses a _Single-AZ Deployment_, _On-Demand DB Instances_ of a `db.t4g.micro` at $0.016 per hour. And 20GB of _General Purpose gp3 Storage_ at $0.115 per GB per month. That works out to $0.016 x 24 x 30 + $0.115 x 20 or **$14 per month**. Adjust this for the `instance` type and the `storage` you are using. The above are rough estimates for _us-east-1_, check out the [RDS for PostgreSQL pricing](https://aws.amazon.com/rds/postgresql/pricing/#On-Demand_DB_Instances_costs) for more details. #### RDS Proxy If you enable the `proxy`, it uses _Provisioned instances_ with 2 vCPUs at $0.015 per hour. That works out to an **additional** $0.015 x 2 x 24 x 30 or **$22 per month**. This is a rough estimate for _us-east-1_, check out the [RDS Proxy pricing](https://aws.amazon.com/rds/proxy/pricing/) for more details. --- ## Constructor ```ts new Postgres(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`PostgresArgs`](#postgresargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## PostgresArgs ### blueGreen? **Type** `Input` **Default** `false` Enable [Blue/Green deployments](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments.html) for version, instance type, and parameter group upgrades. 
Learn more about [upgrading databases](/docs/upgrade-aws-databases/). When enabled, a staging (green) instance is created, updated, verified, then promoted to replace the production (blue) instance. This minimizes downtime during upgrades. ```js { blueGreen: true } ``` ### database? **Type** `Input` **Default** Based on the name of the current app Name of a database that is automatically created. The name must begin with a letter and contain only lowercase letters, numbers, or underscores. By default, it takes the name of the app, and replaces the hyphens with underscores. :::danger Changing the database name will cause the database to be destroyed and recreated. ::: ```js { database: "acme" } ``` ### dev? **Type** `Object` - [`database?`](#dev-database) - [`host?`](#dev-host) - [`password?`](#dev-password) - [`port?`](#dev-port) - [`username?`](#dev-username) Configure how this component works in `sst dev`. By default, your Postgres database is deployed in `sst dev`. But if you want to instead connect to a locally running Postgres database, you can configure the `dev` prop. :::note This will not create an RDS database in `sst dev`. ::: This will skip deploying an RDS database and link to the locally running Postgres database instead. Setting the `dev` prop also means that any linked resources will connect to the right database both in `sst dev` and `sst deploy`. ```ts { dev: { username: "postgres", password: "password", database: "postgres", host: "localhost", port: 5432 } } ``` database? **Type** `Input` **Default** Inherit from the top-level [`database`](#database). The database of the local Postgres to connect to when running in dev. host? **Type** `Input` **Default** `"localhost"` The host of the local Postgres to connect to when running in dev. password? **Type** `Input` **Default** Inherit from the top-level [`password`](#password). The password of the local Postgres to connect to when running in dev. port? 
**Type** `Input<number>`

**Default** `5432`

The port of the local Postgres to connect to when running in dev.

username?

**Type** `Input<string>`

**Default** Inherit from the top-level [`username`](#username).

The username of the local Postgres to connect to when running in dev.

### instance?

**Type** `Input<string>`

**Default** `"t4g.micro"`

The type of instance to use for the database. Check out the [supported instance types](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.Types.html).

:::caution
Changing the instance type will cause the database to restart on the next `sst deploy`, causing downtime. Learn more about [upgrading databases](/docs/upgrade-aws-databases/).
:::

```js
{
  instance: "m7g.xlarge"
}
```

### multiAz?

**Type** `Input<boolean>`

**Default** `false`

Enable [Multi-AZ](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html) deployment for the database. This creates a standby replica for the database in another availability zone (AZ). The standby database provides automatic failover in case the primary database fails. However, when the primary database is healthy, the standby database is not used for serving read traffic.

:::caution
Using Multi-AZ will approximately double the cost of the database since it will be deployed in two AZs.
:::

```js
{
  multiAz: true
}
```

### password?

**Type** `Input<string>`

**Default** A random password is generated.

The password of the master user.

```js
{
  password: "Passw0rd!"
}
```

You can use a `Secret` to manage the password.

```js
{
  password: new sst.Secret("MyDBPassword").value
}
```

### proxy?

**Type** `Input<boolean | Object>`
- [`credentials?`](#proxy-credentials) `Input<Input<Object>[]>`
  - [`password`](#proxy-credentials-password)
  - [`username`](#proxy-credentials-username)

**Default** `false`

Enable [RDS Proxy](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html) for the database.

```js
{
  proxy: true
}
```

credentials?

**Type** `Input<Input<Object>[]>`

Additional credentials the proxy can use to connect to the database.
You don't need to specify the master user credentials as they are always added by default. :::note This component will not create the Postgres users listed here. You need to create them manually in the database. ::: ```js { credentials: [ { username: "metabase", password: "Passw0rd!" } ] } ``` You can use a `Secret` to manage the password. ```js { credentials: [ { username: "metabase", password: new sst.Secret("MyDBPassword").value } ] } ``` password **Type** `Input` The password of the user. username **Type** `Input` The username of the user. ### storage? **Type** `Input<"$\{number\} GB" | "$\{number\} TB">` **Default** `"20 GB"` The maximum storage limit for the database. RDS will autoscale your storage to match your usage up to the given limit. You are not billed for the maximum storage limit, only for the storage you use. :::note You are only billed for the storage you use, not the maximum limit. ::: By default, [gp3 storage volumes](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#Concepts.Storage.GeneralSSD) are used without additional provisioned IOPS. This provides good baseline performance for most use cases. The minimum storage size is 20 GB. And the maximum storage size is 64 TB. ```js { storage: "100 GB" } ``` ### transform? **Type** `Object` - [`instance?`](#transform-instance) - [`parameterGroup?`](#transform-parametergroup) - [`proxy?`](#transform-proxy) - [`subnetGroup?`](#transform-subnetgroup) [Transform](/docs/components#transform) how this component creates its underlying resources. instance? **Type** [`InstanceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/instance/#inputs)` | (args: `[`InstanceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/instance/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the database instance in the RDS Cluster. parameterGroup?
**Type** [`ParameterGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/parametergroup/#inputs)` | (args: `[`ParameterGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/parametergroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the RDS parameter group. proxy? **Type** [`ProxyArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/proxy/#inputs)` | (args: `[`ProxyArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/proxy/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the RDS Proxy. subnetGroup? **Type** [`SubnetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/subnetgroup/#inputs)` | (args: `[`SubnetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/subnetgroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the RDS subnet group. ### username? **Type** `Input` **Default** `"postgres"` The username of the master user. :::danger Changing the username will cause the database to be destroyed and recreated. ::: ```js { username: "admin" } ``` ### version? **Type** `Input` **Default** `"17"` The Postgres engine version. Check out the [available versions in your region](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Concepts.General.DBVersions.html). :::caution Changing the version will cause the database to restart on the next `sst deploy`, causing downtime. Learn more about [upgrading databases](/docs/upgrade-aws-databases/). ::: ```js { version: "17.1" } ``` ### vpc **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`subnets`](#vpc-subnets) The VPC subnets to use for the database. ```js { vpc: { subnets: ["subnet-0db7376a7ad4db5fd", "subnet-06fc7ee8319b2c0ce"] } } ``` Or create a `Vpc` component.
```ts title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` And pass it in. The database will be placed in the private subnets. ```js { vpc: myVpc } ``` subnets **Type** `Input[]>` A list of subnet IDs in the VPC. ## Properties ### database **Type** `Output` The name of the database. ### host **Type** `Output` The host of the database. ### id **Type** `Output` The identifier of the Postgres instance. ### nodes **Type** `Object` - [`instance`](#nodes-instance) instance **Type** `undefined | `[`Instance`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/instance/) ### password **Type** `Output` The password of the master user. ### port **Type** `Output` The port of the database. ### proxyId **Type** `Output` The ID of the Postgres proxy. ### username **Type** `Output` The username of the master user. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `database` `string` The name of the database. - `host` `string` The host of the database. - `password` `string` The password of the master user. - `port` `number` The port of the database. - `username` `string` The username of the master user. ## Methods ### static get ```ts Postgres.get(name, args, opts?) ``` #### Parameters - `name` `string` The name of the component. - `args` [`PostgresGetArgs`](#postgresgetargs) The arguments to get the Postgres database. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Postgres`](.) Reference an existing Postgres database with the given name. This is useful when you create a Postgres database in one stage and want to share it in another. It avoids having to create a new Postgres database in the other stage. :::tip You can use the `static get` method to share Postgres databases across stages. ::: Imagine you create a database in the `dev` stage.
And in your personal stage `frank`, instead of creating a new database, you want to share the same database from `dev`. ```ts title="sst.config.ts" const database = $app.stage === "frank" ? sst.aws.Postgres.get("MyDatabase", { id: "app-dev-mydatabase", proxyId: "app-dev-mydatabase-proxy" }) : new sst.aws.Postgres("MyDatabase", { proxy: true }); ``` Here `app-dev-mydatabase` is the ID of the database, and `app-dev-mydatabase-proxy` is the ID of the proxy created in the `dev` stage. You can find these by outputting the database ID and proxy ID in the `dev` stage. ```ts title="sst.config.ts" return { id: database.id, proxyId: database.proxyId }; ``` ## PostgresGetArgs ### id **Type** `Input` The ID of the database. ### proxyId? **Type** `Input` The ID of the proxy. --- ## FunctionEnvironmentUpdate Reference doc for the `sst.providers.FunctionEnvironmentUpdate` component. https://sst.dev/docs/component/aws/providers/function-environment-update The `FunctionEnvironmentUpdate` component is internally used by the `Function` component to update the environment variables of a function. :::note This component is not intended to be created directly. ::: You'll find this component returned by the `addEnvironment` method of the `Function` component. --- ## Constructor ```ts new FunctionEnvironmentUpdate(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`FunctionEnvironmentUpdateInputs`](#functionenvironmentupdateinputs) - `opts?` [`CustomResourceOptions`](https://www.pulumi.com/docs/iac/concepts/resources/dynamic-providers/) ## FunctionEnvironmentUpdateInputs ### environment **Type** `Input>>` The environment variables to update. ### functionName **Type** `Input` The name of the function to update. ### region **Type** `Input` The region of the function to update. --- ## QueueLambdaSubscriber Reference doc for the `sst.aws.QueueLambdaSubscriber` component. 
https://sst.dev/docs/component/aws/queue-lambda-subscriber The `QueueLambdaSubscriber` component is internally used by the `Queue` component to add a consumer to [Amazon SQS](https://aws.amazon.com/sqs/). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `subscribe` method of the `Queue` component. --- ## Constructor ```ts new QueueLambdaSubscriber(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`Args`](#args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`eventSourceMapping`](#nodes-eventsourcemapping) - [`function`](#nodes-function) The underlying [resources](/docs/components/#nodes) this component creates. eventSourceMapping **Type** [`EventSourceMapping`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/) The Lambda event source mapping. function **Type** `Output<`[`Function`](/docs/component/aws/function)`>` The Lambda function that'll be notified. ## Args ### batch? **Type** `Input` - [`partialResponses?`](#batch-partialresponses) - [`size?`](#batch-size) - [`window?`](#batch-window) **Default** `{size: 10, window: "20 seconds", partialResponses: false}` Configure batch processing options for the consumer function. partialResponses? **Type** `Input` **Default** `false` Whether to return partial successful responses for a batch. Enables reporting of individual message failures in a batch. When enabled, only failed messages become visible in the queue again, preventing unnecessary reprocessing of successful messages. The handler function must return a response with failed message IDs. :::note Ensure your Lambda function is updated to handle `batchItemFailures` responses when enabling this option. ::: Read more about [partial batch responses](https://docs.aws.amazon.com/lambda/latest/dg/services-sqs-errorhandling.html#services-sqs-batchfailurereporting). 
Enable partial responses. ```js { batch: { partialResponses: true } } ``` For a batch of messages (id1, id2, id3, id4, id5), if id2 and id4 fail: ```json { "batchItemFailures": [ { "itemIdentifier": "id2" }, { "itemIdentifier": "id4" } ] } ``` This makes only id2 and id4 visible again in the queue. size? **Type** `Input` **Default** `10` The maximum number of events that will be processed together in a single invocation of the consumer function. Value must be between 1 and 10000. :::note When `size` is set to a value greater than 10, `window` must be set to at least `1 second`. ::: Set batch size to 1. This will process events individually. ```js { batch: { size: 1 } } ``` window? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"0 seconds"` The maximum amount of time to wait for collecting events before sending the batch to the consumer function, even if the batch size hasn't been reached. Value must be between 0 seconds and 5 minutes (300 seconds). ```js { batch: { window: "20 seconds" } } ``` ### filters? **Type** `Input>[]>` Filter the records that'll be processed by the `subscriber` function. :::tip You can pass in up to 5 different filters. ::: You can pass in up to 5 different filter policies. These will be logically ORed together, meaning that if any single policy matches, the record will be processed. Learn more about the [filter rule syntax](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html#filtering-syntax). For example, if your queue contains records in this JSON format. ```js { RecordNumber: 0000, RequestCode: "AAAA", TimeStamp: "yyyy-mm-ddThh:mm:ss" } ``` To process only those records where the `RequestCode` is `BBBB`. ```js { filters: [ { body: { RequestCode: ["BBBB"] } } ] } ``` And to process only those records where `RecordNumber` is greater than `9999`.
```js { filters: [ { body: { RecordNumber: [{ numeric: [ ">", 9999 ] }] } } ] } ``` ### queue **Type** `Input` - [`arn`](#queue-arn) The queue to use. arn **Type** `Input` The ARN of the queue. ### subscriber **Type** `Input` The subscriber function. ### transform? **Type** `Object` - [`eventSourceMapping?`](#transform-eventsourcemapping) - [`function?`](#transform-function) [Transform](/docs/components#transform) how this component creates its underlying resources. eventSourceMapping? **Type** [`EventSourceMappingArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/#inputs)` | (args: `[`EventSourceMappingArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Lambda Event Source Mapping resource. function? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the subscriber Function resource. --- ## Queue Reference doc for the `sst.aws.Queue` component. https://sst.dev/docs/component/aws/queue The `Queue` component lets you add a serverless queue to your app. It uses [Amazon SQS](https://aws.amazon.com/sqs/). #### Create a queue ```ts title="sst.config.ts" const queue = new sst.aws.Queue("MyQueue"); ``` #### Make it a FIFO queue You can optionally make it a FIFO queue. ```ts {2} title="sst.config.ts" new sst.aws.Queue("MyQueue", { fifo: true }); ``` #### Add a subscriber ```ts title="sst.config.ts" queue.subscribe("src/subscriber.handler"); ``` #### Link the queue to a resource You can link the queue to other resources, like a function or your Next.js app. 
```ts title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { link: [queue] }); ``` Once linked, you can send messages to the queue from your function code. ```ts title="app/page.tsx" import { Resource } from "sst"; import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs"; const sqs = new SQSClient({}); await sqs.send(new SendMessageCommand({ QueueUrl: Resource.MyQueue.url, MessageBody: "Hello from Next.js!" })); ``` --- ## Constructor ```ts new Queue(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`QueueArgs`](#queueargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## QueueArgs ### delay? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"0 seconds"` The period of time during which the delivery of all messages in the queue is delayed. This can range from 0 seconds to 900 seconds (15 minutes). ```js { delay: "10 seconds" } ``` ### dlq? **Type** `Input` - [`queue`](#dlq-queue) - [`retry`](#dlq-retry) Optionally add a dead-letter queue or DLQ for this queue. A dead-letter queue is used to store messages that can't be processed successfully by the subscriber function after the `retry` limit is reached. This takes either the ARN of the dead-letter queue or an object to configure how the dead-letter queue is used. For example, here's how you can create a dead-letter queue and link it to the main queue. ```ts title="sst.config.ts" {4} const deadLetterQueue = new sst.aws.Queue("MyDLQ"); new sst.aws.Queue("MyQueue", { dlq: deadLetterQueue.arn, }); ``` By default, the main queue will retry processing the message 3 times before sending it to the dead-letter queue. You can customize this. ```ts title="sst.config.ts" {3} new sst.aws.Queue("MyQueue", { dlq: { retry: 5, queue: deadLetterQueue.arn, } }); ``` queue **Type** `Input` The ARN of the dead-letter queue. retry **Type** `Input` **Default** `3` The number of times the main queue will retry the message before sending it to the dead-letter queue. ### fifo?
**Type** `Input` - [`contentBasedDeduplication?`](#fifo-contentbaseddeduplication) **Default** `false` FIFO or _first-in-first-out_ queues are designed to guarantee that messages are processed exactly once and in the order that they are sent. :::caution Changing a standard queue to a FIFO queue (or the other way around) will cause the queue to be destroyed and recreated. ::: ```js { fifo: true } ``` By default, content based deduplication is disabled. You can enable it by configuring the `fifo` property. ```js { fifo: { contentBasedDeduplication: true } } ``` contentBasedDeduplication? **Type** `Input` **Default** `false` Content-based deduplication automatically generates a deduplication ID by hashing the message body to prevent duplicate message delivery. ### transform? **Type** `Object` - [`queue?`](#transform-queue) [Transform](/docs/components#transform) how this component creates its underlying resources. queue? **Type** [`QueueArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sqs/queue/#inputs)` | (args: `[`QueueArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sqs/queue/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the SQS Queue resource. ### visibilityTimeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} hour" | "$\{number\} hours" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"30 seconds"` Visibility timeout is a period of time during which a message is temporarily invisible to other consumers after a consumer has retrieved it from the queue. This mechanism prevents other consumers from processing the same message concurrently, ensuring that each message is processed only once. This timeout can range from 0 seconds to 12 hours. ```js { visibilityTimeout: "1 hour" } ``` ## Properties ### arn **Type** `Output` The ARN of the SQS Queue. 
### nodes **Type** `Object` - [`queue`](#nodes-queue) The underlying [resources](/docs/components/#nodes) this component creates. queue **Type** [`Queue`](https://www.pulumi.com/registry/packages/aws/api-docs/sqs/queue/) The Amazon SQS Queue. ### url **Type** `Output` The SQS Queue URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `string` The SQS Queue URL. ## Methods ### subscribe ```ts subscribe(subscriber, args?, opts?) ``` #### Parameters - `subscriber` `Input` The function that'll be notified. - `args?` [`QueueSubscriberArgs`](#queuesubscriberargs) Configure the subscription. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** `Output<`[`QueueLambdaSubscriber`](/docs/component/aws/queue-lambda-subscriber)`>` Subscribe to this queue. ```js title="sst.config.ts" queue.subscribe("src/subscriber.handler"); ``` Add a filter to the subscription. ```js title="sst.config.ts" queue.subscribe("src/subscriber.handler", { filters: [ { body: { RequestCode: ["BBBB"] } } ] }); ``` Customize the subscriber function. ```js title="sst.config.ts" queue.subscribe({ handler: "src/subscriber.handler", timeout: "60 seconds" }); ``` Or pass in the ARN of an existing Lambda function. ```js title="sst.config.ts" queue.subscribe("arn:aws:lambda:us-east-1:123456789012:function:my-function"); ``` ### static get ```ts Queue.get(name, queueUrl, opts?) ``` #### Parameters - `name` `string` The name of the component. - `queueUrl` `Input` The URL of the existing SQS Queue. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Queue`](.) Reference an existing SQS Queue with its queue URL. This is useful when you create a queue in one stage and want to share it in another stage. It avoids having to create a new queue in the other stage. 
:::tip You can use the `static get` method to share SQS queues across stages. ::: Imagine you create a queue in the `dev` stage. And in your personal stage `frank`, instead of creating a new queue, you want to share the queue from `dev`. ```ts title="sst.config.ts" const queue = $app.stage === "frank" ? sst.aws.Queue.get("MyQueue", "https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue") : new sst.aws.Queue("MyQueue"); ``` Here `https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue` is the URL of the queue created in the `dev` stage. You can find this by outputting the queue URL in the `dev` stage. ```ts title="sst.config.ts" return queue.url; ``` ### static subscribe ```ts Queue.subscribe(queueArn, subscriber, args?, opts?) ``` #### Parameters - `queueArn` `Input` The ARN of the SQS Queue to subscribe to. - `subscriber` `Input` The function that'll be notified. - `args?` [`QueueSubscriberArgs`](#queuesubscriberargs) Configure the subscription. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** `Output<`[`QueueLambdaSubscriber`](/docs/component/aws/queue-lambda-subscriber)`>` Subscribe to an SQS Queue that was not created in your app. For example, let's say you have an existing SQS Queue with the following ARN. ```js title="sst.config.ts" const queueArn = "arn:aws:sqs:us-east-1:123456789012:MyQueue"; ``` You can subscribe to it by passing in the ARN. ```js title="sst.config.ts" sst.aws.Queue.subscribe(queueArn, "src/subscriber.handler"); ``` Add a filter to the subscription. ```js title="sst.config.ts" sst.aws.Queue.subscribe(queueArn, "src/subscriber.handler", { filters: [ { body: { RequestCode: ["BBBB"] } } ] }); ``` Customize the subscriber function. ```js title="sst.config.ts" sst.aws.Queue.subscribe(queueArn, { handler: "src/subscriber.handler", timeout: "60 seconds" }); ``` ## QueueSubscriberArgs ### batch? 
**Type** `Input` - [`partialResponses?`](#batch-partialresponses) - [`size?`](#batch-size) - [`window?`](#batch-window) **Default** `{size: 10, window: "20 seconds", partialResponses: false}` Configure batch processing options for the consumer function. partialResponses? **Type** `Input` **Default** `false` Whether to return partial successful responses for a batch. Enables reporting of individual message failures in a batch. When enabled, only failed messages become visible in the queue again, preventing unnecessary reprocessing of successful messages. The handler function must return a response with failed message IDs. :::note Ensure your Lambda function is updated to handle `batchItemFailures` responses when enabling this option. ::: Read more about [partial batch responses](https://docs.aws.amazon.com/lambda/latest/dg/services-sqs-errorhandling.html#services-sqs-batchfailurereporting). Enable partial responses. ```js { batch: { partialResponses: true } } ``` For a batch of messages (id1, id2, id3, id4, id5), if id2 and id4 fail: ```json { "batchItemFailures": [ { "itemIdentifier": "id2" }, { "itemIdentifier": "id4" } ] } ``` This makes only id2 and id4 visible again in the queue. size? **Type** `Input` **Default** `10` The maximum number of events that will be processed together in a single invocation of the consumer function. Value must be between 1 and 10000. :::note When `size` is set to a value greater than 10, `window` must be set to at least `1 second`. ::: Set batch size to 1. This will process events individually. ```js { batch: { size: 1 } } ``` window? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"0 seconds"` The maximum amount of time to wait for collecting events before sending the batch to the consumer function, even if the batch size hasn't been reached. Value must be between 0 seconds and 5 minutes (300 seconds). 
```js { batch: { window: "20 seconds" } } ``` ### filters? **Type** `Input>[]>` Filter the records that'll be processed by the `subscriber` function. :::tip You can pass in up to 5 different filters. ::: You can pass in up to 5 different filter policies. These will be logically ORed together, meaning that if any single policy matches, the record will be processed. Learn more about the [filter rule syntax](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html#filtering-syntax). For example, if your queue contains records in this JSON format. ```js { RecordNumber: 0000, RequestCode: "AAAA", TimeStamp: "yyyy-mm-ddThh:mm:ss" } ``` To process only those records where the `RequestCode` is `BBBB`. ```js { filters: [ { body: { RequestCode: ["BBBB"] } } ] } ``` And to process only those records where `RecordNumber` is greater than `9999`. ```js { filters: [ { body: { RecordNumber: [{ numeric: [ ">", 9999 ] }] } } ] } ``` ### transform? **Type** `Object` - [`eventSourceMapping?`](#transform-eventsourcemapping) - [`function?`](#transform-function) [Transform](/docs/components#transform) how this component creates its underlying resources. eventSourceMapping? **Type** [`EventSourceMappingArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/#inputs)` | (args: `[`EventSourceMappingArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/eventsourcemapping/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Lambda Event Source Mapping resource. function? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the subscriber Function resource. --- ## React Reference doc for the `sst.aws.React` component.
https://sst.dev/docs/component/aws/react The `React` component lets you deploy a React app built with [React Router](https://reactrouter.com/) to AWS. #### Minimal example Deploy a React app that's in the project root. ```js new sst.aws.React("MyWeb"); ``` #### Change the path Deploys the React app in the `my-react-app/` directory. ```js {2} new sst.aws.React("MyWeb", { path: "my-react-app/" }); ``` #### Add a custom domain Set a custom domain for your React app. ```js {2} new sst.aws.React("MyWeb", { domain: "my-app.com" }); ``` #### Redirect www to apex domain Redirect `www.my-app.com` to `my-app.com`. ```js {4} new sst.aws.React("MyWeb", { domain: { name: "my-app.com", redirects: ["www.my-app.com"] } }); ``` #### Link resources [Link resources](/docs/linking/) to your React app. This will grant permissions to the resources and allow you to access them in your app. ```ts {4} const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.React("MyWeb", { link: [bucket] }); ``` You can use the [SDK](/docs/reference/sdk/) to access the linked resources in your React app. ```ts title="app/root.tsx" console.log(Resource.MyBucket.name); ``` --- ## Constructor ```ts new React(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`ReactArgs`](#reactargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## ReactArgs ### assets? **Type** `Input` - [`fileOptions?`](#assets-fileoptions) `Input` - [`cacheControl?`](#assets-fileoptions-cachecontrol) - [`contentType?`](#assets-fileoptions-contenttype) - [`files`](#assets-fileoptions-files) - [`ignore?`](#assets-fileoptions-ignore) - [`nonVersionedFilesCacheHeader?`](#assets-nonversionedfilescacheheader) - [`purge?`](#assets-purge) - [`textEncoding?`](#assets-textencoding) - [`versionedFilesCacheHeader?`](#assets-versionedfilescacheheader) Configure how the React app assets are uploaded to S3. By default, this is set to the following. Read more about these options below.
```js { assets: { textEncoding: "utf-8", versionedFilesCacheHeader: "public,max-age=31536000,immutable", nonVersionedFilesCacheHeader: "public,max-age=0,s-maxage=86400,stale-while-revalidate=8640" } } ``` fileOptions? **Type** `Input` Specify the `Content-Type` and `Cache-Control` headers for specific files. This allows you to override the default behavior for specific files using glob patterns. Apply `Cache-Control` and `Content-Type` to all zip files. ```js { assets: { fileOptions: [ { files: "**/*.zip", contentType: "application/zip", cacheControl: "private,no-cache,no-store,must-revalidate" } ] } } ``` Apply `Cache-Control` to all CSS and JS files except for CSS files with `index-` prefix in the `main/` directory. ```js { assets: { fileOptions: [ { files: ["**/*.css", "**/*.js"], ignore: "main/index-*.css", cacheControl: "private,no-cache,no-store,must-revalidate" } ] } } ``` cacheControl? **Type** `string` The `Cache-Control` header to apply to the matched files. contentType? **Type** `string` The `Content-Type` header to apply to the matched files. files **Type** `string | string[]` A glob pattern or array of glob patterns of files to apply these options to. ignore? **Type** `string | string[]` A glob pattern or array of glob patterns of files to exclude from the ones matched by the `files` glob pattern. nonVersionedFilesCacheHeader? **Type** `Input` **Default** `"public,max-age=0,s-maxage=86400,stale-while-revalidate=8640"` The `Cache-Control` header used for non-versioned files, like `index.html`. This is used by both CloudFront and the browser cache. The default is set to not cache on browsers, and cache for 1 day on CloudFront. ```js { assets: { nonVersionedFilesCacheHeader: "public,max-age=0,no-cache" } } ``` purge? **Type** `Input` **Default** `false` Configure if files from previous deployments should be purged from the bucket. ```js { assets: { purge: false } } ``` textEncoding? 
**Type** `Input<"utf-8" | "iso-8859-1" | "windows-1252" | "ascii" | "none">` **Default** `"utf-8"` Character encoding for text-based assets, like HTML, CSS, JS. This is used to set the `Content-Type` header when these files are served out. If set to `"none"`, then no charset will be returned in the header. ```js { assets: { textEncoding: "iso-8859-1" } } ``` versionedFilesCacheHeader? **Type** `Input` **Default** `"public,max-age=31536000,immutable"` The `Cache-Control` header used for versioned files, like `main-1234.css`. This is used by both CloudFront and the browser cache. The default `max-age` is set to 1 year. ```js { assets: { versionedFilesCacheHeader: "public,max-age=31536000,immutable" } } ``` ### buildCommand? **Type** `Input` **Default** `"npm run build"` The command used internally to build your React app. If you want to use a different build command. ```js { buildCommand: "yarn build" } ``` ### cachePolicy? **Type** `Input` **Default** A new cache policy is created Configure the React app to use an existing CloudFront cache policy. By default, a new cache policy is created. Note that CloudFront has a limit of 20 cache policies per account. This allows you to reuse an existing policy instead of creating a new one. ```js { cachePolicy: "658327ea-f89d-4fab-a63d-7e88639e58f6" } ``` ### dev? **Type** `false | Object` - [`autostart?`](#dev-autostart) - [`command?`](#dev-command) - [`directory?`](#dev-directory) - [`title?`](#dev-title) - [`url?`](#dev-url) Configure how this component works in `sst dev`. :::note In `sst dev` your React app is run in dev mode; it's not deployed. ::: Instead of deploying your React app, this starts it in dev mode. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev). To disable dev mode, pass in `false`. autostart? **Type** `Input` **Default** `true` Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later.
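For example, a sketch of turning autostart off so the React dev server only runs when you start it yourself from the multiplexer.

```js
{
  dev: {
    // Don't start the dev server when `sst dev` starts;
    // start it manually from the multiplexer instead.
    autostart: false
  }
}
```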
command? **Type** `Input` **Default** `"npm run dev"` The command that `sst dev` runs to start this in dev mode. directory? **Type** `Input` **Default** Uses the `path` Change the directory from where the `command` is run. title? **Type** `Input` The title of the tab in the multiplexer. url? **Type** `Input` **Default** `"http://url-unavailable-in-dev.mode"` The `url` when this is running in dev mode. Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`. ### domain? **Type** `Input` - [`aliases?`](#domain-aliases) - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name`](#domain-name) - [`redirects?`](#domain-redirects) Set a custom domain for your React app. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers. ::: By default this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Specify a `www.` version of the custom domain. ```js { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` aliases? **Type** `Input` Alias domains that should be used. Unlike the `redirect` option, this keeps your visitors on this alias domain. So if your users visit `app2.domain.com`, they will stay on `app2.domain.com` in their browser. ```js {4} { domain: { name: "app1.domain.com", aliases: ["app2.domain.com"] } } ``` cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. 
The certificate will be created in the `us-east-1` region as required by AWS CloudFront. If you are creating your own certificate, you must also create it in `us-east-1`. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the CloudFront distribution URL. ```js { domain: { name: "domain.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to AWS Route 53. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare, needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel, needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` redirects?
**Type** `Input` Alternate domains to be used. Visitors to the alternate domains will be redirected to the main `name`. :::note Unlike the `aliases` option, this will redirect visitors back to the main `name`. ::: Use this to create a `www.` version of your domain and redirect visitors to the apex domain. ```js {4} { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` ### edge? **Type** `Input` - [`viewerRequest?`](#edge-viewerrequest) `Input` - [`injection`](#edge-viewerrequest-injection) - [`kvStore?`](#edge-viewerrequest-kvstore) - [`viewerResponse?`](#edge-viewerresponse) `Input` - [`injection`](#edge-viewerresponse-injection) - [`kvStore?`](#edge-viewerresponse-kvstore) Configure CloudFront Functions to customize the behavior of HTTP requests and responses at the edge. viewerRequest? **Type** `Input` Configure the viewer request function. The viewer request function can be used to modify incoming requests before they reach your origin server. For example, you can redirect users, rewrite URLs, or add headers. injection **Type** `Input` The code to inject into the viewer request function. By default, a viewer request function is created to: - Disable the CloudFront default URL if a custom domain is set - Add the `x-forwarded-host` header - Route asset requests to S3 (static files stored in the bucket) - Route server requests to server functions (dynamic rendering) The function manages routing by: 1. First checking if the requested path exists in S3 (with variations like adding `index.html`) 2. Serving a custom 404 page from S3 if configured and the path isn't found 3. Routing image optimization requests to the image optimizer function 4. Routing all other requests to the nearest server function The given code will be injected at the beginning of this function. ```js async function handler(event) { // User injected code // Default behavior code return event.request; } ``` To add a custom header to all requests.
```js { edge: { viewerRequest: { injection: `event.request.headers["x-foo"] = { value: "bar" };` } } } ``` You can use this to add basic auth, [check out an example](/docs/examples/#aws-nextjs-basic-auth). kvStore? **Type** `Input` The KV store to associate with the viewer request function. ```js { edge: { viewerRequest: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` viewerResponse? **Type** `Input` Configure the viewer response function. The viewer response function can be used to modify outgoing responses before they are sent to the client. For example, you can add security headers or change the response status code. By default, no viewer response function is set. A new function will be created with the provided code. injection **Type** `Input` The code to inject into the viewer response function. ```js async function handler(event) { // User injected code return event.response; } ``` To add a custom header to all responses. ```js { edge: { viewerResponse: { injection: `event.response.headers["x-foo"] = { value: "bar" };` } } } ``` kvStore? **Type** `Input` The KV store to associate with the viewer response function. ```js { edge: { viewerResponse: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` ### environment? **Type** `Input<Record<string, Input<string>>>` Set [environment variables](https://vitejs.dev/guide/env-and-mode) in your React app. These are made available: 1. In `react-router build`, they are loaded into `process.env`. 2. Locally while running `react-router dev` through `sst dev`. :::tip You can also `link` resources to your React app and access them in a type-safe way with the [SDK](/docs/reference/sdk/). We recommend linking since it's more secure. ::: ```js { environment: { API_URL: api.url, STRIPE_PUBLISHABLE_KEY: "pk_test_123" } } ``` ### invalidation?
**Type** `Input` - [`paths?`](#invalidation-paths) - [`wait?`](#invalidation-wait) **Default** `{paths: "all", wait: false}` Configure how the CloudFront cache invalidations are handled. This is run after your React app has been deployed. :::tip You get 1000 free invalidations per month. After that you pay $0.005 per invalidation path. [Read more here](https://aws.amazon.com/cloudfront/pricing/). ::: Wait for all paths to be invalidated. ```js { invalidation: { paths: "all", wait: true } } ``` paths? **Type** `Input` **Default** `"all"` The paths to invalidate. You can either pass in an array of glob patterns to invalidate specific files. Or you can use one of these built-in options: - `all`: All files will be invalidated when any file changes - `versioned`: Only versioned files will be invalidated when versioned files change :::note Each glob pattern counts as a single invalidation. However, invalidating `/*` also counts as just a single invalidation. ::: Invalidate the `index.html` and all files under the `products/` route. ```js { invalidation: { paths: ["/index.html", "/products/*"] } } ``` This counts as two invalidations. wait? **Type** `Input` **Default** `false` Configure if `sst deploy` should wait for the CloudFront cache invalidation to finish. :::tip For non-prod environments it might make sense to pass in `false`. ::: Waiting for this process to finish ensures that new content will be available after the deploy finishes. However, this process can sometimes take more than 5 minutes. ```js { invalidation: { wait: true } } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your React app. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access it in your site using the [SDK](/docs/reference/sdk/). Takes a list of resources to link to the function. ```js { link: [bucket, stripeKey] } ``` ### path? **Type** `string` **Default** `"."` Path to the directory where your React app is located.
This path is relative to your `sst.config.ts`. By default it assumes your React app is in the root of your SST app. If your React app is in a package in your monorepo. ```js { path: "packages/web" } ``` ### permissions? **Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that the [server function](#nodes-server) in your React app needs to access. These permissions are used to create the function's IAM role. :::tip If you `link` the function to a resource, the permissions to access it are automatically added. ::: Allow reading and writing to an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Perform all actions on an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:*"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Grant permissions to access all resources. ```js { permissions: [ { actions: ["*"], resources: ["*"] }, ] } ``` actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. ```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. 
values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```ts { effect: "deny" } ``` resources **Type** `Input[]>` The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html). ```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### protection? **Type** `Input<"none" | "oac" | "oac-with-edge-signing" | Object>` - [`edgeFunction?`](#protection-edgefunction) `Object` - [`arn?`](#protection-edgefunction-arn) - [`memory?`](#protection-edgefunction-memory) - [`timeout?`](#protection-edgefunction-timeout) - [`mode`](#protection-mode) **Default** `"none"` The available options are: - `"none"`: Lambda URLs are publicly accessible. - `"oac"`: Lambda URLs protected by CloudFront Origin Access Control. Requires manual `x-amz-content-sha256` header for POST requests. Use when you control all POST requests. - `"oac-with-edge-signing"`: Full protection with automatic header signing via Lambda@Edge. Works with external webhooks and callbacks. Higher cost and latency but works out of the box. :::note When using `"oac-with-edge-signing"`, request bodies are limited to 1MB due to Lambda@Edge payload limits. For file uploads larger than 1MB, consider using presigned S3 URLs or the `"oac"` mode with manual header signing.
::: :::note When removing a stage that uses `"oac-with-edge-signing"`, deletion may take 5-10 minutes while AWS removes the Lambda@Edge replicated functions from all edge locations. ::: Configure Lambda function protection through CloudFront. ```js // No protection (default) { protection: "none" } ``` ```js // OAC protection, manual header signing required { protection: "oac" } ``` ```js // Full protection with automatic Lambda@Edge { protection: "oac-with-edge-signing" } ``` ```js // Custom Lambda@Edge configuration { protection: { mode: "oac-with-edge-signing", edgeFunction: { memory: "256 MB", timeout: "10 seconds" } } } ``` ```js // Use existing Lambda@Edge function { protection: { mode: "oac-with-edge-signing", edgeFunction: { arn: "arn:aws:lambda:us-east-1:123456789012:function:my-signing-function:1" } } } ``` edgeFunction? **Type** `Object` arn? **Type** `Input<"arn:aws:lambda:${string}">` Custom Lambda@Edge function ARN to use for request signing. If provided, this function will be used instead of creating a new one. Must be a qualified ARN (with version) and deployed in us-east-1. memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"128 MB"` Memory size for the auto-created Lambda@Edge function. Only used when `arn` is not provided. timeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` Timeout for the auto-created Lambda@Edge function. Only used when `arn` is not provided. mode **Type** `"oac-with-edge-signing"` ### regions? **Type** `Input` **Default** The default region of the SST app Regions that the server function will be deployed to. By default, the server function is deployed to a single region, which is the default region of your SST app. :::note This does not use Lambda@Edge; it deploys multiple Lambda functions instead. ::: To deploy it to multiple regions, you can pass in a list of regions. And any requests made will be routed to the nearest region based on the user's location.
```js { regions: ["us-east-1", "eu-west-1"] } ``` ### router? **Type** `Object` - [`connectionAttempts?`](#router-connectionattempts) - [`connectionTimeout?`](#router-connectiontimeout) - [`domain?`](#router-domain) - [`instance`](#router-instance) - [`keepAliveTimeout?`](#router-keepalivetimeout) - [`path?`](#router-path) - [`readTimeout?`](#router-readtimeout) - [`rewrite?`](#router-rewrite) `Input` - [`regex`](#router-rewrite-regex) - [`to`](#router-rewrite-to) Serve your React app through a `Router` instead of a standalone CloudFront distribution. By default, this component creates a new CloudFront distribution. But you might want to serve it through the distribution of your `Router` as: - A path like `/docs` - A subdomain like `docs.example.com` - Or a combined pattern like `dev.example.com/docs` To serve your React app **from a path**, you'll need to configure the root domain in your `Router` component. ```ts title="sst.config.ts" {2} const router = new sst.aws.Router("Router", { domain: "example.com" }); ``` Now set the `router` and the `path`. ```ts {3,4} { router: { instance: router, path: "/docs" } } ``` You also need to set the `base` property in your `vite.config.ts`. :::caution If routing to a path, you need to set that as the base path in your `vite.config.ts` and `react-router.config.ts` as well. ::: ```js title="vite.config.ts" {3} export default defineConfig({ plugins: [tailwindcss(), reactRouter(), tsconfigPaths()], base: "/docs/" }); ``` And the `basename` in your React Router configuration. ```jsx title="react-router.config.ts" {2} export default { basename: "/docs" }; ``` To serve your React app **from a subdomain**, you'll need to configure the domain in your `Router` component to match both the root and the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` Now set the `domain` in the `router` prop.
```ts {4} { router: { instance: router, domain: "docs.example.com" } } ``` Finally, to serve your React app **from a combined pattern** like `dev.example.com/docs`, you'll need to configure the domain in your `Router` to match the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` And set the `domain` and the `path`. ```ts {4,5} { router: { instance: router, domain: "dev.example.com", path: "/docs" } } ``` Also, make sure to set the base path in your `vite.config.ts` and `basename` in your `react-router.config.ts`, like above. connectionAttempts? **Type** `Input` **Default** `3` The number of times that CloudFront attempts to connect to the origin. Must be between 1 and 3. ```js router: { instance: router, connectionAttempts: 1 } ``` connectionTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"10 seconds"` The number of seconds that CloudFront waits before timing out and closing the connection to the origin. Must be between 1 and 10 seconds. ```js router: { instance: router, connectionTimeout: "3 seconds" } ``` domain? **Type** `Input` Route requests matching a specific domain pattern. You can serve your resource from a subdomain. For example, if you want to make it available at `https://dev.example.com`, set the `Router` to match the domain or a wildcard. ```ts {2} title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "*.example.com" }); ``` Then set the domain pattern. ```ts {3} router: { instance: router, domain: "dev.example.com" } ``` While `dev.example.com` matches `*.example.com`, something like `docs.dev.example.com` will not. :::tip Nested wildcard domain patterns are not supported. ::: You'll need to add `*.dev.example.com` as an alias. instance **Type** `Input<`[`Router`](/docs/component/aws/router)`>` The `Router` component to use for routing requests.
Let's say you have a `Router` component. ```ts title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "example.com" }); ``` You can attach your React app to it, instead of creating a standalone CloudFront distribution. ```ts router: { instance: router } ``` keepAliveTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The number of seconds that CloudFront should try to maintain the connection to the destination after receiving the last packet of the response. Must be between 1 and 60 seconds. ```js router: { instance: router, keepAliveTimeout: "10 seconds" } ``` path? **Type** `Input` **Default** `"/"` Route requests matching a specific path prefix. ```ts {3} router: { instance: router, path: "/docs" } ``` readTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The number of seconds that CloudFront waits for a response after routing a request to the destination. Must be between 1 and 60 seconds. When compared to the `connectionTimeout`, this is the total time for the request. ```js router: { instance: router, readTimeout: "60 seconds" } ``` rewrite? **Type** `Input` Rewrite the request path. If the route path is `/api/*` and a request comes in for `/api/users/profile`, the request path the destination sees is `/api/users/profile`. If you want to serve the route from the root, you can rewrite the request path to `/users/profile`. ```js router: { instance: router, path: "/api", rewrite: { regex: "^/api/(.*)$", to: "/$1" } } ``` regex **Type** `Input` The regex to match the request path. to **Type** `Input` The replacement for the matched path. ### server?
**Type** `Object` - [`architecture?`](#server-architecture) - [`install?`](#server-install) - [`layers?`](#server-layers) - [`loader?`](#server-loader) - [`memory?`](#server-memory) - [`runtime?`](#server-runtime) - [`timeout?`](#server-timeout) **Default** `{architecture: "x86_64", memory: "1024 MB"}` Configure the Lambda function used for the server. architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The [architecture](https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html) of the server function. ```js { server: { architecture: "arm64" } } ``` install? **Type** `Input` Dependencies that need to be excluded from the server function package. Certain npm packages cannot be bundled using esbuild. This allows you to exclude them from the bundle. Instead they'll be moved into a `node_modules/` directory in the function package. :::tip If esbuild is giving you an error about a package, try adding it to the `install` list. ::: This allows your functions to use these dependencies when deployed; they just won't be tree shaken. However, you still need to have them in your `package.json`. :::caution Packages listed here still need to be in your `package.json`. ::: Esbuild will ignore them while traversing the imports in your code. So these are the **package names as seen in the imports**. It also works on packages that are not directly imported by your code. ```js { server: { install: ["sharp"] } } ``` layers? **Type** `Input[]>` A list of Lambda layer ARNs to add to the server function. ```js { server: { layers: ["arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1"] } } ``` loader? **Type** `Input>` Configure additional esbuild loaders for other file extensions. This is useful when your code is importing non-JS files like `.png`, `.css`, etc. ```js { server: { loader: { ".png": "file" } } } ``` memory?
**Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"1024 MB"` The amount of memory allocated to the server function. Takes values between 128 MB and 10240 MB in 1 MB increments. ```js { server: { memory: "2048 MB" } } ``` runtime? **Type** `Input<"nodejs18.x" | "nodejs20.x" | "nodejs22.x" | "nodejs24.x">` **Default** `"nodejs24.x"` The runtime environment for the server function. ```js { server: { runtime: "nodejs24.x" } } ``` timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The maximum amount of time the server function can run. While Lambda supports timeouts up to 900 seconds, your requests are served through AWS CloudFront, which has a default limit of 60 seconds. :::tip If you need a timeout longer than 60 seconds, you'll need to request a limit increase. ::: You can increase this to 180 seconds for your account by contacting AWS Support and [requesting a limit increase](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). ```js { server: { timeout: "50 seconds" } } ``` If you need a timeout longer than what CloudFront supports, we recommend using a separate Lambda `Function` with the `url` enabled instead. ### transform? **Type** `Object` - [`assets?`](#transform-assets) - [`cdn?`](#transform-cdn) - [`imageOptimizer?`](#transform-imageoptimizer) - [`server?`](#transform-server) [Transform](/docs/components#transform) how this component creates its underlying resources. assets? **Type** [`BucketArgs`](/docs/component/aws/bucket#bucketargs)` | (args: `[`BucketArgs`](/docs/component/aws/bucket#bucketargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Bucket resource used for uploading the assets. cdn?
**Type** [`CdnArgs`](/docs/component/aws/cdn#cdnargs)` | (args: `[`CdnArgs`](/docs/component/aws/cdn#cdnargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudFront CDN resource. imageOptimizer? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the image optimizer Function resource. server? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the server Function resource. ### vpc? **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`privateSubnets`](#vpc-privatesubnets) - [`securityGroups`](#vpc-securitygroups) Configure the server function to connect to private subnets in a virtual private cloud or VPC. This allows it to access private resources. Create a `Vpc` component. ```js title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` Or reference an existing VPC. ```js title="sst.config.ts" const myVpc = sst.aws.Vpc.get("MyVpc", "vpc-12345678901234567"); ``` And pass it in. ```js { vpc: myVpc } ``` privateSubnets **Type** `Input[]>` A list of VPC subnet IDs. securityGroups **Type** `Input[]>` A list of VPC security group IDs. ### warm? **Type** `Input` **Default** `0` The number of instances of the [server function](#nodes-server) to keep warm. This is useful for cases where you are experiencing long cold starts. The default is to not keep any instances warm. This works by starting a serverless cron job to make _n_ concurrent requests to the server function every few minutes. Where _n_ is the number of instances to keep warm. 
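For example, to keep one instance of the server function warm:

```js
{
  warm: 1
}
```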
## Properties ### nodes **Type** `Object` - [`assets`](#nodes-assets) - [`cdn`](#nodes-cdn) - [`server`](#nodes-server) The underlying [resources](/docs/components/#nodes) this component creates. assets **Type** `undefined | `[`Bucket`](/docs/component/aws/bucket) The Amazon S3 Bucket that stores the assets. cdn **Type** `undefined | `[`Cdn`](/docs/component/aws/cdn) The Amazon CloudFront CDN that serves the site. server **Type** `undefined | Output<`[`Function`](/docs/component/aws/function)`>` The AWS Lambda server function that renders the site. ### url **Type** `Output` The URL of the React app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `string` The URL of the React app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. --- ## RealtimeLambdaSubscriber Reference doc for the `sst.aws.RealtimeLambdaSubscriber` component. https://sst.dev/docs/component/aws/realtime-lambda-subscriber The `RealtimeLambdaSubscriber` component is internally used by the `Realtime` component to add subscriptions to the [AWS IoT endpoint](https://docs.aws.amazon.com/iot/latest/developerguide/what-is-aws-iot.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `subscribe` method of the `Realtime` component. --- ## Constructor ```ts new RealtimeLambdaSubscriber(name, args, opts?) 
``` #### Parameters - `name` `string` - `args` [`Args`](#args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## Properties ### nodes **Type** `Object` - [`permission`](#nodes-permission) - [`rule`](#nodes-rule) - [`function`](#nodes-function) The underlying [resources](/docs/components/#nodes) this component creates. permission **Type** [`Permission`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/permission/) The Lambda permission. rule **Type** [`TopicRule`](https://www.pulumi.com/registry/packages/aws/api-docs/iot/topicrule/) The IoT Topic rule. function **Type** `Output<`[`Function`](/docs/component/aws/function)`>` The Lambda function that'll be notified. ## Args ### filter **Type** `Input` Filter the topics that'll be processed by the subscriber. :::tip Learn more about [topic filters](https://docs.aws.amazon.com/iot/latest/developerguide/topics.html#topicfilters). ::: Subscribe to a specific topic. ```js { filter: `${$app.name}/${$app.stage}/chat/room1` } ``` Subscribe to all topics under a prefix. ```js { filter: `${$app.name}/${$app.stage}/chat/#` } ``` ### iot **Type** `Input` - [`name`](#iot-name) The IoT WebSocket server to use. name **Type** `Input` The name of the Realtime component. ### subscriber **Type** `Input` The subscriber function. ### transform? **Type** `Object` - [`topicRule?`](#transform-topicrule) [Transform](/docs/components#transform) how this subscription creates its underlying resources. topicRule? **Type** [`TopicRuleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iot/topicrule/#inputs)` | (args: `[`TopicRuleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iot/topicrule/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the IoT Topic rule resource. --- ## Realtime Reference doc for the `sst.aws.Realtime` component. 
https://sst.dev/docs/component/aws/realtime The `Realtime` component lets you publish and subscribe to messages in realtime. It offers a **topic-based** messaging network using [AWS IoT](https://docs.aws.amazon.com/iot/latest/developerguide/what-is-aws-iot.html). Letting you publish and subscribe to messages using a WebSocket in the browser and your server. It also provides an [SDK](#sdk) to authorize clients, grant permissions to subscribe, and publish to topics. :::note IoT is shared across all apps and stages in your AWS account. So you need to prefix the topics by the app and stage name. ::: There is **only 1 IoT endpoint** per region per AWS account. Messages from all apps and stages are published to the same IoT endpoint. #### Create a realtime endpoint ```ts title="sst.config.ts" const server = new sst.aws.Realtime("MyServer", { authorizer: "src/authorizer.handler" }); ``` #### Authorize the client ```ts title="src/authorizer.ts" export const handler = realtime.authorizer(async (token) => { // Validate the token // Return the topics to subscribe and publish return { subscribe: [`${Resource.App.name}/${Resource.App.stage}/chat/room1`], publish: [`${Resource.App.name}/${Resource.App.stage}/chat/room1`], }; }); ``` #### Publish and receive messages in your frontend ```ts title="app/page.tsx" const client = new mqtt.MqttClient(); // Configure with // - Resource.Realtime.endpoint // - Resource.Realtime.authorizer const connection = client.new_connection(config); // Subscribe to messages connection.on("message", (topic, payload) => { // Handle the message }); // Publish messages connection.publish(topic, payload, mqtt.QoS.AtLeastOnce); ``` #### Subscribe to messages in your backend ```ts title="sst.config.ts" server.subscribe("src/subscriber.handler", { filter: `${$app.name}/${$app.stage}/chat/room1` }); ``` #### Publish messages from your backend ```ts title="src/lambda.ts" import { IoTDataPlaneClient, PublishCommand } from "@aws-sdk/client-iot-data-plane"; const data = new IoTDataPlaneClient(); await data.send( new PublishCommand(
payload: Buffer.from( JSON.stringify({ message: "Hello world" }) ), topic: `${Resource.App.name}/${Resource.App.stage}/chat/room1`, }) ); ``` --- ## Constructor ```ts new Realtime(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`RealtimeArgs`](#realtimeargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## RealtimeArgs ### authorizer **Type** `Input` The Lambda function that'll be used to authorize the client on connection. ```js { authorizer: "src/authorizer.handler" } ``` ### transform? **Type** `Object` - [`authorizer?`](#transform-authorizer) [Transform](/docs/components#transform) how this component creates its underlying resources. authorizer? **Type** [`AuthorizerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iot/authorizer/#inputs)` | (args: `[`AuthorizerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iot/authorizer/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the IoT authorizer resource. ## Properties ### authorizer **Type** `Output` The name of the IoT authorizer. ### endpoint **Type** `Output` The IoT endpoint. ### nodes **Type** `Object` - [`authHandler`](#nodes-authhandler) - [`authorizer`](#nodes-authorizer) The underlying [resources](/docs/components/#nodes) this component creates. authHandler **Type** `Output<`[`Function`](/docs/component/aws/function)`>` The IoT authorizer function resource. authorizer **Type** [`Authorizer`](https://www.pulumi.com/registry/packages/aws/api-docs/iot/authorizer/) The IoT authorizer resource. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `authorizer` `string` The name of the IoT authorizer. - `endpoint` `string` The IoT endpoint. The `realtime` client SDK is available through the following.
```js title="src/authorizer.ts"
import { realtime } from "sst/aws/realtime";
```

--- ### authorizer ```ts realtime.authorizer(input) ``` #### Parameters - `input` (token: `string`) => `Promise<`[`AuthResult`](#authresult)`>` **Returns** [`IoTCustomAuthorizerHandler`](https://github.com/DefinitelyTyped/DefinitelyTyped/blob/master/types/aws-lambda/trigger/iot-authorizer.d.ts) Creates an authorization handler for the `Realtime` component. It validates the token and grants permissions for the topics the client can subscribe and publish to.

```js title="src/authorizer.ts"
import { realtime } from "sst/aws/realtime";

export const handler = realtime.authorizer(async (token) => {
  // Validate the token
  console.log(token);

  // Return the topics to subscribe and publish
  return {
    subscribe: ["*"],
    publish: ["*"],
  };
});
```

### AuthResult **Type** `Object` - [`disconnectAfterInSeconds?`](#authresult-disconnectafterinseconds) - [`policyDocuments?`](#authresult-policydocuments) - [`principalId?`](#authresult-principalid) - [`publish?`](#authresult-publish) - [`refreshAfterInSeconds?`](#authresult-refreshafterinseconds) - [`subscribe?`](#authresult-subscribe) disconnectAfterInSeconds? **Type** `number` **Default** `86400` The maximum duration in seconds of the connection to IoT Core. :::note This is set when the connection is established and cannot be modified during subsequent policy refresh authorization handler invocations. ::: The minimum value is 300 seconds, and the maximum is 86400 seconds. policyDocuments? **Type** [`PolicyDocument`](https://github.com/DefinitelyTyped/DefinitelyTyped/blob/master/types/aws-lambda/trigger/api-gateway-authorizer.d.ts)`[]` Any additional [IoT Core policy documents](https://docs.aws.amazon.com/iot/latest/developerguide/iot-policies.html) to attach to the client. There's a maximum of 10 policy documents, and each document can contain a maximum of 2048 characters. ```js { policyDocuments: [ { Version: "2012-10-17", Statement: [ { Action: "iot:Publish", Effect: "Allow", Resource: "*" } ] } ] } ``` principalId? **Type** `string` The principal ID of the authorized client.
This could be a user ID, username, or phone number. The value must be an alphanumeric string with at least one, and no more than 128, characters and match the regex pattern, `([a-zA-Z0-9]){1,128}`. publish? **Type** `string[]` The topics the client can publish to. For example, this publishes to two specific topics. ```js { publish: ["chat/room1", "chat/room2"] } ``` And to publish to all topics under a given prefix. ```js { publish: ["chat/*"] } ``` refreshAfterInSeconds? **Type** `number` The duration in seconds between policy refreshes. After the given duration, IoT Core will invoke the authorization handler function. The minimum value is 300 seconds, and the maximum value is 86400 seconds. subscribe? **Type** `string[]` The topics the client can subscribe to. For example, this subscribes to two specific topics. ```js { subscribe: ["chat/room1", "chat/room2"] } ``` And to subscribe to all topics under a given prefix. ```js { subscribe: ["chat/*"] } ``` ## Methods ### subscribe ```ts subscribe(subscriber, args) ``` #### Parameters - `subscriber` `Input` The function that'll be notified. - `args` [`RealtimeSubscriberArgs`](#realtimesubscriberargs) Configure the subscription. **Returns** `Output<`[`RealtimeLambdaSubscriber`](/docs/component/aws/realtime-lambda-subscriber)`>` Subscribe to this Realtime server. ```js title="sst.config.ts" server.subscribe("src/subscriber.handler", { filter: `${$app.name}/${$app.stage}/chat/room1` }); ``` Customize the subscriber function. ```js title="sst.config.ts" server.subscribe( { handler: "src/subscriber.handler", timeout: "60 seconds" }, { filter: `${$app.name}/${$app.stage}/chat/room1` } ); ``` Or pass in the ARN of an existing Lambda function. ```js title="sst.config.ts" server.subscribe("arn:aws:lambda:us-east-1:123456789012:function:my-function", { filter: `${$app.name}/${$app.stage}/chat/room1` }); ``` ## RealtimeSubscriberArgs ### filter **Type** `Input` Filter the topics that'll be processed by the subscriber. 
:::tip Learn more about [topic filters](https://docs.aws.amazon.com/iot/latest/developerguide/topics.html#topicfilters). ::: Subscribe to a specific topic. ```js { filter: `${$app.name}/${$app.stage}/chat/room1` } ``` Subscribe to all topics under a prefix. ```js { filter: `${$app.name}/${$app.stage}/chat/#` } ``` ### transform? **Type** `Object` - [`topicRule?`](#transform-topicrule) [Transform](/docs/components#transform) how this subscription creates its underlying resources. topicRule? **Type** [`TopicRuleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iot/topicrule/#inputs)` | (args: `[`TopicRuleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iot/topicrule/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the IoT topic rule resource. --- ## Redis.v1 Reference doc for the `sst.aws.Redis.v1` component. https://sst.dev/docs/component/aws/redis-v1 The `Redis` component lets you add a Redis cluster to your app using [Amazon ElastiCache](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html). For existing usage, rename `sst.aws.Redis` to `sst.aws.Redis.v1`. For new Redis clusters, use the latest [`Redis`](/docs/component/aws/redis) component instead. :::caution This component has been deprecated. ::: What changed: - In this version, the Redis/Valkey cluster uses the default parameter group, which cannot be customized. - In the new version, the cluster now creates a custom parameter group. This allows you to customize the parameters via the `transform` prop. #### Create the cluster ```js title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const redis = new sst.aws.Redis.v1("MyRedis", { vpc }); ``` #### Link to a resource You can link your cluster to other resources, like a function or your Next.js app. ```ts title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { link: [redis], vpc }); ``` Once linked, you can connect to it from your function code.
```ts title="app/page.tsx" {1,6,7,12,13}
import { Resource } from "sst";
import { Cluster } from "ioredis";

const client = new Cluster(
  [{
    host: Resource.MyRedis.host,
    port: Resource.MyRedis.port
  }],
  {
    redisOptions: {
      tls: { checkServerIdentity: () => undefined },
      username: Resource.MyRedis.username,
      password: Resource.MyRedis.password
    }
  }
);
```

#### Running locally

By default, your Redis cluster is deployed in `sst dev`. But let's say you are running Redis locally.

```bash
docker run \
  --rm \
  -p 6379:6379 \
  -v $(pwd)/.sst/storage/redis:/data \
  redis:latest
```

You can connect to it in `sst dev` by configuring the `dev` prop.

```ts title="sst.config.ts" {3-6}
const redis = new sst.aws.Redis.v1("MyRedis", {
  vpc,
  dev: {
    host: "localhost",
    port: 6379
  }
});
```

This will skip deploying a Redis ElastiCache cluster and link to the locally running Redis server instead. [Check out the full example](/docs/examples/#aws-redis-local). --- ### Cost By default this component uses _On-demand nodes_ with a single `cache.t4g.micro` instance. The default `redis` engine costs $0.016 per hour. That works out to $0.016 x 24 x 30 or **$12 per month**. If the `valkey` engine is used, the cost is $0.0128 per hour. That works out to $0.0128 x 24 x 30 or **$9 per month**. Adjust this for the `instance` type and number of `nodes` you are using. The above are rough estimates for _us-east-1_; check out the [ElastiCache pricing](https://aws.amazon.com/elasticache/pricing/) for more details. --- ## Constructor ```ts new Redis.v1(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`RedisArgs`](#redisargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## RedisArgs ### dev? **Type** `Object` - [`host?`](#dev-host) - [`password?`](#dev-password) - [`port?`](#dev-port) - [`username?`](#dev-username) Configure how this component works in `sst dev`. By default, your Redis cluster is deployed in `sst dev`. But if you want to instead connect to a locally running Redis server, you can configure the `dev` prop.
:::note By default, this creates a new Redis ElastiCache cluster even in `sst dev`. ::: This will skip deploying a Redis ElastiCache cluster and link to the locally running Redis server instead. Setting the `dev` prop also means that any linked resources will connect to the right Redis instance both in `sst dev` and `sst deploy`. ```ts { dev: { host: "localhost", port: 6379 } } ``` host? **Type** `Input` **Default** `"localhost"` The host of the local Redis server to connect to when running in dev. password? **Type** `Input` **Default** No password The password of the local Redis server to connect to when running in dev. port? **Type** `Input` **Default** `6379` The port of the local Redis server when running in dev. username? **Type** `Input` **Default** `"default"` The username of the local Redis server to connect to when running in dev. ### engine? **Type** `Input<"redis" | "valkey">` **Default** `"redis"` The Redis engine to use. The following engines are supported: - `"redis"`: The open-source version of Redis. - `"valkey"`: [Valkey](https://valkey.io/) is a Redis-compatible in-memory key-value store. ### instance? **Type** `Input` **Default** `"t4g.micro"` The type of instance to use for the nodes of the Redis cluster. Check out the [supported instance types](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/CacheNodes.SupportedTypes.html). :::caution Changing the instance type will cause the instance to restart on the next `sst deploy`, causing downtime. Learn more about [upgrading databases](/docs/upgrade-aws-databases/). ::: ```js { instance: "m7g.xlarge" } ``` ### nodes? **Type** `Input` **Default** `1` The number of nodes to use for the Redis cluster. ```js { nodes: 4 } ``` ### transform? **Type** `Object` - [`cluster?`](#transform-cluster) - [`subnetGroup?`](#transform-subnetgroup) [Transform](/docs/components#transform) how this component creates its underlying resources. cluster? 
**Type** [`ReplicationGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/elasticache/replicationgroup/#inputs)` | (args: `[`ReplicationGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/elasticache/replicationgroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Redis cluster. subnetGroup? **Type** [`SubnetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/elasticache/subnetgroup/#inputs)` | (args: `[`SubnetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/elasticache/subnetgroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Redis subnet group. ### version? **Type** `Input` **Default** `"7.1"` for Redis, `"7.2"` for Valkey The version of Redis. The default is `"7.1"` for the `"redis"` engine and `"7.2"` for the `"valkey"` engine. Check out the [supported versions](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/supported-engine-versions.html). :::caution Changing the version will cause the instance to restart on the next `sst deploy`, causing downtime. Learn more about [upgrading databases](/docs/upgrade-aws-databases/). ::: ```js { version: "6.2" } ``` ### vpc **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`securityGroups`](#vpc-securitygroups) - [`subnets`](#vpc-subnets) The VPC to use for the Redis cluster. Create a VPC component. ```js const myVpc = new sst.aws.Vpc("MyVpc"); ``` And pass it in. ```js { vpc: myVpc } ``` Or pass in a custom VPC configuration. ```js { vpc: { subnets: ["subnet-0db7376a7ad4db5fd", "subnet-06fc7ee8319b2c0ce"], securityGroups: ["sg-0399348378a4c256c"] } } ``` securityGroups **Type** `Input<Input<string>[]>` A list of VPC security group IDs. subnets **Type** `Input<Input<string>[]>` A list of subnet IDs in the VPC to deploy the Redis cluster in.
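Putting the arguments above together, a larger deprecated-version cluster might look like this. This is an illustrative sketch for `sst.config.ts`; the component and resource names are placeholders.

```typescript
// sst.config.ts — sketch combining the RedisArgs above (names are illustrative)
const vpc = new sst.aws.Vpc("MyVpc");

const redis = new sst.aws.Redis.v1("MyRedis", {
  vpc,
  engine: "valkey",       // Redis-compatible engine at a lower hourly rate
  instance: "m7g.xlarge", // larger node type than the default t4g.micro
  nodes: 2,               // two nodes instead of the default 1
  version: "7.2"
});
```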
## Properties ### clusterID **Type** `Output` The ID of the Redis cluster. ### host **Type** `Output` The host to connect to the Redis cluster. ### nodes **Type** `Object` - [`cluster`](#nodes-cluster) The underlying [resources](/docs/components/#nodes) this component creates. cluster **Type** [`ReplicationGroup`](https://www.pulumi.com/registry/packages/aws/api-docs/elasticache/replicationgroup/) The ElastiCache Redis cluster. ### password **Type** `undefined | Output` The password to connect to the Redis cluster. ### port **Type** `Output` The port to connect to the Redis cluster. ### username **Type** `Output` The username to connect to the Redis cluster. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `host` `string` The host to connect to the Redis cluster. - `password` `undefined | string` The password to connect to the Redis cluster. - `port` `number` The port to connect to the Redis cluster. - `username` `string` The username to connect to the Redis cluster. ## Methods ### static get ```ts Redis.get(name, clusterID, opts?) ``` #### Parameters - `name` `string` The name of the component. - `clusterID` `Input` The id of the existing Redis cluster. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Redis`](.) Reference an existing Redis cluster with the given cluster name. This is useful when you create a Redis cluster in one stage and want to share it in another. It avoids having to create a new Redis cluster in the other stage. :::tip You can use the `static get` method to share Redis clusters across stages. ::: Imagine you create a cluster in the `dev` stage. And in your personal stage `frank`, instead of creating a new cluster, you want to share the same cluster from `dev`. ```ts title="sst.config.ts" const redis = $app.stage === "frank" ? 
sst.aws.Redis.v1.get("MyRedis", "app-dev-myredis") : new sst.aws.Redis.v1("MyRedis"); ``` Here `app-dev-myredis` is the ID of the cluster created in the `dev` stage. You can find this by outputting the cluster ID in the `dev` stage. ```ts title="sst.config.ts" return { cluster: redis.clusterID }; ``` --- ## Redis Reference doc for the `sst.aws.Redis` component. https://sst.dev/docs/component/aws/redis The `Redis` component lets you add a Redis cluster to your app using [Amazon ElastiCache](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html). #### Create the cluster ```js title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const redis = new sst.aws.Redis("MyRedis", { vpc }); ``` #### Link to a resource You can link your cluster to other resources, like a function or your Next.js app. ```ts title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { link: [redis], vpc }); ``` Once linked, you can connect to it from your function code. ```ts title="app/page.tsx" {1,6,7,12,13} const client = new Cluster( [{ host: Resource.MyRedis.host, port: Resource.MyRedis.port }], { redisOptions: { tls: { checkServerIdentity: () => undefined }, username: Resource.MyRedis.username, password: Resource.MyRedis.password } } ); ``` #### Running locally By default, your Redis cluster is deployed in `sst dev`. But let's say you are running Redis locally. ```bash docker run \ --rm \ -p 6379:6379 \ -v $(pwd)/.sst/storage/redis:/data \ redis:latest ``` You can connect to it in `sst dev` by configuring the `dev` prop. ```ts title="sst.config.ts" {3-6} const redis = new sst.aws.Redis("MyRedis", { vpc, dev: { host: "localhost", port: 6379 } }); ``` This will skip deploying a Redis ElastiCache cluster and link to the locally running Redis server instead. [Check out the full example](/docs/examples/#aws-redis-local). --- ### Cost By default this component uses _On-demand nodes_ with a single `cache.t4g.micro` instance. The default `redis` engine costs $0.016 per hour. 
That works out to $0.016 x 24 x 30 or **$12 per month**. If the `valkey` engine is used, the cost is $0.0128 per hour. That works out to $0.0128 x 24 x 30 or **$9 per month**. Adjust this for the `instance` type and number of `nodes` you are using. The above are rough estimates for _us-east-1_, check out the [ElastiCache pricing](https://aws.amazon.com/elasticache/pricing/) for more details. --- ## Constructor ```ts new Redis(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`RedisArgs`](#redisargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## RedisArgs ### cluster? **Type** `Input` - [`nodes`](#cluster-nodes) **Default** `{ nodes: 1 }` Configure cluster mode for Redis. Disable cluster mode. ```js { cluster: false } ``` nodes **Type** `Input` **Default** `1` The number of nodes to use for the Redis cluster. ```js { nodes: 4 } ``` ### dev? **Type** `Object` - [`host?`](#dev-host) - [`password?`](#dev-password) - [`port?`](#dev-port) - [`username?`](#dev-username) Configure how this component works in `sst dev`. By default, your Redis instance is deployed in `sst dev`. But if you want to instead connect to a locally running Redis server, you can configure the `dev` prop. :::note By default, this creates a new Redis ElastiCache instance even in `sst dev`. ::: This will skip deploying a Redis ElastiCache instance and link to the locally running Redis server instead. Setting the `dev` prop also means that any linked resources will connect to the right Redis instance both in `sst dev` and `sst deploy`. ```ts { dev: { host: "localhost", port: 6379 } } ``` host? **Type** `Input` **Default** `"localhost"` The host of the local Redis server to connect to when running in dev. password? **Type** `Input` **Default** No password The password of the local Redis server to connect to when running in dev. port? **Type** `Input` **Default** `6379` The port of the local Redis server when running in dev. username? 
**Type** `Input` **Default** `"default"` The username of the local Redis server to connect to when running in dev. ### engine? **Type** `Input<"redis" | "valkey">` **Default** `"redis"` The Redis engine to use. The following engines are supported: - `"redis"`: The open-source version of Redis. - `"valkey"`: [Valkey](https://valkey.io/) is a Redis-compatible in-memory key-value store. :::danger Changing the engine will cause the database to be destroyed and recreated. ::: ### instance? **Type** `Input` **Default** `"t4g.micro"` The type of instance to use for the nodes of the Redis instance. Check out the [supported instance types](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/CacheNodes.SupportedTypes.html). :::caution Changing the instance type will cause the instance to restart on the next `sst deploy`, causing downtime. Learn more about [upgrading databases](/docs/upgrade-aws-databases/). ::: ```js { instance: "m7g.xlarge" } ``` ### parameters? **Type** `Input<Record<string, Input<string>>>` Key-value pairs that define custom parameters for the Redis parameter group. These values override the defaults set by AWS. ```js { parameters: { "maxmemory-policy": "noeviction" } } ``` ### transform? **Type** `Object` - [`cluster?`](#transform-cluster) - [`parameterGroup?`](#transform-parametergroup) - [`subnetGroup?`](#transform-subnetgroup) [Transform](/docs/components#transform) how this component creates its underlying resources. cluster? **Type** [`ReplicationGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/elasticache/replicationgroup/#inputs)` | (args: `[`ReplicationGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/elasticache/replicationgroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Redis cluster. parameterGroup?
**Type** [`ParameterGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/elasticache/parametergroup/#inputs)` | (args: `[`ParameterGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/elasticache/parametergroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Redis parameter group. subnetGroup? **Type** [`SubnetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/elasticache/subnetgroup/#inputs)` | (args: `[`SubnetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/elasticache/subnetgroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Redis subnet group. ### version? **Type** `Input` **Default** `"7.1"` for Redis, `"7.2"` for Valkey The version of Redis. The default is `"7.1"` for the `"redis"` engine and `"7.2"` for the `"valkey"` engine. Check out the [supported versions](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/supported-engine-versions.html). :::caution Changing the version will cause the instance to restart on the next `sst deploy`, causing downtime. Learn more about [upgrading databases](/docs/upgrade-aws-databases/). ::: ```js { version: "6.2" } ``` ### vpc **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`securityGroups`](#vpc-securitygroups) - [`subnets`](#vpc-subnets) The VPC to use for the Redis instance. Create a VPC component. ```js const myVpc = new sst.aws.Vpc("MyVpc"); ``` And pass it in. ```js { vpc: myVpc } ``` Or pass in a custom VPC configuration. ```js { vpc: { subnets: ["subnet-0db7376a7ad4db5fd", "subnet-06fc7ee8319b2c0ce"], securityGroups: ["sg-0399348378a4c256c"] } } ``` securityGroups **Type** `Input<Input<string>[]>` A list of VPC security group IDs. subnets **Type** `Input<Input<string>[]>` A list of subnet IDs in the VPC to deploy the Redis instance in.
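Several of the arguments above can be combined, for example, disabling cluster mode and overriding a parameter group setting. This is an illustrative sketch for `sst.config.ts`; the component names are placeholders.

```typescript
// sst.config.ts — sketch combining the RedisArgs above (names are illustrative)
const vpc = new sst.aws.Vpc("MyVpc");

const redis = new sst.aws.Redis("MyRedis", {
  vpc,
  cluster: false, // run a single, non-clustered node
  parameters: {
    "maxmemory-policy": "noeviction" // override the AWS default
  }
});
```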
## Properties ### clusterId **Type** `Output` The ID of the Redis cluster. ### host **Type** `Output` The host to connect to the Redis cluster. ### nodes **Type** `Object` - [`cluster`](#nodes-cluster) The underlying [resources](/docs/components/#nodes) this component creates. cluster **Type** `Output<`[`ReplicationGroup`](https://www.pulumi.com/registry/packages/aws/api-docs/elasticache/replicationgroup/)`>` The ElastiCache Redis cluster. ### password **Type** `undefined | Output` The password to connect to the Redis cluster. ### port **Type** `Output` The port to connect to the Redis cluster. ### username **Type** `Output` The username to connect to the Redis cluster. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `host` `string` The host to connect to the Redis cluster. - `password` `undefined | string` The password to connect to the Redis cluster. - `port` `number` The port to connect to the Redis cluster. - `username` `string` The username to connect to the Redis cluster. ## Methods ### static get ```ts Redis.get(name, clusterId, opts?) ``` #### Parameters - `name` `string` The name of the component. - `clusterId` `Input` The id of the existing Redis cluster. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Redis`](.) Reference an existing Redis cluster with the given cluster name. This is useful when you create a Redis cluster in one stage and want to share it in another. It avoids having to create a new Redis cluster in the other stage. :::tip You can use the `static get` method to share Redis clusters across stages. ::: Imagine you create a cluster in the `dev` stage. And in your personal stage `frank`, instead of creating a new cluster, you want to share the same cluster from `dev`. ```ts title="sst.config.ts" const redis = $app.stage === "frank" ? 
sst.aws.Redis.get("MyRedis", "app-dev-myredis") : new sst.aws.Redis("MyRedis"); ``` Here `app-dev-myredis` is the ID of the cluster created in the `dev` stage. You can find this by outputting the cluster ID in the `dev` stage. ```ts title="sst.config.ts" return { cluster: redis.clusterId }; ``` --- ## Remix Reference doc for the `sst.aws.Remix` component. https://sst.dev/docs/component/aws/remix The `Remix` component lets you deploy a [Remix](https://remix.run) app to AWS. #### Minimal example Deploy a Remix app that's in the project root. ```js title="sst.config.ts" new sst.aws.Remix("MyWeb"); ``` #### Change the path Deploys the Remix app in the `my-remix-app/` directory. ```js {2} title="sst.config.ts" new sst.aws.Remix("MyWeb", { path: "my-remix-app/" }); ``` #### Add a custom domain Set a custom domain for your Remix app. ```js {2} title="sst.config.ts" new sst.aws.Remix("MyWeb", { domain: "my-app.com" }); ``` #### Redirect www to apex domain Redirect `www.my-app.com` to `my-app.com`. ```js {4} title="sst.config.ts" new sst.aws.Remix("MyWeb", { domain: { name: "my-app.com", redirects: ["www.my-app.com"] } }); ``` #### Link resources [Link resources](/docs/linking/) to your Remix app. This will grant permissions to the resources and allow you to access them in your app. ```ts {4} title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.Remix("MyWeb", { link: [bucket] }); ``` You can use the [SDK](/docs/reference/sdk/) to access the linked resources in your Remix app. ```ts title="app/root.tsx" console.log(Resource.MyBucket.name); ``` --- ## Constructor ```ts new Remix(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`RemixArgs`](#remixargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## RemixArgs ### assets?
**Type** `Input` - [`fileOptions?`](#assets-fileoptions) `Input` - [`cacheControl?`](#assets-fileoptions-cachecontrol) - [`contentType?`](#assets-fileoptions-contenttype) - [`files`](#assets-fileoptions-files) - [`ignore?`](#assets-fileoptions-ignore) - [`nonVersionedFilesCacheHeader?`](#assets-nonversionedfilescacheheader) - [`purge?`](#assets-purge) - [`textEncoding?`](#assets-textencoding) - [`versionedFilesCacheHeader?`](#assets-versionedfilescacheheader) Configure how the Remix app assets are uploaded to S3. By default, this is set to the following. Read more about these options below. ```js { assets: { textEncoding: "utf-8", versionedFilesCacheHeader: "public,max-age=31536000,immutable", nonVersionedFilesCacheHeader: "public,max-age=0,s-maxage=86400,stale-while-revalidate=8640" } } ``` fileOptions? **Type** `Input` Specify the `Content-Type` and `Cache-Control` headers for specific files. This allows you to override the default behavior for specific files using glob patterns. Apply `Cache-Control` and `Content-Type` to all zip files. ```js { assets: { fileOptions: [ { files: "**/*.zip", contentType: "application/zip", cacheControl: "private,no-cache,no-store,must-revalidate" } ] } } ``` Apply `Cache-Control` to all CSS and JS files except for CSS files with `index-` prefix in the `main/` directory. ```js { assets: { fileOptions: [ { files: ["**/*.css", "**/*.js"], ignore: "main/index-*.css", cacheControl: "private,no-cache,no-store,must-revalidate" } ] } } ``` cacheControl? **Type** `string` The `Cache-Control` header to apply to the matched files. contentType? **Type** `string` The `Content-Type` header to apply to the matched files. files **Type** `string | string[]` A glob pattern or array of glob patterns of files to apply these options to. ignore? **Type** `string | string[]` A glob pattern or array of glob patterns of files to exclude from the ones matched by the `files` glob pattern. nonVersionedFilesCacheHeader? 
**Type** `Input` **Default** `"public,max-age=0,s-maxage=86400,stale-while-revalidate=8640"` The `Cache-Control` header used for non-versioned files, like `index.html`. This is used by both CloudFront and the browser cache. The default is set to not cache on browsers, and cache for 1 day on CloudFront. ```js { assets: { nonVersionedFilesCacheHeader: "public,max-age=0,no-cache" } } ``` purge? **Type** `Input` **Default** `false` Configure if files from previous deployments should be purged from the bucket. ```js { assets: { purge: false } } ``` textEncoding? **Type** `Input<"utf-8" | "iso-8859-1" | "windows-1252" | "ascii" | "none">` **Default** `"utf-8"` Character encoding for text-based assets, like HTML, CSS, JS. This is used to set the `Content-Type` header when these files are served out. If set to `"none"`, then no charset will be returned in the header. ```js { assets: { textEncoding: "iso-8859-1" } } ``` versionedFilesCacheHeader? **Type** `Input` **Default** `"public,max-age=31536000,immutable"` The `Cache-Control` header used for versioned files, like `main-1234.css`. This is used by both CloudFront and the browser cache. The default `max-age` is set to 1 year. ```js { assets: { versionedFilesCacheHeader: "public,max-age=31536000,immutable" } } ``` ### buildCommand? **Type** `Input` **Default** `"npm run build"` The command used internally to build your Remix app. If you want to use a different build command. ```js { buildCommand: "yarn build" } ``` ### buildDirectory? **Type** `Input` **Default** `"build"` The directory where the build output is located. This should match the value of `buildDirectory` in the Remix plugin section of your Vite config. ### cachePolicy? **Type** `Input` **Default** A new cache policy is created Configure the Remix app to use an existing CloudFront cache policy. :::note CloudFront has a limit of 20 cache policies per account, though you can request a limit increase. ::: By default, a new cache policy is created for it.
This allows you to reuse an existing policy instead of creating a new one. ```js { cachePolicy: "658327ea-f89d-4fab-a63d-7e88639e58f6" } ``` ### dev? **Type** `false | Object` - [`autostart?`](#dev-autostart) - [`command?`](#dev-command) - [`directory?`](#dev-directory) - [`title?`](#dev-title) - [`url?`](#dev-url) Configure how this component works in `sst dev`. :::note In `sst dev` your Remix app is run in dev mode; it's not deployed. ::: Instead of deploying your Remix app, this starts it in dev mode. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev). To disable dev mode, pass in `false`. autostart? **Type** `Input` **Default** `true` Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later. command? **Type** `Input` **Default** `"npm run dev"` The command that `sst dev` runs to start this in dev mode. directory? **Type** `Input` **Default** Uses the `path` Change the directory from where the `command` is run. title? **Type** `Input` The title of the tab in the multiplexer. url? **Type** `Input` **Default** `"http://url-unavailable-in-dev.mode"` The `url` when this is running in dev mode. Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`. ### domain? **Type** `Input` - [`aliases?`](#domain-aliases) - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name`](#domain-name) - [`redirects?`](#domain-redirects) Set a custom domain for your Remix app. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers. 
::: By default this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Specify a `www.` version of the custom domain. ```js { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` aliases? **Type** `Input` Alias domains that should be used. Unlike the `redirect` option, this keeps your visitors on this alias domain. So if your users visit `app2.domain.com`, they will stay on `app2.domain.com` in their browser. ```js {4} { domain: { name: "app1.domain.com", aliases: ["app2.domain.com"] } } ``` cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. The certificate will be created in the `us-east-1` region as required by AWS CloudFront. If you are creating your own certificate, you must also create it in `us-east-1`. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the CloudFront distribution URL. ```js { domain: { name: "domain.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to the AWS adapter. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing.
Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare, needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel, needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` redirects? **Type** `Input` Alternate domains to be used. Visitors to the alternate domains will be redirected to the main `name`. :::note Unlike the `aliases` option, this will redirect visitors back to the main `name`. ::: Use this to create a `www.` version of your domain and redirect visitors to the apex domain. ```js {4} { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` ### edge? **Type** `Input` - [`viewerRequest?`](#edge-viewerrequest) `Input` - [`injection`](#edge-viewerrequest-injection) - [`kvStore?`](#edge-viewerrequest-kvstore) - [`viewerResponse?`](#edge-viewerresponse) `Input` - [`injection`](#edge-viewerresponse-injection) - [`kvStore?`](#edge-viewerresponse-kvstore) Configure CloudFront Functions to customize the behavior of HTTP requests and responses at the edge. viewerRequest? **Type** `Input` Configure the viewer request function. The viewer request function can be used to modify incoming requests before they reach your origin server. For example, you can redirect users, rewrite URLs, or add headers. injection **Type** `Input` The code to inject into the viewer request function. 
By default, a viewer request function is created to: - Disable CloudFront default URL if custom domain is set - Add the `x-forwarded-host` header - Route assets requests to S3 (static files stored in the bucket) - Route server requests to server functions (dynamic rendering) The function manages routing by: 1. First checking if the requested path exists in S3 (with variations like adding index.html) 2. Serving a custom 404 page from S3 if configured and the path isn't found 3. Routing image optimization requests to the image optimizer function 4. Routing all other requests to the nearest server function The given code will be injected at the beginning of this function. ```js async function handler(event) { // User injected code // Default behavior code return event.request; } ``` To add a custom header to all requests. ```js { edge: { viewerRequest: { injection: `event.request.headers["x-foo"] = { value: "bar" };` } } } ``` You can use this to add basic auth, [check out an example](/docs/examples/#aws-nextjs-basic-auth). kvStore? **Type** `Input` The KV store to associate with the viewer request function. ```js { edge: { viewerRequest: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` viewerResponse? **Type** `Input` Configure the viewer response function. The viewer response function can be used to modify outgoing responses before they are sent to the client. For example, you can add security headers or change the response status code. By default, no viewer response function is set. A new function will be created with the provided code. injection **Type** `Input` The code to inject into the viewer response function. ```js async function handler(event) { // User injected code return event.response; } ``` To add a custom header to all responses. ```js { edge: { viewerResponse: { injection: `event.response.headers["x-foo"] = { value: "bar" };` } } } ``` kvStore? 
**Type** `Input` The KV store to associate with the viewer response function. ```js { edge: { viewerResponse: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` ### environment? **Type** `Input>>` Set [environment variables](https://remix.run/docs/en/main/guides/envvars) in your Remix app. These are made available: 1. In `remix build`, they are loaded into `process.env`. 2. Locally while running through `sst dev`. :::tip You can also `link` resources to your Remix app and access them in a type-safe way with the [SDK](/docs/reference/sdk/). We recommend linking since it's more secure. ::: ```js { environment: { API_URL: api.url, STRIPE_PUBLISHABLE_KEY: "pk_test_123" } } ``` ### invalidation? **Type** `Input` - [`paths?`](#invalidation-paths) - [`wait?`](#invalidation-wait) **Default** `{paths: "all", wait: false}` Configure how the CloudFront cache invalidations are handled. This is run after your Remix app has been deployed. :::tip You get 1000 free invalidations per month. After that you pay $0.005 per invalidation path. [Read more here](https://aws.amazon.com/cloudfront/pricing/). ::: Wait for all paths to be invalidated. ```js { invalidation: { paths: "all", wait: true } } ``` paths? **Type** `Input` **Default** `"all"` The paths to invalidate. You can either pass in an array of glob patterns to invalidate specific files. Or you can use one of these built-in options: - `all`: All files will be invalidated when any file changes - `versioned`: Only versioned files will be invalidated when versioned files change :::note Each glob pattern counts as a single invalidation. However, invalidating `/*` also counts as just one invalidation. ::: Invalidate the `index.html` and all files under the `products/` route. ```js { invalidation: { paths: ["/index.html", "/products/*"] } } ``` This counts as two invalidations. wait? **Type** `Input` **Default** `false` Configure if `sst deploy` should wait for the CloudFront cache invalidation to finish. 
:::tip For non-prod environments it might make sense to pass in `false`. ::: Waiting for this process to finish ensures that new content will be available after the deploy finishes. However, this process can sometimes take more than 5 mins. ```js { invalidation: { wait: true } } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your Remix app. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access it in your site using the [SDK](/docs/reference/sdk/). Takes a list of resources to link to the function. ```js { link: [bucket, stripeKey] } ``` ### path? **Type** `string` **Default** `"."` Path to the directory where your Remix app is located. This path is relative to your `sst.config.ts`. By default it assumes your Remix app is in the root of your SST app. If your Remix app is in a package in your monorepo. ```js { path: "packages/web" } ``` ### permissions? **Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that the [server function](#nodes-server) in your Remix app needs to access. These permissions are used to create the function's IAM role. :::tip If you `link` the function to a resource, the permissions to access it are automatically added. ::: Allow reading and writing to an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Perform all actions on an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:*"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Grant permissions to access all resources. 
```js { permissions: [ { actions: ["*"], resources: ["*"] }, ] } ``` actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. ```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```ts { effect: "deny" } ``` resources **Type** `Input[]>` The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html). ```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### protection? 
**Type** `Input<"none" | "oac" | "oac-with-edge-signing" | Object>` - [`edgeFunction?`](#protection-edgefunction) `Object` - [`arn?`](#protection-edgefunction-arn) - [`memory?`](#protection-edgefunction-memory) - [`timeout?`](#protection-edgefunction-timeout) - [`mode`](#protection-mode) **Default** `"none"` The available options are: - `"none"`: Lambda URLs are publicly accessible. - `"oac"`: Lambda URLs protected by CloudFront Origin Access Control. Requires manual `x-amz-content-sha256` header for POST requests. Use when you control all POST requests. - `"oac-with-edge-signing"`: Full protection with automatic header signing via Lambda@Edge. Works with external webhooks and callbacks. Higher cost and latency but works out of the box. :::note When using `"oac-with-edge-signing"`, request bodies are limited to 1MB due to Lambda@Edge payload limits. For file uploads larger than 1MB, consider using presigned S3 URLs or the `"oac"` mode with manual header signing. ::: :::note When removing a stage that uses `"oac-with-edge-signing"`, deletion may take 5-10 minutes while AWS removes the Lambda@Edge replicated functions from all edge locations. ::: Configure Lambda function protection through CloudFront. ```js // No protection (default) { protection: "none" } ``` ```js // OAC protection, manual header signing required { protection: "oac" } ``` ```js // Full protection with automatic Lambda@Edge { protection: "oac-with-edge-signing" } ``` ```js // Custom Lambda@Edge configuration { protection: { mode: "oac-with-edge-signing", edgeFunction: { memory: "256 MB", timeout: "10 seconds" } } } ``` ```js // Use existing Lambda@Edge function { protection: { mode: "oac-with-edge-signing", edgeFunction: { arn: "arn:aws:lambda:us-east-1:123456789012:function:my-signing-function:1" } } } ``` edgeFunction? **Type** `Object` arn? **Type** `Input<"arn:aws:lambda:${string}">` Custom Lambda@Edge function ARN to use for request signing. 
If provided, this function will be used instead of creating a new one. Must be a qualified ARN (with version) and deployed in us-east-1. memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"128 MB"` Memory size for the auto-created Lambda@Edge function. Only used when `arn` is not provided. timeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` Timeout for the auto-created Lambda@Edge function. Only used when `arn` is not provided. mode **Type** `"oac-with-edge-signing"` ### regions? **Type** `Input` **Default** The default region of the SST app Regions that the server function will be deployed to. By default, the server function is deployed to a single region: the default region of your SST app. :::note This does not use Lambda@Edge; it deploys multiple Lambda functions instead. ::: To deploy it to multiple regions, pass in a list of regions. Any requests made will be routed to the nearest region based on the user's location. ```js { regions: ["us-east-1", "eu-west-1"] } ``` ### router? **Type** `Object` - [`connectionAttempts?`](#router-connectionattempts) - [`connectionTimeout?`](#router-connectiontimeout) - [`domain?`](#router-domain) - [`instance`](#router-instance) - [`keepAliveTimeout?`](#router-keepalivetimeout) - [`path?`](#router-path) - [`readTimeout?`](#router-readtimeout) - [`rewrite?`](#router-rewrite) `Input` - [`regex`](#router-rewrite-regex) - [`to`](#router-rewrite-to) Serve your Remix app through a `Router` instead of a standalone CloudFront distribution. By default, this component creates a new CloudFront distribution. But you might want to serve it through the distribution of your `Router` as: - A path like `/docs` - A subdomain like `docs.example.com` - A combined pattern like `dev.example.com/docs` To serve your Remix app **from a path**, you'll need to configure the root domain in your `Router` component. 
```ts title="sst.config.ts" {2} const router = new sst.aws.Router("Router", { domain: "example.com" }); ``` Now set the `router` and the `path`. ```ts {3,4} { router: { instance: router, path: "/docs" } } ``` You also need to set the `base` in your `vite.config.ts`. :::caution If routing to a path, you need to set that as the base path in your Remix app as well. ::: ```js title="vite.config.ts" {3} export default defineConfig({ plugins: [...], base: "/docs" }); ``` To serve your Remix app **from a subdomain**, you'll need to configure the domain in your `Router` component to match both the root and the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` Now set the `domain` in the `router` prop. ```ts {4} { router: { instance: router, domain: "docs.example.com" } } ``` Finally, to serve your Remix app **from a combined pattern** like `dev.example.com/docs`, you'll need to configure the domain in your `Router` to match the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` And set the `domain` and the `path`. ```ts {4,5} { router: { instance: router, domain: "dev.example.com", path: "/docs" } } ``` Also, make sure to set this as the `base` in your `vite.config.ts`, like above. connectionAttempts? **Type** `Input` **Default** `3` The number of times that CloudFront attempts to connect to the origin. Must be between 1 and 3. ```js router: { instance: router, connectionAttempts: 1 } ``` connectionTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"10 seconds"` The number of seconds that CloudFront waits before timing out and closing the connection to the origin. Must be between 1 and 10 seconds. ```js router: { instance: router, connectionTimeout: "3 seconds" } ``` domain? **Type** `Input` Route requests matching a specific domain pattern. 
You can serve your resource from a subdomain. For example, if you want to make it available at `https://dev.example.com`, set the `Router` to match the domain or a wildcard. ```ts {2} title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "*.example.com" }); ``` Then set the domain pattern. ```ts {3} router: { instance: router, domain: "dev.example.com" } ``` While `dev.example.com` matches `*.example.com`, something like `docs.dev.example.com` will not. :::tip Nested wildcard domain patterns are not supported. ::: You'll need to add `*.dev.example.com` as an alias. instance **Type** `Input<`[`Router`](/docs/component/aws/router)`>` The `Router` component to use for routing requests. Let's say you have a Router component. ```ts title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "example.com" }); ``` You can attach it to the Router, instead of creating a standalone CloudFront distribution. ```ts router: { instance: router } ``` keepAliveTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The number of seconds that CloudFront should try to maintain the connection to the destination after receiving the last packet of the response. Must be between 1 and 60 seconds. ```js router: { instance: router, keepAliveTimeout: "10 seconds" } ``` path? **Type** `Input` **Default** `"/"` Route requests matching a specific path prefix. ```ts {3} router: { instance: router, path: "/docs" } ``` readTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The number of seconds that CloudFront waits for a response after routing a request to the destination. Must be between 1 and 60 seconds. When compared to the `connectionTimeout`, this is the total time for the request. ```js router: { instance: router, readTimeout: "60 seconds" } ``` rewrite? **Type** `Input` Rewrite the request path. 
If the route path is `/api/*` and a request comes in for `/api/users/profile`, the request path the destination sees is `/api/users/profile`. If you want to serve the route from the root, you can rewrite the request path to `/users/profile`. ```js router: { instance: router, path: "/api", rewrite: { regex: "^/api/(.*)$", to: "/$1" } } ``` regex **Type** `Input` The regex to match the request path. to **Type** `Input` The replacement for the matched path. ### server? **Type** `Object` - [`architecture?`](#server-architecture) - [`install?`](#server-install) - [`layers?`](#server-layers) - [`loader?`](#server-loader) - [`memory?`](#server-memory) - [`runtime?`](#server-runtime) - [`timeout?`](#server-timeout) **Default** `{architecture: "x86_64", memory: "1024 MB"}` Configure the Lambda function used for the server. architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The [architecture](https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html) of the server function. ```js { server: { architecture: "arm64" } } ``` install? **Type** `Input` Dependencies that need to be excluded from the server function package. Certain npm packages cannot be bundled using esbuild. This allows you to exclude them from the bundle. Instead they'll be moved into a `node_modules/` directory in the function package. :::tip If esbuild is giving you an error about a package, try adding it to the `install` list. ::: This allows your functions to use these dependencies when deployed; they just won't be tree-shaken. :::caution Packages listed here still need to be in your `package.json`. ::: Esbuild will ignore them while traversing the imports in your code. So these are the **package names as seen in the imports**. It also works on packages that are not directly imported by your code. ```js { server: { install: ["sharp"] } } ``` layers? 
**Type** `Input[]>` A list of Lambda layer ARNs to add to the server function. ```js { server: { layers: ["arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1"] } } ``` loader? **Type** `Input>` Configure additional esbuild loaders for other file extensions. This is useful when your code is importing non-JS files like `.png`, `.css`, etc. ```js { server: { loader: { ".png": "file" } } } ``` memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"1024 MB"` The amount of memory allocated to the server function. Takes values between 128 MB and 10240 MB in 1 MB increments. ```js { server: { memory: "2048 MB" } } ``` runtime? **Type** `Input<"nodejs18.x" | "nodejs20.x" | "nodejs22.x" | "nodejs24.x">` **Default** `"nodejs24.x"` The runtime environment for the server function. ```js { server: { runtime: "nodejs24.x" } } ``` timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The maximum amount of time the server function can run. While Lambda supports timeouts up to 900 seconds, your requests are served through AWS CloudFront, which has a default limit of 60 seconds. :::tip If you need a timeout longer than 60 seconds, you'll need to request a limit increase. ::: You can increase this to 180 seconds for your account by contacting AWS Support and [requesting a limit increase](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). ```js { server: { timeout: "50 seconds" } } ``` If you need a timeout longer than what CloudFront supports, we recommend using a separate Lambda `Function` with the `url` enabled instead. ### transform? **Type** `Object` - [`assets?`](#transform-assets) - [`cdn?`](#transform-cdn) - [`imageOptimizer?`](#transform-imageoptimizer) - [`server?`](#transform-server) [Transform](/docs/components#transform) how this component creates its underlying resources. assets? 
**Type** [`BucketArgs`](/docs/component/aws/bucket#bucketargs)` | (args: `[`BucketArgs`](/docs/component/aws/bucket#bucketargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Bucket resource used for uploading the assets. cdn? **Type** [`CdnArgs`](/docs/component/aws/cdn#cdnargs)` | (args: `[`CdnArgs`](/docs/component/aws/cdn#cdnargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudFront CDN resource. imageOptimizer? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the image optimizer Function resource. server? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the server Function resource. ### vpc? **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`privateSubnets`](#vpc-privatesubnets) - [`securityGroups`](#vpc-securitygroups) Configure the server function to connect to private subnets in a virtual private cloud or VPC. This allows it to access private resources. Create a `Vpc` component. ```js title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` Or reference an existing VPC. ```js title="sst.config.ts" const myVpc = sst.aws.Vpc.get("MyVpc", "vpc-12345678901234567"); ``` And pass it in. ```js { vpc: myVpc } ``` privateSubnets **Type** `Input[]>` A list of VPC subnet IDs. securityGroups **Type** `Input[]>` A list of VPC security group IDs. ### warm? **Type** `Input` **Default** `0` The number of instances of the [server function](#nodes-server) to keep warm. 
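For example, to keep one instance of the server function warm.

```js
{
  warm: 1
}
```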
This is useful for cases where you are experiencing long cold starts. The default is to not keep any instances warm. This works by starting a serverless cron job to make _n_ concurrent requests to the server function every few minutes, where _n_ is the number of instances to keep warm. ## Properties ### nodes **Type** `Object` - [`assets`](#nodes-assets) - [`cdn`](#nodes-cdn) - [`server`](#nodes-server) The underlying [resources](/docs/components/#nodes) this component creates. assets **Type** `undefined | `[`Bucket`](/docs/component/aws/bucket) The Amazon S3 Bucket that stores the assets. cdn **Type** `undefined | `[`Cdn`](/docs/component/aws/cdn) The Amazon CloudFront CDN that serves the site. server **Type** `undefined | Output<`[`Function`](/docs/component/aws/function)`>` The AWS Lambda server function that renders the site. ### url **Type** `Output` The URL of the Remix app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `string` The URL of the Remix app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. --- ## Router Reference doc for the `sst.aws.Router` component. https://sst.dev/docs/component/aws/router The `Router` component lets you use a CloudFront distribution to direct requests to various parts of your application like: - A URL - A function - A frontend - An S3 bucket #### Minimal example ```ts title="sst.config.ts" new sst.aws.Router("MyRouter"); ``` #### Add a custom domain ```ts {2} title="sst.config.ts" new sst.aws.Router("MyRouter", { domain: "myapp.com" }); ``` #### Sharing the router across stages ```ts title="sst.config.ts" const router = $app.stage === "production" ? 
new sst.aws.Router("MyRouter", { domain: { name: "example.com", aliases: ["*.example.com"] } }) : sst.aws.Router.get("MyRouter", "E1XWRGCYGTFB7Z"); ``` #### Route to a URL ```ts title="sst.config.ts" {3} const router = new sst.aws.Router("MyRouter"); router.route("/", "https://some-external-service.com"); ``` #### Route to an S3 bucket ```ts title="sst.config.ts" {2,6} const myBucket = new sst.aws.Bucket("MyBucket", { access: "cloudfront" }); const router = new sst.aws.Router("MyRouter"); router.routeBucket("/files", myBucket); ``` You need to allow CloudFront to access the bucket by setting the `access` prop on the bucket. #### Route to a function ```ts title="sst.config.ts" {8-11} const router = new sst.aws.Router("MyRouter", { domain: "example.com" }); const myFunction = new sst.aws.Function("MyFunction", { handler: "src/api.handler", url: { router: { instance: router, path: "/api" } } }); ``` Setting the route through the function, instead of `router.route()`, makes it so that `myFunction.url` gives you the URL based on the Router domain. #### Route to a frontend ```ts title="sst.config.ts" {4-6} const router = new sst.aws.Router("MyRouter"); const mySite = new sst.aws.Nextjs("MyWeb", { router: { instance: router } }); ``` Setting the route through the site, instead of `router.route()`, makes it so that `mySite.url` gives you the URL based on the Router domain. #### Route to a frontend on a path ```ts title="sst.config.ts" {4-7} const router = new sst.aws.Router("MyRouter"); new sst.aws.Nextjs("MyWeb", { router: { instance: router, path: "/docs" } }); ``` If you are routing to a path, you'll need to configure the base path in your frontend app as well. [Learn more](/docs/component/aws/nextjs/#router). 
#### Route to a frontend on a subdomain ```ts title="sst.config.ts" {4,9-12} const router = new sst.aws.Router("MyRouter", { domain: { name: "example.com", aliases: ["*.example.com"] } }); new sst.aws.Nextjs("MyWeb", { router: { instance: router, domain: "docs.example.com" } }); ``` We configure `*.example.com` as an alias so that we can route to a subdomain. #### How it works This uses a CloudFront KeyValueStore to store the routing data and a CloudFront function to route the request. As routes are added, the store is updated. So when a request comes in, it does a lookup in the store and dynamically sets the origin based on the routing data. For frontends that have their server functions deployed to multiple `regions`, it routes to the closest region based on the user's location. You might notice a _placeholder.sst.dev_ behavior in CloudFront. This is not used and is only there because CloudFront requires a default behavior. #### Limits There are some limits on this setup, but they are managed by SST. - The CloudFront function can be a maximum of 10KB in size. But because all the route data is stored in the KeyValueStore, the function can be kept small. - Each value in the KeyValueStore needs to be less than 1KB. This component splits the routes into multiple values to keep it under the limit. - The KeyValueStore can be a maximum of 5MB. This is fairly large. But to handle sites that have a lot of files, only top-level assets get individual entries. --- ## Constructor ```ts new Router(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`RouterArgs`](#routerargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## RouterArgs ### domain? **Type** `Input` - [`aliases?`](#domain-aliases) - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name`](#domain-name) - [`redirects?`](#domain-redirects) Set a custom domain for your Router. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. 
For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers. ::: By default this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Specify a `www.` version of the custom domain. ```js { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` aliases? **Type** `Input` Alias domains that should be used. Unlike the `redirects` option, this keeps your visitors on this alias domain. So if your users visit `app2.domain.com`, they will stay on `app2.domain.com` in their browser. ```js {4} { domain: { name: "app1.domain.com", aliases: ["app2.domain.com"] } } ``` cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. The certificate will be created in the `us-east-1` region as required by AWS CloudFront. If you are creating your own certificate, you must also create it in `us-east-1`. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the CloudFront distribution URL. ```js { domain: { name: "domain.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` dns? 
**Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to AWS Route 53. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare, needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel, needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` redirects? **Type** `Input` Alternate domains to be used. Visitors to the alternate domains will be redirected to the main `name`. :::note Unlike the `aliases` option, this will redirect visitors back to the main `name`. ::: Use this to create a `www.` version of your domain and redirect visitors to the apex domain. ```js {4} { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` ### edge? **Type** `Object` - [`viewerRequest?`](#edge-viewerrequest) `Input` - [`injection`](#edge-viewerrequest-injection) - [`kvStore?`](#edge-viewerrequest-kvstore) - [`viewerResponse?`](#edge-viewerresponse) `Input` - [`injection`](#edge-viewerresponse-injection) - [`kvStore?`](#edge-viewerresponse-kvstore) Configure CloudFront Functions to customize the behavior of HTTP requests and responses at the edge. viewerRequest? **Type** `Input` Configure the viewer request function. 
The viewer request function can be used to modify incoming requests before they reach your origin server. For example, you can redirect users, rewrite URLs, or add headers. injection **Type** `Input` The code to inject into the viewer request function. By default, a viewer request function is created to: - Disable CloudFront default URL if custom domain is set. - Add the `x-forwarded-host` header. - Route requests to the corresponding target based on the domain and request path. The given code will be injected at the beginning of this function. ```js async function handler(event) { // User injected code // Default behavior code return event.request; } ``` To add a custom header to all requests. ```js { edge: { viewerRequest: { injection: `event.request.headers["x-foo"] = { value: "bar" };` } } } ``` kvStore? **Type** `Input` The KeyValueStore to associate with the viewer request function. ```js { edge: { viewerRequest: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` viewerResponse? **Type** `Input` Configure the viewer response function. The viewer response function can be used to modify outgoing responses before they are sent to the client. For example, you can add security headers or change the response status code. By default, no viewer response function is set. A new function will be created with the provided code. injection **Type** `Input` The code to inject into the viewer response function. ```js async function handler(event) { // User injected code return event.response; } ``` To add a custom header to all responses. ```js { edge: { viewerResponse: { injection: `event.response.headers["x-foo"] = { value: "bar" };` } } } ``` kvStore? **Type** `Input` The KeyValueStore to associate with the viewer response function. ```js { edge: { viewerResponse: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` ### invalidation? 
**Type** `Input` - [`paths?`](#invalidation-paths) - [`token?`](#invalidation-token) - [`wait?`](#invalidation-wait) **Default** Invalidation is turned off Configure how the CloudFront cache invalidations are handled. :::tip You get 1000 free invalidations per month. After that you pay $0.005 per invalidation path. [Read more here](https://aws.amazon.com/cloudfront/pricing/). ::: Setting this to `true` will invalidate all paths. It's equivalent to passing in `{ paths: ["/*"] }`. ```js { invalidation: true } ``` paths? **Type** `Input[]>` **Default** `["/*"]` Specify an array of glob patterns of paths to invalidate. :::note Each glob pattern counts as a single invalidation. However, invalidating `/*` also counts as just a single invalidation. ::: Invalidate the `index.html` and all files under the `products/` route. ```js { invalidation: { paths: ["/index.html", "/products/*"] } } ``` This counts as two invalidations. token? **Type** `Input` **Default** A unique value is auto-generated on each deploy A token used to determine if the cache should be invalidated. If the token is the same as the previous deployment, the cache will not be invalidated. You can set this to a hash that's computed on every deploy. So if the hash changes, the cache will be invalidated. ```js { invalidation: { token: "foo123" } } ``` wait? **Type** `Input` **Default** `false` Configure if `sst deploy` should wait for the CloudFront cache invalidation to finish. :::tip For non-prod environments it might make sense to pass in `false`. ::: Waiting for this process to finish ensures that new content will be available after the deploy finishes. However, this process can sometimes take more than 5 minutes. ```js { invalidation: { wait: true } } ``` ### protection?
**Type** `Input<"none" | "oac" | "oac-with-edge-signing" | Object>` - [`edgeFunction?`](#protection-edgefunction) `Object` - [`arn?`](#protection-edgefunction-arn) - [`memory?`](#protection-edgefunction-memory) - [`timeout?`](#protection-edgefunction-timeout) - [`mode`](#protection-mode) **Default** `"none"` Configure Lambda function URL protection through CloudFront Origin Access Control. When set, all Functions and SSR sites routing through this Router automatically inherit the protection mode. The available options are: - `"none"`: Lambda URLs are publicly accessible. - `"oac"`: Lambda URLs protected by CloudFront Origin Access Control. Requires a manual `x-amz-content-sha256` header for POST requests. - `"oac-with-edge-signing"`: Full protection with automatic header signing via Lambda@Edge. Works with external webhooks and callbacks. Higher cost and latency but works out of the box. :::note Switching from `"none"` to `"oac"` or `"oac-with-edge-signing"` may cause brief 403 errors (~10-60s) during deployment while CloudFront edge nodes pick up the new signing configuration. For zero-disruption upgrades, set `protection` when first creating the Router. ::: :::note When using `"oac-with-edge-signing"`, request bodies are limited to 1MB due to Lambda@Edge payload limits. For file uploads larger than 1MB, consider using presigned S3 URLs or the `"oac"` mode with manual header signing. ::: :::note When removing a stage that uses `"oac-with-edge-signing"`, deletion may take 5-10 minutes while AWS removes the Lambda@Edge replicated functions from all edge locations. ::: ```js { protection: "oac" } ``` ```js { protection: "oac-with-edge-signing" } ``` ```js // Custom Lambda@Edge configuration { protection: { mode: "oac-with-edge-signing", edgeFunction: { memory: "256 MB", timeout: "10 seconds" } } } ``` edgeFunction? **Type** `Object` arn? **Type** `Input<"arn:aws:lambda:${string}">` Custom Lambda@Edge function ARN to use for request signing.
If provided, this function will be used instead of creating a new one. Must be a qualified ARN (with version) and deployed in us-east-1. memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"128 MB"` Memory size for the auto-created Lambda@Edge function. Only used when arn is not provided. timeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` Timeout for the auto-created Lambda@Edge function. Only used when arn is not provided. mode **Type** `"oac-with-edge-signing"` ### transform? **Type** `Object` - [`cachePolicy?`](#transform-cachepolicy) - [`cdn?`](#transform-cdn) - [`waf?`](#transform-waf) - [`wafLogGroup?`](#transform-wafloggroup) - [`wafLogging?`](#transform-waflogging) [Transform](/docs/components#transform) how this component creates its underlying resources. cachePolicy? **Type** [`CachePolicyArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudfront/cachepolicy/#inputs)` | (args: `[`CachePolicyArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudfront/cachepolicy/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Cache Policy that's attached to each CloudFront behavior. cdn? **Type** [`CdnArgs`](/docs/component/aws/cdn#cdnargs)` | (args: `[`CdnArgs`](/docs/component/aws/cdn#cdnargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudFront CDN resource. waf? **Type** [`WebAclArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/wafv2/webacl/#inputs)` | (args: `[`WebAclArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/wafv2/webacl/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the WAF WebACL resource. wafLogGroup? 
**Type** [`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)` | (args: `[`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudWatch LogGroup resource used for WAF logs. wafLogging? **Type** [`WebAclLoggingConfigurationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/wafv2/webaclloggingconfiguration/#inputs)` | (args: `[`WebAclLoggingConfigurationArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/wafv2/webaclloggingconfiguration/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the WAF WebACL logging configuration resource. ### waf? **Type** `Input` **Default** WAF is disabled Enable AWS WAF (Web Application Firewall) to protect your Router from common web exploits and bots. :::tip WAF provides protection against SQL injection, cross-site scripting (XSS), and other common attacks. ::: Enable with sensible defaults. ```js { waf: true } ``` Or customize the configuration. ```js { waf: { rateLimitPerIp: 1000, managedRules: { coreRuleSet: true, knownBadInputs: true, sqlInjection: true } } } ``` ## Properties ### distributionID **Type** `Output` The ID of the Router distribution. ### nodes **Type** `Object` - [`cdn`](#nodes-cdn) The underlying [resources](/docs/components/#nodes) this component creates. cdn **Type** `Output<`[`Cdn`](/docs/component/aws/cdn)`>` The Amazon CloudFront CDN resource. ### url **Type** `Output` The URL of the Router. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. 
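For example, if this Router is linked to another component, its `url` can be read at runtime through the SDK's `Resource` object. A sketch, where `MyRouter` stands in for whatever name you gave the component in your `sst.config.ts`:

```ts
import { Resource } from "sst";

// "MyRouter" is the hypothetical name given to the Router component.
// This resolves to the custom domain URL if one is set, otherwise
// the auto-generated CloudFront URL.
console.log(Resource.MyRouter.url);
```

This only works in a runtime that SST has injected the linked resources into, like a linked function or site.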
--- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `string` The URL of the Router. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. ## Methods ### route ```ts route(pattern, url, args?) ``` #### Parameters - `pattern` `Input` The pattern to match for this route. - `url` `Input` The destination URL to route matching requests to. - `args?` `Input<`[`RouterUrlRouteArgs`](#routerurlrouteargs)`>` Configure the route. **Returns** `void` Add a route to a destination URL. You can match a route based on: - A path prefix like `/api` - A domain pattern like `api.example.com` - A combined pattern like `dev.example.com/api` For example, to match a path prefix. ```ts title="sst.config.ts" router.route("/api", "https://api.example.com"); ``` Or match a domain. ```ts title="sst.config.ts" router.route("api.myapp.com/", "https://api.example.com"); ``` Or a combined pattern. ```ts title="sst.config.ts" router.route("dev.myapp.com/api", "https://api.example.com"); ``` You can also rewrite the request path. ```ts title="sst.config.ts" router.route("/api", "https://api.example.com", { rewrite: { regex: "^/api/(.*)$", to: "/$1" } }); ``` Here something like `/api/users/profile` will be routed to `https://api.example.com/users/profile`. ### routeBucket ```ts routeBucket(pattern, bucket, args?) ``` #### Parameters - `pattern` `Input` The pattern to match for this route. - `bucket` `Input<`[`Bucket`](/docs/component/aws/bucket)`>` The S3 bucket to route matching requests to. - `args?` `Input<`[`RouterBucketRouteArgs`](#routerbucketrouteargs)`>` Configure the route. **Returns** `void` Add a route to an S3 bucket. Let's say you have an S3 bucket that's configured to give CloudFront `access`.
```ts title="sst.config.ts" {2} const bucket = new sst.aws.Bucket("MyBucket", { access: "cloudfront" }); ``` You can match a pattern and route to it based on: - A path prefix like `/api` - A domain pattern like `api.example.com` - A combined pattern like `dev.example.com/api` For example, to match a path prefix. ```ts title="sst.config.ts" router.routeBucket("/files", bucket); ``` Or match a domain. ```ts title="sst.config.ts" router.routeBucket("files.example.com", bucket); ``` Or a combined pattern. ```ts title="sst.config.ts" router.routeBucket("dev.example.com/files", bucket); ``` You can also rewrite the request path. ```ts title="sst.config.ts" router.routeBucket("/files", bucket, { rewrite: { regex: "^/files/(.*)$", to: "/$1" } }); ``` Here something like `/files/logo.png` will be routed to `/logo.png`. ### static get ```ts Router.get(name, distributionID, opts?) ``` #### Parameters - `name` `string` The name of the component. - `distributionID` `Input` The ID of the existing Router distribution. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Router`](.) Reference an existing Router with the given Router distribution ID. Let's say you create a Router in the `dev` stage. And in your personal stage `frank`, you want to share the same Router. ```ts title="sst.config.ts" const router = $app.stage === "frank" ? sst.aws.Router.get("MyRouter", "E2IDLMESRN6V62") : new sst.aws.Router("MyRouter"); ``` Here `E2IDLMESRN6V62` is the ID of the Router distribution created in the `dev` stage. You can find this by outputting the distribution ID in the `dev` stage. ```ts title="sst.config.ts" return { router: router.distributionID }; ``` Learn more about [how to configure a router for your app](/docs/configure-a-router). ## RouterBucketRouteArgs ### connectionAttempts? **Type** `Input` **Default** 3 The number of times that CloudFront attempts to connect to the origin. Must be between 1 and 3. 
```js { connectionAttempts: 1 } ``` ### connectionTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"10 seconds"` The number of seconds that CloudFront waits before timing out and closing the connection to the origin. Must be between 1 and 10 seconds. ```js { connectionTimeout: "3 seconds" } ``` ### rewrite? **Type** `Input` - [`regex`](#rewrite-regex) - [`to`](#rewrite-to) Rewrite the request path. If the route path is `/files/*` and a request comes in for `/files/logo.png`, the request path the destination sees is `/files/logo.png`. If you want to serve the file from the root of the bucket, you can rewrite the request path to `/logo.png`. ```js { rewrite: { regex: "^/files/(.*)$", to: "/$1" } } ``` regex **Type** `Input` The regex to match the request path. to **Type** `Input` The replacement for the matched path. ## RouterUrlRouteArgs ### connectionAttempts? **Type** `Input` **Default** 3 The number of times that CloudFront attempts to connect to the origin. Must be between 1 and 3. ```js { connectionAttempts: 1 } ``` ### connectionTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"10 seconds"` The number of seconds that CloudFront waits before timing out and closing the connection to the origin. Must be between 1 and 10 seconds. ```js { connectionTimeout: "3 seconds" } ``` ### keepAliveTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The number of seconds that CloudFront should try to maintain the connection to the destination after receiving the last packet of the response. Must be between 1 and 60 seconds ```js { keepAliveTimeout: "10 seconds" } ``` ### readTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The number of seconds that CloudFront waits for a response after routing a request to the destination. Must be between 1 and 60 seconds. 
When compared to the `connectionTimeout`, this is the total time for the request. ```js { readTimeout: "60 seconds" } ``` ### rewrite? **Type** `Input` - [`regex`](#rewrite-regex-1) - [`to`](#rewrite-to-1) Rewrite the request path. If the route path is `/api/*` and a request comes in for `/api/users/profile`, the request path the destination sees is `/api/users/profile`. If you want to serve the route from the root, you can rewrite the request path to `/users/profile`. ```js { rewrite: { regex: "^/api/(.*)$", to: "/$1" } } ``` regex **Type** `Input` The regex to match the request path. to **Type** `Input` The replacement for the matched path. ## WafArgs ### logging? **Type** `Input` **Default** Logging is disabled Configure WAF logging to CloudWatch. When set to `true`, all WAF-evaluated requests are logged with a 1-month retention. Or pass in an object to customize what is logged, how long logs are retained, and which fields are redacted. :::tip WAF logging is off by default. Enabling it will incur additional [CloudWatch costs](https://aws.amazon.com/cloudwatch/pricing/) depending on log volume. ::: Enable with defaults. ```js { waf: { logging: true } } ``` Only log blocked requests. ```js { waf: { logging: { include: "blocked", retention: "3 months" } } } ``` Redact sensitive fields. ```js { waf: { logging: { redact: { queryString: true, headers: ["cookie"] } } } } ``` ### managedRules? **Type** `Input` - [`coreRuleSet?`](#managedrules-coreruleset) - [`knownBadInputs?`](#managedrules-knownbadinputs) - [`sqlInjection?`](#managedrules-sqlinjection) **Default** All managed rules enabled Configure which AWS managed rule groups to enable. ```js { waf: { managedRules: { coreRuleSet: true, knownBadInputs: true, sqlInjection: false } } } ``` coreRuleSet? **Type** `Input` **Default** `true` Enable the AWS Core Rule Set (CRS) which provides protection against common web vulnerabilities. knownBadInputs? 
**Type** `Input` **Default** `true` Enable protection against known bad inputs, including Log4j vulnerabilities. sqlInjection? **Type** `Input` **Default** `true` Enable SQL injection protection. ### rateLimitPerIp? **Type** `Input` **Default** `2000` The rate limit per IP address. Requests from an IP that exceed this limit within a 5-minute window will be blocked. ```js { waf: { rateLimitPerIp: 1000 } } ``` ## WafLoggingArgs ### include? **Type** `Input<"all" | "blocked">` **Default** `"all"` Filter which requests are logged. - `"all"` logs every request evaluated by the WAF. - `"blocked"` only logs requests that were blocked. ```js { waf: { logging: { include: "blocked" } } } ``` ### redact? **Type** `Input` - [`headers?`](#redact-headers) - [`method?`](#redact-method) - [`queryString?`](#redact-querystring) - [`uriPath?`](#redact-uripath) **Default** `{ queryString: true, headers: ["cookie", "authorization"] }` Configure which parts of the request are redacted from the logs. Redacted fields are replaced with `REDACTED` in the log output. By default, the query string and the `cookie` and `authorization` headers are redacted since they commonly contain PII or credentials. Set to `false` to disable all redaction. Disable all redaction. ```js { waf: { logging: { redact: false } } } ``` Redact everything. ```js { waf: { logging: { redact: { queryString: true, uriPath: true, method: true, headers: ["cookie", "authorization"] } } } } ``` headers? **Type** `Input` **Default** `["cookie", "authorization"]` A list of header names to redact from the logs. Must be lowercase. ```js { headers: ["cookie", "authorization", "x-api-key"] } ``` method? **Type** `Input` **Default** `false` Redact the HTTP method from the logs (GET, POST, etc.). queryString? **Type** `Input` **Default** `true` Redact the query string from the logs. The query string is the part of a URL after the `?` and can contain tokens, user IDs, or other sensitive parameters. uriPath? 
**Type** `Input` **Default** `false` Redact the URI path from the logs. The URI path identifies the resource being accessed, like `/users/123/profile`. ### retention? **Type** `Input<"1 day" | "3 days" | "5 days" | "1 week" | "2 weeks" | "1 month" | "2 months" | "3 months" | "4 months" | "5 months" | "6 months" | "1 year" | "13 months" | "18 months" | "2 years" | "3 years" | "5 years" | "6 years" | "7 years" | "8 years" | "9 years" | "10 years" | "forever">` **Default** `"1 month"` The duration the WAF logs are kept in CloudWatch. ```js { waf: { logging: { retention: "3 months" } } } ``` --- ## Service.v1 Reference doc for the `sst.aws.Service.v1` component. https://sst.dev/docs/component/aws/service-v1 The `Service` component is internally used by the `Cluster` component to deploy services to [Amazon ECS](https://aws.amazon.com/ecs/). It uses [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html). :::note This component is not meant to be created directly. ::: This component is returned by the `addService` method of the `Cluster` component. --- ## Constructor ```ts new Service.v1(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`ServiceArgs`](#serviceargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## ServiceArgs ### architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The CPU architecture of the container in this service. ```js { architecture: "arm64" } ``` ### cluster **Type** `Input` - [`arn`](#cluster-arn) - [`name`](#cluster-name) The cluster to use for the service. arn **Type** `Input` The ARN of the cluster. name **Type** `Input` The name of the cluster. ### cpu? **Type** `"0.25 vCPU" | "0.5 vCPU" | "1 vCPU" | "2 vCPU" | "4 vCPU" | "8 vCPU" | "16 vCPU"` **Default** `"0.25 vCPU"` The amount of CPU allocated to the container in this service. 
:::note [View the valid combinations](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-tasks-services.html#fargate-tasks-size) of CPU and memory. ::: ```js { cpu: "1 vCPU" } ``` ### dev? **Type** `Object` - [`autostart?`](#dev-autostart) - [`command?`](#dev-command) - [`directory?`](#dev-directory) - [`url?`](#dev-url) Configure how this component works in `sst dev`. :::note In `sst dev` your service is run locally; it's not deployed. ::: Instead of deploying your service, this starts it locally. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev). autostart? **Type** `Input` **Default** `true` Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later. command? **Type** `Input` The command that `sst dev` runs to start this in dev mode. This is the command you run when you want to run your service locally. directory? **Type** `Input` **Default** Uses the `image.dockerfile` path Change the directory from where the `command` is run. url? **Type** `Input` **Default** `"http://url-unavailable-in-dev.mode"` The `url` when this is running in dev mode. Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`. ### environment? **Type** `Input>>` Key-value pairs of values that are set as [container environment variables](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html). The keys need to: - Start with a letter - Be at least 2 characters long - Contain only letters, numbers, or underscores ```js { environment: { DEBUG: "true" } } ``` ### image? 
**Type** `Input` - [`args?`](#image-args) - [`context?`](#image-context) - [`dockerfile?`](#image-dockerfile) **Default** `{}` Configure the docker build command for building the image. Prior to building the image, SST will automatically add the `.sst` directory to the `.dockerignore` if not already present. ```js { image: { context: "./app", dockerfile: "Dockerfile", args: { MY_VAR: "value" } } } ``` args? **Type** `Input>>` Key-value pairs of [build args](https://docs.docker.com/build/guide/build-args/) to pass to the docker build command. ```js { args: { MY_VAR: "value" } } ``` context? **Type** `Input` **Default** `"."` The path to the [Docker build context](https://docs.docker.com/build/building/context/#local-context). The path is relative to your project's `sst.config.ts`. To change where the docker build context is located. ```js { context: "./app" } ``` dockerfile? **Type** `Input` **Default** `"Dockerfile"` The path to the [Dockerfile](https://docs.docker.com/reference/cli/docker/image/build/#file). The path is relative to the build `context`. To use a different Dockerfile. ```js { dockerfile: "Dockerfile.prod" } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your service. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access it in your app using the [SDK](/docs/reference/sdk/). Takes a list of components to link to the service. ```js { link: [bucket, stripeKey] } ``` ### logging? **Type** `Input` - [`retention?`](#logging-retention) **Default** `{ retention: "1 month" }` Configure the service's logs in CloudWatch. ```js { logging: { retention: "forever" } } ``` retention? 
**Type** `Input<"1 day" | "3 days" | "5 days" | "1 week" | "2 weeks" | "1 month" | "2 months" | "3 months" | "4 months" | "5 months" | "6 months" | "1 year" | "13 months" | "18 months" | "2 years" | "3 years" | "5 years" | "6 years" | "7 years" | "8 years" | "9 years" | "10 years" | "forever">` **Default** `"1 month"` The duration the logs are kept in CloudWatch. ### memory? **Type** `"$\{number\} GB"` **Default** `"0.5 GB"` The amount of memory allocated to the container in this service. :::note [View the valid combinations](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-tasks-services.html#fargate-tasks-size) of CPU and memory. ::: ```js { memory: "2 GB" } ``` ### permissions? **Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that the service needs to access. These permissions are used to create the service's [task role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html). :::tip If you `link` the service to a resource, the permissions to access it are automatically added. ::: Allow the service to read and write to an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Allow the service to perform all actions on an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:*"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Granting the service permissions to access all resources. 
```js { permissions: [ { actions: ["*"], resources: ["*"] }, ] } ``` actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. ```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```ts { effect: "deny" } ``` resources **Type** `Input[]>` The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html). ```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### public?
**Type** `Input` - [`domain?`](#public-domain) `Input` - [`cert?`](#public-domain-cert) - [`dns?`](#public-domain-dns) - [`name`](#public-domain-name) - [`ports`](#public-ports) `Input` - [`forward?`](#public-ports-forward) - [`listen`](#public-ports-listen) Configure a public endpoint for the service. When configured, a load balancer will be created to route traffic to the containers. By default, the endpoint is an auto-generated load balancer URL. You can also add a custom domain for the public endpoint. ```js { public: { domain: "example.com", ports: [ { listen: "80/http" }, { listen: "443/https", forward: "80/http" } ] } } ``` domain? **Type** `Input` Set a custom domain for your public endpoint. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers. ::: By default this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the load balancer endpoint. 
```js { domain: { name: "example.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to the AWS DNS provider. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare, needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel, needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` ports **Type** `Input` Configure the mapping for the ports the public endpoint listens to and forwards to the service. This supports two types of protocols: 1. Application Layer Protocols: `http` and `https`. This'll create an [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html). 2. Network Layer Protocols: `tcp`, `udp`, `tcp_udp`, and `tls`. This'll create a [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html). :::note If you are listening on `https` or `tls`, you need to specify a custom `public.domain`. ::: You can **not** configure both application and network layer protocols for the same service.
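As a rough mental model, not SST's actual implementation, the protocols in your `ports` list determine which kind of load balancer gets created, and mixing the two layers is rejected. A sketch in plain JavaScript:

```js
// Illustrative only: classify port mappings by protocol layer,
// mirroring the rule described above.
const APPLICATION = ["http", "https"];

function loadBalancerType(ports) {
  const layers = new Set(
    ports.map(({ listen }) => {
      const protocol = listen.split("/")[1];
      return APPLICATION.includes(protocol) ? "application" : "network";
    })
  );
  if (layers.size > 1) {
    throw new Error("Cannot mix application and network layer protocols");
  }
  return layers.has("application") ? "application" : "network";
}
```

So `[{ listen: "80/http" }, { listen: "443/https" }]` yields an Application Load Balancer, `[{ listen: "53/udp" }]` a Network Load Balancer, and combining the two would be an error.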
Here we are listening on port `80` and forwarding it to the service on port `8080`. ```js { public: { ports: [ { listen: "80/http", forward: "8080/http" } ] } } ``` The `forward` port and protocol default to the `listen` port and protocol. So in this case both are `80/http`. ```js { public: { ports: [ { listen: "80/http" } ] } } ``` forward? **Type** `Input<"$\{number\}/https" | "$\{number\}/http" | "$\{number\}/tcp" | "$\{number\}/udp" | "$\{number\}/tcp_udp" | "$\{number\}/tls">` **Default** The same port and protocol as `listen`. The port and protocol of the container the service forwards the traffic to. Uses the format `{port}/{protocol}`. listen **Type** `Input<"$\{number\}/https" | "$\{number\}/http" | "$\{number\}/tcp" | "$\{number\}/udp" | "$\{number\}/tcp_udp" | "$\{number\}/tls">` The port and protocol the service listens on. Uses the format `{port}/{protocol}`. ### scaling? **Type** `Input` - [`cpuUtilization?`](#scaling-cpuutilization) - [`max?`](#scaling-max) - [`memoryUtilization?`](#scaling-memoryutilization) - [`min?`](#scaling-min) **Default** `{ min: 1, max: 1 }` Configure the service to automatically scale up or down based on the CPU or memory utilization of a container. By default, scaling is disabled and the service will run in a single container. ```js { scaling: { min: 4, max: 16, cpuUtilization: 50, memoryUtilization: 50 } } ``` cpuUtilization? **Type** `Input` **Default** `70` The target CPU utilization percentage to scale up or down. It'll scale up when the CPU utilization is above the target and scale down when it's below the target. ```js { scaling: { cpuUtilization: 50 } } ``` max? **Type** `Input` **Default** `1` The maximum number of containers to scale up to. ```js { scaling: { max: 16 } } ``` memoryUtilization? **Type** `Input` **Default** `70` The target memory utilization percentage to scale up or down. It'll scale up when the memory utilization is above the target and scale down when it's below the target.
```js { scaling: { memoryUtilization: 50 } } ``` min? **Type** `Input` **Default** `1` The minimum number of containers to scale down to. ```js { scaling: { min: 4 } } ``` ### storage? **Type** `"$\{number\} GB"` **Default** `"21 GB"` The amount of ephemeral storage (in GB) allocated to a container in this service. ```js { storage: "100 GB" } ``` ### transform? **Type** `Object` - [`image?`](#transform-image) - [`listener?`](#transform-listener) - [`loadBalancer?`](#transform-loadbalancer) - [`loadBalancerSecurityGroup?`](#transform-loadbalancersecuritygroup) - [`logGroup?`](#transform-loggroup) - [`service?`](#transform-service) - [`target?`](#transform-target) - [`taskDefinition?`](#transform-taskdefinition) - [`taskRole?`](#transform-taskrole) [Transform](/docs/components#transform) how this component creates its underlying resources. image? **Type** [`ImageArgs`](https://www.pulumi.com/registry/packages/docker-build/api-docs/image/#inputs)` | (args: `[`ImageArgs`](https://www.pulumi.com/registry/packages/docker-build/api-docs/image/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Docker Image resource. listener? **Type** [`ListenerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/listener/#inputs)` | (args: `[`ListenerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/listener/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Load Balancer listener resource. loadBalancer? **Type** [`LoadBalancerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/loadbalancer/#inputs)` | (args: `[`LoadBalancerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/loadbalancer/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Load Balancer resource. loadBalancerSecurityGroup? 
**Type** [`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)` | (args: `[`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Security Group resource for the Load Balancer. logGroup? **Type** [`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)` | (args: `[`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudWatch log group resource. service? **Type** [`ServiceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/service/#inputs)` | (args: `[`ServiceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/service/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Service resource. target? **Type** [`TargetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/targetgroup/#inputs)` | (args: `[`TargetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/targetgroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Load Balancer target group resource. taskDefinition? **Type** [`TaskDefinitionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/taskdefinition/#inputs)` | (args: `[`TaskDefinitionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/taskdefinition/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Task Definition resource. taskRole? 
**Type** [`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)` | (args: `[`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Task IAM Role resource. ### vpc **Type** `Input` - [`id`](#vpc-id) - [`privateSubnets`](#vpc-privatesubnets) - [`publicSubnets`](#vpc-publicsubnets) - [`securityGroups`](#vpc-securitygroups) The VPC to use for the cluster. id **Type** `Input` The ID of the VPC. privateSubnets **Type** `Input[]>` A list of private subnet IDs in the VPC. The service will be placed in the private subnets. publicSubnets **Type** `Input[]>` A list of public subnet IDs in the VPC. If a service has public ports configured, its load balancer will be placed in the public subnets. securityGroups **Type** `Input[]>` A list of VPC security group IDs for the service. ## Properties ### nodes **Type** `Object` - [`loadBalancer`](#nodes-loadbalancer) - [`service`](#nodes-service) - [`taskDefinition`](#nodes-taskdefinition) - [`taskRole`](#nodes-taskrole) The underlying [resources](/docs/components/#nodes) this component creates. loadBalancer **Type** [`LoadBalancer`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/loadbalancer/) The Amazon Elastic Load Balancer. service **Type** [`Service`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/service/) The Amazon ECS Service. taskDefinition **Type** [`TaskDefinition`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/taskdefinition/) The Amazon ECS Task Definition. taskRole **Type** [`Role`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/) The Amazon ECS Task Role. ### url **Type** `Output` The URL of the service. If `public.domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated load balancer URL. 
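The `url` can also be surfaced as an output of your app by returning it from your config's `run` function. A sketch, assuming a service named `MyService` with a load balancer configured:

```ts title="sst.config.ts"
async run() {
  const vpc = new sst.aws.Vpc("MyVpc");
  const cluster = new sst.aws.Cluster("MyCluster", { vpc });
  const service = new sst.aws.Service("MyService", {
    cluster,
    loadBalancer: {
      rules: [{ listen: "80/http" }]
    }
  });

  // Surfaces the load balancer URL when the app is deployed
  return { url: service.url };
}
```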
## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `undefined | string` The URL of the service. If `public.domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated load balancer URL. --- ## Service Reference doc for the `sst.aws.Service` component. https://sst.dev/docs/component/aws/service The `Service` component lets you create containers that are always running, like web or application servers. It uses [Amazon ECS](https://aws.amazon.com/ecs/) on [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html). #### Create a Service Services are run inside an ECS Cluster. If you haven't already, create one. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); ``` Add the service to it. ```ts title="sst.config.ts" const service = new sst.aws.Service("MyService", { cluster }); ``` #### Configure the container image By default, the service will look for a Dockerfile in the root directory. Optionally configure the image context and dockerfile. ```ts title="sst.config.ts" new sst.aws.Service("MyService", { cluster, image: { context: "./app", dockerfile: "Dockerfile" } }); ``` To add multiple containers in the service, pass in an array of container args. ```ts title="sst.config.ts" new sst.aws.Service("MyService", { cluster, containers: [ { name: "app", image: "nginxdemos/hello:plain-text" }, { name: "admin", image: { context: "./admin", dockerfile: "Dockerfile" } } ] }); ``` This is useful for running sidecar containers.
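If you don't have a Dockerfile yet, a minimal sketch for a Node.js service might look like the following. The base image, file names, and port here are illustrative assumptions, not requirements of the component:

```dockerfile title="Dockerfile"
FROM node:20-alpine
WORKDIR /app

# Install dependencies first so this layer stays cached
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

# The port your service listens on
EXPOSE 3000
CMD ["node", "index.js"]
```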
#### Enable auto-scaling ```ts title="sst.config.ts" new sst.aws.Service("MyService", { cluster, scaling: { min: 4, max: 16, cpuUtilization: 50, memoryUtilization: 50 } }); ``` #### Expose through API Gateway You can give your service a public URL by exposing it through API Gateway HTTP API. You can also optionally give it a custom domain. ```ts title="sst.config.ts" const service = new sst.aws.Service("MyService", { cluster, serviceRegistry: { port: 80 } }); const api = new sst.aws.ApiGatewayV2("MyApi", { vpc, domain: "example.com" }); api.routePrivate("$default", service.nodes.cloudmapService.arn); ``` #### Add a load balancer You can also expose your service by adding a load balancer to it and optionally adding a custom domain. ```ts title="sst.config.ts" new sst.aws.Service("MyService", { cluster, loadBalancer: { domain: "example.com", rules: [ { listen: "80/http" }, { listen: "443/https", forward: "80/http" } ] } }); ``` #### Link resources [Link resources](/docs/linking/) to your service. This will grant permissions to the resources and allow you to access it in your app. ```ts {5} title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.Service("MyService", { cluster, link: [bucket] }); ``` You can use the [SDK](/docs/reference/sdk/) to access the linked resources in your service. ```ts title="app.ts" console.log(Resource.MyBucket.name); ``` #### Service discovery This component automatically creates a Cloud Map service host name for the service. So anything in the same VPC can access it using the service's host name. For example, if you link the service to a Lambda function that's in the same VPC. ```ts title="sst.config.ts" {2,4} new sst.aws.Function("MyFunction", { vpc, url: true, link: [service], handler: "lambda.handler" }); ``` You can access the service by its host name using the [SDK](/docs/reference/sdk/). 
```ts title="lambda.ts" await fetch(`http://${Resource.MyService.service}`); ``` [Check out an example](/docs/examples/#aws-cluster-service-discovery). --- ### Cost By default, this uses a _Linux/X86_ _Fargate_ container with 0.25 vCPUs at $0.04048 per vCPU per hour and 0.5 GB of memory at $0.004445 per GB per hour. It includes 20 GB of _Ephemeral Storage_ for free with additional storage at $0.000111 per GB per hour. Each container also gets a public IPv4 address at $0.005 per hour. It works out to $0.04048 x 0.25 x 24 x 30 + $0.004445 x 0.5 x 24 x 30 + $0.005 x 24 x 30 or **$12 per month**. If you are using all Fargate Spot instances with `capacity: "spot"`, it's $0.01218784 x 0.25 x 24 x 30 + $0.00133831 x 0.5 x 24 x 30 + $0.005 x 24 x 30 or **$6 per month**. Adjust this for the `cpu`, `memory` and `storage` you are using. And check the prices for _Linux/ARM_ if you are using `arm64` as your `architecture`. The above are rough estimates for _us-east-1_, check out the [Fargate pricing](https://aws.amazon.com/fargate/pricing/) and the [Public IPv4 Address pricing](https://aws.amazon.com/vpc/pricing/) for more details. #### Scaling By default, `scaling` is disabled. If enabled, adjust the above for the number of containers. #### API Gateway If you expose your service through API Gateway, you'll need to add the cost of [API Gateway HTTP API](https://aws.amazon.com/api-gateway/pricing/#HTTP_APIs) as well. For services that don't get a lot of traffic, this ends up being a lot cheaper since API Gateway is pay per request. Learn more about using [Cluster with API Gateway](/docs/examples/#aws-cluster-with-api-gateway). #### Application Load Balancer If you add `loadBalancer` _HTTP_ or _HTTPS_ `rules`, an ALB is created at $0.0225 per hour, $0.008 per LCU-hour, and $0.005 per hour if HTTPS with a custom domain is used. Where LCU is a measure of how much traffic is processed. That works out to $0.0225 x 24 x 30 or **$16 per month**.
Add $0.005 x 24 x 30 or **$4 per month** for HTTPS. Also add the LCU-hour used. The above are rough estimates for _us-east-1_, check out the [Application Load Balancer pricing](https://aws.amazon.com/elasticloadbalancing/pricing/) for more details. #### Network Load Balancer If you add `loadBalancer` _TCP_, _UDP_, or _TLS_ `rules`, an NLB is created at $0.0225 per hour and $0.006 per NLCU-hour. Where NLCU is a measure of how much traffic is processed. That works out to $0.0225 x 24 x 30 or **$16 per month**. Also add the NLCU-hour used. The above are rough estimates for _us-east-1_, check out the [Network Load Balancer pricing](https://aws.amazon.com/elasticloadbalancing/pricing/) for more details. --- ## Constructor ```ts new Service(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`ServiceArgs`](#serviceargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## ServiceArgs ### architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The CPU architecture of the container. ```js { architecture: "arm64" } ``` ### capacity? **Type** `Input<"spot" | Object>` - [`fargate?`](#capacity-fargate) `Input` - [`base?`](#capacity-fargate-base) - [`weight`](#capacity-fargate-weight) - [`spot?`](#capacity-spot) `Input` - [`base?`](#capacity-spot-base) - [`weight`](#capacity-spot-weight) **Default** Regular Fargate Configure the capacity provider, regular Fargate or Fargate Spot, for this service. :::tip Fargate Spot is a good option for dev or PR environments. ::: Fargate Spot allows you to run containers on spare AWS capacity at around a 50% discount compared to regular Fargate. [Learn more about Fargate pricing](https://aws.amazon.com/fargate/pricing/). :::note AWS might shut down Fargate Spot instances to reclaim capacity. ::: There are a couple of caveats: 1. AWS may reclaim this capacity and **turn off your service** after a two-minute warning. This is rare, but it can happen. 2.
If there's no spare capacity, you'll **get an error**. This makes Fargate Spot a good option for dev or PR environments. You can set this using. ```js { capacity: "spot" } ``` You can also configure the % of regular vs spot capacity you want through the `weight` prop. And optionally set the `base` or first X number of tasks that'll be started using a given capacity. For example, the `base: 1` says that the first task uses regular Fargate, and from that point on there will be an even split between the capacity providers. ```js { capacity: { fargate: { weight: 1, base: 1 }, spot: { weight: 1 } } } ``` The `base` works in tandem with the `scaling` prop. So setting `base` to X doesn't mean it'll start those tasks right away. It means that as your service scales up, according to the `scaling` prop, it'll ensure that the first X tasks will be with the given capacity. :::caution Changing `capacity` requires taking down and recreating the ECS service. ::: This is why you can only set the `base` for one capacity provider. So you are not allowed to do the following. ```js { capacity: { fargate: { weight: 1, base: 1 }, // This will give you an error spot: { weight: 1, base: 1 } } } ``` When you change the `capacity`, the ECS service is terminated and recreated. This will cause some temporary downtime. Here are some example settings. - Use only Fargate Spot. ```js { capacity: "spot" } ``` - Use 50% regular Fargate and 50% Fargate Spot. ```js { capacity: { fargate: { weight: 1 }, spot: { weight: 1 } } } ``` - Use 50% regular Fargate and 50% Fargate Spot. And ensure that the first 2 tasks use regular Fargate. ```js { capacity: { fargate: { weight: 1, base: 2 }, spot: { weight: 1 } } } ``` fargate? **Type** `Input` Configure how the regular Fargate capacity is allocated. base? **Type** `Input` Start the first `base` number of tasks with the given capacity. :::caution You can only specify `base` for one capacity provider.
::: weight **Type** `Input` Ensure the given ratio of tasks are started for this capacity. spot? **Type** `Input` Configure how the Fargate spot capacity is allocated. base? **Type** `Input` Start the first `base` number of tasks with the given capacity. :::caution You can only specify `base` for one capacity provider. ::: weight **Type** `Input` Ensure the given ratio of tasks are started for this capacity. ### cluster **Type** [`Cluster`](/docs/component/aws/cluster) The ECS Cluster to use. Create a new `Cluster` in your app, if you haven't already. ```js title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const myCluster = new sst.aws.Cluster("MyCluster", { vpc }); ``` And pass it in. ```js { cluster: myCluster } ``` ### command? **Type** `Input[]>` The command to override the default command in the container. ```js { command: ["npm", "run", "start"] } ``` ### containers? **Type** `Input[]` - [`command?`](#containers-command) - [`cpu?`](#containers-cpu) - [`dev?`](#containers-dev) `Object` - [`autostart?`](#containers-dev-autostart) - [`command`](#containers-dev-command) - [`directory?`](#containers-dev-directory) - [`entrypoint?`](#containers-entrypoint) - [`environment?`](#containers-environment) - [`environmentFiles?`](#containers-environmentfiles) - [`health?`](#containers-health) `Input` - [`command`](#containers-health-command) - [`interval?`](#containers-health-interval) - [`retries?`](#containers-health-retries) - [`startPeriod?`](#containers-health-startperiod) - [`timeout?`](#containers-health-timeout) - [`image?`](#containers-image) `Input` - [`args?`](#containers-image-args) - [`cache?`](#containers-image-cache) - [`context?`](#containers-image-context) - [`dockerfile?`](#containers-image-dockerfile) - [`secrets?`](#containers-image-secrets) - [`tags?`](#containers-image-tags) - [`target?`](#containers-image-target) - [`logging?`](#containers-logging) `Input` - [`name?`](#containers-logging-name) - 
[`retention?`](#containers-logging-retention) - [`memory?`](#containers-memory) - [`name`](#containers-name) - [`ssm?`](#containers-ssm) - [`volumes?`](#containers-volumes) `Input[]` - [`efs`](#containers-volumes-efs) `Input<`[`Efs`](/docs/component/aws/efs)` | Object>` - [`accessPoint`](#containers-volumes-efs-accesspoint) - [`fileSystem`](#containers-volumes-efs-filesystem) - [`path`](#containers-volumes-path) The containers to run in the service. :::tip You can optionally run multiple containers in a service. ::: By default, this starts a single container. To add multiple containers in the service, pass in an array of container args. ```ts { containers: [ { name: "app", image: "nginxdemos/hello:plain-text" }, { name: "admin", image: { context: "./admin", dockerfile: "Dockerfile" } } ] } ``` If you specify `containers`, you cannot list the above args at the top-level. For example, you **cannot** pass in `image` at the top level. ```diff lang="ts" { - image: "nginxdemos/hello:plain-text", containers: [ { name: "app", image: "nginxdemos/hello:plain-text" }, { name: "admin", image: "nginxdemos/hello:plain-text" } ] } ``` You will need to pass in `image` as a part of the `containers`. command? **Type** `Input` The command to override the default command in the container. Same as the top-level [`command`](#command). cpu? **Type** `"$\{number\} vCPU"` The amount of CPU allocated to the container. By default, a container can use up to all the CPU allocated to all the containers. If set, this container is capped at this allocation even if more idle CPU is available. The sum of all the containers' CPU must be less than or equal to the total available CPU. ```js { cpu: "0.25 vCPU" } ``` dev? **Type** `Object` Configure how this container works in `sst dev`. Same as the top-level [`dev`](#dev). autostart? **Type** `Input` Configure if you want to automatically start this when `sst dev` starts. Same as the top-level [`dev.autostart`](#dev-autostart).
command **Type** `Input` The command that `sst dev` runs to start this in dev mode. Same as the top-level [`dev.command`](#dev-command). directory? **Type** `Input` Change the directory from where the `command` is run. Same as the top-level [`dev.directory`](#dev-directory). entrypoint? **Type** `Input` The entrypoint to override the default entrypoint in the container. Same as the top-level [`entrypoint`](#entrypoint). environment? **Type** `Input>>` Key-value pairs of values that are set as container environment variables. Same as the top-level [`environment`](#environment). environmentFiles? **Type** `Input[]>` A list of Amazon S3 file paths of environment files to load environment variables from. Same as the top-level [`environmentFiles`](#environmentFiles). health? **Type** `Input` Configure the health check for the container. Same as the top-level [`health`](#health). command **Type** `Input` A string array representing the command that the container runs to determine if it is healthy. It must start with `CMD` to run the command arguments directly. Or `CMD-SHELL` to run the command with the container's default shell. ```js { command: ["CMD-SHELL", "curl -f http://localhost:3000/ || exit 1"] } ``` interval? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"30 seconds"` The time between running the command for the health check. Must be between `5 seconds` and `300 seconds`. retries? **Type** `Input` **Default** `3` The number of consecutive failures required to consider the check to have failed. Must be between `1` and `10`. startPeriod? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"0 seconds"` The grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. Must be between `0 seconds` and `300 seconds`. timeout? 
**Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The maximum time to allow one command to run. Must be between `2 seconds` and `60 seconds`. image? **Type** `Input` Configure the Docker image for the container. Same as the top-level [`image`](#image). args? **Type** `Input>>` Key-value pairs of build args. Same as the top-level [`image.args`](#image-args). cache? **Type** `Input` **Default** `true` Controls whether Docker build cache is enabled. Same as the top-level [`image.cache`](#image-cache). context? **Type** `Input` The path to the Docker build context. Same as the top-level [`image.context`](#image-context). dockerfile? **Type** `Input` The path to the Dockerfile. Same as the top-level [`image.dockerfile`](#image-dockerfile). secrets? **Type** `Input>>` Key-value pairs of [build secrets](https://docs.docker.com/build/building/secrets/) to pass to the Docker build. Unlike build args, secrets are not persisted in the final image. They are available in the Dockerfile via [`--mount=type=secret`](https://docs.docker.com/build/building/secrets/#secret-mounts). ```js { secrets: { MY_TOKEN: "my-secret-token", } } ``` Then in the Dockerfile, reference it as a file: ```dockerfile title="Dockerfile" RUN --mount=type=secret,id=MY_TOKEN \ cat /run/secrets/MY_TOKEN ``` Or as an environment variable: ```dockerfile title="Dockerfile" RUN --mount=type=secret,id=MY_TOKEN,env=MY_TOKEN \ echo $MY_TOKEN ``` tags? **Type** `Input[]>` Tags to apply to the Docker image. ```js { tags: ["v1.0.0", "commit-613c1b2"] } ``` target? **Type** `Input` The stage to build up to. Same as the top-level [`image.target`](#image-target). logging? **Type** `Input` Configure the logs in CloudWatch. Same as the top-level [`logging`](#logging). name? **Type** `Input` The name of the CloudWatch log group. Same as the top-level [`logging.name`](#logging-name). retention? 
**Type** `Input<"1 day" | "3 days" | "5 days" | "1 week" | "2 weeks" | "1 month" | "2 months" | "3 months" | "4 months" | "5 months" | "6 months" | "1 year" | "13 months" | "18 months" | "2 years" | "3 years" | "5 years" | "6 years" | "7 years" | "8 years" | "9 years" | "10 years" | "forever">` The duration the logs are kept in CloudWatch. Same as the top-level [`logging.retention`](#logging-retention). memory? **Type** `"$\{number\} GB"` The amount of memory allocated to the container. By default, a container can use up to all the memory allocated to all the containers. If set, the container is capped at this allocation. If exceeded, the container will be killed even if there is idle memory available. The sum of all the containers' memory must be less than or equal to the total available memory. ```js { memory: "0.5 GB" } ``` name **Type** `Input` The name of the container. This is used as the `--name` option in the Docker run command. ssm? **Type** `Input>>` Key-value pairs of AWS Systems Manager Parameter Store parameter ARNs or AWS Secrets Manager secret ARNs. The values will be loaded into the container as environment variables. Same as the top-level [`ssm`](#ssm). volumes? **Type** `Input[]` Mount Amazon EFS file systems into the container. Same as the top-level [`volumes`](#volumes). efs **Type** `Input<`[`Efs`](/docs/component/aws/efs)` | Object>` The Amazon EFS file system to mount. accessPoint **Type** `Input` The ID of the EFS access point. fileSystem **Type** `Input` The ID of the EFS file system. path **Type** `Input` The path to mount the volume. ### cpu? **Type** `"0.25 vCPU" | "0.5 vCPU" | "1 vCPU" | "2 vCPU" | "4 vCPU" | "8 vCPU" | "16 vCPU"` **Default** `"0.25 vCPU"` The amount of CPU allocated to the container. If there are multiple containers, this is the total amount of CPU shared across all the containers. 
:::note [View the valid combinations](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-tasks-services.html#fargate-tasks-size) of CPU and memory. ::: ```js { cpu: "1 vCPU" } ``` ### dev? **Type** `false | Object` - [`autostart?`](#dev-autostart) - [`command?`](#dev-command) - [`directory?`](#dev-directory) - [`url?`](#dev-url) Configure how this component works in `sst dev`. :::note In `sst dev` your service is not deployed. ::: By default, your service is not deployed in `sst dev`. Instead, you can set the `dev.command` and it'll be started locally in a separate tab in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev). This makes it so that the container doesn't have to be redeployed on every change. To disable this and deploy your service in `sst dev`, pass in `false`. autostart? **Type** `Input` **Default** `true` Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later. command? **Type** `Input` The command that `sst dev` runs to start this in dev mode. This is the command you run when you want to run your service locally. directory? **Type** `Input` **Default** Uses the `image.dockerfile` path Change the directory from where the `command` is run. url? **Type** `Input` **Default** `"http://url-unavailable-in-dev.mode"` The `url` when this is running in dev mode. Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`. ### entrypoint? **Type** `Input` The entrypoint that overrides the default entrypoint in the container. ```js { entrypoint: ["/usr/bin/my-entrypoint"] } ``` ### environment? **Type** `Input>>` Key-value pairs of values that are set as [container environment variables](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html).
The keys need to: 1. Start with a letter. 2. Be at least 2 characters long. 3. Contain only letters, numbers, or underscores. ```js { environment: { DEBUG: "true" } } ``` ### environmentFiles? **Type** `Input[]>` A list of Amazon S3 object ARNs pointing to [environment files](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/use-environment-file.html) used to load environment variables into the container. Each file must be a plain text file in `.env` format. Create an S3 bucket and upload an environment file. ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("EnvBucket"); const file = new aws.s3.BucketObjectv2("EnvFile", { bucket: bucket.name, key: "test.env", content: ["FOO=hello", "BAR=world"].join("\n"), }); ``` And pass in the ARN of the environment file. ```js title="sst.config.ts" { environmentFiles: [file.arn] } ``` ### executionRole? **Type** `Input` **Default** Creates a new role Assigns the given IAM role name to AWS ECS to launch and manage the containers. This allows you to pass in a previously created role. By default, a new IAM role is created. ```js { executionRole: "my-execution-role" } ``` ### health? **Type** `Input` - [`command`](#health-command) - [`interval?`](#health-interval) - [`retries?`](#health-retries) - [`startPeriod?`](#health-startperiod) - [`timeout?`](#health-timeout) **Default** Health check is disabled Configure the health check that ECS runs on your containers. :::tip This health check is different from the [`loadBalancer.health`](#loadbalancer-health) check. ::: This health check is run by ECS, while `loadBalancer.health` is run by the load balancer, if you are using one. This one is off by default, while the load balancer health check cannot be disabled. This config maps to the `HEALTHCHECK` parameter of the `docker run` command. Learn more about [container health checks](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_HealthCheck.html).
```js { health: { command: ["CMD-SHELL", "curl -f http://localhost:3000/ || exit 1"], startPeriod: "60 seconds", timeout: "5 seconds", interval: "30 seconds", retries: 3 } } ``` command **Type** `Input` A string array representing the command that the container runs to determine if it is healthy. It must start with `CMD` to run the command arguments directly. Or `CMD-SHELL` to run the command with the container's default shell. ```js { command: ["CMD-SHELL", "curl -f http://localhost:3000/ || exit 1"] } ``` interval? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"30 seconds"` The time between running the command for the health check. Must be between `5 seconds` and `300 seconds`. retries? **Type** `Input` **Default** `3` The number of consecutive failures required to consider the check to have failed. Must be between `1` and `10`. startPeriod? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"0 seconds"` The grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. Must be between `0 seconds` and `300 seconds`. timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The maximum time to allow one command to run. Must be between `2 seconds` and `60 seconds`. ### image? **Type** `Input` - [`args?`](#image-args) - [`cache?`](#image-cache) - [`context?`](#image-context) - [`dockerfile?`](#image-dockerfile) - [`secrets?`](#image-secrets) - [`tags?`](#image-tags) - [`target?`](#image-target) **Default** Build a Docker image from the Dockerfile in the root directory. Configure the Docker build command for building the image or specify a pre-built image. Building a Docker image. 
Prior to building the image, SST will automatically add the `.sst` directory to the `.dockerignore` if not already present. ```js { image: { context: "./app", dockerfile: "Dockerfile", args: { MY_VAR: "value" } } } ``` Alternatively, you can pass in a pre-built image. ```js { image: "nginxdemos/hello:plain-text" } ``` args? **Type** `Input>>` Key-value pairs of [build args](https://docs.docker.com/build/guide/build-args/) to pass to the Docker build command. ```js { args: { MY_VAR: "value" } } ``` cache? **Type** `Input` **Default** `true` Controls whether Docker build cache is enabled. Disable Docker build caching, useful for environments like Localstack where ECR cache export is not supported. ```js { image: { cache: false } } ``` context? **Type** `Input` **Default** `"."` The path to the [Docker build context](https://docs.docker.com/build/building/context/#local-context). The path is relative to your project's `sst.config.ts`. To change where the Docker build context is located. ```js { context: "./app" } ``` dockerfile? **Type** `Input` **Default** `"Dockerfile"` The path to the [Dockerfile](https://docs.docker.com/reference/cli/docker/image/build/#file). The path is relative to the build `context`. To use a different Dockerfile. ```js { dockerfile: "Dockerfile.prod" } ``` secrets? **Type** `Input>>` Key-value pairs of [build secrets](https://docs.docker.com/build/building/secrets/) to pass to the Docker build. Unlike build args, secrets are not persisted in the final image. They are available in the Dockerfile via [`--mount=type=secret`](https://docs.docker.com/build/building/secrets/#secret-mounts). ```js { secrets: { MY_TOKEN: "my-secret-token", } } ``` Then in the Dockerfile, reference it as a file: ```dockerfile title="Dockerfile" RUN --mount=type=secret,id=MY_TOKEN \ cat /run/secrets/MY_TOKEN ``` Or as an environment variable: ```dockerfile title="Dockerfile" RUN --mount=type=secret,id=MY_TOKEN,env=MY_TOKEN \ echo $MY_TOKEN ``` tags? 
**Type** `Input[]>` Tags to apply to the Docker image. ```js { tags: ["v1.0.0", "commit-613c1b2"] } ``` target? **Type** `Input` The stage to build up to in a [multi-stage Dockerfile](https://docs.docker.com/build/building/multi-stage/#stop-at-a-specific-build-stage). ```js { target: "stage1" } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your containers. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access it in your app using the [SDK](/docs/reference/sdk/). Takes a list of components to link to the containers. ```js { link: [bucket, stripeKey] } ``` ### loadBalancer? **Type** `Input` - [`domain?`](#loadbalancer-domain) `Input` - [`aliases?`](#loadbalancer-domain-aliases) - [`cert?`](#loadbalancer-domain-cert) - [`dns?`](#loadbalancer-domain-dns) - [`name`](#loadbalancer-domain-name) - [`health?`](#loadbalancer-health) `Input>>` - [`healthyThreshold?`](#loadbalancer-health-healthythreshold) - [`interval?`](#loadbalancer-health-interval) - [`path?`](#loadbalancer-health-path) - [`successCodes?`](#loadbalancer-health-successcodes) - [`timeout?`](#loadbalancer-health-timeout) - [`unhealthyThreshold?`](#loadbalancer-health-unhealthythreshold) - [`public?`](#loadbalancer-public) - [`rules?`](#loadbalancer-rules) `Input` - [`conditions?`](#loadbalancer-rules-conditions) `Input` - [`header?`](#loadbalancer-rules-conditions-header) `Input` - [`name`](#loadbalancer-rules-conditions-header-name) - [`values`](#loadbalancer-rules-conditions-header-values) - [`path?`](#loadbalancer-rules-conditions-path) - [`query?`](#loadbalancer-rules-conditions-query) `Input[]>` - [`key?`](#loadbalancer-rules-conditions-query-key) - [`value`](#loadbalancer-rules-conditions-query-value) - [`container?`](#loadbalancer-rules-container) - [`forward?`](#loadbalancer-rules-forward) - [`listen`](#loadbalancer-rules-listen) - [`redirect?`](#loadbalancer-rules-redirect) - [`health?`](#loadbalancer-health-1) `Record<"$\{number\}/https" | 
"$\{number\}/http"`, `Input>` - [`healthyThreshold?`](#loadbalancer-health-healthythreshold-1) - [`interval?`](#loadbalancer-health-interval-1) - [`path?`](#loadbalancer-health-path-1) - [`successCodes?`](#loadbalancer-health-successcodes-1) - [`timeout?`](#loadbalancer-health-timeout-1) - [`unhealthyThreshold?`](#loadbalancer-health-unhealthythreshold-1) - [`instance`](#loadbalancer-instance) - [`rules`](#loadbalancer-rules-1) `Object[]` - [`conditions`](#loadbalancer-rules-conditions-1) `Object` - [`header?`](#loadbalancer-rules-conditions-header-1) `Input` - [`name`](#loadbalancer-rules-conditions-header-name-1) - [`values`](#loadbalancer-rules-conditions-header-values-1) - [`path?`](#loadbalancer-rules-conditions-path-1) - [`query?`](#loadbalancer-rules-conditions-query-1) `Input[]>` - [`key?`](#loadbalancer-rules-conditions-query-key-1) - [`value`](#loadbalancer-rules-conditions-query-value-1) - [`container?`](#loadbalancer-rules-container-1) - [`forward`](#loadbalancer-rules-forward-1) - [`listen`](#loadbalancer-rules-listen-1) - [`priority`](#loadbalancer-rules-priority) **Default** Load balancer is not created Configure a load balancer to route traffic to the containers. While you can expose a service through API Gateway, it's better to use a load balancer for most traditional web applications. It is more expensive to start but at higher levels of traffic it ends up being more cost effective. Also, if you need to listen on network layer protocols like `tcp` or `udp`, you have to expose it through a load balancer. By default, the endpoint is an auto-generated load balancer URL. You can also add a custom domain for the endpoint. ```js { loadBalancer: { domain: "example.com", rules: [ { listen: "80/http", redirect: "443/https" }, { listen: "443/https", forward: "80/http" } ] } } ``` domain? **Type** `Input` Set a custom domain for your load balancer endpoint. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. 
For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers. ::: By default, this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` aliases? **Type** `Input` Alias domains that should be used. ```js {4} { domain: { name: "app1.example.com", aliases: ["app2.example.com"] } } ``` cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the load balancer endpoint. ```js { domain: { name: "example.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to the AWS Route 53 adapter. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. 
```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare, needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel, needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` Wildcard domains are supported. ```js { domain: { name: "*.example.com" } } ``` health? **Type** `Input>>` Configure the health check that the load balancer runs on your containers. :::tip This health check is different from the [`health`](#health) check. ::: This health check is run by the load balancer, while `health` is run by ECS. Unlike `health`, which is off by default, this check cannot be disabled if you are using a load balancer. Since it cannot be disabled, here are some tips on how to debug an unhealthy health check.
How to debug a load balancer health check If you notice an `Unhealthy: Health checks failed` error, it's because the health check has failed. When it fails, the load balancer will terminate the containers, causing any requests to fail. Here's how to debug it: 1. Verify the health check path. By default, the load balancer checks the `/` path. Ensure it's accessible in your containers. If your application runs on a different path, then update the path in the health check config accordingly. 2. Confirm the containers are operational. Navigate to **ECS console** > select the **cluster** > go to the **Tasks tab** > choose **Any desired status** under the **Filter desired status** dropdown > select a task and check for errors under the **Logs tab**. If there are errors in the logs, the container failed to start. 3. If the container was terminated by the load balancer while still starting up, try increasing the health check interval and timeout.
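Building on the last tip, here's a sketch of a relaxed health check for a slow-starting container. The `8080/http` port and `/health` path are assumptions; adjust them to match your `rules`.

```js
{
  loadBalancer: {
    rules: [
      { listen: "80/http", forward: "8080/http" }
    ],
    health: {
      "8080/http": {
        path: "/health",
        // Check less frequently and tolerate more failures while booting
        interval: "60 seconds",
        timeout: "30 seconds",
        unhealthyThreshold: 5
      }
    }
  }
}
```

With these settings, the load balancer allows up to 5 consecutive failures a minute apart before terminating the container.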
For `http` and `https` the default is: ```js { path: "/", healthyThreshold: 5, successCodes: "200", timeout: "5 seconds", unhealthyThreshold: 2, interval: "30 seconds" } ``` For `tcp` and `udp` the default is: ```js { healthyThreshold: 5, timeout: "6 seconds", unhealthyThreshold: 2, interval: "30 seconds" } ``` To configure the health check, we use the _port/protocol_ format. Here we are configuring a health check that pings the `/health` path on port `8080` every 10 seconds. ```js { rules: [ { listen: "80/http", forward: "8080/http" } ], health: { "8080/http": { path: "/health", interval: "10 seconds" } } } ``` healthyThreshold? **Type** `Input` **Default** `5` The number of consecutive successful health check requests required to consider the target healthy. Must be between 2 and 10. interval? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"30 seconds"` The time period between each health check request. Must be between `5 seconds` and `300 seconds`. path? **Type** `Input` **Default** `"/"` The URL path to ping on the service for health checks. Only applicable to `http` and `https` protocols. successCodes? **Type** `Input` **Default** `"200"` One or more HTTP response codes the health check treats as successful. Only applicable to `http` and `https` protocols. ```js { successCodes: "200-299" } ``` timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The timeout for each health check request. If no response is received within this time, it is considered failed. Must be between `2 seconds` and `120 seconds`. unhealthyThreshold? **Type** `Input` **Default** `2` The number of consecutive failed health check requests required to consider the target unhealthy. Must be between 2 and 10. public? **Type** `Input` **Default** `true` Configure if the load balancer should be public or private. 
When set to `false`, the load balancer endpoint will only be accessible within the VPC. rules? **Type** `Input` Configure the mapping for the ports the load balancer listens to, forwards, or redirects to the service. This supports two types of protocols: 1. Application Layer Protocols: `http` and `https`. This'll create an [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html). 2. Network Layer Protocols: `tcp`, `udp`, `tcp_udp`, and `tls`. This'll create a [Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html). :::note If you want to listen on `https` or `tls`, you need to specify a custom `loadBalancer.domain`. ::: You **cannot configure** both application and network layer protocols for the same service. Here we are listening on port `80` and forwarding it to the service on port `8080`. ```js { rules: [ { listen: "80/http", forward: "8080/http" } ] } ``` The `forward` port and protocol defaults to the `listen` port and protocol. So in this case both are `80/http`. ```js { rules: [ { listen: "80/http" } ] } ``` If multiple containers are configured via the `containers` argument, you need to specify which container the traffic should be forwarded to. ```js { rules: [ { listen: "80/http", container: "app" }, { listen: "8000/http", container: "admin" } ] } ``` You can also route the same port to multiple containers via path-based routing. ```js { rules: [ { listen: "80/http", container: "app", conditions: { path: "/api/*" } }, { listen: "80/http", container: "admin", conditions: { path: "/admin/*" } } ] } ``` Additionally, you can redirect traffic from one port to another. This is commonly used to redirect http to https. ```js { rules: [ { listen: "80/http", redirect: "443/https" }, { listen: "443/https", forward: "80/http" } ] } ``` conditions? **Type** `Input` The conditions for the rule. Only applicable to `http` and `https` protocols. header? 
**Type** `Input` **Default** Header is not checked when forwarding requests. Configure header based routing. Only requests matching the header name and values are forwarded to the container. Both the header name and values are case insensitive. For example, if you specify `X-Custom-Header` as the name and `Value1` as a value, it will match requests with the header `x-custom-header: value1` as well. ```js { header: { name: "X-Custom-Header", values: ["Value1", "Value2", "Prefix*"] } } ``` name **Type** `Input` The name of the HTTP header field to check. This is case-insensitive. values **Type** `Input>[]` The values to match against the header value. The rule matches if the request header matches any of these values. Values are case-insensitive and support wildcards (`*` and `?`) for pattern matching. path? **Type** `Input` **Default** Requests to all paths are forwarded. Configure path-based routing. Only requests matching the path are forwarded to the container. ```js { path: "/api/*" } ``` The path pattern is case-sensitive, supports wildcards, and can be up to 128 characters. - `*` matches 0 or more characters. For example, `/api/*` matches `/api/` or `/api/orders`. - `?` matches exactly 1 character. For example, `/api/?.png` matches `/api/a.png`. query? **Type** `Input[]>` **Default** Query string is not checked when forwarding requests. Configure query string based routing. Only requests matching one of the query string conditions are forwarded to the container. Takes a list of `key`, the name of the query string parameter, and `value` pairs. Where `value` is the value of the query string parameter. But it can be a pattern as well. If multiple `key` and `value` pairs are provided, it'll match requests with **any** of the query string parameters. For example, to match requests with query string `version=v1`. ```js { query: [ { key: "version", value: "v1" } ] } ``` Or match requests with query string matching `env=test*`. 
```js { query: [ { key: "env", value: "test*" } ] } ``` Match requests with query string `version=v1` **or** `env=test*`. ```js { query: [ { key: "version", value: "v1" }, { key: "env", value: "test*" } ] } ``` Match requests with any query string key with value `example`. ```js { query: [ { value: "example" } ] } ``` key? **Type** `Input` The name of the query string parameter. value **Type** `Input` The value of the query string parameter. If no `key` is provided, it'll match any request where a query string parameter with the given value exists. container? **Type** `Input` The name of the container to forward the traffic to. This maps to the `name` defined in the `container` prop. You only need this if there's more than one container. If there's only one container, the traffic is automatically forwarded there. forward? **Type** `Input<"$\{number\}/https" | "$\{number\}/http" | "$\{number\}/tcp" | "$\{number\}/udp" | "$\{number\}/tcp_udp" | "$\{number\}/tls">` **Default** The same port and protocol as `listen`. The port and protocol of the container the service forwards the traffic to. Uses the format `{port}/{protocol}`. ```js { forward: "80/http" } ``` listen **Type** `Input<"$\{number\}/https" | "$\{number\}/http" | "$\{number\}/tcp" | "$\{number\}/udp" | "$\{number\}/tcp_udp" | "$\{number\}/tls">` The port and protocol the service listens on. Uses the format `{port}/{protocol}`. ```js { listen: "80/http" } ``` redirect? **Type** `Input<"$\{number\}/https" | "$\{number\}/http" | "$\{number\}/tcp" | "$\{number\}/udp" | "$\{number\}/tcp_udp" | "$\{number\}/tls">` The port and protocol to redirect the traffic to. Uses the format `{port}/{protocol}`. ```js { redirect: "80/http" } ``` health? **Type** `Record<"$\{number\}/https" | "$\{number\}/http"`, `Input>` Configure health checks for the target groups. Uses the same format as the inline health check config, keyed by `{port}/{protocol}`. healthyThreshold? **Type** `Input` interval? 
**Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` path? **Type** `Input` successCodes? **Type** `Input` timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` unhealthyThreshold? **Type** `Input` instance **Type** [`Alb`](/docs/component/aws/alb) The `Alb` instance to attach this service to. When provided, the service creates target groups and listener rules on the shared ALB instead of creating its own load balancer. ECS tasks use the VPC's default security group, which allows all traffic within the VPC CIDR. For tighter security, add an explicit security group ingress rule from the ALB's security group using `transform`. ```js { loadBalancer: { instance: alb, rules: [ { listen: "443/https", forward: "8080/http", conditions: { path: "/api/*" }, priority: 100 } ] } } ``` rules **Type** `Object[]` The rules for routing traffic from the ALB to this service's containers. Each rule must have explicit conditions and priority. conditions **Type** `Object` The conditions for the listener rule. At least one condition (path, query, or header) must be specified. The ALB owns the default action — services only add conditional rules. ```js { conditions: { path: "/api/*" } } ``` header? **Type** `Input` HTTP header condition. name **Type** `Input` values **Type** `Input>[]` path? **Type** `Input` Path pattern to match. Supports wildcards (`*` and `?`). query? **Type** `Input[]>` Query string conditions. key? **Type** `Input` value **Type** `Input` container? **Type** `string` The name of the container to forward the traffic to. Required when multiple containers are configured. forward **Type** `"$\{number\}/https" | "$\{number\}/http"` The container port and protocol to forward traffic to. Uses the format `{port}/{protocol}`. 
The protocol must match what the container actually speaks — using `"3000/https"` when the container speaks HTTP will cause health check failures. ```js { forward: "8080/http" } ``` listen **Type** `"$\{number\}/https" | "$\{number\}/http"` The port and protocol to listen on, in `{port}/{protocol}` format. Must match a listener on the ALB. ```js { listen: "443/https" } ``` priority **Type** `number` Explicit priority for the listener rule (1–50000). Must be unique per listener across ALL services sharing the ALB. Use non-overlapping ranges per service (e.g., Service A: 100-199, Service B: 200-299). ```js { priority: 100 } ``` ### logging? **Type** `Input` - [`name?`](#logging-name) - [`retention?`](#logging-retention) **Default** `{ retention: "1 month" }` Configure the logs in CloudWatch. ```js { logging: { retention: "forever" } } ``` name? **Type** `Input` **Default** `"/sst/cluster/${CLUSTER_NAME}/${SERVICE_NAME}/${CONTAINER_NAME}"` The name of the CloudWatch log group. If omitted, the log group name is generated based on the cluster name, service name, and container name. retention? **Type** `Input<"1 day" | "3 days" | "5 days" | "1 week" | "2 weeks" | "1 month" | "2 months" | "3 months" | "4 months" | "5 months" | "6 months" | "1 year" | "13 months" | "18 months" | "2 years" | "3 years" | "5 years" | "6 years" | "7 years" | "8 years" | "9 years" | "10 years" | "forever">` **Default** `"1 month"` The duration the logs are kept in CloudWatch. ### memory? **Type** `"$\{number\} GB"` **Default** `"0.5 GB"` The amount of memory allocated to the container. If there are multiple containers, this is the total amount of memory shared across all the containers. :::note [View the valid combinations](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-tasks-services.html#fargate-tasks-size) of CPU and memory. ::: ```js { memory: "2 GB" } ``` ### permissions? 
**Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that you need to access. These permissions are used to create the [task role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html). :::tip If you `link` the service to a resource, the permissions to access it are automatically added. ::: Allow the container to read and write to an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Allow the container to perform all actions on an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:*"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Granting the container permissions to access all resources. ```js { permissions: [ { actions: ["*"], resources: ["*"] }, ] } ``` actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. ```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. 
That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```ts { effect: "deny" } ``` resources **Type** `Input[]>` The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html). ```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### scaling? **Type** `Input` - [`cpuUtilization?`](#scaling-cpuutilization) - [`max?`](#scaling-max) - [`memoryUtilization?`](#scaling-memoryutilization) - [`min?`](#scaling-min) - [`requestCount?`](#scaling-requestcount) - [`scaleInCooldown?`](#scaling-scaleincooldown) - [`scaleOutCooldown?`](#scaling-scaleoutcooldown) **Default** `{ min: 1, max: 1 }` Configure the service to automatically scale up or down based on the CPU or memory utilization of a container. By default, scaling is disabled and the service will run in a single container. ```js { scaling: { min: 4, max: 16, cpuUtilization: 50, memoryUtilization: 50 } } ``` cpuUtilization? **Type** `Input` **Default** `70` The target CPU utilization percentage to scale up or down. It'll scale up when the CPU utilization is above the target and scale down when it's below the target. ```js { scaling: { cpuUtilization: 50 } } ``` max? **Type** `Input` **Default** `1` The maximum number of containers to scale up to. ```js { scaling: { max: 16 } } ``` memoryUtilization? **Type** `Input` **Default** `70` The target memory utilization percentage to scale up or down. It'll scale up when the memory utilization is above the target and scale down when it's below the target. 
```js { scaling: { memoryUtilization: 50 } } ``` min? **Type** `Input` **Default** `1` The minimum number of containers to scale down to. ```js { scaling: { min: 4 } } ``` requestCount? **Type** `Input` **Default** `false` The target request count to scale up or down. It'll scale up when the request count is above the target and scale down when it's below the target. ```js { scaling: { requestCount: 1500 } } ``` scaleInCooldown? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` The amount of time, in seconds, after a scale-in activity completes before another scale-in activity can start. This prevents the auto scaler from removing too many tasks too quickly. ```js { scaling: { scaleInCooldown: "60 seconds" } } ``` scaleOutCooldown? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` The amount of time, in seconds, after a scale-out activity completes before another scale-out activity can start. This prevents the auto scaler from adding too many tasks too quickly. ```js { scaling: { scaleOutCooldown: "60 seconds" } } ``` ### serviceRegistry? **Type** `Input` - [`port`](#serviceregistry-port) Configure the CloudMap service registry for the service. This creates an `srv` record in the CloudMap service. This is needed if you want to connect an `ApiGatewayV2` VPC link to the service. API Gateway will forward requests to the given port on the service. ```js { serviceRegistry: { port: 80 } } ``` port **Type** `number` The port in the service to forward requests to. ### ssm? **Type** `Input>>` Key-value pairs of AWS Systems Manager Parameter Store parameter ARNs or AWS Secrets Manager secret ARNs. The values will be loaded into the container as environment variables. ```js { ssm: { DATABASE_PASSWORD: "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret-123abc" } } ``` ### storage? 
**Type** `"$\{number\} GB"` **Default** `"20 GB"` The amount of ephemeral storage (in GB) allocated to the container. ```js { storage: "100 GB" } ``` ### taskRole? **Type** `Input` **Default** Creates a new role Assigns the given IAM role name to the containers. This allows you to pass in a previously created role. :::caution When you pass in a role, it will not update it if you add `permissions` or `link` resources. ::: By default, a new IAM role is created. It'll update this role if you add `permissions` or `link` resources. However, if you pass in a role, you'll need to update it manually if you add `permissions` or `link` resources. ```js { taskRole: "my-task-role" } ``` ### transform? **Type** `Object` - [`autoScalingTarget?`](#transform-autoscalingtarget) - [`executionRole?`](#transform-executionrole) - [`image?`](#transform-image) - [`listener?`](#transform-listener) - [`listenerRule?`](#transform-listenerrule) - [`loadBalancer?`](#transform-loadbalancer) - [`loadBalancerSecurityGroup?`](#transform-loadbalancersecuritygroup) - [`logGroup?`](#transform-loggroup) - [`service?`](#transform-service) - [`target?`](#transform-target) - [`taskDefinition?`](#transform-taskdefinition) - [`taskRole?`](#transform-taskrole) [Transform](/docs/components#transform) how this component creates its underlying resources. autoScalingTarget? **Type** [`TargetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appautoscaling/target/#inputs)` | (args: `[`TargetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/appautoscaling/target/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Application Auto Scaling target resource. executionRole? 
**Type** [`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)` | (args: `[`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Execution IAM Role resource. image? **Type** [`ImageArgs`](https://www.pulumi.com/registry/packages/docker-build/api-docs/image/#inputs)` | (args: `[`ImageArgs`](https://www.pulumi.com/registry/packages/docker-build/api-docs/image/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Docker Image resource. listener? **Type** [`ListenerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/listener/#inputs)` | (args: `[`ListenerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/listener/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Load Balancer listener resource. listenerRule? **Type** [`ListenerRuleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/listenerrule/#inputs)` | (args: `[`ListenerRuleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/listenerrule/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Load Balancer listener rule resource. Only applies when attaching to an external ALB via the `loadBalancer.instance` prop. loadBalancer? **Type** [`LoadBalancerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/loadbalancer/#inputs)` | (args: `[`LoadBalancerArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/loadbalancer/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Load Balancer resource. loadBalancerSecurityGroup? 
**Type** [`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)` | (args: `[`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Security Group resource for the Load Balancer. logGroup? **Type** [`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)` | (args: `[`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudWatch log group resource. service? **Type** [`ServiceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/service/#inputs)` | (args: `[`ServiceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/service/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Service resource. target? **Type** [`TargetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/targetgroup/#inputs)` | (args: `[`TargetGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/targetgroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the AWS Load Balancer target group resource. taskDefinition? **Type** [`TaskDefinitionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/taskdefinition/#inputs)` | (args: `[`TaskDefinitionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/taskdefinition/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Task Definition resource. taskRole? 
**Type** [`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)` | (args: `[`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Task IAM Role resource. ### volumes? **Type** `Input[]` - [`efs`](#volumes-efs) `Input<`[`Efs`](/docs/component/aws/efs)` | Object>` - [`accessPoint`](#volumes-efs-accesspoint) - [`fileSystem`](#volumes-efs-filesystem) - [`path`](#volumes-path) Mount Amazon EFS file systems into the container. Create an EFS file system. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const fileSystem = new sst.aws.Efs("MyFileSystem", { vpc }); ``` And pass it in. ```js { volumes: [ { efs: fileSystem, path: "/mnt/efs" } ] } ``` Or pass in the EFS file system ID. ```js { volumes: [ { efs: { fileSystem: "fs-12345678", accessPoint: "fsap-12345678" }, path: "/mnt/efs" } ] } ``` efs **Type** `Input<`[`Efs`](/docs/component/aws/efs)` | Object>` The Amazon EFS file system to mount. accessPoint **Type** `Input` The ID of the EFS access point. fileSystem **Type** `Input` The ID of the EFS file system. path **Type** `Input` The path to mount the volume. ### wait? **Type** `Input` **Default** `false` Configure if `sst deploy` should wait for the service to be stable. :::tip For non-prod environments it might make sense to pass in `false`. ::: Waiting for this process to finish ensures that new content will be available after the deploy finishes. However, this process can sometimes take more than 5 mins. 
```js { wait: true } ``` ## Properties ### nodes **Type** `Object` - [`executionRole`](#nodes-executionrole) - [`taskRole`](#nodes-taskrole) - [`autoScalingTarget`](#nodes-autoscalingtarget) - [`cloudmapService`](#nodes-cloudmapservice) - [`loadBalancer`](#nodes-loadbalancer) - [`service`](#nodes-service) - [`taskDefinition`](#nodes-taskdefinition) The underlying [resources](/docs/components/#nodes) this component creates. executionRole **Type** `undefined | `[`Role`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/) The Amazon ECS Execution Role. taskRole **Type** [`Role`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/) The Amazon ECS Task Role. autoScalingTarget **Type** [`Target`](https://www.pulumi.com/registry/packages/aws/api-docs/appautoscaling/target/) The Amazon Application Auto Scaling target. cloudmapService **Type** `Output<`[`Service`](https://www.pulumi.com/registry/packages/aws/api-docs/servicediscovery/service/)`>` The Amazon Cloud Map service. loadBalancer **Type** [`LoadBalancer`](https://www.pulumi.com/registry/packages/aws/api-docs/lb/loadbalancer/) The Amazon Elastic Load Balancer. service **Type** `Output<`[`Service`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/service/)`>` The Amazon ECS Service. taskDefinition **Type** `Output<`[`TaskDefinition`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/taskdefinition/)`>` The Amazon ECS Task Definition. ### service **Type** `Output` The name of the Cloud Map service. This is useful for service discovery. ### url **Type** `Output` The URL of the service. If `public.domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated load balancer URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `service` `undefined | string` The name of the Cloud Map service. 
This is useful for service discovery.

- `url` `undefined | string`

  The URL of the service. If `public.domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated load balancer URL.

## ServiceAlbRule

### conditions

**Type** `Object`

- [`header?`](#conditions-header) `Input<Object>`
  - [`name`](#conditions-header-name)
  - [`values`](#conditions-header-values)
- [`path?`](#conditions-path)
- [`query?`](#conditions-query) `Input<Input<Object>[]>`
  - [`key?`](#conditions-query-key)
  - [`value`](#conditions-query-value)

The conditions for the listener rule. At least one condition (path, query, or header) must be specified. The ALB owns the default action; services only add conditional rules.

```js
{
  conditions: {
    path: "/api/*"
  }
}
```

header?

**Type** `Input<Object>`

HTTP header condition.

name

**Type** `Input<string>`

values

**Type** `Input<Input<string>>[]`

path?

**Type** `Input<string>`

Path pattern to match. Supports wildcards (`*` and `?`).

query?

**Type** `Input<Input<Object>[]>`

Query string conditions.

key?

**Type** `Input<string>`

value

**Type** `Input<string>`

### container?

**Type** `string`

The name of the container to forward the traffic to. Required when multiple containers are configured.

### forward

**Type** `` `${number}/https` | `${number}/http` ``

The container port and protocol to forward traffic to. Uses the format `{port}/{protocol}`. The protocol must match what the container actually speaks. Using `"3000/https"` when the container speaks HTTP will cause health check failures.

```js
{
  forward: "8080/http"
}
```

### listen

**Type** `` `${number}/https` | `${number}/http` ``

The port and protocol to listen on, in `{port}/{protocol}` format. Must match a listener on the ALB.

```js
{
  listen: "443/https"
}
```

### priority

**Type** `number`

Explicit priority for the listener rule (1–50000). Must be unique per listener across ALL services sharing the ALB. Use non-overlapping ranges per service (e.g., Service A: 100-199, Service B: 200-299).
```js
{
  priority: 100
}
```

---

## SnsTopicLambdaSubscriber

Reference doc for the `sst.aws.SnsTopicLambdaSubscriber` component.

https://sst.dev/docs/component/aws/sns-topic-lambda-subscriber

The `SnsTopicLambdaSubscriber` component is internally used by the `SnsTopic` component to add subscriptions to your [Amazon SNS Topic](https://docs.aws.amazon.com/sns/latest/dg/sns-create-topic.html).

:::note
This component is not intended to be created directly.
:::

You'll find this component returned by the `subscribe` method of the `SnsTopic` component.

---

## Constructor

```ts
new SnsTopicLambdaSubscriber(name, args, opts?)
```

#### Parameters

- `name` `string`
- `args` [`Args`](#args)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## Properties

### nodes

**Type** `Object`

- [`permission`](#nodes-permission)
- [`subscription`](#nodes-subscription)
- [`function`](#nodes-function)

The underlying [resources](/docs/components/#nodes) this component creates.

permission

**Type** [`Permission`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/permission/)

The Lambda permission.

subscription

**Type** [`TopicSubscription`](https://www.pulumi.com/registry/packages/aws/api-docs/sns/topicsubscription/)

The SNS Topic subscription.

function

**Type** `Output<`[`Function`](/docs/component/aws/function)`>`

The Lambda function that'll be notified.

## Args

### filter?

**Type** `Input<Record<string, any>>`

Filter the messages that'll be processed by the subscriber. If any single property in the filter doesn't match an attribute assigned to the message, then the policy rejects the message.

:::tip
Learn more about [subscription filter policies](https://docs.aws.amazon.com/sns/latest/dg/sns-subscription-filter-policies.html).
:::

For example, if your SNS Topic message contains the following in JSON format.
```js { store: "example_corp", event: "order-placed", customer_interests: [ "soccer", "rugby", "hockey" ], price_usd: 210.75 } ``` Then this filter policy accepts the message. ```js { filter: { store: ["example_corp"], event: [{"anything-but": "order_cancelled"}], customer_interests: [ "rugby", "football", "baseball" ], price_usd: [{numeric: [">=", 100]}] } } ``` ### subscriber **Type** `Input` The subscriber function. ### topic **Type** `Input` - [`arn`](#topic-arn) The Topic to use. arn **Type** `Input` The ARN of the Topic. ### transform? **Type** `Object` - [`subscription?`](#transform-subscription) [Transform](/docs/components#transform) how this subscription creates its underlying resources. subscription? **Type** [`TopicSubscriptionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sns/topicsubscription/#inputs)` | (args: `[`TopicSubscriptionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sns/topicsubscription/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the SNS Topic Subscription resource. --- ## SnsTopicQueueSubscriber Reference doc for the `sst.aws.SnsTopicQueueSubscriber` component. https://sst.dev/docs/component/aws/sns-topic-queue-subscriber The `SnsTopicQueueSubscriber` component is internally used by the `SnsTopic` component to add subscriptions to your [Amazon SNS Topic](https://docs.aws.amazon.com/sns/latest/dg/sns-create-topic.html). :::note This component is not intended to be created directly. ::: You'll find this component returned by the `subscribeQueue` method of the `SnsTopic` component. --- ## Constructor ```ts new SnsTopicQueueSubscriber(name, args, opts?) 
```

#### Parameters

- `name` `string`
- `args` [`Args`](#args)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## Properties

### nodes

**Type** `Object`

- [`policy`](#nodes-policy)
- [`subscription`](#nodes-subscription)

The underlying [resources](/docs/components/#nodes) this component creates.

policy

**Type** [`QueuePolicy`](https://www.pulumi.com/registry/packages/aws/api-docs/sqs/queuepolicy/)

The SQS Queue policy.

subscription

**Type** [`TopicSubscription`](https://www.pulumi.com/registry/packages/aws/api-docs/sns/topicsubscription/)

The SNS Topic subscription.

## Args

### filter?

**Type** `Input<Record<string, any>>`

Filter the messages that'll be processed by the subscriber. If any single property in the filter doesn't match an attribute assigned to the message, then the policy rejects the message.

:::tip
Learn more about [subscription filter policies](https://docs.aws.amazon.com/sns/latest/dg/sns-subscription-filter-policies.html).
:::

For example, if your SNS Topic message contains the following in JSON format.

```js
{
  store: "example_corp",
  event: "order-placed",
  customer_interests: [
    "soccer",
    "rugby",
    "hockey"
  ],
  price_usd: 210.75
}
```

Then this filter policy accepts the message.

```js
{
  filter: {
    store: ["example_corp"],
    event: [{"anything-but": "order_cancelled"}],
    customer_interests: [
      "rugby",
      "football",
      "baseball"
    ],
    price_usd: [{numeric: [">=", 100]}]
  }
}
```

### queue

**Type** `Input<string>`

The ARN of the SQS Queue.

### topic

**Type** `Input<Object>`

- [`arn`](#topic-arn)

The SNS Topic to use.

arn

**Type** `Input<string>`

The ARN of the SNS Topic.

### transform?

**Type** `Object`

- [`subscription?`](#transform-subscription)

[Transform](/docs/components#transform) how this subscription creates its underlying resources.

subscription?
**Type** [`TopicSubscriptionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sns/topicsubscription/#inputs)` | (args: `[`TopicSubscriptionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sns/topicsubscription/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the SNS Topic Subscription resource.

---

## SnsTopic

Reference doc for the `sst.aws.SnsTopic` component.

https://sst.dev/docs/component/aws/sns-topic

The `SnsTopic` component lets you add an [Amazon SNS Topic](https://docs.aws.amazon.com/sns/latest/dg/sns-create-topic.html) to your app.

:::note
The difference between an `SnsTopic` and a `Queue` is that with a topic you can deliver messages to multiple subscribers.
:::

#### Create a topic

```ts title="sst.config.ts"
const topic = new sst.aws.SnsTopic("MyTopic");
```

#### Make it a FIFO topic

You can optionally make it a FIFO topic.

```ts {2} title="sst.config.ts"
new sst.aws.SnsTopic("MyTopic", {
  fifo: true
});
```

#### Add a subscriber

```ts title="sst.config.ts"
topic.subscribe("MySubscriber", "src/subscriber.handler");
```

#### Link the topic to a resource

You can link the topic to other resources, like a function or your Next.js app.

```ts title="sst.config.ts"
new sst.aws.Nextjs("MyWeb", {
  link: [topic]
});
```

Once linked, you can publish messages to the topic from your function code.

```ts title="app/page.tsx" {1,7}
import { Resource } from "sst";
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({});

await sns.send(new PublishCommand({
  TopicArn: Resource.MyTopic.arn,
  Message: "Hello from Next.js!"
}));
```

---

## Constructor

```ts
new SnsTopic(name, args?, opts?)
```

#### Parameters

- `name` `string`
- `args?` [`SnsTopicArgs`](#snstopicargs)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## SnsTopicArgs

### fifo?

**Type** `Input<boolean>`

**Default** `false`

FIFO (First-In-First-Out) topics are designed to provide strict message ordering.
:::caution Changing a standard topic to a FIFO topic or the other way around will result in the destruction and recreation of the topic. ::: ```js { fifo: true } ``` ### transform? **Type** `Object` - [`topic?`](#transform-topic) [Transform](/docs/components#transform) how this component creates its underlying resources. topic? **Type** [`TopicArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sns/topic/#inputs)` | (args: `[`TopicArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sns/topic/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the SNS Topic resource. ## Properties ### arn **Type** `Output` The ARN of the SNS Topic. ### name **Type** `Output` The name of the SNS Topic. ### nodes **Type** `Object` - [`topic`](#nodes-topic) The underlying [resources](/docs/components/#nodes) this component creates. topic **Type** [`Topic`](https://www.pulumi.com/registry/packages/aws/api-docs/sns/topic/) The Amazon SNS Topic. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `arn` `string` The ARN of the SNS Topic. ## Methods ### subscribe ```ts subscribe(name, subscriber, args?) ``` #### Parameters - `name` `string` The name of the subscriber. - `subscriber` `Input` The function that'll be notified. - `args?` [`SnsTopicSubscriberArgs`](#snstopicsubscriberargs) Configure the subscription. **Returns** `Output<`[`SnsTopicLambdaSubscriber`](/docs/component/aws/sns-topic-lambda-subscriber)`>` Subscribe to this SNS Topic. ```js title="sst.config.ts" topic.subscribe("MySubscriber", "src/subscriber.handler"); ``` Add a filter to the subscription. ```js title="sst.config.ts" topic.subscribe("MySubscriber", "src/subscriber.handler", { filter: { price_usd: [{numeric: [">=", 100]}] } }); ``` Customize the subscriber function. 
```js title="sst.config.ts"
topic.subscribe("MySubscriber", {
  handler: "src/subscriber.handler",
  timeout: "60 seconds"
});
```

Or pass in the ARN of an existing Lambda function.

```js title="sst.config.ts"
topic.subscribe("MySubscriber", "arn:aws:lambda:us-east-1:123456789012:function:my-function");
```

### subscribeQueue

```ts
subscribeQueue(name, queue, args?)
```

#### Parameters

- `name` `string`

  The name of the subscriber.
- `queue` `Input<string | Queue>`

  The ARN of the queue or `Queue` component that'll be notified.
- `args?` [`SnsTopicSubscriberArgs`](#snstopicsubscriberargs)

  Configure the subscription.

**Returns** `Output<`[`SnsTopicQueueSubscriber`](/docs/component/aws/sns-topic-queue-subscriber)`>`

Subscribe to this SNS Topic with an SQS Queue. For example, let's say you have a queue.

```js title="sst.config.ts"
const queue = new sst.aws.Queue("MyQueue");
```

You can subscribe to this topic with it.

```js title="sst.config.ts"
topic.subscribeQueue("MySubscriber", queue.arn);
```

Add a filter to the subscription.

```js title="sst.config.ts"
topic.subscribeQueue("MySubscriber", queue.arn, {
  filter: {
    price_usd: [{numeric: [">=", 100]}]
  }
});
```

### static get

```ts
SnsTopic.get(name, topicArn, opts?)
```

#### Parameters

- `name` `string`

  The name of the component.
- `topicArn` `Input<string>`

  The ARN of the existing SNS Topic.
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

**Returns** [`SnsTopic`](.)

Reference an existing SNS topic with its topic ARN. This is useful when you create a topic in one stage and want to share it in another stage. It avoids having to create a new topic in the other stage.

:::tip
You can use the `static get` method to share SNS topics across stages.
:::

Imagine you create a topic in the `dev` stage. And in your personal stage `frank`, instead of creating a new topic, you want to share the topic from `dev`.

```ts title="sst.config.ts"
const topic = $app.stage === "frank" ?
  sst.aws.SnsTopic.get("MyTopic", "arn:aws:sns:us-east-1:123456789012:MyTopic") :
  new sst.aws.SnsTopic("MyTopic");
```

Here `arn:aws:sns:us-east-1:123456789012:MyTopic` is the ARN of the topic created in the `dev` stage. You can find this by outputting the topic ARN in the `dev` stage.

```ts title="sst.config.ts"
return topic.arn;
```

### static subscribe

```ts
SnsTopic.subscribe(name, topicArn, subscriber, args?)
```

#### Parameters

- `name` `string`

  The name of the subscriber.
- `topicArn` `Input<string>`

  The ARN of the SNS Topic to subscribe to.
- `subscriber` `Input<string | FunctionArgs>`

  The function that'll be notified.
- `args?` [`SnsTopicSubscriberArgs`](#snstopicsubscriberargs)

  Configure the subscription.

**Returns** `Output<`[`SnsTopicLambdaSubscriber`](/docs/component/aws/sns-topic-lambda-subscriber)`>`

Subscribe to an SNS Topic that was not created in your app. For example, let's say you have an existing SNS Topic with the following ARN.

```js title="sst.config.ts"
const topicArn = "arn:aws:sns:us-east-1:123456789012:MyTopic";
```

You can subscribe to it by passing in the ARN.

```js title="sst.config.ts"
sst.aws.SnsTopic.subscribe("MySubscriber", topicArn, "src/subscriber.handler");
```

Add a filter to the subscription.

```js title="sst.config.ts"
sst.aws.SnsTopic.subscribe("MySubscriber", topicArn, "src/subscriber.handler", {
  filter: {
    price_usd: [{numeric: [">=", 100]}]
  }
});
```

Customize the subscriber function.

```js title="sst.config.ts"
sst.aws.SnsTopic.subscribe("MySubscriber", topicArn, {
  handler: "src/subscriber.handler",
  timeout: "60 seconds"
});
```

### static subscribeQueue

```ts
SnsTopic.subscribeQueue(name, topicArn, queue, args?)
```

#### Parameters

- `name` `string`

  The name of the subscriber.
- `topicArn` `Input<string>`

  The ARN of the SNS Topic to subscribe to.
- `queue` `Input<string | Queue>`

  The ARN of the queue or `Queue` component that'll be notified.
- `args?` [`SnsTopicSubscriberArgs`](#snstopicsubscriberargs)

  Configure the subscription.
**Returns** `Output<`[`SnsTopicQueueSubscriber`](/docs/component/aws/sns-topic-queue-subscriber)`>`

Subscribe to an existing SNS Topic with a previously created SQS Queue. For example, let's say you have an existing SNS Topic and SQS Queue with the following ARNs.

```js title="sst.config.ts"
const topicArn = "arn:aws:sns:us-east-1:123456789012:MyTopic";
const queueArn = "arn:aws:sqs:us-east-1:123456789012:MyQueue";
```

You can subscribe to the topic with the queue.

```js title="sst.config.ts"
sst.aws.SnsTopic.subscribeQueue("MySubscriber", topicArn, queueArn);
```

Add a filter to the subscription.

```js title="sst.config.ts"
sst.aws.SnsTopic.subscribeQueue("MySubscriber", topicArn, queueArn, {
  filter: {
    price_usd: [{numeric: [">=", 100]}]
  }
});
```

## SnsTopicSubscriberArgs

### filter?

**Type** `Input<Record<string, any>>`

Filter the messages that'll be processed by the subscriber. If any single property in the filter doesn't match an attribute assigned to the message, then the policy rejects the message.

:::tip
Learn more about [subscription filter policies](https://docs.aws.amazon.com/sns/latest/dg/sns-subscription-filter-policies.html).
:::

For example, if your SNS Topic message contains the following in JSON format.

```js
{
  store: "example_corp",
  event: "order-placed",
  customer_interests: [
    "soccer",
    "rugby",
    "hockey"
  ],
  price_usd: 210.75
}
```

Then this filter policy accepts the message.

```js
{
  filter: {
    store: ["example_corp"],
    event: [{"anything-but": "order_cancelled"}],
    customer_interests: [
      "rugby",
      "football",
      "baseball"
    ],
    price_usd: [{numeric: [">=", 100]}]
  }
}
```

### transform?

**Type** `Object`

- [`subscription?`](#transform-subscription)

[Transform](/docs/components#transform) how this subscription creates its underlying resources.

subscription?
**Type** [`TopicSubscriptionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sns/topicsubscription/#inputs)` | (args: `[`TopicSubscriptionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sns/topicsubscription/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the SNS Topic Subscription resource. --- ## SolidStart Reference doc for the `sst.aws.SolidStart` component. https://sst.dev/docs/component/aws/solid-start The `SolidStart` component lets you deploy a [SolidStart](https://start.solidjs.com) app to AWS. #### Minimal example Deploy a SolidStart app that's in the project root. ```js title="sst.config.ts" new sst.aws.SolidStart("MyWeb"); ``` #### Change the path Deploys the SolidStart app in the `my-solid-app/` directory. ```js {2} title="sst.config.ts" new sst.aws.SolidStart("MyWeb", { path: "my-solid-app/" }); ``` #### Add a custom domain Set a custom domain for your SolidStart app. ```js {2} title="sst.config.ts" new sst.aws.SolidStart("MyWeb", { domain: "my-app.com" }); ``` #### Redirect www to apex domain Redirect `www.my-app.com` to `my-app.com`. ```js {4} title="sst.config.ts" new sst.aws.SolidStart("MyWeb", { domain: { name: "my-app.com", redirects: ["www.my-app.com"] } }); ``` #### Link resources [Link resources](/docs/linking/) to your SolidStart app. This will grant permissions to the resources and allow you to access it in your app. ```ts {4} title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.SolidStart("MyWeb", { link: [bucket] }); ``` You can use the [SDK](/docs/reference/sdk/) to access the linked resources in your SolidStart app. ```ts title="src/app.tsx" console.log(Resource.MyBucket.name); ``` --- ## Constructor ```ts new SolidStart(name, args?, opts?) 
``` #### Parameters - `name` `string` - `args?` [`SolidStartArgs`](#solidstartargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## SolidStartArgs ### assets? **Type** `Input` - [`fileOptions?`](#assets-fileoptions) `Input` - [`cacheControl?`](#assets-fileoptions-cachecontrol) - [`contentType?`](#assets-fileoptions-contenttype) - [`files`](#assets-fileoptions-files) - [`ignore?`](#assets-fileoptions-ignore) - [`nonVersionedFilesCacheHeader?`](#assets-nonversionedfilescacheheader) - [`purge?`](#assets-purge) - [`textEncoding?`](#assets-textencoding) - [`versionedFilesCacheHeader?`](#assets-versionedfilescacheheader) Configure how the SolidStart app assets are uploaded to S3. By default, this is set to the following. Read more about these options below. ```js { assets: { textEncoding: "utf-8", versionedFilesCacheHeader: "public,max-age=31536000,immutable", nonVersionedFilesCacheHeader: "public,max-age=0,s-maxage=86400,stale-while-revalidate=8640" } } ``` fileOptions? **Type** `Input` Specify the `Content-Type` and `Cache-Control` headers for specific files. This allows you to override the default behavior for specific files using glob patterns. Apply `Cache-Control` and `Content-Type` to all zip files. ```js { assets: { fileOptions: [ { files: "**/*.zip", contentType: "application/zip", cacheControl: "private,no-cache,no-store,must-revalidate" } ] } } ``` Apply `Cache-Control` to all CSS and JS files except for CSS files with `index-` prefix in the `main/` directory. ```js { assets: { fileOptions: [ { files: ["**/*.css", "**/*.js"], ignore: "main/index-*.css", cacheControl: "private,no-cache,no-store,must-revalidate" } ] } } ``` cacheControl? **Type** `string` The `Cache-Control` header to apply to the matched files. contentType? **Type** `string` The `Content-Type` header to apply to the matched files. files **Type** `string | string[]` A glob pattern or array of glob patterns of files to apply these options to. ignore? 
**Type** `string | string[]`

A glob pattern or array of glob patterns of files to exclude from the ones matched by the `files` glob pattern.

nonVersionedFilesCacheHeader?

**Type** `Input<string>`

**Default** `"public,max-age=0,s-maxage=86400,stale-while-revalidate=8640"`

The `Cache-Control` header used for non-versioned files, like `index.html`. This is used by both CloudFront and the browser cache. The default is set to not cache on browsers, and cache for 1 day on CloudFront.

```js
{
  assets: {
    nonVersionedFilesCacheHeader: "public,max-age=0,no-cache"
  }
}
```

purge?

**Type** `Input<boolean>`

**Default** `false`

Configure if files from previous deployments should be purged from the bucket.

```js
{
  assets: {
    purge: false
  }
}
```

textEncoding?

**Type** `Input<"utf-8" | "iso-8859-1" | "windows-1252" | "ascii" | "none">`

**Default** `"utf-8"`

Character encoding for text-based assets, like HTML, CSS, JS. This is used to set the `Content-Type` header when these files are served out. If set to `"none"`, then no charset will be returned in the header.

```js
{
  assets: {
    textEncoding: "iso-8859-1"
  }
}
```

versionedFilesCacheHeader?

**Type** `Input<string>`

**Default** `"public,max-age=31536000,immutable"`

The `Cache-Control` header used for versioned files, like `main-1234.css`. This is used by both CloudFront and the browser cache. The default `max-age` is set to 1 year.

```js
{
  assets: {
    versionedFilesCacheHeader: "public,max-age=31536000,immutable"
  }
}
```

### buildCommand?

**Type** `Input<string>`

**Default** `"npm run build"`

The command used internally to build your SolidStart app. If you want to use a different build command.

```js
{
  buildCommand: "yarn build"
}
```

### cachePolicy?

**Type** `Input<string>`

**Default** A new cache policy is created

Configure the SolidStart app to use an existing CloudFront cache policy.

:::note
CloudFront has a limit of 20 cache policies per account, though you can request a limit increase.
:::

By default, a new cache policy is created for it.
This allows you to reuse an existing policy instead of creating a new one. ```js { cachePolicy: "658327ea-f89d-4fab-a63d-7e88639e58f6" } ``` ### dev? **Type** `false | Object` - [`autostart?`](#dev-autostart) - [`command?`](#dev-command) - [`directory?`](#dev-directory) - [`title?`](#dev-title) - [`url?`](#dev-url) Configure how this component works in `sst dev`. :::note In `sst dev` your SolidStart app is run in dev mode; it's not deployed. ::: Instead of deploying your SolidStart app, this starts it in dev mode. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev). To disable dev mode, pass in `false`. autostart? **Type** `Input` **Default** `true` Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later. command? **Type** `Input` **Default** `"npm run dev"` The command that `sst dev` runs to start this in dev mode. directory? **Type** `Input` **Default** Uses the `path` Change the directory from where the `command` is run. title? **Type** `Input` The title of the tab in the multiplexer. url? **Type** `Input` **Default** `"http://url-unavailable-in-dev.mode"` The `url` when this is running in dev mode. Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`. ### domain? **Type** `Input` - [`aliases?`](#domain-aliases) - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name`](#domain-name) - [`redirects?`](#domain-redirects) Set a custom domain for your SolidStart app. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers. 
:::

By default this assumes the domain is hosted on Route 53.

```js
{
  domain: "example.com"
}
```

For domains hosted on Cloudflare.

```js
{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}
```

Specify a `www.` version of the custom domain.

```js
{
  domain: {
    name: "domain.com",
    redirects: ["www.domain.com"]
  }
}
```

aliases?

**Type** `Input<string[]>`

Alias domains that should be used. Unlike the `redirects` option, this keeps your visitors on this alias domain. So if your users visit `app2.domain.com`, they will stay on `app2.domain.com` in their browser.

```js {4}
{
  domain: {
    name: "app1.domain.com",
    aliases: ["app2.domain.com"]
  }
}
```

cert?

**Type** `Input<string>`

The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. The certificate will be created in the `us-east-1` region as required by AWS CloudFront. If you are creating your own certificate, you must also create it in `us-east-1`.

:::tip
You need to pass in a `cert` for domains that are not hosted on supported `dns` providers.
:::

To manually set up a domain on an unsupported provider, you'll need to:

1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner.
2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`.
3. Add the DNS records in your provider to point to the CloudFront distribution URL.

```js
{
  domain: {
    name: "domain.com",
    dns: false,
    cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63"
  }
}
```

dns?

**Type** `Input<false | sst.aws.dns | sst.cloudflare.dns | sst.vercel.dns>`

**Default** `sst.aws.dns`

The DNS provider to use for the domain. Defaults to the AWS DNS provider. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing.
Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`.

Specify the hosted zone ID for the Route 53 domain.

```js
{
  domain: {
    name: "example.com",
    dns: sst.aws.dns({
      zone: "Z2FDTNDATAQYW2"
    })
  }
}
```

Use a domain hosted on Cloudflare. This needs the Cloudflare provider.

```js
{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}
```

Use a domain hosted on Vercel. This needs the Vercel provider.

```js
{
  domain: {
    name: "example.com",
    dns: sst.vercel.dns()
  }
}
```

name

**Type** `Input<string>`

The custom domain you want to use.

```js
{
  domain: {
    name: "example.com"
  }
}
```

Can also include subdomains based on the current stage.

```js
{
  domain: {
    name: `${$app.stage}.example.com`
  }
}
```

redirects?

**Type** `Input<string[]>`

Alternate domains to be used. Visitors to the alternate domains will be redirected to the main `name`.

:::note
Unlike the `aliases` option, this will redirect visitors back to the main `name`.
:::

Use this to create a `www.` version of your domain and redirect visitors to the apex domain.

```js {4}
{
  domain: {
    name: "domain.com",
    redirects: ["www.domain.com"]
  }
}
```

### edge?

**Type** `Input<Object>`

- [`viewerRequest?`](#edge-viewerrequest) `Input<Object>`
  - [`injection`](#edge-viewerrequest-injection)
  - [`kvStore?`](#edge-viewerrequest-kvstore)
- [`viewerResponse?`](#edge-viewerresponse) `Input<Object>`
  - [`injection`](#edge-viewerresponse-injection)
  - [`kvStore?`](#edge-viewerresponse-kvstore)

Configure CloudFront Functions to customize the behavior of HTTP requests and responses at the edge.

viewerRequest?

**Type** `Input<Object>`

Configure the viewer request function. The viewer request function can be used to modify incoming requests before they reach your origin server. For example, you can redirect users, rewrite URLs, or add headers.

injection

**Type** `Input<string>`

The code to inject into the viewer request function.
By default, a viewer request function is created to: - Disable CloudFront default URL if custom domain is set - Add the `x-forwarded-host` header - Route assets requests to S3 (static files stored in the bucket) - Route server requests to server functions (dynamic rendering) The function manages routing by: 1. First checking if the requested path exists in S3 (with variations like adding index.html) 2. Serving a custom 404 page from S3 if configured and the path isn't found 3. Routing image optimization requests to the image optimizer function 4. Routing all other requests to the nearest server function The given code will be injected at the beginning of this function. ```js async function handler(event) { // User injected code // Default behavior code return event.request; } ``` To add a custom header to all requests. ```js { edge: { viewerRequest: { injection: `event.request.headers["x-foo"] = { value: "bar" };` } } } ``` You can use this to add basic auth, [check out an example](/docs/examples/#aws-nextjs-basic-auth). kvStore? **Type** `Input` The KV store to associate with the viewer request function. ```js { edge: { viewerRequest: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` viewerResponse? **Type** `Input` Configure the viewer response function. The viewer response function can be used to modify outgoing responses before they are sent to the client. For example, you can add security headers or change the response status code. By default, no viewer response function is set. A new function will be created with the provided code. injection **Type** `Input` The code to inject into the viewer response function. ```js async function handler(event) { // User injected code return event.response; } ``` To add a custom header to all responses. ```js { edge: { viewerResponse: { injection: `event.response.headers["x-foo"] = { value: "bar" };` } } } ``` kvStore? 
**Type** `Input` The KV store to associate with the viewer response function. ```js { edge: { viewerResponse: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` ### environment? **Type** `Input<Record<string, Input<string>>>` Set environment variables in your SolidStart app. These are made available: 1. In `vinxi build`, they are loaded into `process.env`. 2. Locally while running through `sst dev`. :::tip You can also `link` resources to your SolidStart app and access them in a type-safe way with the [SDK](/docs/reference/sdk/). We recommend linking since it's more secure. ::: ```js { environment: { API_URL: api.url, STRIPE_PUBLISHABLE_KEY: "pk_test_123" } } ``` ### invalidation? **Type** `Input` - [`paths?`](#invalidation-paths) - [`wait?`](#invalidation-wait) **Default** `{paths: "all", wait: false}` Configure how the CloudFront cache invalidations are handled. This is run after your SolidStart app has been deployed. :::tip You get 1000 free invalidations per month. After that you pay $0.005 per invalidation path. [Read more here](https://aws.amazon.com/cloudfront/pricing/). ::: Wait for all paths to be invalidated. ```js { invalidation: { paths: "all", wait: true } } ``` paths? **Type** `Input` **Default** `"all"` The paths to invalidate. You can either pass in an array of glob patterns to invalidate specific files. Or you can use one of these built-in options: - `all`: All files will be invalidated when any file changes - `versioned`: Only versioned files will be invalidated when versioned files change :::note Each glob pattern counts as a separate invalidation path. Whereas invalidating `all` uses the `/*` pattern and counts as a single invalidation. ::: Invalidate the `index.html` and all files under the `products/` route. ```js { invalidation: { paths: ["/index.html", "/products/*"] } } ``` This counts as two invalidations. wait? **Type** `Input` **Default** `false` Configure if `sst deploy` should wait for the CloudFront cache invalidation to finish. 
:::tip For non-prod environments it might make sense to pass in `false`. ::: Waiting for this process to finish ensures that new content will be available after the deploy finishes. However, this process can sometimes take more than 5 mins. ```js { invalidation: { wait: true } } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your SolidStart app. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access it in your site using the [SDK](/docs/reference/sdk/). Takes a list of resources to link to the function. ```js { link: [bucket, stripeKey] } ``` ### path? **Type** `string` **Default** `"."` Path to the directory where your SolidStart app is located. This path is relative to your `sst.config.ts`. By default it assumes your SolidStart app is in the root of your SST app. If your SolidStart app is in a package in your monorepo. ```js { path: "packages/web" } ``` ### permissions? **Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that the [server function](#nodes-server) in your SolidStart app needs to access. These permissions are used to create the function's IAM role. :::tip If you `link` the function to a resource, the permissions to access it are automatically added. ::: Allow reading and writing to an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Perform all actions on an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:*"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Grant permissions to access all resources. 
```js { permissions: [ { actions: ["*"], resources: ["*"] }, ] } ``` actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. ```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```js { effect: "deny" } ``` resources **Type** `Input[]>` The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html). ```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### protection? 
**Type** `Input<"none" | "oac" | "oac-with-edge-signing" | Object>` - [`edgeFunction?`](#protection-edgefunction) `Object` - [`arn?`](#protection-edgefunction-arn) - [`memory?`](#protection-edgefunction-memory) - [`timeout?`](#protection-edgefunction-timeout) - [`mode`](#protection-mode) **Default** `"none"` The available options are: - `"none"`: Lambda URLs are publicly accessible. - `"oac"`: Lambda URLs protected by CloudFront Origin Access Control. Requires manual `x-amz-content-sha256` header for POST requests. Use when you control all POST requests. - `"oac-with-edge-signing"`: Full protection with automatic header signing via Lambda@Edge. Works with external webhooks and callbacks. Higher cost and latency but works out of the box. :::note When using `"oac-with-edge-signing"`, request bodies are limited to 1MB due to Lambda@Edge payload limits. For file uploads larger than 1MB, consider using presigned S3 URLs or the `"oac"` mode with manual header signing. ::: :::note When removing a stage that uses `"oac-with-edge-signing"`, deletion may take 5-10 minutes while AWS removes the Lambda@Edge replicated functions from all edge locations. ::: Configure Lambda function protection through CloudFront. ```js // No protection (default) { protection: "none" } ``` ```js // OAC protection, manual header signing required { protection: "oac" } ``` ```js // Full protection with automatic Lambda@Edge { protection: "oac-with-edge-signing" } ``` ```js // Custom Lambda@Edge configuration { protection: { mode: "oac-with-edge-signing", edgeFunction: { memory: "256 MB", timeout: "10 seconds" } } } ``` ```js // Use existing Lambda@Edge function { protection: { mode: "oac-with-edge-signing", edgeFunction: { arn: "arn:aws:lambda:us-east-1:123456789012:function:my-signing-function:1" } } } ``` edgeFunction? **Type** `Object` arn? **Type** `Input<"arn:aws:lambda:${string}">` Custom Lambda@Edge function ARN to use for request signing. 
If provided, this function will be used instead of creating a new one. Must be a qualified ARN (with version) and deployed in us-east-1. memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"128 MB"` Memory size for the auto-created Lambda@Edge function. Only used when `arn` is not provided. timeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` Timeout for the auto-created Lambda@Edge function. Only used when `arn` is not provided. mode **Type** `"oac-with-edge-signing"` ### regions? **Type** `Input` **Default** The default region of the SST app Regions that the server function will be deployed to. By default, the server function is deployed to a single region: the default region of your SST app. :::note This does not use Lambda@Edge; it deploys multiple Lambda functions instead. ::: To deploy it to multiple regions, you can pass in a list of regions. And any requests made will be routed to the nearest region based on the user's location. ```js { regions: ["us-east-1", "eu-west-1"] } ``` ### router? **Type** `Object` - [`connectionAttempts?`](#router-connectionattempts) - [`connectionTimeout?`](#router-connectiontimeout) - [`domain?`](#router-domain) - [`instance`](#router-instance) - [`keepAliveTimeout?`](#router-keepalivetimeout) - [`path?`](#router-path) - [`readTimeout?`](#router-readtimeout) - [`rewrite?`](#router-rewrite) `Input` - [`regex`](#router-rewrite-regex) - [`to`](#router-rewrite-to) Serve your SolidStart app through a `Router` instead of a standalone CloudFront distribution. By default, this component creates a new CloudFront distribution. But you might want to serve it through the distribution of your `Router` as: - A path like `/docs` - A subdomain like `docs.example.com` - Or a combined pattern like `dev.example.com/docs` To serve your SolidStart app **from a path**, you'll need to configure the root domain in your `Router` component. 
```ts title="sst.config.ts" {2} const router = new sst.aws.Router("Router", { domain: "example.com" }); ``` Now set the `router` and the `path`. ```ts {3,4} { router: { instance: router, path: "/docs" } } ``` You also need to set the `baseURL` property in your `app.config.ts` without a trailing slash. :::caution If routing to a path, you need to set that as the base path in your SolidStart app as well. ::: ```js title="app.config.ts" {3} export default defineConfig({ server: { preset: "aws-lambda" }, baseURL: "/docs" }); ``` To serve your SolidStart app **from a subdomain**, you'll need to configure the domain in your `Router` component to match both the root and the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` Now set the `domain` in the `router` prop. ```ts {4} { router: { instance: router, domain: "docs.example.com" } } ``` Finally, to serve your SolidStart app **from a combined pattern** like `dev.example.com/docs`, you'll need to configure the domain in your `Router` to match the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` And set the `domain` and the `path`. ```ts {4,5} { router: { instance: router, domain: "dev.example.com", path: "/docs" } } ``` Also, make sure to set the `baseURL` in your `app.config.ts`, like above. connectionAttempts? **Type** `Input` **Default** `3` The number of times that CloudFront attempts to connect to the origin. Must be between 1 and 3. ```js router: { instance: router, connectionAttempts: 1 } ``` connectionTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"10 seconds"` The number of seconds that CloudFront waits before timing out and closing the connection to the origin. Must be between 1 and 10 seconds. ```js router: { instance: router, connectionTimeout: "3 seconds" } ``` domain? 
**Type** `Input` Route requests matching a specific domain pattern. You can serve your resource from a subdomain. For example, if you want to make it available at `https://dev.example.com`, set the `Router` to match the domain or a wildcard. ```ts {2} title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "*.example.com" }); ``` Then set the domain pattern. ```ts {3} router: { instance: router, domain: "dev.example.com" } ``` While `dev.example.com` matches `*.example.com`, something like `docs.dev.example.com` will not match `*.example.com`. :::tip Nested wildcard domain patterns are not supported. ::: You'll need to add `*.dev.example.com` as an alias. instance **Type** `Input<`[`Router`](/docs/component/aws/router)`>` The `Router` component to use for routing requests. Let's say you have a Router component. ```ts title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "example.com" }); ``` You can attach your app to this Router, instead of creating a standalone CloudFront distribution. ```ts router: { instance: router } ``` keepAliveTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The number of seconds that CloudFront should try to maintain the connection to the destination after receiving the last packet of the response. Must be between 1 and 60 seconds. ```js router: { instance: router, keepAliveTimeout: "10 seconds" } ``` path? **Type** `Input` **Default** `"/"` Route requests matching a specific path prefix. ```ts {3} router: { instance: router, path: "/docs" } ``` readTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The number of seconds that CloudFront waits for a response after routing a request to the destination. Must be between 1 and 60 seconds. When compared to the `connectionTimeout`, this is the total time for the request. ```js router: { instance: router, readTimeout: "60 seconds" } ``` rewrite? 
**Type** `Input` Rewrite the request path. If the route path is `/api/*` and a request comes in for `/api/users/profile`, the request path the destination sees is `/api/users/profile`. If you want to serve the route from the root, you can rewrite the request path to `/users/profile`. ```js router: { instance: router, path: "/api", rewrite: { regex: "^/api/(.*)$", to: "/$1" } } ``` regex **Type** `Input` The regex to match the request path. to **Type** `Input` The replacement for the matched path. ### server? **Type** `Object` - [`architecture?`](#server-architecture) - [`install?`](#server-install) - [`layers?`](#server-layers) - [`loader?`](#server-loader) - [`memory?`](#server-memory) - [`runtime?`](#server-runtime) - [`timeout?`](#server-timeout) **Default** `{architecture: "x86_64", memory: "1024 MB"}` Configure the Lambda function used for the server. architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The [architecture](https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html) of the server function. ```js { server: { architecture: "arm64" } } ``` install? **Type** `Input` Dependencies that need to be excluded from the server function package. Certain npm packages cannot be bundled using esbuild. This allows you to exclude them from the bundle. Instead they'll be moved into a `node_modules/` directory in the function package. :::tip If esbuild is giving you an error about a package, try adding it to the `install` list. ::: This allows your function to use these dependencies when deployed; they just won't be tree shaken. :::caution Packages listed here still need to be in your `package.json`. ::: Esbuild will ignore them while traversing the imports in your code. So these are the **package names as seen in the imports**. It also works on packages that are not directly imported by your code. ```js { server: { install: ["sharp"] } } ``` layers? 
**Type** `Input[]>` A list of Lambda layer ARNs to add to the server function. ```js { server: { layers: ["arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1"] } } ``` loader? **Type** `Input>` Configure additional esbuild loaders for other file extensions. This is useful when your code is importing non-JS files like `.png`, `.css`, etc. ```js { server: { loader: { ".png": "file" } } } ``` memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"1024 MB"` The amount of memory allocated to the server function. Takes values between 128 MB and 10240 MB in 1 MB increments. ```js { server: { memory: "2048 MB" } } ``` runtime? **Type** `Input<"nodejs18.x" | "nodejs20.x" | "nodejs22.x" | "nodejs24.x">` **Default** `"nodejs24.x"` The runtime environment for the server function. ```js { server: { runtime: "nodejs24.x" } } ``` timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The maximum amount of time the server function can run. While Lambda supports timeouts up to 900 seconds, your requests are served through AWS CloudFront. And it has a default limit of 60 seconds. :::tip If you need a timeout longer than 60 seconds, you'll need to request a limit increase. ::: You can increase this to 180 seconds for your account by contacting AWS Support and [requesting a limit increase](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). ```js { server: { timeout: "50 seconds" } } ``` If you need a timeout longer than what CloudFront supports, we recommend using a separate Lambda `Function` with the `url` enabled instead. ### transform? **Type** `Object` - [`assets?`](#transform-assets) - [`cdn?`](#transform-cdn) - [`imageOptimizer?`](#transform-imageoptimizer) - [`server?`](#transform-server) [Transform](/docs/components#transform) how this component creates its underlying resources. assets? 
**Type** [`BucketArgs`](/docs/component/aws/bucket#bucketargs)` | (args: `[`BucketArgs`](/docs/component/aws/bucket#bucketargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Bucket resource used for uploading the assets. cdn? **Type** [`CdnArgs`](/docs/component/aws/cdn#cdnargs)` | (args: `[`CdnArgs`](/docs/component/aws/cdn#cdnargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudFront CDN resource. imageOptimizer? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the image optimizer Function resource. server? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the server Function resource. ### vpc? **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`privateSubnets`](#vpc-privatesubnets) - [`securityGroups`](#vpc-securitygroups) Configure the server function to connect to private subnets in a virtual private cloud or VPC. This allows it to access private resources. Create a `Vpc` component. ```js title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` Or reference an existing VPC. ```js title="sst.config.ts" const myVpc = sst.aws.Vpc.get("MyVpc", "vpc-12345678901234567"); ``` And pass it in. ```js { vpc: myVpc } ``` privateSubnets **Type** `Input[]>` A list of VPC subnet IDs. securityGroups **Type** `Input[]>` A list of VPC security group IDs. ### warm? **Type** `Input` **Default** `0` The number of instances of the [server function](#nodes-server) to keep warm. 
This is useful for cases where you are experiencing long cold starts. The default is to not keep any instances warm. This works by starting a serverless cron job to make _n_ concurrent requests to the server function every few minutes, where _n_ is the number of instances to keep warm. ## Properties ### nodes **Type** `Object` - [`assets`](#nodes-assets) - [`cdn`](#nodes-cdn) - [`server`](#nodes-server) The underlying [resources](/docs/components/#nodes) this component creates. assets **Type** `undefined | `[`Bucket`](/docs/component/aws/bucket) The Amazon S3 Bucket that stores the assets. cdn **Type** `undefined | `[`Cdn`](/docs/component/aws/cdn) The Amazon CloudFront CDN that serves the site. server **Type** `undefined | Output<`[`Function`](/docs/component/aws/function)`>` The AWS Lambda server function that renders the site. ### url **Type** `Output` The URL of the SolidStart app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `string` The URL of the SolidStart app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. --- ## StaticSite Reference doc for the `sst.aws.StaticSite` component. https://sst.dev/docs/component/aws/static-site The `StaticSite` component lets you deploy a static website to AWS. It uses [Amazon S3](https://aws.amazon.com/s3/) to store your files and [Amazon CloudFront](https://aws.amazon.com/cloudfront/) to serve them. It can also `build` your site by running your static site generator, like [Vite](https://vitejs.dev), and uploading the build output to S3. #### Minimal example Simply uploads the current directory as a static site. 
```js title="sst.config.ts" new sst.aws.StaticSite("MyWeb"); ``` #### Change the path Change the `path` that should be uploaded. ```js title="sst.config.ts" new sst.aws.StaticSite("MyWeb", { path: "path/to/site" }); ``` #### Running locally In `sst dev`, we don't deploy your site to AWS because we assume you are running it locally. :::note Your static site will not be deployed when run locally with `sst dev`. ::: For example, for a Vite site, you can run it locally with. ```bash sst dev vite dev ``` This will start the Vite dev server and pass in any environment variables that you've set in your config. But it will not deploy your site to AWS. #### Deploy a Vite SPA Use [Vite](https://vitejs.dev) to deploy a React/Vue/Svelte/etc. SPA by specifying the `build` config. ```js title="sst.config.ts" new sst.aws.StaticSite("MyWeb", { build: { command: "npm run build", output: "dist" } }); ``` #### Deploy a Jekyll site Use [Jekyll](https://jekyllrb.com) to deploy a static site. ```js title="sst.config.ts" new sst.aws.StaticSite("MyWeb", { errorPage: "/404.html", build: { command: "bundle exec jekyll build", output: "_site" } }); ``` #### Deploy a Gatsby site Use [Gatsby](https://www.gatsbyjs.com) to deploy a static site. ```js title="sst.config.ts" new sst.aws.StaticSite("MyWeb", { errorPage: "/404.html", build: { command: "npm run build", output: "public" } }); ``` #### Deploy an Angular SPA Use [Angular](https://angular.dev) to deploy a SPA. ```js title="sst.config.ts" new sst.aws.StaticSite("MyWeb", { build: { command: "ng build --output-path dist", output: "dist" } }); ``` #### Add a custom domain Set a custom domain for your site. ```js {2} title="sst.config.ts" new sst.aws.StaticSite("MyWeb", { domain: "my-app.com" }); ``` #### Redirect www to apex domain Redirect `www.my-app.com` to `my-app.com`. 
```js {4} title="sst.config.ts" new sst.aws.StaticSite("MyWeb", { domain: { name: "my-app.com", redirects: ["www.my-app.com"] } }); ``` #### Set environment variables Set `environment` variables for the build process of your static site. These will be used locally and on deploy. :::tip For Vite, the types for the environment variables are also generated. This can be configured through the `vite` prop. ::: For some static site generators like Vite, [environment variables](https://vitejs.dev/guide/env-and-mode) prefixed with `VITE_` can be accessed in the browser. ```ts {5-7} title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.StaticSite("MyWeb", { environment: { BUCKET_NAME: bucket.name, // Accessible in the browser VITE_STRIPE_PUBLISHABLE_KEY: "pk_test_123" }, build: { command: "npm run build", output: "dist" } }); ``` --- ## Constructor ```ts new StaticSite(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`StaticSiteArgs`](#staticsiteargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## StaticSiteArgs ### assets? **Type** `Object` - [`bucket?`](#assets-bucket) - [`fileOptions?`](#assets-fileoptions) `Input` - [`cacheControl?`](#assets-fileoptions-cachecontrol) - [`contentType?`](#assets-fileoptions-contenttype) - [`files`](#assets-fileoptions-files) - [`ignore?`](#assets-fileoptions-ignore) - [`path?`](#assets-path) - [`purge?`](#assets-purge) - [`routes?`](#assets-routes) - [`textEncoding?`](#assets-textencoding) **Default** `Object` Configure how the static site's assets are uploaded to S3. By default, this is set to the following. Read more about these options below. ```js { assets: { textEncoding: "utf-8", fileOptions: [ { files: "**", cacheControl: "max-age=31536000,public,immutable" }, { files: "**/*.html", cacheControl: "max-age=0,no-cache,no-store,must-revalidate" } ] } } ``` bucket? 
**Type** `Input` **Default** Creates a new bucket The name of the S3 bucket to upload the assets to. ```js { assets: { bucket: "my-existing-bucket" } } ``` :::note The bucket must allow CloudFront to access it. ::: When using an existing bucket, ensure it has a policy that allows CloudFront access. For example, the bucket policy might look like this: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "cloudfront.amazonaws.com" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my-existing-bucket/*" } ] } ``` fileOptions? **Type** `Input` **Default** `Object[]` Specify the `Content-Type` and `Cache-Control` headers for specific files. This allows you to override the default behavior for specific files using glob patterns. By default, this is set to cache CSS/JS files for 1 year and not cache HTML files. ```js { assets: { fileOptions: [ { files: ["**/*.css", "**/*.js"], cacheControl: "max-age=31536000,public,immutable" }, { files: "**/*.html", cacheControl: "max-age=0,no-cache,no-store,must-revalidate" } ] } } ``` You can change the default options. For example, apply `Cache-Control` and `Content-Type` to all zip files. ```js { assets: { fileOptions: [ { files: "**/*.zip", contentType: "application/zip", cacheControl: "private,no-cache,no-store,must-revalidate" }, ], } } ``` Apply `Cache-Control` to all CSS and JS files except for CSS files with the `index-` prefix in the `main/` directory. ```js { assets: { fileOptions: [ { files: ["**/*.css", "**/*.js"], ignore: "main/index-*.css", cacheControl: "private,no-cache,no-store,must-revalidate" }, ], } } ``` cacheControl? **Type** `string` The `Cache-Control` header to apply to the matched files. contentType? **Type** `string` The `Content-Type` header to apply to the matched files. files **Type** `string | string[]` A glob pattern or array of glob patterns of files to apply these options to. ignore? 
**Type** `string | string[]` A glob pattern or array of glob patterns of files to exclude from the ones matched by the `files` glob pattern. path? **Type** `Input` **Default** Root of the bucket The path into the S3 bucket where the assets should be uploaded. ```js { assets: { path: "websites/my-website" } } ``` purge? **Type** `Input` **Default** `true` Configure if files from previous deployments should be purged from the bucket. ```js { assets: { purge: false } } ``` routes? **Type** `Input[]>` Configure additional asset routes for serving files directly from the S3 bucket. These routes allow files stored in specific S3 bucket paths to be served under the same domain as your site. This is particularly useful for handling user-uploaded content. If user-uploaded files are stored in the `uploads` directory, and no `routes` are configured, these files will return 404 errors or display the `errorPage` if set. By including `uploads` in `routes`, all files in that folder will be served directly from the S3 bucket. ```js { assets: { routes: ["uploads"] } } ``` textEncoding? **Type** `Input<"utf-8" | "iso-8859-1" | "windows-1252" | "ascii" | "none">` **Default** `"utf-8"` Character encoding for text-based assets uploaded, like HTML, CSS, and JS. This is used to set the `Content-Type` header when these files are served out. If set to `"none"`, then no charset will be returned in the header. ```js { assets: { textEncoding: "iso-8859-1" } } ``` ### build? **Type** `Input` - [`command`](#build-command) - [`output`](#build-output) Configure if your static site needs to be built. This is useful if you are using a static site generator. The `build.output` directory will be uploaded to S3 instead. For a Vite project using npm this might look like this. ```js { build: { command: "npm run build", output: "dist" } } ``` command **Type** `Input` The command that builds the static site. It's run before your site is deployed. This is run at the root of your site, `path`. 
```js { build: { command: "yarn build" } } ``` output **Type** `Input` The directory where the build output of your static site is generated. This will be uploaded. The path is relative to the root of your site, `path`. ```js { build: { output: "build" } } ``` ### dev? **Type** `false | Object` - [`autostart?`](#dev-autostart) - [`command?`](#dev-command) - [`directory?`](#dev-directory) - [`title?`](#dev-title) - [`url?`](#dev-url) Configure how this component works in `sst dev`. :::note In `sst dev` your static site is run in dev mode; it's not deployed. ::: Instead of deploying your static site, this starts it in dev mode. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev). To disable dev mode, pass in `false`. autostart? **Type** `Input` **Default** `true` Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later. command? **Type** `Input` **Default** `"npm run dev"` The command that `sst dev` runs to start this in dev mode. directory? **Type** `Input` **Default** Uses the `path` Change the directory from where the `command` is run. title? **Type** `Input` The title of the tab in the multiplexer. url? **Type** `Input` **Default** `"http://url-unavailable-in-dev.mode"` The `url` when this is running in dev mode. Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`. ### domain? **Type** `Input` - [`aliases?`](#domain-aliases) - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name`](#domain-name) - [`redirects?`](#domain-redirects) Set a custom domain for your static site. Supports domains hosted either on [Route 53](https://aws.amazon.com/route53/) or outside AWS. 
:::tip You can also migrate an externally hosted domain to Amazon Route 53 by [following this guide](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/MigratingDNS.html). ::: ```js { domain: "domain.com" } ``` Specify a `www.` version of the custom domain. ```js { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` aliases? **Type** `Input` Alias domains that should be used. Unlike the `redirects` option, this keeps your visitors on this alias domain. So if your users visit `app2.domain.com`, they will stay on `app2.domain.com` in their browser. ```js {4} { domain: { name: "app1.domain.com", aliases: ["app2.domain.com"] } } ``` cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. The certificate will be created in the `us-east-1` region as required by AWS CloudFront. If you are creating your own certificate, you must also create it in `us-east-1`. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the CloudFront distribution URL. ```js { domain: { name: "domain.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to AWS. Takes an adapter that can create the DNS records on the provider. 
This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare. This needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel. This needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` redirects? **Type** `Input` Alternate domains to be used. Visitors to the alternate domains will be redirected to the main `name`. :::note Unlike the `aliases` option, this will redirect visitors back to the main `name`. ::: Use this to create a `www.` version of your domain and redirect visitors to the apex domain. ```js {4} { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` ### edge? **Type** `Input` - [`viewerRequest?`](#edge-viewerrequest) `Input` - [`injection`](#edge-viewerrequest-injection) - [`kvStore?`](#edge-viewerrequest-kvstore) - [`viewerResponse?`](#edge-viewerresponse) `Input` - [`injection`](#edge-viewerresponse-injection) - [`kvStore?`](#edge-viewerresponse-kvstore) Configure CloudFront Functions to customize the behavior of HTTP requests and responses at the edge. viewerRequest? **Type** `Input` Configure the viewer request function. The viewer request function can be used to modify incoming requests before they reach your origin server. For example, you can redirect users, rewrite URLs, or add headers.
By default, a viewer request function is created to: - Disable CloudFront default URL if custom domain is set. - Rewrite URLs to append `index.html` to the URL if the URL ends with a `/`. - Rewrite URLs to append `.html` to the URL if the URL does not contain a file extension. You can pass in the code to inject into the function. The provided code will be injected at the start of the function. ```js async function handler(event) { // User injected code // Default behavior code return event.request; } ``` To add a custom header to all requests. ```js { edge: { viewerRequest: { injection: `event.request.headers["x-foo"] = { value: "bar" };` } } } ``` You can use this to add basic auth; [check out an example](/docs/examples/#aws-static-site-basic-auth). injection **Type** `Input` The code to inject into the viewer request function. To add a custom header to all requests. ```js { edge: { viewerRequest: { injection: `event.request.headers["x-foo"] = { value: "bar" };` } } } ``` kvStore? **Type** `Input` The KV store to associate with the viewer request function. ```js { edge: { viewerRequest: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` viewerResponse? **Type** `Input` Configure the viewer response function. The viewer response function can be used to modify outgoing responses before they are sent to the client. For example, you can add security headers or change the response status code. By default, no viewer response function is set. If you pass in code to inject, a new CloudFront function will be created with that code injected into it. ```js async function handler(event) { // User injected code return event.response; } ``` To add a custom header to all responses. ```js { edge: { viewerResponse: { injection: `event.response.headers["x-foo"] = { value: "bar" };` } } } ``` injection **Type** `Input` The code to inject into the viewer response function.
To add a custom header to all responses. ```js { edge: { viewerResponse: { injection: `event.response.headers["x-foo"] = { value: "bar" };` } } } ``` kvStore? **Type** `Input` The KV store to associate with the viewer response function. ```js { edge: { viewerResponse: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` ### environment? **Type** `Input>>` Set environment variables for your static site. These are made available: 1. Locally while running your site through `sst dev`. 2. In the build process when running `build.command`. ```js environment: { API_URL: api.url, STRIPE_PUBLISHABLE_KEY: "pk_test_123" } ``` Some static site generators like Vite have their own [concept of environment variables](https://vitejs.dev/guide/env-and-mode), and you can use this option to set them. :::note The types for the Vite environment variables are generated automatically. You can change their location through `vite.types`. ::: These can be accessed as `import.meta.env` in your site. And only the ones prefixed with `VITE_` can be accessed in the browser. ```js environment: { API_URL: api.url, // Accessible in the browser VITE_STRIPE_PUBLISHABLE_KEY: "pk_test_123" } ``` ### errorPage? **Type** `Input` **Default** The `indexPage` of your site. The error page to display on a 403 or 404 error. This is a path relative to the root of your site, or the `path`. ```js { errorPage: "404.html" } ``` ### indexPage? **Type** `string` **Default** `"index.html"` The name of the index page of the site. This is a path relative to the root of your site, or the `path`. :::note The index page only applies to the root of your site. ::: By default this is set to `index.html`. So if a visitor goes to your site, let's say `example.com`, `example.com/index.html` will be served. ```js { indexPage: "home.html" } ``` ### invalidation?
**Type** `Input` - [`paths?`](#invalidation-paths) - [`wait?`](#invalidation-wait) **Default** `{paths: "all", wait: false}` Configure how the CloudFront cache invalidations are handled. This is run after your static site has been deployed. :::tip You get 1000 free invalidations per month. After that you pay $0.005 per invalidation path. [Read more here](https://aws.amazon.com/cloudfront/pricing/). ::: Turn off invalidations. ```js { invalidation: false } ``` Wait for all paths to be invalidated. ```js { invalidation: { paths: "all", wait: true } } ``` paths? **Type** `Input` **Default** `"all"` The paths to invalidate. You can either pass in an array of glob patterns to invalidate specific files. Or you can use the built-in option `all` to invalidate all files when any file changes. :::note Invalidating `all` counts as one invalidation, while each glob pattern counts as a single invalidation path. ::: Invalidate the `index.html` and all files under the `products/` route. ```js { invalidation: { paths: ["/index.html", "/products/*"] } } ``` wait? **Type** `Input` **Default** `false` Configure if `sst deploy` should wait for the CloudFront cache invalidation to finish. :::tip For non-prod environments it might make sense to pass in `false`. ::: Waiting for the CloudFront cache invalidation process to finish ensures that the new content will be served once the deploy finishes. However, this process can sometimes take more than 5 minutes. ```js { invalidation: { wait: true } } ``` ### path? **Type** `Input` **Default** `"."` Path to the directory where your static site is located. By default this assumes your static site is in the root of your SST app. This directory will be uploaded to S3. The path is relative to your `sst.config.ts`. :::note If the `build` options are specified, `build.output` will be uploaded to S3 instead. ::: If you are using a static site generator, like Vite, you'll need to configure the `build` options.
When these are set, the `build.output` directory will be uploaded to S3 instead. Change where your static site is located. ```js { path: "packages/web" } ``` ### router? **Type** `Object` - [`connectionAttempts?`](#router-connectionattempts) - [`connectionTimeout?`](#router-connectiontimeout) - [`domain?`](#router-domain) - [`instance`](#router-instance) - [`keepAliveTimeout?`](#router-keepalivetimeout) - [`path?`](#router-path) - [`readTimeout?`](#router-readtimeout) - [`rewrite?`](#router-rewrite) `Input` - [`regex`](#router-rewrite-regex) - [`to`](#router-rewrite-to) Serve your static site through a `Router` instead of a standalone CloudFront distribution. By default, this component creates a new CloudFront distribution. But you might want to serve it through the distribution of your `Router` as: - A path like `/docs` - A subdomain like `docs.example.com` - A combined pattern like `dev.example.com/docs` To serve your static site **from a path**, you'll need to configure the root domain in your `Router` component. ```ts title="sst.config.ts" {2} const router = new sst.aws.Router("Router", { domain: "example.com" }); ``` Now set the `router` and the `path`. ```ts {3,4} { router: { instance: router, path: "/docs" } } ``` If you are using a static site generator, make sure the base path is set in your config. :::caution If routing to a path, you need to configure that as the base path in your static site generator as well. ::: For Vite, set the `base` option in your `vite.config.ts`. It should end with a `/` to ensure asset paths, like CSS and JS, are constructed correctly. ```js title="vite.config.ts" {2} export default defineConfig({ base: "/docs/" }); ``` To serve your static site **from a subdomain**, you'll need to configure the domain in your `Router` component to match both the root and the subdomain.
```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` Now set the `domain` in the `router` prop. ```ts {4} { router: { instance: router, domain: "docs.example.com" } } ``` Finally, to serve your static site **from a combined pattern** like `dev.example.com/docs`, you'll need to configure the domain in your `Router` to match the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` And set the `domain` and the `path`. ```ts {4,5} { router: { instance: router, domain: "dev.example.com", path: "/docs" } } ``` Also, make sure to set the base path in your static site generator configuration, like above. connectionAttempts? **Type** `Input` **Default** `3` The number of times that CloudFront attempts to connect to the origin. Must be between 1 and 3. ```js router: { instance: router, connectionAttempts: 1 } ``` connectionTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"10 seconds"` The number of seconds that CloudFront waits before timing out and closing the connection to the origin. Must be between 1 and 10 seconds. ```js router: { instance: router, connectionTimeout: "3 seconds" } ``` domain? **Type** `Input` Route requests matching a specific domain pattern. You can serve your resource from a subdomain. For example, if you want to make it available at `https://dev.example.com`, set the `Router` to match the domain or a wildcard. ```ts {2} title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "*.example.com" }); ``` Then set the domain pattern. ```ts {3} router: { instance: router, domain: "dev.example.com" } ``` While `dev.example.com` matches `*.example.com`, something like `docs.dev.example.com` will not. :::tip Nested wildcard domain patterns are not supported.
::: You'll need to add `*.dev.example.com` as an alias. instance **Type** `Input<`[`Router`](/docs/component/aws/router)`>` The `Router` component to use for routing requests. Let's say you have a Router component. ```ts title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "example.com" }); ``` You can attach it to the Router, instead of creating a standalone CloudFront distribution. ```ts router: { instance: router } ``` keepAliveTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The number of seconds that CloudFront should try to maintain the connection to the destination after receiving the last packet of the response. Must be between 1 and 60 seconds. ```js router: { instance: router, keepAliveTimeout: "10 seconds" } ``` path? **Type** `Input` **Default** `"/"` Route requests matching a specific path prefix. ```ts {3} router: { instance: router, path: "/docs" } ``` readTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The number of seconds that CloudFront waits for a response after routing a request to the destination. Must be between 1 and 60 seconds. When compared to the `connectionTimeout`, this is the total time for the request. ```js router: { instance: router, readTimeout: "60 seconds" } ``` rewrite? **Type** `Input` Rewrite the request path. If the route path is `/api/*` and a request comes in for `/api/users/profile`, the request path the destination sees is `/api/users/profile`. If you want to serve the route from the root, you can rewrite the request path to `/users/profile`. ```js router: { instance: router, path: "/api", rewrite: { regex: "^/api/(.*)$", to: "/$1" } } ``` regex **Type** `Input` The regex to match the request path. to **Type** `Input` The replacement for the matched path. ### transform? 
**Type** `Object` - [`assets?`](#transform-assets) - [`cdn?`](#transform-cdn) [Transform](/docs/components#transform) how this component creates its underlying resources. assets? **Type** [`BucketArgs`](/docs/component/aws/bucket#bucketargs)` | (args: `[`BucketArgs`](/docs/component/aws/bucket#bucketargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Bucket resource used for uploading the assets. cdn? **Type** [`CdnArgs`](/docs/component/aws/cdn#cdnargs)` | (args: `[`CdnArgs`](/docs/component/aws/cdn#cdnargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudFront CDN resource. ### vite? **Type** `Input` - [`types?`](#vite-types) Configure [Vite](https://vitejs.dev) related options. :::tip If a `vite.config.ts` or `vite.config.js` file is detected in the `path`, then these options will be used during the build and deploy process. ::: types? **Type** `string` **Default** `"src/sst-env.d.ts"` The path where the type definitions for the `environment` variables are generated. This is relative to the `path`. [Read more](https://vitejs.dev/guide/env-and-mode#intellisense-for-typescript). ```js { vite: { types: "other/path/sst-env.d.ts" } } ``` ## Properties ### nodes **Type** `Object` - [`assets`](#nodes-assets) - [`cdn`](#nodes-cdn) The underlying [resources](/docs/components/#nodes) this component creates. assets **Type** `undefined | `[`Bucket`](/docs/component/aws/bucket) The Amazon S3 Bucket that stores the assets. cdn **Type** `undefined | `[`Cdn`](/docs/component/aws/cdn) The Amazon CloudFront CDN that serves the site. ### url **Type** `Output` The URL of the website. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure.
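As a minimal sketch, if this site is linked to a function, its `url` can be read at runtime through the `Resource` object. The component name `MyWeb` and the redirect handler here are hypothetical; substitute your own names.

```ts title="src/redirect.ts"
import { Resource } from "sst";

export const handler = async () => {
  // Resource.MyWeb.url is the custom domain if `domain` is set,
  // otherwise the auto-generated CloudFront URL.
  return {
    statusCode: 302,
    headers: { Location: Resource.MyWeb.url }
  };
};
```

This only works in code that runs inside your deployed app with the site linked to it.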
--- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `string` The URL of the website. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. --- ## StepFunctions Reference doc for the `sst.aws.StepFunctions` component. https://sst.dev/docs/component/aws/step-functions The `StepFunctions` component lets you add state machines to your app using [AWS Step Functions](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html). :::note This component is currently in beta. Please [report any issues](https://github.com/sst/sst/issues) you find. ::: You define your state machine using a collection of states, where each state needs a unique name. It uses [JSONata](https://jsonata.org) for transforming data between states. #### Minimal example The state machine definition is compiled into JSON and passed to AWS. ```ts title="sst.config.ts" const foo = sst.aws.StepFunctions.pass({ name: "Foo" }); const bar = sst.aws.StepFunctions.succeed({ name: "Bar" }); const definition = foo.next(bar); new sst.aws.StepFunctions("MyStateMachine", { definition }); ``` #### Invoking a Lambda function Create a function and invoke it from a state machine. ```ts title="sst.config.ts" {5-8,12} const myFunction = new sst.aws.Function("MyFunction", { handler: "src/index.handler" }); const invoke = sst.aws.StepFunctions.lambdaInvoke({ name: "InvokeMyFunction", function: myFunction }); const done = sst.aws.StepFunctions.succeed({ name: "Done" }); new sst.aws.StepFunctions("MyStateMachine", { definition: invoke.next(done) }); ``` #### Use the express workflow ```ts title="sst.config.ts" {5} const foo = sst.aws.StepFunctions.pass({ name: "Foo" }); const bar = sst.aws.StepFunctions.succeed({ name: "Bar" }); new sst.aws.StepFunctions("MyStateMachine", { type: "express", definition: foo.next(bar) }); ``` --- ## Constructor ```ts new StepFunctions(name, args, opts?)
``` #### Parameters - `name` `string` - `args` [`StepFunctionsArgs`](#stepfunctionsargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## StepFunctionsArgs ### definition **Type** [`State`](/docs/component/aws/step-functions/state) The definition of the state machine. It takes a chain of `State` objects. ```ts title="sst.config.ts" const foo = sst.aws.StepFunctions.pass({ name: "Foo" }); const bar = sst.aws.StepFunctions.succeed({ name: "Bar" }); new sst.aws.StepFunctions("MyStateMachine", { definition: foo.next(bar) }); ``` ### logging? **Type** `Input` - [`includeData?`](#logging-includedata) - [`level?`](#logging-level) - [`retention?`](#logging-retention) **Default** `{retention: "1 month", level: "error", includeData: false}` Configure the execution logs in CloudWatch. Or pass in `false` to disable writing logs. ```js { logging: false } ``` includeData? **Type** `Input` **Default** `false` Specify whether execution data is included in the logs. ```js { logging: { includeData: true } } ``` level? **Type** `Input<"all" | "error" | "fatal">` **Default** `"error"` Specify the type of execution events that are logged. Read more about the [Step Functions log level](https://docs.aws.amazon.com/step-functions/latest/dg/cw-logs.html#cloudwatch-log-level). ```js { logging: { level: "all" } } ``` retention? **Type** `Input<"1 day" | "3 days" | "5 days" | "1 week" | "2 weeks" | "1 month" | "2 months" | "3 months" | "4 months" | "5 months" | "6 months" | "1 year" | "13 months" | "18 months" | "2 years" | "3 years" | "5 years" | "6 years" | "7 years" | "8 years" | "9 years" | "10 years" | "forever">` **Default** `1 month` The duration the logs are kept in CloudWatch. ```js { logging: { retention: "forever" } } ``` ### transform? **Type** `Object` - [`logGroup?`](#transform-loggroup) - [`stateMachine?`](#transform-statemachine) [Transform](/docs/components#transform) how this component creates its underlying resources. logGroup? 
**Type** [`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)` | (args: `[`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Step Functions LogGroup resource. stateMachine? **Type** [`StateMachineArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sfn/statemachine/#inputs)` | (args: `[`StateMachineArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/sfn/statemachine/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Step Functions StateMachine resource. ### type? **Type** `Input<"standard" | "express">` **Default** `"standard"` The type of state machine workflow to create. :::caution Changing the type of the state machine workflow will cause the state machine to be destroyed and recreated. ::: The `standard` workflow is the default and is meant for long running workflows. The `express` workflow is meant for short-lived workflows that finish in under 5 minutes. It's faster and cheaper to run. So if your workflows are short, the `express` workflow is recommended. ```js { type: "express" } ``` ## Properties ### arn **Type** `Output` The State Machine ARN. ### nodes **Type** `Object` - [`stateMachine`](#nodes-statemachine) The underlying [resources](/docs/components/#nodes) this component creates. stateMachine **Type** [`StateMachine`](https://www.pulumi.com/registry/packages/aws/api-docs/sfn/statemachine/) The Step Functions StateMachine resource. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `arn` `string` The State Machine ARN.
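As a sketch of how the linked `arn` can be used, assuming the state machine is linked to a function named with the hypothetical component name `MyStateMachine`, you could start an execution from your runtime code with the AWS SDK:

```ts title="src/start.ts"
import { Resource } from "sst";
import { SFNClient, StartExecutionCommand } from "@aws-sdk/client-sfn";

const sfn = new SFNClient({});

export const handler = async () => {
  // Start an execution of the linked state machine.
  // The input payload here is just an illustration.
  await sfn.send(new StartExecutionCommand({
    stateMachineArn: Resource.MyStateMachine.arn,
    input: JSON.stringify({ orderId: "123" })
  }));
};
```

The function running this code needs permission to call `states:StartExecution` on the state machine.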
## Methods ### static choice ```ts StepFunctions.choice(args) ``` #### Parameters - `args` [`ChoiceArgs`](/docs/component/aws/step-functions/choice#choiceargs) **Returns** [`Choice`](/docs/component/aws/step-functions/choice) A `Choice` state is used to conditionally continue to different states based on the matched condition. ```ts title="sst.config.ts" const processPayment = sst.aws.StepFunctions.choice({ name: "ProcessPayment" }); const makePayment = sst.aws.StepFunctions.lambdaInvoke({ name: "MakePayment" }); const sendReceipt = sst.aws.StepFunctions.lambdaInvoke({ name: "SendReceipt" }); const failure = sst.aws.StepFunctions.fail({ name: "Failure" }); processPayment.when("{% $states.input.status === 'unpaid' %}", makePayment); processPayment.when("{% $states.input.status === 'paid' %}", sendReceipt); processPayment.otherwise(failure); ``` ### static fail ```ts StepFunctions.fail(args) ``` #### Parameters - `args` [`FailArgs`](/docs/component/aws/step-functions/fail#failargs) **Returns** [`Fail`](/docs/component/aws/step-functions/fail) A `Fail` state is used to fail the execution of a state machine. ```ts title="sst.config.ts" sst.aws.StepFunctions.fail({ name: "Failure" }); ``` ### static map ```ts StepFunctions.map(args) ``` #### Parameters - `args` [`MapArgs`](/docs/component/aws/step-functions/map#mapargs) **Returns** [`Map`](/docs/component/aws/step-functions/map) A `Map` state is used to iterate over a list of items and execute a task for each item. 
```ts title="sst.config.ts" const processor = sst.aws.StepFunctions.lambdaInvoke({ name: "Processor", function: "src/processor.handler" }); sst.aws.StepFunctions.map({ processor, name: "Map", items: "{% $states.input.items %}" }); ``` ### static parallel ```ts StepFunctions.parallel(args) ``` #### Parameters - `args` [`ParallelArgs`](/docs/component/aws/step-functions/parallel#parallelargs) **Returns** [`Parallel`](/docs/component/aws/step-functions/parallel) A `Parallel` state is used to execute multiple branches of a state in parallel. ```ts title="sst.config.ts" const processorA = sst.aws.StepFunctions.lambdaInvoke({ name: "ProcessorA", function: "src/processorA.handler" }); const processorB = sst.aws.StepFunctions.lambdaInvoke({ name: "ProcessorB", function: "src/processorB.handler" }); const parallel = sst.aws.StepFunctions.parallel({ name: "Parallel" }); parallel.branch(processorA); parallel.branch(processorB); ``` ### static pass ```ts StepFunctions.pass(args) ``` #### Parameters - `args` [`PassArgs`](/docs/component/aws/step-functions/pass#passargs) **Returns** [`Pass`](/docs/component/aws/step-functions/pass) A `Pass` state is used to pass the input to the next state. It's useful for transforming the input before passing it along. ```ts title="sst.config.ts" sst.aws.StepFunctions.pass({ name: "Pass", output: "{% $states.input.message %}" }); ``` ### static succeed ```ts StepFunctions.succeed(args) ``` #### Parameters - `args` [`SucceedArgs`](/docs/component/aws/step-functions/succeed#succeedargs) **Returns** [`Succeed`](/docs/component/aws/step-functions/succeed) A `Succeed` state is used to indicate that the execution of a state machine has succeeded. 
```ts title="sst.config.ts" sst.aws.StepFunctions.succeed({ name: "Succeed" }); ``` ### static task ```ts StepFunctions.task(args) ``` #### Parameters - `args` [`TaskArgs`](/docs/component/aws/step-functions/task#taskargs) **Returns** [`Task`](/docs/component/aws/step-functions/task) A `Task` state can be used to make calls to AWS resources. We created a few convenience methods for common tasks like: - `sst.aws.StepFunctions.lambdaInvoke` to invoke a Lambda function. - `sst.aws.StepFunctions.ecsRunTask` to run an ECS task. - `sst.aws.StepFunctions.eventBridgePutEvents` to send custom events to EventBridge. For everything else, you can use the `Task` state. For example, to start an AWS CodeBuild build. ```ts title="sst.config.ts" sst.aws.StepFunctions.task({ name: "Task", resource: "arn:aws:states:::codebuild:startBuild", arguments: { projectName: "my-codebuild-project" }, permissions: [ { actions: ["codebuild:StartBuild"], resources: ["*"] } ] }); ``` ### static wait ```ts StepFunctions.wait(args) ``` #### Parameters - `args` [`WaitArgs`](/docs/component/aws/step-functions/wait#waitargs) **Returns** [`Wait`](/docs/component/aws/step-functions/wait) A `Wait` state is used to wait for a specific amount of time before continuing to the next state. For example, wait for 10 seconds before continuing to the next state. ```ts title="sst.config.ts" sst.aws.StepFunctions.wait({ name: "Wait", time: 10 }); ``` Alternatively, you can wait until a specific timestamp. ```ts title="sst.config.ts" sst.aws.StepFunctions.wait({ name: "Wait", timestamp: "2026-01-01T00:00:00Z" }); ``` ### static ecsRunTask ```ts StepFunctions.ecsRunTask(args) ``` #### Parameters - `args` [`EcsRunTaskArgs`](/docs/component/aws/step-functions/task#ecsruntaskargs) **Returns** [`Task`](/docs/component/aws/step-functions/task) Create a `Task` state that runs an ECS task using the [`Task`](/docs/component/aws/task) component. 
[Learn more](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html). ```ts title="sst.config.ts" const myCluster = new sst.aws.Cluster("MyCluster"); const myTask = new sst.aws.Task("MyTask", { cluster: myCluster }); sst.aws.StepFunctions.ecsRunTask({ name: "RunTask", task: myTask }); ``` ### static eventBridgePutEvents ```ts StepFunctions.eventBridgePutEvents(args) ``` #### Parameters - `args` [`EventBridgePutEventsArgs`](/docs/component/aws/step-functions/task#eventbridgeputeventsargs) **Returns** [`Task`](/docs/component/aws/step-functions/task) Create a `Task` state that sends custom events to one or more EventBridge buses using the [`Bus`](/docs/component/aws/bus) component. [Learn more](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_PutEvents.html). ```ts title="sst.config.ts" const myBus = new sst.aws.Bus("MyBus"); sst.aws.StepFunctions.eventBridgePutEvents({ name: "EventBridgePutEvents", events: [ { bus: myBus, source: "my-source" } ] }); ``` ### static lambdaInvoke ```ts StepFunctions.lambdaInvoke(args) ``` #### Parameters - `args` [`LambdaInvokeArgs`](/docs/component/aws/step-functions/task#lambdainvokeargs) **Returns** [`Task`](/docs/component/aws/step-functions/task) Create a `Task` state that invokes a Lambda function. [Learn more](https://docs.aws.amazon.com/lambda/latest/api/API_Invoke.html). ```ts title="sst.config.ts" sst.aws.StepFunctions.lambdaInvoke({ name: "LambdaInvoke", function: "src/index.handler" }); ``` Customize the function. ```ts title="sst.config.ts" sst.aws.StepFunctions.lambdaInvoke({ name: "LambdaInvoke", function: { handler: "src/index.handler", timeout: "60 seconds" } }); ``` Pass in an existing `Function` component. ```ts title="sst.config.ts" const myLambda = new sst.aws.Function("MyLambda", { handler: "src/index.handler" }); sst.aws.StepFunctions.lambdaInvoke({ name: "LambdaInvoke", function: myLambda }); ``` Or pass in the ARN of an existing Lambda function.
```ts title="sst.config.ts" sst.aws.StepFunctions.lambdaInvoke({ name: "LambdaInvoke", function: "arn:aws:lambda:us-east-1:123456789012:function:my-function" }); ``` ### static snsPublish ```ts StepFunctions.snsPublish(args) ``` #### Parameters - `args` [`SnsPublishArgs`](/docs/component/aws/step-functions/task#snspublishargs) **Returns** [`Task`](/docs/component/aws/step-functions/task) Create a `Task` state that publishes a message to an SNS topic. [Learn more](https://docs.aws.amazon.com/sns/latest/api/API_Publish.html). ```ts title="sst.config.ts" const myTopic = new sst.aws.SnsTopic("MyTopic"); sst.aws.StepFunctions.snsPublish({ name: "SnsPublish", topic: myTopic, message: "Hello, world!" }); ``` ### static sqsSendMessage ```ts StepFunctions.sqsSendMessage(args) ``` #### Parameters - `args` [`SqsSendMessageArgs`](/docs/component/aws/step-functions/task#sqssendmessageargs) **Returns** [`Task`](/docs/component/aws/step-functions/task) Create a `Task` state that sends a message to an SQS queue. [Learn more](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html). ```ts title="sst.config.ts" const myQueue = new sst.aws.Queue("MyQueue"); sst.aws.StepFunctions.sqsSendMessage({ name: "SqsSendMessage", queue: myQueue, messageBody: "Hello, world!" }); ``` --- ## Choice Reference doc for the `sst.step-functions.Choice` component. https://sst.dev/docs/component/aws/step-functions/choice The `Choice` state is internally used by the `StepFunctions` component to add a [Choice workflow state](https://docs.aws.amazon.com/step-functions/latest/dg/state-choice.html) to a state machine. :::note This component is not intended to be created directly. ::: You'll find this component returned by the `choice` method of the `StepFunctions` component. --- ## Constructor ```ts new Choice(args) ``` #### Parameters - `args` [`ChoiceArgs`](#choiceargs) ## ChoiceArgs ### assign? 
**Type** `Record` Store variables that can be accessed by any state later in the workflow, instead of passing them through each state. This takes a set of key/value pairs, where the key is the name of the variable that can be accessed by any subsequent state. The value can be any JSON value; object, array, string, number, boolean, null. ```ts { assign: { productName: "product1", count: 42, available: true } } ``` Or, you can pass in a JSONata expression. ```ts { assign: { product: "{% $states.input.order.product %}", currentPrice: "{% $states.result.Payload.current_price %}" } } ``` Learn more about [passing data between states with variables](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-variables.html). ### name **Type** `string` The name of the state. This needs to be unique within the state machine. ### output? **Type** `Input | "\{% $\{string\} %\}">` Transform the output of the state. When specified, the value overrides the default output from the state. This takes any JSON value; object, array, string, number, boolean, null. ```ts { output: { charged: true } } ``` Or, you can pass in a JSONata expression. ```ts { output: { product: "{% $states.input.product %}" } } ``` Learn more about [transforming data with JSONata](https://docs.aws.amazon.com/step-functions/latest/dg/transforming-data.html). ## Methods ### otherwise ```ts otherwise(next) ``` #### Parameters - `next` [`State`](/docs/component/aws/step-functions/state) **Returns** [`Choice`](.) Add a default next state to the `Choice` state. If no other condition matches, continue execution with the given state. ### when ```ts when(condition, next) ``` #### Parameters - `condition` `"\{% $\{string\} %\}"` The JSONata condition to evaluate. - `next` [`State`](/docs/component/aws/step-functions/state) The state to transition to. **Returns** [`Choice`](.) Add a matching condition to the `Choice` state. If the given condition matches, it'll continue execution to the given state.
The condition needs to be a JSONata expression that evaluates to a boolean. ```ts sst.aws.StepFunctions.choice({ // ... }) .when( "{% $states.input.status === 'unpaid' %}", state ); ``` --- ## Fail Reference doc for the `sst.step-functions.Fail` component. https://sst.dev/docs/component/aws/step-functions/fail The `Fail` state is internally used by the `StepFunctions` component to add a [Fail workflow state](https://docs.aws.amazon.com/step-functions/latest/dg/state-fail.html) to a state machine. :::note This component is not intended to be created directly. ::: You'll find this component returned by the `fail` method of the `StepFunctions` component. --- ## Constructor ```ts new Fail(args) ``` #### Parameters - `args` [`FailArgs`](#failargs) ## FailArgs ### assign? **Type** `Record` Store variables that can be accessed by any state later in the workflow, instead of passing it through each state. This takes a set of key/value pairs. Where the key is the name of the variable that can be accessed by any subsequent state. The value can be any JSON value; object, array, string, number, boolean, null. ```ts { assign: { productName: "product1", count: 42, available: true } } ``` Or, you can pass in a JSONata expression. ```ts { assign: { product: "{% $states.input.order.product %}", currentPrice: "{% $states.result.Payload.current_price %}" } } ``` Learn more about [passing data between states with variables](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-variables.html). ### cause? **Type** `Input` A custom string that describes the cause of the error. ```ts { cause: "User not found" } ``` Alternatively, you can specify a JSONata expression that evaluates to a string. ```ts { cause: "{% $states.input.user %}" } ``` ### error? **Type** `Input` An error name that you can provide to perform error handling using `retry` or `catch`. ```ts { error: "UserNotFound" } ``` Alternatively, you can specify a JSONata expression that evaluates to a string. 
```ts { error: "{% $states.input.error %}" } ``` ### name **Type** `string` The name of the state. This needs to be unique within the state machine. ### output? **Type** `Input | "\{% $\{string\} %\}">` Transform the output of the state. When specified, the value overrides the default output from the state. This takes any JSON value; object, array, string, number, boolean, null. ```ts { output: { charged: true } } ``` Or, you can pass in a JSONata expression. ```ts { output: { product: "{% $states.input.product %}" } } ``` Learn more about [transforming data with JSONata](https://docs.aws.amazon.com/step-functions/latest/dg/transforming-data.html). --- ## Map Reference doc for the `sst.step-functions.Map` component. https://sst.dev/docs/component/aws/step-functions/map The `Map` state is internally used by the `StepFunctions` component to add a [Map workflow state](https://docs.aws.amazon.com/step-functions/latest/dg/state-map.html) to a state machine. :::note This component is not intended to be created directly. ::: You'll find this component returned by the `map` method of the `StepFunctions` component. --- ## Constructor ```ts new Map(args) ``` #### Parameters - `args` [`MapArgs`](#mapargs) ## MapArgs ### assign? **Type** `Record` Store variables that can be accessed by any state later in the workflow, instead of passing it through each state. This takes a set of key/value pairs. Where the key is the name of the variable that can be accessed by any subsequent state. The value can be any JSON value; object, array, string, number, boolean, null. ```ts { assign: { productName: "product1", count: 42, available: true } } ``` Or, you can pass in a JSONata expression. ```ts { assign: { product: "{% $states.input.order.product %}", currentPrice: "{% $states.result.Payload.current_price %}" } } ``` Learn more about [passing data between states with variables](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-variables.html). ### itemSelector? 
**Type** `Input>>` Reformat the values of the input array items before they're passed on to each state iteration. For example, you can pass in what you want the fields to be. ```ts { "itemSelector": { "size": 10, "value.$": "$$.Map.Item.Value" } } ``` When applied to the following list of items. ```ts [ { "resize": "true", "format": "jpg" }, { "resize": "false", "format": "png" } ] ``` A transformed item will look like. ```ts { "size": 10, "value": { "resize": "true", "format": "jpg" } } ``` Learn more about [`ItemSelector`](https://docs.aws.amazon.com/step-functions/latest/dg/input-output-itemselector.html). ### items? **Type** `Input` The list of items to process. For example, you can specify an array of items. ```ts { items: ["item1", "item2", "item3"] } ``` Or, specify a JSONata expression that evaluates to an array of items. ```ts { items: "{% $states.input.items %}" } ``` ### maxConcurrency? **Type** `Input` **Default** `0` An upper bound on the number of `Map` state iterations that can run in parallel. Takes an integer or a JSONata expression that evaluates to an integer. Defaults to `0`, which means there's no limit on the concurrency. For example, to limit it to 10 concurrent iterations. ```ts { maxConcurrency: 10 } ``` ### mode? **Type** `Input<"standard" | "express" | "inline">` **Default** `"inline"` The processing mode for the `Map` state. The `inline` mode is the default and has limited concurrency. In this mode, each item in the `Map` state runs as a part of the current workflow. The `standard` and `express` modes have high concurrency. In these modes, each item in the `Map` state runs as a child workflow. This enables high concurrency of up to 10,000 parallel child workflows. Each child workflow has its own, separate execution history. - In `standard` mode, each child runs as a StepFunctions Standard workflow. - In `express` mode, each child runs as a StepFunctions Express workflow.
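The `itemSelector` transformation shown above can be sanity-checked locally. The sketch below is plain TypeScript, not part of SST or Step Functions; it only handles the `$$.Map.Item.Value` context path from the example, whereas the real service supports arbitrary paths.

```typescript
// Minimal local simulation of the Map state's itemSelector behavior.
// Keys ending in ".$" hold a path; only "$$.Map.Item.Value" (the current
// array item) is substituted here, static keys are copied through as-is.
function applyItemSelector(
  items: unknown[],
  selector: Record<string, unknown>
): Record<string, unknown>[] {
  return items.map((item) => {
    const result: Record<string, unknown> = {};
    for (const [key, value] of Object.entries(selector)) {
      if (key.endsWith(".$") && value === "$$.Map.Item.Value") {
        result[key.slice(0, -2)] = item; // strip ".$" and substitute the item
      } else {
        result[key] = value; // static value, copied unchanged
      }
    }
    return result;
  });
}

const transformed = applyItemSelector(
  [
    { resize: "true", format: "jpg" },
    { resize: "false", format: "png" },
  ],
  { size: 10, "value.$": "$$.Map.Item.Value" }
);
// transformed[0] is { size: 10, value: { resize: "true", format: "jpg" } }
```

This mirrors the worked example: each output item keeps the static `size` field and receives the whole source item under `value`.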
:::note `Map` state with `standard` or `express` mode is not supported in `express` type StepFunctions. ::: ```js { mode: "express" } ``` ### name **Type** `string` The name of the state. This needs to be unique within the state machine. ### output? **Type** `Input | "\{% $\{string\} %\}">` Transform the output of the state. When specified, the value overrides the default output from the state. This takes any JSON value; object, array, string, number, boolean, null. ```ts { output: { charged: true } } ``` Or, you can pass in a JSONata expression. ```ts { output: { product: "{% $states.input.product %}" } } ``` Learn more about [transforming data with JSONata](https://docs.aws.amazon.com/step-functions/latest/dg/transforming-data.html). ### processor **Type** [`State`](/docs/component/aws/step-functions/state) The state to execute for each item in the array. For example, to iterate over an array of items and execute a Lambda function for each item. ```ts title="sst.config.ts" const processor = sst.aws.StepFunctions.lambdaInvoke({ name: "Processor", function: "src/processor.handler" }); sst.aws.StepFunctions.map({ processor, name: "Map", items: "{% $states.input.items %}" }); ``` ## Methods ### catch ```ts catch(state, args?) ``` #### Parameters - `state` [`State`](/docs/component/aws/step-functions/state) The state to transition to on error. - `args?` [`CatchArgs`](/docs/component/aws/step-functions/state#catchargs) Properties to customize error handling. **Returns** [`Map`](.) Add a catch behavior to the `Map` state. So if the state fails with any of the specified errors, it'll continue execution to the given `state`. This defaults to. ```ts title="sst.config.ts" {5} sst.aws.StepFunctions.map({ // ... }) .catch({ errors: ["States.ALL"] }); ``` ### next ```ts next(state) ``` #### Parameters - `state` [`State`](/docs/component/aws/step-functions/state) The state to transition to.
**Returns** [`State`](/docs/component/aws/step-functions/state) Add a next state to the `Map` state. If the state completes successfully, continue execution to the given `state`. ```ts title="sst.config.ts" sst.aws.StepFunctions.map({ // ... }) .next(state); ``` ### retry ```ts retry(args?) ``` #### Parameters - `args?` [`RetryArgs`](/docs/component/aws/step-functions/state#retryargs) Properties to define the retry behavior. **Returns** [`Map`](.) Add a retry behavior to the `Map` state. If the state fails with any of the specified errors, retry the execution. This defaults to. ```ts title="sst.config.ts" {5-8} sst.aws.StepFunctions.map({ // ... }) .retry({ errors: ["States.ALL"], interval: "1 second", maxAttempts: 3, backoffRate: 2 }); ``` --- ## Parallel Reference doc for the `sst.step-functions.Parallel` component. https://sst.dev/docs/component/aws/step-functions/parallel The `Parallel` state is internally used by the `StepFunctions` component to add a [Parallel workflow state](https://docs.aws.amazon.com/step-functions/latest/dg/state-parallel.html) to a state machine. :::note This component is not intended to be created directly. ::: You'll find this component returned by the `parallel` method of the `StepFunctions` component. --- ## Constructor ```ts new Parallel(args) ``` #### Parameters - `args` [`ParallelArgs`](#parallelargs) ## ParallelArgs ### arguments? **Type** `Input>>` The arguments to be passed to the APIs of the connected resources. Values can include outputs from other resources and JSONata expressions. ```ts { arguments: { product: "{% $states.input.order.product %}", url: api.url, count: 32 } } ``` ### assign? **Type** `Record` Store variables that can be accessed by any state later in the workflow, instead of passing it through each state. This takes a set of key/value pairs. Where the key is the name of the variable that can be accessed by any subsequent state. The value can be any JSON value; object, array, string, number, boolean, null. 
```ts { assign: { productName: "product1", count: 42, available: true } } ``` Or, you can pass in a JSONata expression. ```ts { assign: { product: "{% $states.input.order.product %}", currentPrice: "{% $states.result.Payload.current_price %}" } } ``` Learn more about [passing data between states with variables](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-variables.html). ### name **Type** `string` The name of the state. This needs to be unique within the state machine. ### output? **Type** `Input | "\{% $\{string\} %\}">` Transform the output of the state. When specified, the value overrides the default output from the state. This takes any JSON value; object, array, string, number, boolean, null. ```ts { output: { charged: true } } ``` Or, you can pass in a JSONata expression. ```ts { output: { product: "{% $states.input.product %}" } } ``` Learn more about [transforming data with JSONata](https://docs.aws.amazon.com/step-functions/latest/dg/transforming-data.html). ## Methods ### branch ```ts branch(branch) ``` #### Parameters - `branch` [`State`](/docs/component/aws/step-functions/state) The state to add as a branch. **Returns** [`Parallel`](.) Add a branch state to the `Parallel` state. Each branch runs concurrently. ```ts title="sst.config.ts" const parallel = sst.aws.StepFunctions.parallel({ name: "Parallel" }); parallel.branch(processorA); parallel.branch(processorB); ``` ### catch ```ts catch(state, args?) ``` #### Parameters - `state` [`State`](/docs/component/aws/step-functions/state) The state to transition to on error. - `args?` [`CatchArgs`](/docs/component/aws/step-functions/state#catchargs) Properties to customize error handling. **Returns** [`Parallel`](.) Add a catch behavior to the `Parallel` state. So if the state fails with any of the specified errors, it'll continue execution to the given `state`. This defaults to. ```ts title="sst.config.ts" {5} sst.aws.StepFunctions.parallel({ // ... 
}) .catch({ errors: ["States.ALL"] }); ``` ### next ```ts next(state) ``` #### Parameters - `state` [`State`](/docs/component/aws/step-functions/state) The state to transition to. **Returns** [`State`](/docs/component/aws/step-functions/state) Add a next state to the `Parallel` state. If all branches complete successfully, this'll continue execution to the given `state`. ```ts title="sst.config.ts" sst.aws.StepFunctions.parallel({ // ... }) .next(state); ``` ### retry ```ts retry(args?) ``` #### Parameters - `args?` [`RetryArgs`](/docs/component/aws/step-functions/state#retryargs) Properties to define the retry behavior. **Returns** [`Parallel`](.) Add a retry behavior to the `Parallel` state. If the state fails with any of the specified errors, retry execution using the specified parameters. This defaults to. ```ts title="sst.config.ts" {5-8} sst.aws.StepFunctions.parallel({ // ... }) .retry({ errors: ["States.ALL"], interval: "1 second", maxAttempts: 3, backoffRate: 2 }); ``` --- ## Pass Reference doc for the `sst.step-functions.Pass` component. https://sst.dev/docs/component/aws/step-functions/pass The `Pass` state is internally used by the `StepFunctions` component to add a [Pass workflow state](https://docs.aws.amazon.com/step-functions/latest/dg/state-pass.html) to a state machine. :::note This component is not intended to be created directly. ::: You'll find this component returned by the `pass` method of the `StepFunctions` component. --- ## Constructor ```ts new Pass(args) ``` #### Parameters - `args` [`PassArgs`](#passargs) ## PassArgs ### assign? **Type** `Record` Store variables that can be accessed by any state later in the workflow, instead of passing it through each state. This takes a set of key/value pairs. Where the key is the name of the variable that can be accessed by any subsequent state. The value can be any JSON value; object, array, string, number, boolean, null. 
```ts { assign: { productName: "product1", count: 42, available: true } } ``` Or, you can pass in a JSONata expression. ```ts { assign: { product: "{% $states.input.order.product %}", currentPrice: "{% $states.result.Payload.current_price %}" } } ``` Learn more about [passing data between states with variables](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-variables.html). ### name **Type** `string` The name of the state. This needs to be unique within the state machine. ### output? **Type** `Input | "\{% $\{string\} %\}">` Transform the output of the state. When specified, the value overrides the default output from the state. This takes any JSON value; object, array, string, number, boolean, null. ```ts { output: { charged: true } } ``` Or, you can pass in a JSONata expression. ```ts { output: { product: "{% $states.input.product %}" } } ``` Learn more about [transforming data with JSONata](https://docs.aws.amazon.com/step-functions/latest/dg/transforming-data.html). ## Methods ### next ```ts next(state) ``` #### Parameters - `state` [`State`](/docs/component/aws/step-functions/state) **Returns** [`State`](/docs/component/aws/step-functions/state) Add a next state to the `Pass` state. After this state completes, it'll transition to the given `state`. ```ts title="sst.config.ts" sst.aws.StepFunctions.pass({ // ... }) .next(state); ``` --- ## State Reference doc for the `sst.step-functions.State` component. https://sst.dev/docs/component/aws/step-functions/state The `State` class is the base class for all states in `StepFunctions` state machine. :::note This component is not intended to be created directly. ::: This is used for reference only. --- ## Constructor ```ts new State(args) ``` #### Parameters - `args` [`StateArgs`](#stateargs) ## StateArgs ### assign? **Type** `Record` Store variables that can be accessed by any state later in the workflow, instead of passing it through each state. This takes a set of key/value pairs. 
Where the key is the name of the variable that can be accessed by any subsequent state. The value can be any JSON value; object, array, string, number, boolean, null. ```ts { assign: { productName: "product1", count: 42, available: true } } ``` Or, you can pass in a JSONata expression. ```ts { assign: { product: "{% $states.input.order.product %}", currentPrice: "{% $states.result.Payload.current_price %}" } } ``` Learn more about [passing data between states with variables](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-variables.html). ### name **Type** `string` The name of the state. This needs to be unique within the state machine. ### output? **Type** `Input | "\{% $\{string\} %\}">` Transform the output of the state. When specified, the value overrides the default output from the state. This takes any JSON value; object, array, string, number, boolean, null. ```ts { output: { charged: true } } ``` Or, you can pass in a JSONata expression. ```ts { output: { product: "{% $states.input.product %}" } } ``` Learn more about [transforming data with JSONata](https://docs.aws.amazon.com/step-functions/latest/dg/transforming-data.html). ## CatchArgs ### errors? **Type** `string[]` **Default** `["States.ALL"]` A list of errors that are being caught. By default, this catches all errors. ## RetryArgs ### backoffRate? **Type** `number` **Default** `2` The backoff rate. This is a multiplier that increases the interval between retries. For example, if the interval is `1 second` and the backoff rate is `2`, the first retry will happen after `1 second`, and the second retry will happen after `2 * 1 second = 2 seconds`. ### errors? **Type** `string[]` **Default** `["States.ALL"]` A list of errors that are being retried. By default, this retries all errors. ### interval? 
**Type** `"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} hour" | "$\{number\} hours" | "$\{number\} second" | "$\{number\} seconds" | "$\{number\} day" | "$\{number\} days"` **Default** `"1 second"` The amount of time to wait before the first retry attempt. The maximum value is `99999999 seconds`. Following attempts will retry based on the `backoffRate` multiplier. ### jitterStrategy? **Type** `"FULL" | "NONE"` **Default** `"NONE"` Whether to add jitter to the retry intervals. Jitter helps reduce simultaneous retries by adding randomness to the wait times. - `"FULL"` - Adds jitter to retry intervals - `"NONE"` - No jitter (default) ```ts { jitterStrategy: "FULL" } ``` ### maxAttempts? **Type** `number` **Default** `3` The maximum number of retries before it falls back to the normal error handling. A value of `0` means the error won't be retried. The maximum value is `99999999`. ### maxDelay? **Type** `"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} hour" | "$\{number\} hours" | "$\{number\} second" | "$\{number\} seconds" | "$\{number\} day" | "$\{number\} days"` The maximum delay between retry attempts. This limits the exponential growth of wait times when using `backoffRate`. Must be greater than `0` and less than `31622401 seconds`. For example, if the interval is `1 second`, the backoff rate is `2`, and the max delay is `5 seconds`, the retry attempts will be: `1s`, `2s`, `4s`, `5s`, `5s`, ... (capped at 5 seconds). ```ts { maxDelay: "10 seconds" } ``` --- ## Succeed Reference doc for the `sst.step-functions.Succeed` component. https://sst.dev/docs/component/aws/step-functions/succeed The `Succeed` state is internally used by the `StepFunctions` component to add a [Succeed workflow state](https://docs.aws.amazon.com/step-functions/latest/dg/state-succeed.html) to a state machine. :::note This component is not intended to be created directly. 
::: You'll find this component returned by the `succeed` method of the `StepFunctions` component. --- ## Constructor ```ts new Succeed(args) ``` #### Parameters - `args` [`SucceedArgs`](#succeedargs) ## SucceedArgs ### assign? **Type** `Record` Store variables that can be accessed by any state later in the workflow, instead of passing it through each state. This takes a set of key/value pairs. Where the key is the name of the variable that can be accessed by any subsequent state. The value can be any JSON value; object, array, string, number, boolean, null. ```ts { assign: { productName: "product1", count: 42, available: true } } ``` Or, you can pass in a JSONata expression. ```ts { assign: { product: "{% $states.input.order.product %}", currentPrice: "{% $states.result.Payload.current_price %}" } } ``` Learn more about [passing data between states with variables](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-variables.html). ### name **Type** `string` The name of the state. This needs to be unique within the state machine. ### output? **Type** `Input | "\{% $\{string\} %\}">` Transform the output of the state. When specified, the value overrides the default output from the state. This takes any JSON value; object, array, string, number, boolean, null. ```ts { output: { charged: true } } ``` Or, you can pass in a JSONata expression. ```ts { output: { product: "{% $states.input.product %}" } } ``` Learn more about [transforming data with JSONata](https://docs.aws.amazon.com/step-functions/latest/dg/transforming-data.html). --- ## Task Reference doc for the `sst.step-functions.Task` component. https://sst.dev/docs/component/aws/step-functions/task The `Task` state is internally used by the `StepFunctions` component to add a [Task workflow state](https://docs.aws.amazon.com/step-functions/latest/dg/state-task.html) to a state machine. :::note This component is not intended to be created directly. 
::: You'll find this component returned by the `task` method of the `StepFunctions` component. It's also returned by convenience methods like `lambdaInvoke`, `snsPublish`, `sqsSendMessage`, and more. --- ## Constructor ```ts new Task(args) ``` #### Parameters - `args` [`TaskArgs`](#taskargs) ## TaskArgs ### arguments? **Type** `Input>>` The arguments to be passed to the APIs of the connected resources. Values can include outputs from other resources and JSONata expressions. ```ts { arguments: { product: "{% $states.input.order.product %}", url: api.url, count: 32 } } ``` ### assign? **Type** `Record` Store variables that can be accessed by any state later in the workflow, instead of passing it through each state. This takes a set of key/value pairs. Where the key is the name of the variable that can be accessed by any subsequent state. The value can be any JSON value; object, array, string, number, boolean, null. ```ts { assign: { productName: "product1", count: 42, available: true } } ``` Or, you can pass in a JSONata expression. ```ts { assign: { product: "{% $states.input.order.product %}", currentPrice: "{% $states.result.Payload.current_price %}" } } ``` Learn more about [passing data between states with variables](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-variables.html). ### integration? **Type** `Input<"response" | "sync" | "token">` **Default** `"response"` Specifies how a `Task` state integrates with the specified AWS service. The `response` integration is the default. The `Task` state calls a service and progresses to the next state immediately after it gets an HTTP response. In `sync` integration, the `Task` state waits for the service to complete the job (e.g., an Amazon ECS task or an AWS CodeBuild build) before progressing to the next state. In `token` integration, the `Task` state calls a service and pauses until a task token is returned.
To resume execution, call the [`SendTaskSuccess`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_SendTaskSuccess.html) or [`SendTaskFailure`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_SendTaskFailure.html) API with the task token. Learn more about [service integration patterns](https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html). ```ts { integration: "token" } ``` ### name **Type** `string` The name of the state. This needs to be unique within the state machine. ### output? **Type** `Input | "\{% $\{string\} %\}">` Transform the output of the state. When specified, the value overrides the default output from the state. This takes any JSON value; object, array, string, number, boolean, null. ```ts { output: { charged: true } } ``` Or, you can pass in a JSONata expression. ```ts { output: { product: "{% $states.input.product %}" } } ``` Learn more about [transforming data with JSONata](https://docs.aws.amazon.com/step-functions/latest/dg/transforming-data.html). ### permissions? **Type** `Object[]` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that the task needs to access. These permissions are used to create the task's IAM role. For example, allow the task to read and write to an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] } ] } ``` Allow the task to perform all actions on an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:*"], resources: ["arn:aws:s3:::my-bucket/*"] } ] } ``` Granting the task permissions to access all resources. 
```js { permissions: [ { actions: ["*"], resources: ["*"] } ] } ``` actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. ```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```ts { effect: "deny" } ``` resources **Type** `Input[]>` The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html). ```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### resource **Type** `Input` The ARN of the task. Follows the format. ```ts { resource: "arn:aws:states:::service:task_type:name" } ``` For example, to start an AWS CodeBuild build.
```ts { resource: "arn:aws:states:::codebuild:startBuild" } ``` Learn more about [task ARNs](https://docs.aws.amazon.com/step-functions/latest/dg/state-task.html#task-types). ### timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} hour" | "$\{number\} hours" | "$\{number\} second" | "$\{number\} seconds" | "$\{number\} day" | "$\{number\} days" | "\{% $\{string\} %\}">` **Default** `"60 seconds"` for HTTP tasks, `"99999999 seconds"` for all other tasks. Specifies the maximum time a task can run before it times out with the `States.Timeout` error and fails. ```ts { timeout: "10 seconds" } ``` Alternatively, you can specify a JSONata expression that evaluates to a number in seconds. ```ts { timeout: "{% $states.input.timeout %}" } ``` ## Methods ### catch ```ts catch(state, args?) ``` #### Parameters - `state` [`State`](/docs/component/aws/step-functions/state) The state to transition to on error. - `args?` [`CatchArgs`](/docs/component/aws/step-functions/state#catchargs) Properties to customize error handling. **Returns** [`Task`](.) Add a catch behavior to the `Task` state. So if the state fails with any of the specified errors, it'll continue execution to the given `state`. This defaults to. ```ts title="sst.config.ts" {5} sst.aws.StepFunctions.task({ // ... }) .catch({ errors: ["States.ALL"] }); ``` ### next ```ts next(state) ``` #### Parameters - `state` [`State`](/docs/component/aws/step-functions/state) The state to transition to. **Returns** [`State`](/docs/component/aws/step-functions/state) Add a next state to the `Task` state. If the state completes successfully, continue execution to the given `state`. ```ts title="sst.config.ts" sst.aws.StepFunctions.task({ // ... }) .next(state); ``` ### retry ```ts retry(args?) ``` #### Parameters - `args?` [`RetryArgs`](/docs/component/aws/step-functions/state#retryargs) Properties to define the retry behavior. **Returns** [`Task`](.) Add a retry behavior to the `Task` state.
If the state fails with any of the specified errors, retry the execution. This defaults to. ```ts title="sst.config.ts" {5-8} sst.aws.StepFunctions.task({ // ... }) .retry({ errors: ["States.ALL"], interval: "1 second", maxAttempts: 3, backoffRate: 2 }); ``` ## EcsRunTaskArgs ### assign? **Type** `Record` Store variables that can be accessed by any state later in the workflow, instead of passing it through each state. This takes a set of key/value pairs. Where the key is the name of the variable that can be accessed by any subsequent state. The value can be any JSON value; object, array, string, number, boolean, null. ```ts { assign: { productName: "product1", count: 42, available: true } } ``` Or, you can pass in a JSONata expression. ```ts { assign: { product: "{% $states.input.order.product %}", currentPrice: "{% $states.result.Payload.current_price %}" } } ``` Learn more about [passing data between states with variables](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-variables.html). ### environment? **Type** `Input>>` The environment variables to apply to the ECS task. Values can include outputs from other resources and JSONata expressions. ```ts { environment: { MY_ENV: "{% $states.input.foo %}", MY_URL: api.url, MY_KEY: 1 } } ``` ### integration? **Type** `Input<"response" | "sync" | "token">` **Default** `"response"` Specifies how a `Task` state integrates with the specified AWS service. The `response` integration is the default. The `Task` state calls a service and progresses to the next state immediately after it gets an HTTP response. In `sync` integration, the `Task` state waits for the service to complete the job (e.g., an Amazon ECS task or an AWS CodeBuild build) before progressing to the next state. In `token` integration, the `Task` state calls a service and pauses until a task token is returned.
To resume execution, call the [`SendTaskSuccess`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_SendTaskSuccess.html) or [`SendTaskFailure`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_SendTaskFailure.html) API with the task token. Learn more about [service integration patterns](https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html). ```ts { integration: "token" } ``` ### name **Type** `string` The name of the state. This needs to be unique within the state machine. ### output? **Type** `Input | "\{% $\{string\} %\}">` Transform the output of the state. When specified, the value overrides the default output from the state. This takes any JSON value; object, array, string, number, boolean, null. ```ts { output: { charged: true } } ``` Or, you can pass in a JSONata expression. ```ts { output: { product: "{% $states.input.product %}" } } ``` Learn more about [transforming data with JSONata](https://docs.aws.amazon.com/step-functions/latest/dg/transforming-data.html). ### task **Type** [`Task`](.) The ECS `Task` to run. ```ts title="sst.config.ts" {6} const myCluster = new sst.aws.Cluster("MyCluster"); const myTask = new sst.aws.Task("MyTask", { cluster: myCluster }); sst.aws.StepFunctions.ecsRunTask({ name: "RunTask", task: myTask }); ``` ### timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} hour" | "$\{number\} hours" | "$\{number\} second" | "$\{number\} seconds" | "$\{number\} day" | "$\{number\} days" | "\{% $\{string\} %\}">` **Default** `"99999999 seconds"` Specifies the maximum time a task can run before it times out with the `States.Timeout` error and fails. ```ts { timeout: "10 seconds" } ``` Alternatively, you can specify a JSONata expression that evaluates to a number in seconds. ```ts { timeout: "{% $states.input.timeout %}" } ``` ## EventBridgePutEventsArgs ### assign?
**Type** `Record`

Store variables that can be accessed by any state later in the workflow, instead of passing them through each state.

This takes a set of key/value pairs, where the key is the name of the variable that can be accessed by any subsequent state. The value can be any JSON value; object, array, string, number, boolean, null.

```ts
{ assign: { productName: "product1", count: 42, available: true } }
```

Or, you can pass in a JSONata expression.

```ts
{ assign: { product: "{% $states.input.order.product %}", currentPrice: "{% $states.result.Payload.current_price %}" } }
```

Learn more about [passing data between states with variables](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-variables.html).

### events

**Type** `Object[]`

- [`bus`](#events-bus)
- [`detail?`](#events-detail)
- [`detailType?`](#events-detailtype)
- [`source?`](#events-source)

A list of events to send to EventBridge.

```ts
{
  events: [
    {
      bus: myBus,
      source: "my-application",
      detailType: "order-created",
      detail: {
        orderId: "{% $states.input.orderId %}",
        customerId: "{% $states.input.customer.id %}",
        items: "{% $states.input.items %}"
      }
    }
  ]
}
```

bus

**Type** [`Bus`](/docs/component/aws/bus)

The `Bus` component to send the event to.

detail?

**Type** `Input>>`

The event payload containing the event details as a JSON object. Values can also include a JSONata expression.

```ts
{ detail: { type: "order", message: "{% $states.input.message %}" } }
```

detailType?

**Type** `Input`

The detail type of the event. This helps subscribers filter and route events. This can be a string or a JSONata expression.

source?

**Type** `Input`

The source of the event. This string or JSONata expression identifies the service or component that generated it.

### integration?

**Type** `Input<"response" | "sync" | "token">`

**Default** `"response"`

Specifies how a `Task` state integrates with the specified AWS service.

The `response` integration is the default.
The `Task` state calls a service and progresses to the next state immediately after it gets an HTTP response.

In `sync` integration, the `Task` state waits for the service to complete the job (i.e. an Amazon ECS task, AWS CodeBuild build, etc.) before progressing to the next state.

In `token` integration, the `Task` state calls a service and pauses until a task token is returned. To resume execution, call the [`SendTaskSuccess`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_SendTaskSuccess.html) or [`SendTaskFailure`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_SendTaskFailure.html) API with the task token.

Learn more about [service integration patterns](https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html).

```ts
{ integration: "token" }
```

### name

**Type** `string`

The name of the state. This needs to be unique within the state machine.

### output?

**Type** `Input | "{% ${string} %}">`

Transform the output of the state. When specified, the value overrides the default output from the state.

This takes any JSON value; object, array, string, number, boolean, null.

```ts
{ output: { charged: true } }
```

Or, you can pass in a JSONata expression.

```ts
{ output: { product: "{% $states.input.product %}" } }
```

Learn more about [transforming data with JSONata](https://docs.aws.amazon.com/step-functions/latest/dg/transforming-data.html).

### timeout?

**Type** `Input<"${number} minute" | "${number} minutes" | "${number} hour" | "${number} hours" | "${number} second" | "${number} seconds" | "${number} day" | "${number} days" | "{% ${string} %}">`

**Default** `"99999999 seconds"`

Specifies the maximum time a task can run before it times out with the `States.Timeout` error and fails.

```ts
{ timeout: "10 seconds" }
```

Alternatively, you can specify a JSONata expression that evaluates to a number in seconds.
```ts
{ timeout: "{% $states.input.timeout %}" }
```

## LambdaInvokeArgs

### assign?

**Type** `Record`

Store variables that can be accessed by any state later in the workflow, instead of passing them through each state.

This takes a set of key/value pairs, where the key is the name of the variable that can be accessed by any subsequent state. The value can be any JSON value; object, array, string, number, boolean, null.

```ts
{ assign: { productName: "product1", count: 42, available: true } }
```

Or, you can pass in a JSONata expression.

```ts
{ assign: { product: "{% $states.input.order.product %}", currentPrice: "{% $states.result.Payload.current_price %}" } }
```

Learn more about [passing data between states with variables](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-variables.html).

### function

**Type** [`Function`](/docs/component/aws/function)` | Input`

The `Function` to invoke.

### integration?

**Type** `Input<"response" | "sync" | "token">`

**Default** `"response"`

Specifies how a `Task` state integrates with the specified AWS service.

The `response` integration is the default. The `Task` state calls a service and progresses to the next state immediately after it gets an HTTP response.

In `sync` integration, the `Task` state waits for the service to complete the job (i.e. an Amazon ECS task, AWS CodeBuild build, etc.) before progressing to the next state.

In `token` integration, the `Task` state calls a service and pauses until a task token is returned. To resume execution, call the [`SendTaskSuccess`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_SendTaskSuccess.html) or [`SendTaskFailure`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_SendTaskFailure.html) API with the task token.

Learn more about [service integration patterns](https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html).

```ts
{ integration: "token" }
```

### name

**Type** `string`

The name of the state.
This needs to be unique within the state machine.

### output?

**Type** `Input | "{% ${string} %}">`

Transform the output of the state. When specified, the value overrides the default output from the state.

This takes any JSON value; object, array, string, number, boolean, null.

```ts
{ output: { charged: true } }
```

Or, you can pass in a JSONata expression.

```ts
{ output: { product: "{% $states.input.product %}" } }
```

Learn more about [transforming data with JSONata](https://docs.aws.amazon.com/step-functions/latest/dg/transforming-data.html).

### payload?

**Type** `Input<"{% ${string} %}" | Record>>`

The payload to send to the Lambda function. Values can include outputs from other resources and JSONata expressions.

```ts
{ payload: { env: "{% $states.input.foo %}", url: api.url, key: 1 } }
```

Or, you can pass in a JSONata expression that evaluates to the full payload.

```ts
{ payload: "{% $states.input %}" }
```

### timeout?

**Type** `Input<"${number} minute" | "${number} minutes" | "${number} hour" | "${number} hours" | "${number} second" | "${number} seconds" | "${number} day" | "${number} days" | "{% ${string} %}">`

**Default** `"99999999 seconds"`

Specifies the maximum time a task can run before it times out with the `States.Timeout` error and fails.

```ts
{ timeout: "10 seconds" }
```

Alternatively, you can specify a JSONata expression that evaluates to a number in seconds.

```ts
{ timeout: "{% $states.input.timeout %}" }
```

## SnsPublishArgs

### assign?

**Type** `Record`

Store variables that can be accessed by any state later in the workflow, instead of passing them through each state.

This takes a set of key/value pairs, where the key is the name of the variable that can be accessed by any subsequent state. The value can be any JSON value; object, array, string, number, boolean, null.

```ts
{ assign: { productName: "product1", count: 42, available: true } }
```

Or, you can pass in a JSONata expression.
```ts
{ assign: { product: "{% $states.input.order.product %}", currentPrice: "{% $states.result.Payload.current_price %}" } }
```

Learn more about [passing data between states with variables](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-variables.html).

### integration?

**Type** `Input<"response" | "sync" | "token">`

**Default** `"response"`

Specifies how a `Task` state integrates with the specified AWS service.

The `response` integration is the default. The `Task` state calls a service and progresses to the next state immediately after it gets an HTTP response.

In `sync` integration, the `Task` state waits for the service to complete the job (i.e. an Amazon ECS task, AWS CodeBuild build, etc.) before progressing to the next state.

In `token` integration, the `Task` state calls a service and pauses until a task token is returned. To resume execution, call the [`SendTaskSuccess`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_SendTaskSuccess.html) or [`SendTaskFailure`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_SendTaskFailure.html) API with the task token.

Learn more about [service integration patterns](https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html).

```ts
{ integration: "token" }
```

### message

**Type** `Input`

The message to send to the SNS topic.

### messageAttributes?

**Type** `Input>>`

The message attributes to send to the SNS topic. Values can include outputs from other resources and JSONata expressions.

```ts
{ messageAttributes: { env: "{% $states.input.foo %}", url: api.url, key: 1 } }
```

### messageDeduplicationId?

**Type** `Input`

The message deduplication ID to send to the SNS topic. This applies to FIFO topics only. This is a string that's used to deduplicate messages sent within the minimum 5-minute interval.

### messageGroupId?

**Type** `Input`

The message group ID to send to the SNS topic. This only applies to FIFO topics.
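Putting the FIFO-related arguments together, a publish state might look like the following sketch. It assumes a `StepFunctions.snsPublish` call shaped like the `ecsRunTask` example above; the topic name, group ID, and input fields are hypothetical.

```typescript
// Sketch only: "MyTopic", "orders", and the input fields are hypothetical,
// assuming an SNS FIFO topic.
const myTopic = new sst.aws.SnsTopic("MyTopic", { fifo: true });

sst.aws.StepFunctions.snsPublish({
  name: "NotifyOrder",
  topic: myTopic,
  message: "{% $states.input.message %}",
  // FIFO topics require a group ID; the dedupe ID suppresses repeats
  // sent within the 5-minute deduplication interval
  messageGroupId: "orders",
  messageDeduplicationId: "{% $states.input.orderId %}"
});
```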
### name

**Type** `string`

The name of the state. This needs to be unique within the state machine.

### output?

**Type** `Input | "{% ${string} %}">`

Transform the output of the state. When specified, the value overrides the default output from the state.

This takes any JSON value; object, array, string, number, boolean, null.

```ts
{ output: { charged: true } }
```

Or, you can pass in a JSONata expression.

```ts
{ output: { product: "{% $states.input.product %}" } }
```

Learn more about [transforming data with JSONata](https://docs.aws.amazon.com/step-functions/latest/dg/transforming-data.html).

### subject?

**Type** `Input`

An optional subject line when the message is delivered to email endpoints.

### timeout?

**Type** `Input<"${number} minute" | "${number} minutes" | "${number} hour" | "${number} hours" | "${number} second" | "${number} seconds" | "${number} day" | "${number} days" | "{% ${string} %}">`

**Default** `"99999999 seconds"`

Specifies the maximum time a task can run before it times out with the `States.Timeout` error and fails.

```ts
{ timeout: "10 seconds" }
```

Alternatively, you can specify a JSONata expression that evaluates to a number in seconds.

```ts
{ timeout: "{% $states.input.timeout %}" }
```

### topic

**Type** [`SnsTopic`](/docs/component/aws/sns-topic)

The `SnsTopic` component to publish the message to.

## SqsSendMessageArgs

### assign?

**Type** `Record`

Store variables that can be accessed by any state later in the workflow, instead of passing them through each state.

This takes a set of key/value pairs, where the key is the name of the variable that can be accessed by any subsequent state. The value can be any JSON value; object, array, string, number, boolean, null.

```ts
{ assign: { productName: "product1", count: 42, available: true } }
```

Or, you can pass in a JSONata expression.
```ts
{ assign: { product: "{% $states.input.order.product %}", currentPrice: "{% $states.result.Payload.current_price %}" } }
```

Learn more about [passing data between states with variables](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-variables.html).

### integration?

**Type** `Input<"response" | "sync" | "token">`

**Default** `"response"`

Specifies how a `Task` state integrates with the specified AWS service.

The `response` integration is the default. The `Task` state calls a service and progresses to the next state immediately after it gets an HTTP response.

In `sync` integration, the `Task` state waits for the service to complete the job (i.e. an Amazon ECS task, AWS CodeBuild build, etc.) before progressing to the next state.

In `token` integration, the `Task` state calls a service and pauses until a task token is returned. To resume execution, call the [`SendTaskSuccess`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_SendTaskSuccess.html) or [`SendTaskFailure`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_SendTaskFailure.html) API with the task token.

Learn more about [service integration patterns](https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html).

```ts
{ integration: "token" }
```

### messageAttributes?

**Type** `Input>>`

The message attributes to send to the SQS queue. Values can include outputs from other resources and JSONata expressions.

```ts
{ messageAttributes: { env: "{% $states.input.foo %}", url: api.url, key: 1 } }
```

### messageBody

**Type** `Input>>`

The message body to send to the SQS queue. The maximum size is 256 KB.

### messageDeduplicationId?

**Type** `Input`

The message deduplication ID to send to the SQS queue. This applies to FIFO queues only. This is a string that's used to deduplicate messages sent within the minimum 5-minute interval.

### messageGroupId?

**Type** `Input`

The message group ID to send to the SQS queue. This only applies to FIFO queues.
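For reference, these arguments can be combined into a send-message state like the sketch below. It assumes a `StepFunctions.sqsSendMessage` call shaped like the other task helpers in this doc; the queue name, body fields, and group ID are hypothetical.

```typescript
// Sketch only: "MyQueue", the messageBody fields, and "orders" are
// hypothetical, assuming an SQS FIFO queue.
const myQueue = new sst.aws.Queue("MyQueue", { fifo: true });

sst.aws.StepFunctions.sqsSendMessage({
  name: "EnqueueOrder",
  queue: myQueue,
  messageBody: { orderId: "{% $states.input.orderId %}" },
  // required for FIFO queues; messages in a group are delivered in order
  messageGroupId: "orders"
});
```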
### name

**Type** `string`

The name of the state. This needs to be unique within the state machine.

### output?

**Type** `Input | "{% ${string} %}">`

Transform the output of the state. When specified, the value overrides the default output from the state.

This takes any JSON value; object, array, string, number, boolean, null.

```ts
{ output: { charged: true } }
```

Or, you can pass in a JSONata expression.

```ts
{ output: { product: "{% $states.input.product %}" } }
```

Learn more about [transforming data with JSONata](https://docs.aws.amazon.com/step-functions/latest/dg/transforming-data.html).

### queue

**Type** [`Queue`](/docs/component/aws/queue)

The `Queue` component to send the message to.

### timeout?

**Type** `Input<"${number} minute" | "${number} minutes" | "${number} hour" | "${number} hours" | "${number} second" | "${number} seconds" | "${number} day" | "${number} days" | "{% ${string} %}">`

**Default** `"99999999 seconds"`

Specifies the maximum time a task can run before it times out with the `States.Timeout` error and fails.

```ts
{ timeout: "10 seconds" }
```

Alternatively, you can specify a JSONata expression that evaluates to a number in seconds.

```ts
{ timeout: "{% $states.input.timeout %}" }
```

---

## Wait

Reference doc for the `sst.step-functions.Wait` component.

https://sst.dev/docs/component/aws/step-functions/wait

The `Wait` state is internally used by the `StepFunctions` component to add a [Wait workflow state](https://docs.aws.amazon.com/step-functions/latest/dg/state-wait.html) to a state machine.

:::note
This component is not intended to be created directly.
:::

You'll find this component returned by the `wait` method of the `StepFunctions` component.

---

## Constructor

```ts
new Wait(args)
```

#### Parameters

- `args` [`WaitArgs`](#waitargs)

## WaitArgs

### assign?

**Type** `Record`

Store variables that can be accessed by any state later in the workflow, instead of passing them through each state.
This takes a set of key/value pairs, where the key is the name of the variable that can be accessed by any subsequent state. The value can be any JSON value; object, array, string, number, boolean, null.

```ts
{ assign: { productName: "product1", count: 42, available: true } }
```

Or, you can pass in a JSONata expression.

```ts
{ assign: { product: "{% $states.input.order.product %}", currentPrice: "{% $states.result.Payload.current_price %}" } }
```

Learn more about [passing data between states with variables](https://docs.aws.amazon.com/step-functions/latest/dg/workflow-variables.html).

### name

**Type** `string`

The name of the state. This needs to be unique within the state machine.

### output?

**Type** `Input | "{% ${string} %}">`

Transform the output of the state. When specified, the value overrides the default output from the state.

This takes any JSON value; object, array, string, number, boolean, null.

```ts
{ output: { charged: true } }
```

Or, you can pass in a JSONata expression.

```ts
{ output: { product: "{% $states.input.product %}" } }
```

Learn more about [transforming data with JSONata](https://docs.aws.amazon.com/step-functions/latest/dg/transforming-data.html).

### time?

**Type** `Input<"${number} minute" | "${number} minutes" | "${number} hour" | "${number} hours" | "${number} second" | "${number} seconds" | "${number} day" | "${number} days" | "{% ${string} %}">`

Specify the amount of time to wait before starting the next state.

```ts
{ time: "10 seconds" }
```

Alternatively, you can specify a JSONata expression that evaluates to a number in seconds.

```ts
{ time: "{% $states.input.wait_time %}" }
```

Here `wait_time` is a number in seconds.

### timestamp?

**Type** `Input`

A timestamp to wait until. Timestamps must conform to the RFC3339 profile of ISO 8601 and need:

1. An uppercase T as a delimiter between the date and time.
2. An uppercase Z to denote that a time zone offset is not present.
```ts
{ timestamp: "2026-01-01T00:00:00Z" }
```

Alternatively, you can use a JSONata expression that evaluates to a timestamp conforming to the above format.

```ts
{ timestamp: "{% $states.input.timestamp %}" }
```

## Methods

### next

```ts
next(state)
```

#### Parameters

- `state` [`State`](/docs/component/aws/step-functions/state)

**Returns** [`State`](/docs/component/aws/step-functions/state)

Add a next state to the `Wait` state. After the wait completes, it'll transition to the given `state`.

```ts title="sst.config.ts"
sst.aws.StepFunctions.wait({
  name: "Wait",
  time: "10 seconds"
})
.next(state);
```

---

## SvelteKit

Reference doc for the `sst.aws.SvelteKit` component.

https://sst.dev/docs/component/aws/svelte-kit

The `SvelteKit` component lets you deploy a [SvelteKit](https://kit.svelte.dev/) app to AWS.

#### Minimal example

Deploy a SvelteKit app that's in the project root.

```js title="sst.config.ts"
new sst.aws.SvelteKit("MyWeb");
```

#### Change the path

Deploy the SvelteKit app in the `my-svelte-app/` directory.

```js {2} title="sst.config.ts"
new sst.aws.SvelteKit("MyWeb", {
  path: "my-svelte-app/"
});
```

#### Add a custom domain

Set a custom domain for your SvelteKit app.

```js {2} title="sst.config.ts"
new sst.aws.SvelteKit("MyWeb", {
  domain: "my-app.com"
});
```

#### Redirect www to apex domain

Redirect `www.my-app.com` to `my-app.com`.

```js {4} title="sst.config.ts"
new sst.aws.SvelteKit("MyWeb", {
  domain: {
    name: "my-app.com",
    redirects: ["www.my-app.com"]
  }
});
```

#### Link resources

[Link resources](/docs/linking/) to your SvelteKit app. This will grant permissions to the resources and allow you to access them in your app.

```ts {4} title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");

new sst.aws.SvelteKit("MyWeb", {
  link: [bucket]
});
```

You can use the [SDK](/docs/reference/sdk/) to access the linked resources in your SvelteKit app.
```ts title="src/routes/+page.server.ts"
import { Resource } from "sst";

console.log(Resource.MyBucket.name);
```

---

## Constructor

```ts
new SvelteKit(name, args?, opts?)
```

#### Parameters

- `name` `string`
- `args?` [`SvelteKitArgs`](#sveltekitargs)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## SvelteKitArgs

### assets?

**Type** `Input`

- [`fileOptions?`](#assets-fileoptions) `Input`
  - [`cacheControl?`](#assets-fileoptions-cachecontrol)
  - [`contentType?`](#assets-fileoptions-contenttype)
  - [`files`](#assets-fileoptions-files)
  - [`ignore?`](#assets-fileoptions-ignore)
- [`nonVersionedFilesCacheHeader?`](#assets-nonversionedfilescacheheader)
- [`purge?`](#assets-purge)
- [`textEncoding?`](#assets-textencoding)
- [`versionedFilesCacheHeader?`](#assets-versionedfilescacheheader)

Configure how the SvelteKit app assets are uploaded to S3.

By default, this is set to the following. Read more about these options below.

```js
{
  assets: {
    textEncoding: "utf-8",
    versionedFilesCacheHeader: "public,max-age=31536000,immutable",
    nonVersionedFilesCacheHeader: "public,max-age=0,s-maxage=86400,stale-while-revalidate=8640"
  }
}
```

fileOptions?

**Type** `Input`

Specify the `Content-Type` and `Cache-Control` headers for specific files. This allows you to override the default behavior for specific files using glob patterns.

Apply `Cache-Control` and `Content-Type` to all zip files.

```js
{
  assets: {
    fileOptions: [
      {
        files: "**/*.zip",
        contentType: "application/zip",
        cacheControl: "private,no-cache,no-store,must-revalidate"
      }
    ]
  }
}
```

Apply `Cache-Control` to all CSS and JS files except for CSS files with an `index-` prefix in the `main/` directory.

```js
{
  assets: {
    fileOptions: [
      {
        files: ["**/*.css", "**/*.js"],
        ignore: "main/index-*.css",
        cacheControl: "private,no-cache,no-store,must-revalidate"
      }
    ]
  }
}
```

cacheControl?

**Type** `string`

The `Cache-Control` header to apply to the matched files.

contentType?
**Type** `string`

The `Content-Type` header to apply to the matched files.

files

**Type** `string | string[]`

A glob pattern or array of glob patterns of files to apply these options to.

ignore?

**Type** `string | string[]`

A glob pattern or array of glob patterns of files to exclude from the ones matched by the `files` glob pattern.

nonVersionedFilesCacheHeader?

**Type** `Input`

**Default** `"public,max-age=0,s-maxage=86400,stale-while-revalidate=8640"`

The `Cache-Control` header used for non-versioned files, like `index.html`. This is used by both CloudFront and the browser cache.

The default is set to not cache on browsers, and cache for 1 day on CloudFront.

```js
{
  assets: {
    nonVersionedFilesCacheHeader: "public,max-age=0,no-cache"
  }
}
```

purge?

**Type** `Input`

**Default** `false`

Configure if files from previous deployments should be purged from the bucket.

```js
{
  assets: {
    purge: false
  }
}
```

textEncoding?

**Type** `Input<"utf-8" | "iso-8859-1" | "windows-1252" | "ascii" | "none">`

**Default** `"utf-8"`

Character encoding for text-based assets, like HTML, CSS, JS. This is used to set the `Content-Type` header when these files are served out.

If set to `"none"`, then no charset will be returned in the header.

```js
{
  assets: {
    textEncoding: "iso-8859-1"
  }
}
```

versionedFilesCacheHeader?

**Type** `Input`

**Default** `"public,max-age=31536000,immutable"`

The `Cache-Control` header used for versioned files, like `main-1234.css`. This is used by both CloudFront and the browser cache.

The default `max-age` is set to 1 year.

```js
{
  assets: {
    versionedFilesCacheHeader: "public,max-age=31536000,immutable"
  }
}
```

### buildCommand?

**Type** `Input`

**Default** `"npm run build"`

The command used internally to build your SvelteKit app.

If you want to use a different build command.

```js
{
  buildCommand: "yarn build"
}
```

### cachePolicy?

**Type** `Input`

**Default** A new cache policy is created

Configure the SvelteKit app to use an existing CloudFront cache policy.
:::note
CloudFront has a limit of 20 cache policies per account, though you can request a limit increase.
:::

By default, a new cache policy is created for it. This allows you to reuse an existing policy instead of creating a new one.

```js
{
  cachePolicy: "658327ea-f89d-4fab-a63d-7e88639e58f6"
}
```

### dev?

**Type** `false | Object`

- [`autostart?`](#dev-autostart)
- [`command?`](#dev-command)
- [`directory?`](#dev-directory)
- [`title?`](#dev-title)
- [`url?`](#dev-url)

Configure how this component works in `sst dev`.

:::note
In `sst dev` your SvelteKit app is run in dev mode; it's not deployed.
:::

Instead of deploying your SvelteKit app, this starts it in dev mode. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev).

To disable dev mode, pass in `false`.

autostart?

**Type** `Input`

**Default** `true`

Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later.

command?

**Type** `Input`

**Default** `"npm run dev"`

The command that `sst dev` runs to start this in dev mode.

directory?

**Type** `Input`

**Default** Uses the `path`

Change the directory from where the `command` is run.

title?

**Type** `Input`

The title of the tab in the multiplexer.

url?

**Type** `Input`

**Default** `"http://url-unavailable-in-dev.mode"`

The `url` when this is running in dev mode.

Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`.

### domain?

**Type** `Input`

- [`aliases?`](#domain-aliases)
- [`cert?`](#domain-cert)
- [`dns?`](#domain-dns)
- [`name`](#domain-name)
- [`redirects?`](#domain-redirects)

Set a custom domain for your SvelteKit app. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel.
For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records.

:::tip
Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers.
:::

By default this assumes the domain is hosted on Route 53.

```js
{
  domain: "example.com"
}
```

For domains hosted on Cloudflare.

```js
{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}
```

Specify a `www.` version of the custom domain.

```js
{
  domain: {
    name: "domain.com",
    redirects: ["www.domain.com"]
  }
}
```

aliases?

**Type** `Input`

Alias domains that should be used. Unlike the `redirect` option, this keeps your visitors on this alias domain. So if your users visit `app2.domain.com`, they will stay on `app2.domain.com` in their browser.

```js {4}
{
  domain: {
    name: "app1.domain.com",
    aliases: ["app2.domain.com"]
  }
}
```

cert?

**Type** `Input`

The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically.

The certificate will be created in the `us-east-1` region as required by AWS CloudFront. If you are creating your own certificate, you must also create it in `us-east-1`.

:::tip
You need to pass in a `cert` for domains that are not hosted on supported `dns` providers.
:::

To manually set up a domain on an unsupported provider, you'll need to:

1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner.
2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`.
3. Add the DNS records in your provider to point to the CloudFront distribution URL.

```js
{
  domain: {
    name: "domain.com",
    dns: false,
    cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63"
  }
}
```

dns?
**Type** `Input`

**Default** `sst.aws.dns`

The DNS provider to use for the domain. Defaults to AWS Route 53.

Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing.

Supports Route 53, Cloudflare, and Vercel adapters. For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`.

Specify the hosted zone ID for the Route 53 domain.

```js
{
  domain: {
    name: "example.com",
    dns: sst.aws.dns({
      zone: "Z2FDTNDATAQYW2"
    })
  }
}
```

Use a domain hosted on Cloudflare, needs the Cloudflare provider.

```js
{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}
```

Use a domain hosted on Vercel, needs the Vercel provider.

```js
{
  domain: {
    name: "example.com",
    dns: sst.vercel.dns()
  }
}
```

name

**Type** `Input`

The custom domain you want to use.

```js
{
  domain: {
    name: "example.com"
  }
}
```

Can also include subdomains based on the current stage.

```js
{
  domain: {
    name: `${$app.stage}.example.com`
  }
}
```

redirects?

**Type** `Input`

Alternate domains to be used. Visitors to the alternate domains will be redirected to the main `name`.

:::note
Unlike the `aliases` option, this will redirect visitors back to the main `name`.
:::

Use this to create a `www.` version of your domain and redirect visitors to the apex domain.

```js {4}
{
  domain: {
    name: "domain.com",
    redirects: ["www.domain.com"]
  }
}
```

### edge?

**Type** `Input`

- [`viewerRequest?`](#edge-viewerrequest) `Input`
  - [`injection`](#edge-viewerrequest-injection)
  - [`kvStore?`](#edge-viewerrequest-kvstore)
- [`viewerResponse?`](#edge-viewerresponse) `Input`
  - [`injection`](#edge-viewerresponse-injection)
  - [`kvStore?`](#edge-viewerresponse-kvstore)

Configure CloudFront Functions to customize the behavior of HTTP requests and responses at the edge.

viewerRequest?

**Type** `Input`

Configure the viewer request function.
The viewer request function can be used to modify incoming requests before they reach your origin server. For example, you can redirect users, rewrite URLs, or add headers.

injection

**Type** `Input`

The code to inject into the viewer request function.

By default, a viewer request function is created to:

- Disable the CloudFront default URL if a custom domain is set
- Add the `x-forwarded-host` header
- Route assets requests to S3 (static files stored in the bucket)
- Route server requests to server functions (dynamic rendering)

The function manages routing by:

1. First checking if the requested path exists in S3 (with variations like adding `index.html`)
2. Serving a custom 404 page from S3 if configured and the path isn't found
3. Routing image optimization requests to the image optimizer function
4. Routing all other requests to the nearest server function

The given code will be injected at the beginning of this function.

```js
async function handler(event) {
  // User injected code

  // Default behavior code

  return event.request;
}
```

To add a custom header to all requests.

```js
{
  edge: {
    viewerRequest: {
      injection: `event.request.headers["x-foo"] = { value: "bar" };`
    }
  }
}
```

You can use this to add basic auth, [check out an example](/docs/examples/#aws-nextjs-basic-auth).

kvStore?

**Type** `Input`

The KV store to associate with the viewer request function.

```js
{
  edge: {
    viewerRequest: {
      kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store"
    }
  }
}
```

viewerResponse?

**Type** `Input`

Configure the viewer response function.

The viewer response function can be used to modify outgoing responses before they are sent to the client. For example, you can add security headers or change the response status code.

By default, no viewer response function is set. A new function will be created with the provided code.

injection

**Type** `Input`

The code to inject into the viewer response function.
```js
async function handler(event) {
  // User injected code

  return event.response;
}
```

To add a custom header to all responses.

```js
{
  edge: {
    viewerResponse: {
      injection: `event.response.headers["x-foo"] = { value: "bar" };`
    }
  }
}
```

kvStore?

**Type** `Input`

The KV store to associate with the viewer response function.

```js
{
  edge: {
    viewerResponse: {
      kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store"
    }
  }
}
```

### environment?

**Type** `Input>>`

Set [environment variables](https://vitejs.dev/guide/env-and-mode.html#env-files) in your SvelteKit app. These are made available:

1. In `vite build`, they are loaded into `process.env`.
2. Locally while running through `sst dev`.

:::tip
You can also `link` resources to your SvelteKit app and access them in a type-safe way with the [SDK](/docs/reference/sdk/). We recommend linking since it's more secure.
:::

```js
{
  environment: {
    API_URL: api.url,
    STRIPE_PUBLISHABLE_KEY: "pk_test_123"
  }
}
```

### invalidation?

**Type** `Input`

- [`paths?`](#invalidation-paths)
- [`wait?`](#invalidation-wait)

**Default** `{paths: "all", wait: false}`

Configure how the CloudFront cache invalidations are handled. This is run after your SvelteKit app has been deployed.

:::tip
You get 1000 free invalidations per month. After that you pay $0.005 per invalidation path. [Read more here](https://aws.amazon.com/cloudfront/pricing/).
:::

Wait for all paths to be invalidated.

```js
{
  invalidation: {
    paths: "all",
    wait: true
  }
}
```

paths?

**Type** `Input`

**Default** `"all"`

The paths to invalidate.

You can either pass in an array of glob patterns to invalidate specific files. Or you can use one of these built-in options:

- `all`: All files will be invalidated when any file changes
- `versioned`: Only versioned files will be invalidated when versioned files change

:::note
Each glob pattern counts as a single invalidation. However, invalidating `/*` counts as a single invalidation as well.
::: Invalidate the `index.html` and all files under the `products/` route. ```js { invalidation: { paths: ["/index.html", "/products/*"] } } ``` This counts as two invalidations. wait? **Type** `Input` **Default** `false` Configure if `sst deploy` should wait for the CloudFront cache invalidation to finish. :::tip For non-prod environments it might make sense to pass in `false`. ::: Waiting for this process to finish ensures that new content will be available after the deploy finishes. However, this process can sometimes take more than 5 mins. ```js { invalidation: { wait: true } } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your SvelteKit app. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access it in your site using the [SDK](/docs/reference/sdk/). Takes a list of resources to link to the function. ```js { link: [bucket, stripeKey] } ``` ### path? **Type** `string` **Default** `"."` Path to the directory where your SvelteKit app is located. This path is relative to your `sst.config.ts`. By default it assumes your SvelteKit app is in the root of your SST app. If your SvelteKit app is in a package in your monorepo. ```js { path: "packages/web" } ``` ### permissions? **Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that the [server function](#nodes-server) in your SvelteKit app needs to access. These permissions are used to create the function's IAM role. :::tip If you `link` the function to a resource, the permissions to access it are automatically added. ::: Allow reading and writing to an S3 bucket called `my-bucket`. 
```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Perform all actions on an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:*"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Grant permissions to access all resources. ```js { permissions: [ { actions: ["*"], resources: ["*"] }, ] } ``` actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. ```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```js { effect: "deny" } ``` resources **Type** `Input[]>` The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html).
```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### protection? **Type** `Input<"none" | "oac" | "oac-with-edge-signing" | Object>` - [`edgeFunction?`](#protection-edgefunction) `Object` - [`arn?`](#protection-edgefunction-arn) - [`memory?`](#protection-edgefunction-memory) - [`timeout?`](#protection-edgefunction-timeout) - [`mode`](#protection-mode) **Default** `"none"` The available options are: - `"none"`: Lambda URLs are publicly accessible. - `"oac"`: Lambda URLs protected by CloudFront Origin Access Control. Requires manual `x-amz-content-sha256` header for POST requests. Use when you control all POST requests. - `"oac-with-edge-signing"`: Full protection with automatic header signing via Lambda@Edge. Works with external webhooks and callbacks. Higher cost and latency but works out of the box. :::note When using `"oac-with-edge-signing"`, request bodies are limited to 1MB due to Lambda@Edge payload limits. For file uploads larger than 1MB, consider using presigned S3 URLs or the `"oac"` mode with manual header signing. ::: :::note When removing a stage that uses `"oac-with-edge-signing"`, deletion may take 5-10 minutes while AWS removes the Lambda@Edge replicated functions from all edge locations. ::: Configure Lambda function protection through CloudFront. ```js // No protection (default) { protection: "none" } ``` ```js // OAC protection, manual header signing required { protection: "oac" } ``` ```js // Full protection with automatic Lambda@Edge { protection: "oac-with-edge-signing" } ``` ```js // Custom Lambda@Edge configuration { protection: { mode: "oac-with-edge-signing", edgeFunction: { memory: "256 MB", timeout: "10 seconds" } } } ``` ```js // Use existing Lambda@Edge function { protection: { mode: "oac-with-edge-signing", edgeFunction: { arn: "arn:aws:lambda:us-east-1:123456789012:function:my-signing-function:1" } } } ``` edgeFunction? **Type** `Object` arn? 
**Type** `Input<"arn:aws:lambda:${string}">` Custom Lambda@Edge function ARN to use for request signing. If provided, this function will be used instead of creating a new one. Must be a qualified ARN (with version) and deployed in us-east-1. memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"128 MB"` Memory size for the auto-created Lambda@Edge function. Only used when arn is not provided. timeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` Timeout for the auto-created Lambda@Edge function. Only used when arn is not provided. mode **Type** `"oac-with-edge-signing"` ### regions? **Type** `Input` **Default** The default region of the SST app Regions that the server function will be deployed to. By default, the server function is deployed to a single region, this is the default region of your SST app. :::note This does not use Lambda@Edge, it deploys multiple Lambda functions instead. ::: To deploy it to multiple regions, you can pass in a list of regions. And any requests made will be routed to the nearest region based on the user's location. ```js { regions: ["us-east-1", "eu-west-1"] } ``` ### router? **Type** `Object` - [`connectionAttempts?`](#router-connectionattempts) - [`connectionTimeout?`](#router-connectiontimeout) - [`domain?`](#router-domain) - [`instance`](#router-instance) - [`keepAliveTimeout?`](#router-keepalivetimeout) - [`path?`](#router-path) - [`readTimeout?`](#router-readtimeout) - [`rewrite?`](#router-rewrite) `Input` - [`regex`](#router-rewrite-regex) - [`to`](#router-rewrite-to) Serve your SvelteKit app through a `Router` instead of a standalone CloudFront distribution. By default, this component creates a new CloudFront distribution. 
But you might want to serve it through the distribution of your `Router` as a: - A path like `/docs` - A subdomain like `docs.example.com` - Or a combined pattern like `dev.example.com/docs` To serve your SvelteKit app **from a path**, you'll need to configure the root domain in your `Router` component. ```ts title="sst.config.ts" {2} const router = new sst.aws.Router("Router", { domain: "example.com" }); ``` Now set the `router` and the `path`. ```ts {3,4} { router: { instance: router, path: "/docs" } } ``` You also need to set the [`base`](https://kit.svelte.dev/docs/configuration#paths) to `/docs` in your `svelte.config.js` without a trailing slash. :::caution If routing to a path, you need to set that as the base path in your SvelteKit app as well. ::: ```js title="svelte.config.js" {4} kit: { paths: { base: "/docs" } } }; ``` To serve your SvelteKit app **from a subdomain**, you'll need to configure the domain in your `Router` component to match both the root and the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` Now set the `domain` in the `router` prop. ```ts {4} { router: { instance: router, domain: "docs.example.com" } } ``` Finally, to serve your SvelteKit app **from a combined pattern** like `dev.example.com/docs`, you'll need to configure the domain in your `Router` to match the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` And set the `domain` and the `path`. ```ts {4,5} { router: { instance: router, domain: "dev.example.com", path: "/docs" } } ``` Also, make sure to set the base path in your `svelte.config.js`, like above. connectionAttempts? **Type** `Input` **Default** `3` The number of times that CloudFront attempts to connect to the origin. Must be between 1 and 3. 
```js router: { instance: router, connectionAttempts: 1 } ``` connectionTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"10 seconds"` The number of seconds that CloudFront waits before timing out and closing the connection to the origin. Must be between 1 and 10 seconds. ```js router: { instance: router, connectionTimeout: "3 seconds" } ``` domain? **Type** `Input` Route requests matching a specific domain pattern. You can serve your resource from a subdomain. For example, if you want to make it available at `https://dev.example.com`, set the `Router` to match the domain or a wildcard. ```ts {2} title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "*.example.com" }); ``` Then set the domain pattern. ```ts {3} router: { instance: router, domain: "dev.example.com" } ``` While `dev.example.com` matches `*.example.com`, something like `docs.dev.example.com` will not. :::tip Nested wildcard domain patterns are not supported. ::: You'll need to add `*.dev.example.com` as an alias. instance **Type** `Input<`[`Router`](/docs/component/aws/router)`>` The `Router` component to use for routing requests. Let's say you have a Router component. ```ts title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "example.com" }); ``` You can attach your app to it, instead of creating a standalone CloudFront distribution. ```ts router: { instance: router } ``` keepAliveTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The number of seconds that CloudFront should try to maintain the connection to the destination after receiving the last packet of the response. Must be between 1 and 60 seconds. ```js router: { instance: router, keepAliveTimeout: "10 seconds" } ``` path? **Type** `Input` **Default** `"/"` Route requests matching a specific path prefix. ```ts {3} router: { instance: router, path: "/docs" } ``` readTimeout?
**Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The number of seconds that CloudFront waits for a response after routing a request to the destination. Must be between 1 and 60 seconds. When compared to the `connectionTimeout`, this is the total time for the request. ```js router: { instance: router, readTimeout: "60 seconds" } ``` rewrite? **Type** `Input` Rewrite the request path. If the route path is `/api/*` and a request comes in for `/api/users/profile`, the request path the destination sees is `/api/users/profile`. If you want to serve the route from the root, you can rewrite the request path to `/users/profile`. ```js router: { instance: router, path: "/api", rewrite: { regex: "^/api/(.*)$", to: "/$1" } } ``` regex **Type** `Input` The regex to match the request path. to **Type** `Input` The replacement for the matched path. ### server? **Type** `Object` - [`architecture?`](#server-architecture) - [`install?`](#server-install) - [`layers?`](#server-layers) - [`loader?`](#server-loader) - [`memory?`](#server-memory) - [`runtime?`](#server-runtime) - [`timeout?`](#server-timeout) **Default** `{architecture: "x86_64", memory: "1024 MB"}` Configure the Lambda function used for server. architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The [architecture](https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html) of the server function. ```js { server: { architecture: "arm64" } } ``` install? **Type** `Input` Dependencies that need to be excluded from the server function package. Certain npm packages cannot be bundled using esbuild. This allows you to exclude them from the bundle. Instead they'll be moved into a `node_modules/` directory in the function package. :::tip If esbuild is giving you an error about a package, try adding it to the `install` list. ::: This will allow your functions to be able to use these dependencies when deployed. They just won't be tree shaken. 
:::caution Packages listed here still need to be in your `package.json`. ::: Esbuild will ignore them while traversing the imports in your code. So these are the **package names as seen in the imports**. It also works on packages that are not directly imported by your code. ```js { server: { install: ["sharp"] } } ``` layers? **Type** `Input[]>` A list of Lambda layer ARNs to add to the server function. ```js { server: { layers: ["arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1"] } } ``` loader? **Type** `Input>` Configure additional esbuild loaders for other file extensions. This is useful when your code is importing non-JS files like `.png`, `.css`, etc. ```js { server: { loader: { ".png": "file" } } } ``` memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"1024 MB"` The amount of memory allocated to the server function. Takes values between 128 MB and 10240 MB in 1 MB increments. ```js { server: { memory: "2048 MB" } } ``` runtime? **Type** `Input<"nodejs18.x" | "nodejs20.x" | "nodejs22.x" | "nodejs24.x">` **Default** `"nodejs24.x"` The runtime environment for the server function. ```js { server: { runtime: "nodejs24.x" } } ``` timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The maximum amount of time the server function can run. While Lambda supports timeouts up to 900 seconds, your requests are served through AWS CloudFront, which has a default limit of 60 seconds. :::tip If you need a timeout longer than 60 seconds, you'll need to request a limit increase. ::: You can increase this to 180 seconds for your account by contacting AWS Support and [requesting a limit increase](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase).
```js { server: { timeout: "50 seconds" } } ``` If you need a timeout longer than what CloudFront supports, we recommend using a separate Lambda `Function` with the `url` enabled instead. ### transform? **Type** `Object` - [`assets?`](#transform-assets) - [`cdn?`](#transform-cdn) - [`imageOptimizer?`](#transform-imageoptimizer) - [`server?`](#transform-server) [Transform](/docs/components#transform) how this component creates its underlying resources. assets? **Type** [`BucketArgs`](/docs/component/aws/bucket#bucketargs)` | (args: `[`BucketArgs`](/docs/component/aws/bucket#bucketargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Bucket resource used for uploading the assets. cdn? **Type** [`CdnArgs`](/docs/component/aws/cdn#cdnargs)` | (args: `[`CdnArgs`](/docs/component/aws/cdn#cdnargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudFront CDN resource. imageOptimizer? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the image optimizer Function resource. server? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the server Function resource. ### vpc? **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`privateSubnets`](#vpc-privatesubnets) - [`securityGroups`](#vpc-securitygroups) Configure the server function to connect to private subnets in a virtual private cloud or VPC. This allows it to access private resources. Create a `Vpc` component. 
```js title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` Or reference an existing VPC. ```js title="sst.config.ts" const myVpc = sst.aws.Vpc.get("MyVpc", "vpc-12345678901234567"); ``` And pass it in. ```js { vpc: myVpc } ``` privateSubnets **Type** `Input[]>` A list of VPC subnet IDs. securityGroups **Type** `Input[]>` A list of VPC security group IDs. ### warm? **Type** `Input` **Default** `0` The number of instances of the [server function](#nodes-server) to keep warm. This is useful for cases where you are experiencing long cold starts. The default is to not keep any instances warm. This works by starting a serverless cron job to make _n_ concurrent requests to the server function every few minutes. Where _n_ is the number of instances to keep warm. ## Properties ### nodes **Type** `Object` - [`assets`](#nodes-assets) - [`cdn`](#nodes-cdn) - [`server`](#nodes-server) The underlying [resources](/docs/components/#nodes) this component creates. assets **Type** `undefined | `[`Bucket`](/docs/component/aws/bucket) The Amazon S3 Bucket that stores the assets. cdn **Type** `undefined | `[`Cdn`](/docs/component/aws/cdn) The Amazon CloudFront CDN that serves the site. server **Type** `undefined | Output<`[`Function`](/docs/component/aws/function)`>` The AWS Lambda server function that renders the site. ### url **Type** `Output` The URL of the SvelteKit app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `string` The URL of the SvelteKit app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. --- ## TanStackStart Reference doc for the `sst.aws.TanStackStart` component. 
https://sst.dev/docs/component/aws/tan-stack-start The `TanStackStart` component lets you deploy a [TanStack Start](https://tanstack.com/start/latest) app to AWS. :::note You need to make sure the `vite.config.ts` is configured with the `aws-lambda` nitro preset. ::: #### Minimal example Deploy a TanStack Start app that's in the project root. ```js title="sst.config.ts" new sst.aws.TanStackStart("MyWeb"); ``` #### Change the path Deploys the TanStack Start app in the `my-app/` directory. ```js {2} title="sst.config.ts" new sst.aws.TanStackStart("MyWeb", { path: "my-app/" }); ``` #### Add a custom domain Set a custom domain for your TanStack Start app. ```js {2} title="sst.config.ts" new sst.aws.TanStackStart("MyWeb", { domain: "my-app.com" }); ``` #### Redirect www to apex domain Redirect `www.my-app.com` to `my-app.com`. ```js {4} title="sst.config.ts" new sst.aws.TanStackStart("MyWeb", { domain: { name: "my-app.com", redirects: ["www.my-app.com"] } }); ``` #### Link resources [Link resources](/docs/linking/) to your TanStack Start app. This will grant permissions to the resources and allow you to access it in your app. ```ts {4} title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.TanStackStart("MyWeb", { link: [bucket] }); ``` You can use the [SDK](/docs/reference/sdk/) to access the linked resources in your TanStack Start app. ```ts title="src/app.tsx" console.log(Resource.MyBucket.name); ``` --- ## Constructor ```ts new TanStackStart(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`TanStackStartArgs`](#tanstackstartargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## TanStackStartArgs ### assets? 
**Type** `Input` - [`fileOptions?`](#assets-fileoptions) `Input` - [`cacheControl?`](#assets-fileoptions-cachecontrol) - [`contentType?`](#assets-fileoptions-contenttype) - [`files`](#assets-fileoptions-files) - [`ignore?`](#assets-fileoptions-ignore) - [`nonVersionedFilesCacheHeader?`](#assets-nonversionedfilescacheheader) - [`purge?`](#assets-purge) - [`textEncoding?`](#assets-textencoding) - [`versionedFilesCacheHeader?`](#assets-versionedfilescacheheader) Configure how the TanStack Start app assets are uploaded to S3. By default, this is set to the following. Read more about these options below. ```js { assets: { textEncoding: "utf-8", versionedFilesCacheHeader: "public,max-age=31536000,immutable", nonVersionedFilesCacheHeader: "public,max-age=0,s-maxage=86400,stale-while-revalidate=8640" } } ``` fileOptions? **Type** `Input` Specify the `Content-Type` and `Cache-Control` headers for specific files. This allows you to override the default behavior for specific files using glob patterns. Apply `Cache-Control` and `Content-Type` to all zip files. ```js { assets: { fileOptions: [ { files: "**/*.zip", contentType: "application/zip", cacheControl: "private,no-cache,no-store,must-revalidate" } ] } } ``` Apply `Cache-Control` to all CSS and JS files except for CSS files with `index-` prefix in the `main/` directory. ```js { assets: { fileOptions: [ { files: ["**/*.css", "**/*.js"], ignore: "main/index-*.css", cacheControl: "private,no-cache,no-store,must-revalidate" } ] } } ``` cacheControl? **Type** `string` The `Cache-Control` header to apply to the matched files. contentType? **Type** `string` The `Content-Type` header to apply to the matched files. files **Type** `string | string[]` A glob pattern or array of glob patterns of files to apply these options to. ignore? **Type** `string | string[]` A glob pattern or array of glob patterns of files to exclude from the ones matched by the `files` glob pattern. nonVersionedFilesCacheHeader? 
**Type** `Input` **Default** `"public,max-age=0,s-maxage=86400,stale-while-revalidate=8640"` The `Cache-Control` header used for non-versioned files, like `index.html`. This is used by both CloudFront and the browser cache. The default is set to not cache on browsers, and cache for 1 day on CloudFront. ```js { assets: { nonVersionedFilesCacheHeader: "public,max-age=0,no-cache" } } ``` purge? **Type** `Input` **Default** `false` Configure if files from previous deployments should be purged from the bucket. ```js { assets: { purge: false } } ``` textEncoding? **Type** `Input<"utf-8" | "iso-8859-1" | "windows-1252" | "ascii" | "none">` **Default** `"utf-8"` Character encoding for text based assets, like HTML, CSS, JS. This is used to set the `Content-Type` header when these files are served out. If set to `"none"`, then no charset will be returned in header. ```js { assets: { textEncoding: "iso-8859-1" } } ``` versionedFilesCacheHeader? **Type** `Input` **Default** `"public,max-age=31536000,immutable"` The `Cache-Control` header used for versioned files, like `main-1234.css`. This is used by both CloudFront and the browser cache. The default `max-age` is set to 1 year. ```js { assets: { versionedFilesCacheHeader: "public,max-age=31536000,immutable" } } ``` ### buildCommand? **Type** `Input` **Default** `"npm run build"` The command used internally to build your TanStack Start app. If you want to use a different build command. ```js { buildCommand: "yarn build" } ``` ### cachePolicy? **Type** `Input` **Default** A new cache policy is created Configure the TanStack Start app to use an existing CloudFront cache policy. :::note CloudFront has a limit of 20 cache policies per account, though you can request a limit increase. ::: By default, a new cache policy is created for it. This allows you to reuse an existing policy instead of creating a new one. ```js { cachePolicy: "658327ea-f89d-4fab-a63d-7e88639e58f6" } ``` ### dev? 
**Type** `false | Object` - [`autostart?`](#dev-autostart) - [`command?`](#dev-command) - [`directory?`](#dev-directory) - [`title?`](#dev-title) - [`url?`](#dev-url) Configure how this component works in `sst dev`. Instead of deploying your TanStack Start app, this starts it in dev mode. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev). To disable dev mode, pass in `false`. autostart? **Type** `Input` **Default** `true` Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later. command? **Type** `Input` **Default** `"npm run dev"` The command that `sst dev` runs to start this in dev mode. directory? **Type** `Input` **Default** Uses the `path` Change the directory from where the `command` is run. title? **Type** `Input` The title of the tab in the multiplexer. url? **Type** `Input` **Default** `"http://url-unavailable-in-dev.mode"` The `url` when this is running in dev mode. Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`. ### domain? **Type** `Input` - [`aliases?`](#domain-aliases) - [`cert?`](#domain-cert) - [`dns?`](#domain-dns) - [`name`](#domain-name) - [`redirects?`](#domain-redirects) Set a custom domain for your TanStack Start app. Automatically manages domains hosted on AWS Route 53, Cloudflare, and Vercel. For other providers, you'll need to pass in a `cert` that validates domain ownership and add the DNS records. :::tip Built-in support for AWS Route 53, Cloudflare, and Vercel. And manual setup for other providers. ::: By default this assumes the domain is hosted on Route 53. ```js { domain: "example.com" } ``` For domains hosted on Cloudflare. 
```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Specify a `www.` version of the custom domain. ```js { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` aliases? **Type** `Input` Alias domains that should be used. Unlike the `redirects` option, this keeps your visitors on this alias domain. So if your users visit `app2.domain.com`, they will stay on `app2.domain.com` in their browser. ```js {4} { domain: { name: "app1.domain.com", aliases: ["app2.domain.com"] } } ``` cert? **Type** `Input` The ARN of an ACM (AWS Certificate Manager) certificate that proves ownership of the domain. By default, a certificate is created and validated automatically. The certificate will be created in the `us-east-1` region as required by AWS CloudFront. If you are creating your own certificate, you must also create it in `us-east-1`. :::tip You need to pass in a `cert` for domains that are not hosted on supported `dns` providers. ::: To manually set up a domain on an unsupported provider, you'll need to: 1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner. 2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`. 3. Add the DNS records in your provider to point to the CloudFront distribution URL. ```js { domain: { name: "domain.com", dns: false, cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63" } } ``` dns? **Type** `Input` **Default** `sst.aws.dns` The DNS provider to use for the domain. Defaults to the AWS Route 53 adapter. Takes an adapter that can create the DNS records on the provider. This can automate validating the domain and setting up the DNS routing. Supports Route 53, Cloudflare, and Vercel adapters.
For other providers, you'll need to set `dns` to `false` and pass in a certificate validating ownership via `cert`. Specify the hosted zone ID for the Route 53 domain. ```js { domain: { name: "example.com", dns: sst.aws.dns({ zone: "Z2FDTNDATAQYW2" }) } } ``` Use a domain hosted on Cloudflare, needs the Cloudflare provider. ```js { domain: { name: "example.com", dns: sst.cloudflare.dns() } } ``` Use a domain hosted on Vercel, needs the Vercel provider. ```js { domain: { name: "example.com", dns: sst.vercel.dns() } } ``` name **Type** `Input` The custom domain you want to use. ```js { domain: { name: "example.com" } } ``` Can also include subdomains based on the current stage. ```js { domain: { name: `${$app.stage}.example.com` } } ``` redirects? **Type** `Input` Alternate domains to be used. Visitors to the alternate domains will be redirected to the main `name`. :::note Unlike the `aliases` option, this will redirect visitors back to the main `name`. ::: Use this to create a `www.` version of your domain and redirect visitors to the apex domain. ```js {4} { domain: { name: "domain.com", redirects: ["www.domain.com"] } } ``` ### edge? **Type** `Input` - [`viewerRequest?`](#edge-viewerrequest) `Input` - [`injection`](#edge-viewerrequest-injection) - [`kvStore?`](#edge-viewerrequest-kvstore) - [`viewerResponse?`](#edge-viewerresponse) `Input` - [`injection`](#edge-viewerresponse-injection) - [`kvStore?`](#edge-viewerresponse-kvstore) Configure CloudFront Functions to customize the behavior of HTTP requests and responses at the edge. viewerRequest? **Type** `Input` Configure the viewer request function. The viewer request function can be used to modify incoming requests before they reach your origin server. For example, you can redirect users, rewrite URLs, or add headers. injection **Type** `Input` The code to inject into the viewer request function. 
By default, a viewer request function is created to: - Disable the CloudFront default URL if a custom domain is set - Add the `x-forwarded-host` header - Route asset requests to S3 (static files stored in the bucket) - Route server requests to server functions (dynamic rendering) The function manages routing by: 1. First checking if the requested path exists in S3 (with variations like adding index.html) 2. Serving a custom 404 page from S3 if configured and the path isn't found 3. Routing image optimization requests to the image optimizer function 4. Routing all other requests to the nearest server function The given code will be injected at the beginning of this function.

```js
async function handler(event) {
  // User injected code

  // Default behavior code

  return event.request;
}
```

To add a custom header to all requests. ```js { edge: { viewerRequest: { injection: `event.request.headers["x-foo"] = { value: "bar" };` } } } ``` You can use this to add basic auth, [check out an example](/docs/examples/#aws-nextjs-basic-auth). kvStore? **Type** `Input` The KV store to associate with the viewer request function. ```js { edge: { viewerRequest: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` viewerResponse? **Type** `Input` Configure the viewer response function. The viewer response function can be used to modify outgoing responses before they are sent to the client. For example, you can add security headers or change the response status code. By default, no viewer response function is set. A new function will be created with the provided code. injection **Type** `Input` The code to inject into the viewer response function.

```js
async function handler(event) {
  // User injected code

  return event.response;
}
```

To add a custom header to all responses. ```js { edge: { viewerResponse: { injection: `event.response.headers["x-foo"] = { value: "bar" };` } } } ``` kvStore?
**Type** `Input` The KV store to associate with the viewer response function. ```js { edge: { viewerResponse: { kvStore: "arn:aws:cloudfront::123456789012:key-value-store/my-store" } } } ``` ### environment? **Type** `Input>>` Set environment variables in your TanStack Start app. These are made available: 1. In `vite build`, they are loaded into `process.env`. 2. Locally while running `sst dev`. :::tip You can also `link` resources to your TanStack Start app and access them in a type-safe way with the [SDK](/docs/reference/sdk/). We recommend linking since it's more secure. ::: ```js { environment: { API_URL: api.url, STRIPE_PUBLISHABLE_KEY: "pk_test_123" } } ``` ### invalidation? **Type** `Input` - [`paths?`](#invalidation-paths) - [`wait?`](#invalidation-wait) **Default** `{paths: "all", wait: false}` Configure how the CloudFront cache invalidations are handled. This is run after your TanStack Start app has been deployed. :::tip You get 1000 free invalidations per month. After that you pay $0.005 per invalidation path. [Read more here](https://aws.amazon.com/cloudfront/pricing/). ::: Wait for all paths to be invalidated. ```js { invalidation: { paths: "all", wait: true } } ``` paths? **Type** `Input` **Default** `"all"` The paths to invalidate. You can either pass in an array of glob patterns to invalidate specific files. Or you can use one of these built-in options: - `all`: All files will be invalidated when any file changes - `versioned`: Only versioned files will be invalidated when versioned files change :::note Each glob pattern counts as a single invalidation. However, invalidating `/*` counts as a single invalidation as well. ::: Invalidate the `index.html` and all files under the `products/` route. ```js { invalidation: { paths: ["/index.html", "/products/*"] } } ``` This counts as two invalidations. wait? **Type** `Input` **Default** `false` Configure if `sst deploy` should wait for the CloudFront cache invalidation to finish.
:::tip For non-prod environments it might make sense to pass in `false`. ::: Waiting for this process to finish ensures that new content will be available after the deploy finishes. However, this process can sometimes take more than 5 mins. ```js { invalidation: { wait: true } } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your TanStack Start app. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access it in your site using the [SDK](/docs/reference/sdk/). Takes a list of resources to link to the function. ```js { link: [bucket, stripeKey] } ``` ### path? **Type** `string` **Default** `"."` Path to the directory where your TanStack Start app is located. This path is relative to your `sst.config.ts`. By default it assumes your TanStack Start app is in the root of your SST app. If your TanStack Start app is in a package in your monorepo. ```js { path: "packages/web" } ``` ### permissions? **Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that the [server function](#nodes-server) in your TanStack Start app needs to access. These permissions are used to create the function's IAM role. :::tip If you `link` the function to a resource, the permissions to access it are automatically added. ::: Allow reading and writing to an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Perform all actions on an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:*"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Grant permissions to access all resources. 
```js { permissions: [ { actions: ["*"], resources: ["*"] }, ] } ``` actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. ```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```ts { effect: "deny" } ``` resources **Type** `Input[]>` The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html). ```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### protection?
**Type** `Input<"none" | "oac" | "oac-with-edge-signing" | Object>` - [`edgeFunction?`](#protection-edgefunction) `Object` - [`arn?`](#protection-edgefunction-arn) - [`memory?`](#protection-edgefunction-memory) - [`timeout?`](#protection-edgefunction-timeout) - [`mode`](#protection-mode) **Default** `"none"` The available options are: - `"none"`: Lambda URLs are publicly accessible. - `"oac"`: Lambda URLs protected by CloudFront Origin Access Control. Requires manual `x-amz-content-sha256` header for POST requests. Use when you control all POST requests. - `"oac-with-edge-signing"`: Full protection with automatic header signing via Lambda@Edge. Works with external webhooks and callbacks. Higher cost and latency but works out of the box. :::note When using `"oac-with-edge-signing"`, request bodies are limited to 1MB due to Lambda@Edge payload limits. For file uploads larger than 1MB, consider using presigned S3 URLs or the `"oac"` mode with manual header signing. ::: :::note When removing a stage that uses `"oac-with-edge-signing"`, deletion may take 5-10 minutes while AWS removes the Lambda@Edge replicated functions from all edge locations. ::: Configure Lambda function protection through CloudFront. ```js // No protection (default) { protection: "none" } ``` ```js // OAC protection, manual header signing required { protection: "oac" } ``` ```js // Full protection with automatic Lambda@Edge { protection: "oac-with-edge-signing" } ``` ```js // Custom Lambda@Edge configuration { protection: { mode: "oac-with-edge-signing", edgeFunction: { memory: "256 MB", timeout: "10 seconds" } } } ``` ```js // Use existing Lambda@Edge function { protection: { mode: "oac-with-edge-signing", edgeFunction: { arn: "arn:aws:lambda:us-east-1:123456789012:function:my-signing-function:1" } } } ``` edgeFunction? **Type** `Object` arn? **Type** `Input<"arn:aws:lambda:${string}">` Custom Lambda@Edge function ARN to use for request signing. 
If provided, this function will be used instead of creating a new one. Must be a qualified ARN (with version) and deployed in us-east-1. memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"128 MB"` Memory size for the auto-created Lambda@Edge function. Only used when `arn` is not provided. timeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` Timeout for the auto-created Lambda@Edge function. Only used when `arn` is not provided. mode **Type** `"oac-with-edge-signing"` ### regions? **Type** `Input` **Default** The default region of the SST app Regions that the server function will be deployed to. By default, the server function is deployed to a single region; this is the default region of your SST app. :::note This does not use Lambda@Edge; it deploys multiple Lambda functions instead. ::: To deploy it to multiple regions, you can pass in a list of regions. Any requests made will be routed to the nearest region based on the user's location. ```js { regions: ["us-east-1", "eu-west-1"] } ``` ### router? **Type** `Object` - [`connectionAttempts?`](#router-connectionattempts) - [`connectionTimeout?`](#router-connectiontimeout) - [`domain?`](#router-domain) - [`instance`](#router-instance) - [`keepAliveTimeout?`](#router-keepalivetimeout) - [`path?`](#router-path) - [`readTimeout?`](#router-readtimeout) - [`rewrite?`](#router-rewrite) `Input` - [`regex`](#router-rewrite-regex) - [`to`](#router-rewrite-to) Serve your TanStack Start app through a `Router` instead of a standalone CloudFront distribution. By default, this component creates a new CloudFront distribution. But you might want to serve it through the distribution of your `Router`. :::note TanStack Start does not currently support base paths and can only be routed from the root `/` path.
::: To serve your TanStack Start app **from a subdomain**, you'll need to configure the domain in your `Router` component to match both the root and the subdomain. ```ts title="sst.config.ts" {3,4} const router = new sst.aws.Router("Router", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` Now set the `domain` in the `router` prop. ```ts {4} { router: { instance: router, domain: "docs.example.com" } } ``` Finally, to serve your TanStack Start app **from a combined pattern** like `dev.example.com/docs`, you'll need to configure the domain in your `Router` to match the subdomain, and set the `domain` and the `path`. ```ts {4,5} { router: { instance: router, domain: "dev.example.com", path: "/docs" } } ``` Also, make sure to set this as the `base` in your `vite.config.ts`, and in your Nitro plugin config. connectionAttempts? **Type** `Input` **Default** `3` The number of times that CloudFront attempts to connect to the origin. Must be between 1 and 3. ```js router: { instance: router, connectionAttempts: 1 } ``` connectionTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"10 seconds"` The number of seconds that CloudFront waits before timing out and closing the connection to the origin. Must be between 1 and 10 seconds. ```js router: { instance: router, connectionTimeout: "3 seconds" } ``` domain? **Type** `Input` Route requests matching a specific domain pattern. You can serve your resource from a subdomain. For example, if you want to make it available at `https://dev.example.com`, set the `Router` to match the domain or a wildcard. ```ts {2} title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "*.example.com" }); ``` Then set the domain pattern. ```ts {3} router: { instance: router, domain: "dev.example.com" } ``` While `dev.example.com` matches `*.example.com`, something like `docs.dev.example.com` will not.
:::tip Nested wildcard domain patterns are not supported. ::: You'll need to add `*.dev.example.com` as an alias. instance **Type** `Input<`[`Router`](/docs/component/aws/router)`>` The `Router` component to use for routing requests. Let's say you have a Router component. ```ts title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: "example.com" }); ``` You can attach your app to it, instead of creating a standalone CloudFront distribution. ```ts router: { instance: router } ``` keepAliveTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The number of seconds that CloudFront should try to maintain the connection to the destination after receiving the last packet of the response. Must be between 1 and 60 seconds. ```js router: { instance: router, keepAliveTimeout: "10 seconds" } ``` path? **Type** `Input` **Default** `"/"` Route requests matching a specific path prefix. ```ts {3} router: { instance: router, path: "/docs" } ``` readTimeout? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The number of seconds that CloudFront waits for a response after routing a request to the destination. Must be between 1 and 60 seconds. When compared to the `connectionTimeout`, this is the total time for the request. ```js router: { instance: router, readTimeout: "60 seconds" } ``` rewrite? **Type** `Input` Rewrite the request path. If the route path is `/api/*` and a request comes in for `/api/users/profile`, the request path the destination sees is `/api/users/profile`. If you want to serve the route from the root, you can rewrite the request path to `/users/profile`. ```js router: { instance: router, path: "/api", rewrite: { regex: "^/api/(.*)$", to: "/$1" } } ``` regex **Type** `Input` The regex to match the request path. to **Type** `Input` The replacement for the matched path. ### server?
**Type** `Object` - [`architecture?`](#server-architecture) - [`install?`](#server-install) - [`layers?`](#server-layers) - [`loader?`](#server-loader) - [`memory?`](#server-memory) - [`runtime?`](#server-runtime) - [`timeout?`](#server-timeout) **Default** `{architecture: "x86_64", memory: "1024 MB"}` Configure the Lambda function used for the server. architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The [architecture](https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html) of the server function. ```js { server: { architecture: "arm64" } } ``` install? **Type** `Input` Dependencies that need to be excluded from the server function package. Certain npm packages cannot be bundled using esbuild. This allows you to exclude them from the bundle. Instead they'll be moved into a `node_modules/` directory in the function package. :::tip If esbuild is giving you an error about a package, try adding it to the `install` list. ::: This allows your functions to use these dependencies when deployed. They just won't be tree shaken. :::caution Packages listed here still need to be in your `package.json`. ::: Esbuild will ignore them while traversing the imports in your code. So these are the **package names as seen in the imports**. It also works on packages that are not directly imported by your code. ```js { server: { install: ["sharp"] } } ``` layers? **Type** `Input[]>` A list of Lambda layer ARNs to add to the server function. ```js { server: { layers: ["arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1"] } } ``` loader? **Type** `Input>` Configure additional esbuild loaders for other file extensions. This is useful when your code is importing non-JS files like `.png`, `.css`, etc. ```js { server: { loader: { ".png": "file" } } } ``` memory?
**Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"1024 MB"` The amount of memory allocated to the server function. Takes values between 128 MB and 10240 MB in 1 MB increments. ```js { server: { memory: "2048 MB" } } ``` runtime? **Type** `Input<"nodejs18.x" | "nodejs20.x" | "nodejs22.x" | "nodejs24.x">` **Default** `"nodejs24.x"` The runtime environment for the server function. ```js { server: { runtime: "nodejs24.x" } } ``` timeout? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"20 seconds"` The maximum amount of time the server function can run. While Lambda supports timeouts up to 900 seconds, your requests are served through AWS CloudFront. And it has a default limit of 60 seconds. :::tip If you need a timeout longer than 60 seconds, you'll need to request a limit increase. ::: You can increase this to 180 seconds for your account by contacting AWS Support and [requesting a limit increase](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). ```js { server: { timeout: "50 seconds" } } ``` If you need a timeout longer than what CloudFront supports, we recommend using a separate Lambda `Function` with the `url` enabled instead. ### transform? **Type** `Object` - [`assets?`](#transform-assets) - [`cdn?`](#transform-cdn) - [`imageOptimizer?`](#transform-imageoptimizer) - [`server?`](#transform-server) [Transform](/docs/components#transform) how this component creates its underlying resources. assets? **Type** [`BucketArgs`](/docs/component/aws/bucket#bucketargs)` | (args: `[`BucketArgs`](/docs/component/aws/bucket#bucketargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Bucket resource used for uploading the assets. cdn? 
**Type** [`CdnArgs`](/docs/component/aws/cdn#cdnargs)` | (args: `[`CdnArgs`](/docs/component/aws/cdn#cdnargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudFront CDN resource. imageOptimizer? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the image optimizer Function resource. server? **Type** [`FunctionArgs`](/docs/component/aws/function#functionargs)` | (args: `[`FunctionArgs`](/docs/component/aws/function#functionargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the server Function resource. ### vpc? **Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`privateSubnets`](#vpc-privatesubnets) - [`securityGroups`](#vpc-securitygroups) Configure the server function to connect to private subnets in a virtual private cloud or VPC. This allows it to access private resources. Create a `Vpc` component. ```js title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` Or reference an existing VPC. ```js title="sst.config.ts" const myVpc = sst.aws.Vpc.get("MyVpc", "vpc-12345678901234567"); ``` And pass it in. ```js { vpc: myVpc } ``` privateSubnets **Type** `Input[]>` A list of VPC subnet IDs. securityGroups **Type** `Input[]>` A list of VPC security group IDs. ### warm? **Type** `Input` **Default** `0` The number of instances of the [server function](#nodes-server) to keep warm. This is useful for cases where you are experiencing long cold starts. The default is to not keep any instances warm. This works by starting a serverless cron job to make _n_ concurrent requests to the server function every few minutes. Where _n_ is the number of instances to keep warm. 
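For example, keeping 3 instances of the server function warm might look like this. This is a minimal sketch; the count of 3 is illustrative, adjust it to your traffic.

```js
{
  // Keep 3 concurrent instances of the server function warm
  // via the cron-based pinger described above
  warm: 3
}
```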
## Properties ### nodes **Type** `Object` - [`assets`](#nodes-assets) - [`cdn`](#nodes-cdn) - [`server`](#nodes-server) The underlying [resources](/docs/components/#nodes) this component creates. assets **Type** `undefined | `[`Bucket`](/docs/component/aws/bucket) The Amazon S3 Bucket that stores the assets. cdn **Type** `undefined | `[`Cdn`](/docs/component/aws/cdn) The Amazon CloudFront CDN that serves the site. server **Type** `undefined | Output<`[`Function`](/docs/component/aws/function)`>` The AWS Lambda server function that renders the site. ### url **Type** `Output` The URL of the TanStack Start app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `string` The URL of the TanStack Start app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated CloudFront URL. --- ## Task Reference doc for the `sst.aws.Task` component. https://sst.dev/docs/component/aws/task The `Task` component lets you create containers that are used for long running asynchronous work, like data processing. It uses [Amazon ECS](https://aws.amazon.com/ecs/) on [AWS Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html). #### Create a Task Tasks are run inside an ECS Cluster. If you haven't already, create one. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); ``` Add the task to it. ```ts title="sst.config.ts" const task = new sst.aws.Task("MyTask", { cluster }); ``` #### Configure the container image By default, the task will look for a Dockerfile in the root directory. Optionally, configure the image context and dockerfile. 
```ts title="sst.config.ts" new sst.aws.Task("MyTask", { cluster, image: { context: "./app", dockerfile: "Dockerfile" } }); ``` To add multiple containers in the task, pass in an array of container args. ```ts title="sst.config.ts" new sst.aws.Task("MyTask", { cluster, containers: [ { name: "app", image: "nginxdemos/hello:plain-text" }, { name: "admin", image: { context: "./admin", dockerfile: "Dockerfile" } } ] }); ``` This is useful for running sidecar containers. #### Link resources [Link resources](/docs/linking/) to your task. This will grant permissions to the resources and allow you to access them in your app. ```ts {5} title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.Task("MyTask", { cluster, link: [bucket] }); ``` You can use the [SDK](/docs/reference/sdk/) to access the linked resources in your task. ```ts title="app.ts" import { Resource } from "sst"; console.log(Resource.MyBucket.name); ``` #### Task SDK With the [Task JS SDK](/docs/component/aws/task#sdk), you can run, stop, and get the status of your tasks. For example, you can link the task to a function in your app. ```ts title="sst.config.ts" {3} new sst.aws.Function("MyFunction", { handler: "src/lambda.handler", link: [task] }); ``` Then from your function run the task. ```ts title="src/lambda.ts" import { Resource } from "sst"; import { task } from "sst/aws/task"; const runRet = await task.run(Resource.MyTask); const taskArn = runRet.arn; ``` If you are not using Node.js, you can use the AWS SDK instead. Here's [how to run a task](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html). --- ### Cost By default, this uses a _Linux/X86_ _Fargate_ container with 0.25 vCPUs at $0.04048 per vCPU per hour and 0.5 GB of memory at $0.004445 per GB per hour. It includes 20GB of _Ephemeral Storage_ for free with additional storage at $0.000111 per GB per hour. When using an SST VPC, each task also gets a public IPv4 address at $0.005 per hour. It works out to $0.04048 x 0.25 + $0.004445 x 0.5 + $0.005.
Or **$0.02 per hour** your task runs for. Adjust this for the `cpu`, `memory` and `storage` you are using. And check the prices for _Linux/ARM_ if you are using `arm64` as your `architecture`. The above are rough estimates for _us-east-1_, check out the [Fargate pricing](https://aws.amazon.com/fargate/pricing/) and the [Public IPv4 Address pricing](https://aws.amazon.com/vpc/pricing/) for more details. --- ## Constructor ```ts new Task(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`TaskArgs`](#taskargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## TaskArgs ### architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The CPU architecture of the container. ```js { architecture: "arm64" } ``` ### cluster **Type** [`Cluster`](/docs/component/aws/cluster) The ECS Cluster to use. Create a new `Cluster` in your app, if you haven't already. ```js title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const myCluster = new sst.aws.Cluster("MyCluster", { vpc }); ``` And pass it in. ```js { cluster: myCluster } ``` ### command? **Type** `Input[]>` The command to override the default command in the container. ```js { command: ["npm", "run", "start"] } ``` ### containers? 
**Type** `Input[]` - [`command?`](#containers-command) - [`cpu?`](#containers-cpu) - [`entrypoint?`](#containers-entrypoint) - [`environment?`](#containers-environment) - [`environmentFiles?`](#containers-environmentfiles) - [`image?`](#containers-image) `Input` - [`args?`](#containers-image-args) - [`cache?`](#containers-image-cache) - [`context?`](#containers-image-context) - [`dockerfile?`](#containers-image-dockerfile) - [`secrets?`](#containers-image-secrets) - [`tags?`](#containers-image-tags) - [`target?`](#containers-image-target) - [`logging?`](#containers-logging) `Input` - [`name?`](#containers-logging-name) - [`retention?`](#containers-logging-retention) - [`memory?`](#containers-memory) - [`name`](#containers-name) - [`ssm?`](#containers-ssm) - [`volumes?`](#containers-volumes) `Input[]` - [`efs`](#containers-volumes-efs) `Input<`[`Efs`](/docs/component/aws/efs)` | Object>` - [`accessPoint`](#containers-volumes-efs-accesspoint) - [`fileSystem`](#containers-volumes-efs-filesystem) - [`path`](#containers-volumes-path) The containers to run in the task. :::tip You can optionally run multiple containers in a task. ::: By default this starts a single container. To add multiple containers in the task, pass in an array of containers args. ```ts { containers: [ { name: "app", image: "nginxdemos/hello:plain-text" }, { name: "admin", image: { context: "./admin", dockerfile: "Dockerfile" } } ] } ``` If you specify `containers`, you cannot list the above args at the top-level. For example, you **cannot** pass in `image` at the top level. ```diff lang="ts" { - image: "nginxdemos/hello:plain-text", containers: [ { name: "app", image: "nginxdemos/hello:plain-text" }, { name: "admin", image: "nginxdemos/hello:plain-text" } ] } ``` You will need to pass in `image` as a part of the `containers`. command? **Type** `Input` The command to override the default command in the container. Same as the top-level [`command`](#command). cpu? 
**Type** `"$\{number\} vCPU"` The amount of CPU allocated to the container. By default, a container can use up to all the CPU allocated to all the containers. If set, this container is capped at this allocation even if more idle CPU is available. The sum of all the containers' CPU must be less than or equal to the total available CPU. ```js { cpu: "0.25 vCPU" } ``` entrypoint? **Type** `Input` The entrypoint to override the default entrypoint in the container. Same as the top-level [`entrypoint`](#entrypoint). environment? **Type** `Input>>` Key-value pairs of values that are set as container environment variables. Same as the top-level [`environment`](#environment). environmentFiles? **Type** `Input[]>` A list of Amazon S3 file paths of environment files to load environment variables from. Same as the top-level [`environmentFiles`](#environmentFiles). image? **Type** `Input` Configure the Docker image for the container. Same as the top-level [`image`](#image). args? **Type** `Input>>` Key-value pairs of build args. Same as the top-level [`image.args`](#image-args). cache? **Type** `Input` **Default** `true` Controls whether Docker build cache is enabled. Same as the top-level [`image.cache`](#image-cache). context? **Type** `Input` The path to the Docker build context. Same as the top-level [`image.context`](#image-context). dockerfile? **Type** `Input` The path to the Dockerfile. Same as the top-level [`image.dockerfile`](#image-dockerfile). secrets? **Type** `Input>>` Key-value pairs of [build secrets](https://docs.docker.com/build/building/secrets/) to pass to the Docker build. Unlike build args, secrets are not persisted in the final image. They are available in the Dockerfile via [`--mount=type=secret`](https://docs.docker.com/build/building/secrets/#secret-mounts). 
```js { secrets: { MY_TOKEN: "my-secret-token", } } ``` Then in the Dockerfile, reference it as a file: ```dockerfile title="Dockerfile" RUN --mount=type=secret,id=MY_TOKEN \ cat /run/secrets/MY_TOKEN ``` Or as an environment variable: ```dockerfile title="Dockerfile" RUN --mount=type=secret,id=MY_TOKEN,env=MY_TOKEN \ echo $MY_TOKEN ``` tags? **Type** `Input[]>` Tags to apply to the Docker image. ```js { tags: ["v1.0.0", "commit-613c1b2"] } ``` target? **Type** `Input` The stage to build up to. Same as the top-level [`image.target`](#image-target). logging? **Type** `Input` Configure the logs in CloudWatch. Same as the top-level [`logging`](#logging). name? **Type** `Input` The name of the CloudWatch log group. Same as the top-level [`logging.name`](#logging-name). retention? **Type** `Input<"1 day" | "3 days" | "5 days" | "1 week" | "2 weeks" | "1 month" | "2 months" | "3 months" | "4 months" | "5 months" | "6 months" | "1 year" | "13 months" | "18 months" | "2 years" | "3 years" | "5 years" | "6 years" | "7 years" | "8 years" | "9 years" | "10 years" | "forever">` The duration the logs are kept in CloudWatch. Same as the top-level [`logging.retention`](#logging-retention). memory? **Type** `"$\{number\} GB"` The amount of memory allocated to the container. By default, a container can use up to all the memory allocated to all the containers. If set, the container is capped at this allocation. If exceeded, the container will be killed even if there is idle memory available. The sum of all the containers' memory must be less than or equal to the total available memory. ```js { memory: "0.5 GB" } ``` name **Type** `Input` The name of the container. This is used as the `--name` option in the Docker run command. ssm? **Type** `Input>>` Key-value pairs of AWS Systems Manager Parameter Store parameter ARNs or AWS Secrets Manager secret ARNs. The values will be loaded into the container as environment variables. Same as the top-level [`ssm`](#ssm). volumes? 
**Type** `Input[]` Mount Amazon EFS file systems into the container. Same as the top-level [`volumes`](#volumes). efs **Type** `Input<`[`Efs`](/docs/component/aws/efs)` | Object>` The Amazon EFS file system to mount. accessPoint **Type** `Input` The ID of the EFS access point. fileSystem **Type** `Input` The ID of the EFS file system. path **Type** `Input` The path to mount the volume. ### cpu? **Type** `"0.25 vCPU" | "0.5 vCPU" | "1 vCPU" | "2 vCPU" | "4 vCPU" | "8 vCPU" | "16 vCPU"` **Default** `"0.25 vCPU"` The amount of CPU allocated to the container. If there are multiple containers, this is the total amount of CPU shared across all the containers. :::note [View the valid combinations](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-tasks-services.html#fargate-tasks-size) of CPU and memory. ::: ```js { cpu: "1 vCPU" } ``` ### dev? **Type** `false | Object` - [`command?`](#dev-command) - [`directory?`](#dev-directory) Configure how this component works in `sst dev`. :::note In `sst dev` a _stub_ version of your task is deployed. ::: By default, your task is not deployed in `sst dev`. Instead, you can set the `dev.command` and it'll run locally in a **Tasks** tab in the `sst dev` multiplexer. Here's what happens when you run `sst dev`: 1. A _stub_ version of your task is deployed. This is a minimal image that starts up faster. 2. When your task is started through the SDK, the stub version is provisioned. This can take roughly **10 - 20 seconds**. 3. The stub version proxies the payload to your local machine using the same events system used by [Live](/docs/live/). 4. The `dev.command` is called to run your task locally. Once complete, the stub version of your task is stopped as well. The advantage of this approach is that you can test your task locally even if it's invoked remotely, or through a cron job. :::note You are charged for the time it takes to run the stub version of your task.
::: Since the stub version runs while your task is running, you are charged for the time it takes to run. This is roughly **$0.02 per hour**. To disable this and deploy your task in `sst dev`, pass in `false`. Read more about [Live](/docs/live/) and [`sst dev`](/docs/reference/cli/#dev). command? **Type** `Input` The command that `sst dev` runs in dev mode. directory? **Type** `Input` **Default** Uses the `image.dockerfile` path Change the directory from where the `command` is run. ### entrypoint? **Type** `Input` The entrypoint that overrides the default entrypoint in the container. ```js { entrypoint: ["/usr/bin/my-entrypoint"] } ``` ### environment? **Type** `Input>>` Key-value pairs of values that are set as [container environment variables](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html). The keys need to: 1. Start with a letter. 2. Be at least 2 characters long. 3. Contain only letters, numbers, or underscores. ```js { environment: { DEBUG: "true" } } ``` ### environmentFiles? **Type** `Input[]>` A list of Amazon S3 object ARNs pointing to [environment files](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/use-environment-file.html) used to load environment variables into the container. Each file must be a plain text file in `.env` format. Create an S3 bucket and upload an environment file. ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("EnvBucket"); const file = new aws.s3.BucketObjectv2("EnvFile", { bucket: bucket.name, key: "test.env", content: ["FOO=hello", "BAR=world"].join("\n"), }); ``` And pass in the ARN of the environment file. ```js title="sst.config.ts" { environmentFiles: [file.arn] } ``` ### executionRole? **Type** `Input` **Default** Creates a new role Assigns the given IAM role name to AWS ECS to launch and manage the containers. This allows you to pass in a previously created role. By default, a new IAM role is created. ```js { executionRole: "my-execution-role" } ``` ### image? 
**Type** `Input` - [`args?`](#image-args) - [`cache?`](#image-cache) - [`context?`](#image-context) - [`dockerfile?`](#image-dockerfile) - [`secrets?`](#image-secrets) - [`tags?`](#image-tags) - [`target?`](#image-target) **Default** Build a Docker image from the Dockerfile in the root directory. Configure the Docker build command for building the image or specify a pre-built image. Building a Docker image. Prior to building the image, SST will automatically add the `.sst` directory to the `.dockerignore` if not already present. ```js { image: { context: "./app", dockerfile: "Dockerfile", args: { MY_VAR: "value" } } } ``` Alternatively, you can pass in a pre-built image. ```js { image: "nginxdemos/hello:plain-text" } ``` args? **Type** `Input>>` Key-value pairs of [build args](https://docs.docker.com/build/guide/build-args/) to pass to the Docker build command. ```js { args: { MY_VAR: "value" } } ``` cache? **Type** `Input` **Default** `true` Controls whether Docker build cache is enabled. Disable Docker build caching, useful for environments like Localstack where ECR cache export is not supported. ```js { image: { cache: false } } ``` context? **Type** `Input` **Default** `"."` The path to the [Docker build context](https://docs.docker.com/build/building/context/#local-context). The path is relative to your project's `sst.config.ts`. To change where the Docker build context is located. ```js { context: "./app" } ``` dockerfile? **Type** `Input` **Default** `"Dockerfile"` The path to the [Dockerfile](https://docs.docker.com/reference/cli/docker/image/build/#file). The path is relative to the build `context`. To use a different Dockerfile. ```js { dockerfile: "Dockerfile.prod" } ``` secrets? **Type** `Input>>` Key-value pairs of [build secrets](https://docs.docker.com/build/building/secrets/) to pass to the Docker build. Unlike build args, secrets are not persisted in the final image. 
They are available in the Dockerfile via [`--mount=type=secret`](https://docs.docker.com/build/building/secrets/#secret-mounts). ```js { secrets: { MY_TOKEN: "my-secret-token", } } ``` Then in the Dockerfile, reference it as a file: ```dockerfile title="Dockerfile" RUN --mount=type=secret,id=MY_TOKEN \ cat /run/secrets/MY_TOKEN ``` Or as an environment variable: ```dockerfile title="Dockerfile" RUN --mount=type=secret,id=MY_TOKEN,env=MY_TOKEN \ echo $MY_TOKEN ``` tags? **Type** `Input[]>` Tags to apply to the Docker image. ```js { tags: ["v1.0.0", "commit-613c1b2"] } ``` target? **Type** `Input` The stage to build up to in a [multi-stage Dockerfile](https://docs.docker.com/build/building/multi-stage/#stop-at-a-specific-build-stage). ```js { target: "stage1" } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your containers. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access it in your app using the [SDK](/docs/reference/sdk/). Takes a list of components to link to the containers. ```js { link: [bucket, stripeKey] } ``` ### logging? **Type** `Input` - [`name?`](#logging-name) - [`retention?`](#logging-retention) **Default** `{ retention: "1 month" }` Configure the logs in CloudWatch. ```js { logging: { retention: "forever" } } ``` name? **Type** `Input` **Default** `"/sst/cluster/${CLUSTER_NAME}/${SERVICE_NAME}/${CONTAINER_NAME}"` The name of the CloudWatch log group. If omitted, the log group name is generated based on the cluster name, service name, and container name. retention? **Type** `Input<"1 day" | "3 days" | "5 days" | "1 week" | "2 weeks" | "1 month" | "2 months" | "3 months" | "4 months" | "5 months" | "6 months" | "1 year" | "13 months" | "18 months" | "2 years" | "3 years" | "5 years" | "6 years" | "7 years" | "8 years" | "9 years" | "10 years" | "forever">` **Default** `"1 month"` The duration the logs are kept in CloudWatch. ### memory? 
**Type** `"$\{number\} GB"` **Default** `"0.5 GB"` The amount of memory allocated to the container. If there are multiple containers, this is the total amount of memory shared across all the containers. :::note [View the valid combinations](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-tasks-services.html#fargate-tasks-size) of CPU and memory. ::: ```js { memory: "2 GB" } ``` ### permissions? **Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that you need to access. These permissions are used to create the [task role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html). :::tip If you `link` the service to a resource, the permissions to access it are automatically added. ::: Allow the container to read and write to an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Allow the container to perform all actions on an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:*"], resources: ["arn:aws:s3:::my-bucket/*"] }, ] } ``` Allow the container to perform all actions on all resources. ```js { permissions: [ { actions: ["*"], resources: ["*"] }, ] } ``` actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. 
```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```ts { effect: "deny" } ``` resources **Type** `Input[]>` The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html). ```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### public? **Type** `boolean` **Default** `false` Make the task publicly accessible from the internet. :::note Tasks in an SST VPC are placed in public subnets with a public IP by default for outbound internet access. The `public` property controls whether the task is _reachable_ from the internet. ::: If you are using a custom VPC, you must also set `vpc.publicSubnets` on the Cluster. ```ts { public: true } ``` ### ssm? **Type** `Input>>` Key-value pairs of AWS Systems Manager Parameter Store parameter ARNs or AWS Secrets Manager secret ARNs. The values will be loaded into the container as environment variables. 
```js { ssm: { DATABASE_PASSWORD: "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret-123abc" } } ``` ### storage? **Type** `"$\{number\} GB"` **Default** `"20 GB"` The amount of ephemeral storage (in GB) allocated to the container. ```js { storage: "100 GB" } ``` ### taskRole? **Type** `Input` **Default** Creates a new role Assigns the given IAM role name to the containers. This allows you to pass in a previously created role. :::caution When you pass in a role, SST will not update it if you add `permissions` or `link` resources. ::: By default, a new IAM role is created. It'll update this role if you add `permissions` or `link` resources. However, if you pass in a role, you'll need to update it manually if you add `permissions` or `link` resources. ```js { taskRole: "my-task-role" } ``` ### transform? **Type** `Object` - [`executionRole?`](#transform-executionrole) - [`image?`](#transform-image) - [`logGroup?`](#transform-loggroup) - [`taskDefinition?`](#transform-taskdefinition) - [`taskRole?`](#transform-taskrole) [Transform](/docs/components#transform) how this component creates its underlying resources. executionRole? **Type** [`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)` | (args: `[`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Execution IAM Role resource. image? **Type** [`ImageArgs`](https://www.pulumi.com/registry/packages/docker-build/api-docs/image/#inputs)` | (args: `[`ImageArgs`](https://www.pulumi.com/registry/packages/docker-build/api-docs/image/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Docker Image resource. logGroup? 
**Type** [`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)` | (args: `[`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudWatch log group resource. taskDefinition? **Type** [`TaskDefinitionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/taskdefinition/#inputs)` | (args: `[`TaskDefinitionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/taskdefinition/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Task Definition resource. taskRole? **Type** [`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)` | (args: `[`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the ECS Task IAM Role resource. ### volumes? **Type** `Input[]` - [`efs`](#volumes-efs) `Input<`[`Efs`](/docs/component/aws/efs)` | Object>` - [`accessPoint`](#volumes-efs-accesspoint) - [`fileSystem`](#volumes-efs-filesystem) - [`path`](#volumes-path) Mount Amazon EFS file systems into the container. Create an EFS file system. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const fileSystem = new sst.aws.Efs("MyFileSystem", { vpc }); ``` And pass it in. ```js { volumes: [ { efs: fileSystem, path: "/mnt/efs" } ] } ``` Or pass in the EFS file system ID. ```js { volumes: [ { efs: { fileSystem: "fs-12345678", accessPoint: "fsap-12345678" }, path: "/mnt/efs" } ] } ``` efs **Type** `Input<`[`Efs`](/docs/component/aws/efs)` | Object>` The Amazon EFS file system to mount. accessPoint **Type** `Input` The ID of the EFS access point. 
fileSystem **Type** `Input` The ID of the EFS file system. path **Type** `Input` The path to mount the volume. ## Properties ### nodes **Type** `Object` - [`executionRole`](#nodes-executionrole) - [`publicSecurityGroup`](#nodes-publicsecuritygroup) - [`taskDefinition`](#nodes-taskdefinition) - [`taskRole`](#nodes-taskrole) The underlying [resources](/docs/components/#nodes) this component creates. executionRole **Type** [`Role`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/) The Amazon ECS Execution Role. publicSecurityGroup **Type** `undefined | `[`SecurityGroup`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/) The AWS Security Group for public tasks. Only created when `public` is `true`. taskDefinition **Type** `Output<`[`TaskDefinition`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/taskdefinition/)`>` The Amazon ECS Task Definition. taskRole **Type** [`Role`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/) The Amazon ECS Task Role. ### taskDefinition **Type** `Output` The ARN of the ECS Task Definition. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `assignPublicIp` `boolean` Whether to assign a public IP address to the task. - `cluster` `string` The ARN of the cluster this task is deployed to. - `containers` `string[]` The names of the containers in the task. - `securityGroups` `string[]` The security groups for the task. - `subnets` `string[]` The subnets for the task. - `taskDefinition` `string` The ARN of the ECS Task Definition. The `task` client SDK is available through the following. ```js title="src/app.ts" import { task } from "sst/aws/task"; ``` If you are not using Node.js, you can use the AWS SDK instead. For example, you can call [`RunTask`](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html) to run a task. 
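If you go the AWS SDK route, the linked values above map directly onto the `RunTask` parameters. Here's a rough sketch using the AWS SDK for JavaScript v3; the `MyTask` link name is a placeholder for whatever you named your task.

```typescript
import { Resource } from "sst";
import { ECSClient, RunTaskCommand } from "@aws-sdk/client-ecs";

const ecs = new ECSClient({});

// Start the task by passing the linked values to RunTask.
await ecs.send(
  new RunTaskCommand({
    cluster: Resource.MyTask.cluster,
    taskDefinition: Resource.MyTask.taskDefinition,
    launchType: "FARGATE",
    networkConfiguration: {
      awsvpcConfiguration: {
        subnets: Resource.MyTask.subnets,
        securityGroups: Resource.MyTask.securityGroups,
        assignPublicIp: Resource.MyTask.assignPublicIp ? "ENABLED" : "DISABLED",
      },
    },
  })
);
```

This is roughly what the `task` client does for you internally, so in Node.js you'd typically reach for the client instead.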
--- ### describe ```ts task.describe(resource, task, options?) ``` #### Parameters - `resource` [`Resource`](#resource) - `task` `string` - `options?` [`Options`](#options) **Returns** `Promise<`[`DescribeResponse`](#describeresponse)`>` Get the details of a given task. :::note If a task had been stopped over an hour ago, it's not returned. ::: For example, let's say you had previously started a task. ```js title="src/app.ts" const runRet = await task.run(Resource.MyTask); const taskArn = runRet.tasks[0].taskArn; ``` You can use that to get the details of the task. ```js title="src/app.ts" const describeRet = await task.describe(Resource.MyTask, taskArn); console.log(describeRet.status); ``` If you are not using Node.js, you can use the AWS SDK and call [`DescribeTasks`](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DescribeTasks.html). ### run ```ts task.run(resource, environment?, options?) ``` #### Parameters - `resource` [`Resource`](#resource) - `environment?` `Record` - `options?` [`RunOptions`](#runoptions) **Returns** `Promise<`[`RunResponse`](#runresponse)`>` Runs a task. For example, let's say you have defined a task. ```js title="sst.config.ts" new sst.aws.Task("MyTask", { cluster }); ``` You can then run the task in your application with the SDK. ```js title="src/app.ts" {4} const runRet = await task.run(Resource.MyTask); const taskArn = runRet.tasks[0].taskArn; ``` This internally calls an AWS SDK API that returns an array of tasks. But in our case, there's only one task. The `taskArn` is the ARN of the task. You can use it to call the `describe` or `stop` functions. You can also pass in any environment variables to the task. ```js title="src/app.ts" await task.run(Resource.MyTask, { MY_ENV_VAR: "my-value" }); ``` If you are not using Node.js, you can use the AWS SDK and call [`RunTask`](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RunTask.html). ### stop ```ts task.stop(resource, task, options?) 
``` #### Parameters - `resource` [`Resource`](#resource) - `task` `string` - `options?` [`Options`](#options) **Returns** `Promise<`[`StopResponse`](#stopresponse)`>` Stops a task. For example, let's say you had previously started a task. ```js title="src/app.ts" const runRet = await task.run(Resource.MyTask); const taskArn = runRet.tasks[0].taskArn; ``` You can stop the task with the following. ```js title="src/app.ts" const stopRet = await task.stop(Resource.MyTask, taskArn); ``` Stopping a task is asynchronous. When you call `stop`, AWS marks a task to be stopped, but it may take a few minutes for the task to actually stop. :::note Stopping a task is asynchronous. ::: In most cases you probably don't need to check if it has been stopped. But if necessary, you can use the `describe` function to get a task's status. If you are not using Node.js, you can use the AWS SDK and call [`StopTask`](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_StopTask.html). ### DescribeResponse **Type** `Object` - [`arn`](#describeresponse-arn) - [`response`](#describeresponse-response) - [`status`](#describeresponse-status) arn **Type** `string` The ARN of the task. response **Type** [`@aws-sdk/client-ecs.DescribeTasksResponse`](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-client-ecs/Interface/DescribeTasksResponse/) The raw response from the AWS ECS DescribeTasks API. status **Type** `string` The status of the task. ### Options **Type** `Object` - [`aws?`](#options-aws) aws? **Type** `Object` Configure the options for the [aws4fetch](https://github.com/mhart/aws4fetch) [`AWSClient`](https://github.com/mhart/aws4fetch?tab=readme-ov-file#new-awsclientoptions) used internally by the SDK. 
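`task.run` also accepts per-invocation overrides through its options argument (see `RunOptions` below). For example, here's a sketch that runs a task on Fargate Spot with more memory than the task definition allocates; the `MyTask` link name, the import path, and the `JOB_ID` variable are placeholders.

```typescript
import { Resource } from "sst";
import { task } from "sst/aws/task";

// Run this invocation on Fargate Spot and override the
// memory from the task definition, just for this run.
await task.run(
  Resource.MyTask,
  { JOB_ID: "job-123" },
  { capacity: "spot", memory: "2 GB" }
);
```

Spot capacity is cheaper but can be interrupted, so it's best suited to retryable background work.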
### Resource **Type** `Object` - [`assignPublicIp`](#resource-assignpublicip) - [`cluster`](#resource-cluster) - [`containers`](#resource-containers) - [`securityGroups`](#resource-securitygroups) - [`subnets`](#resource-subnets) - [`taskDefinition`](#resource-taskdefinition) assignPublicIp **Type** `boolean` Whether to assign a public IP address to the task. cluster **Type** `string` The ARN of the cluster. containers **Type** `string[]` The names of the containers in the task. securityGroups **Type** `string[]` The security groups to use for the task. subnets **Type** `string[]` The subnets to use for the task. taskDefinition **Type** `string` The ARN of the task definition. ### RunOptions **Type** `Object` - [`aws?`](#runoptions-aws) - [`capacity?`](#runoptions-capacity) - [`cpu?`](#runoptions-cpu) - [`memory?`](#runoptions-memory) - [`storage?`](#runoptions-storage) aws? **Type** `Object` Configure the options for the [aws4fetch](https://github.com/mhart/aws4fetch) [`AWSClient`](https://github.com/mhart/aws4fetch?tab=readme-ov-file#new-awsclientoptions) used internally by the SDK. capacity? **Type** `"fargate" | "spot"` **Default** `"fargate"` Configure the capacity provider, either regular Fargate or Fargate Spot, for this task. cpu? **Type** `"$\{number\} vCPU"` Overrides the CPU allocated for this task in the task definition. memory? **Type** `"$\{number\} GB"` Overrides the memory allocated for this task in the task definition. storage? **Type** `"$\{number\} GB"` Overrides the ephemeral storage allocated for this task in the task definition. ### RunResponse **Type** `Object` - [`arn`](#runresponse-arn) - [`response`](#runresponse-response) - [`status`](#runresponse-status) arn **Type** `string` The ARN of the task. response **Type** [`@aws-sdk/client-ecs.RunTaskResponse`](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-client-ecs/Interface/RunTaskResponse/) The raw response from the AWS ECS RunTask API. 
status **Type** `string` The status of the task. ### StopResponse **Type** `Object` - [`arn`](#stopresponse-arn) - [`response`](#stopresponse-response) - [`status`](#stopresponse-status) arn **Type** `string` The ARN of the task. response **Type** [`@aws-sdk/client-ecs.StopTaskResponse`](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-client-ecs/Interface/StopTaskResponse/) The raw response from the AWS ECS StopTask API. status **Type** `string` The status of the task. --- ## Vector Reference doc for the `sst.aws.Vector` component. https://sst.dev/docs/component/aws/vector The `Vector` component lets you store and retrieve vector data in your app. - It uses a vector database powered by [RDS Postgres Serverless v2](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.html). - Provides an [SDK](/docs/reference/sdk/) to query, put, and remove the vector data. #### Create the database ```ts title="sst.config.ts" const vector = new sst.aws.Vector("MyVectorDB", { dimension: 1536 }); ``` #### Link to a resource You can link it to other resources, like a function or your Next.js app. ```ts title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { link: [vector] }); ``` Once linked, you can query it in your function code using the [SDK](/docs/reference/sdk/). ```ts title="app/page.tsx" await VectorClient("MyVectorDB").query({ vector: [32.4, 6.55, 11.2, 10.3, 87.9] }); ``` --- ## Constructor ```ts new Vector(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`VectorArgs`](#vectorargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## VectorArgs ### dimension **Type** `Input` The dimension size of each vector. The maximum supported dimension is 2000. To store vectors with greater dimension, use dimensionality reduction to reduce the dimension to 2000 or less. 
OpenAI supports [dimensionality reduction](https://platform.openai.com/docs/api-reference/embeddings/create#embeddings-create-dimensions) automatically when generating embeddings. :::caution Changing the dimension will cause the data to be cleared. ::: ```js { dimension: 1536 } ``` ### transform? **Type** `Object` - [`postgres?`](#transform-postgres) [Transform](/docs/components#transform) how this component creates its underlying resources. postgres? **Type** [`PostgresArgs`](/docs/component/aws/postgres-v1#postgresargs)` | (args: `[`PostgresArgs`](/docs/component/aws/postgres-v1#postgresargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Postgres component. ## Properties ### clusterID **Type** `Output` The ID of the RDS Postgres Cluster. ### nodes **Type** `Object` - [`postgres`](#nodes-postgres) The underlying [resources](/docs/components/#nodes) this component creates. postgres **Type** [`Postgres`](/docs/component/aws/postgres-v1) The Postgres database. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### VectorClient ```ts VectorClient(name) ``` #### Parameters - `name` `T` **Returns** [`VectorClientResponse`](#vectorclientresponse) Create a client to interact with the Vector database. ```ts title="src/lambda.ts" const client = VectorClient("MyVectorDB"); ``` Store a vector into the db ```ts title="src/lambda.ts" await client.put({ vector: [32.4, 6.55, 11.2, 10.3, 87.9], metadata: { type: "movie", genre: "comedy" }, }); ``` Query vectors that are similar to the given vector ```ts title="src/lambda.ts" const result = await client.query({ vector: [32.4, 6.55, 11.2, 10.3, 87.9], include: { type: "movie" }, exclude: { genre: "thriller" }, }); ``` ### PutEvent **Type** `Object` - [`metadata`](#putevent-metadata) - [`vector`](#putevent-vector) metadata **Type** `Record` Metadata for the event as JSON. 
This will be used to filter when querying and removing vectors. ```js { metadata: { type: "movie", id: "movie-123", name: "Spiderman" } } ``` vector **Type** `number[]` The vector to store in the database. ```js { vector: [32.4, 6.55, 11.2, 10.3, 87.9] } ``` ### QueryEvent **Type** `Object` - [`count?`](#queryevent-count) - [`exclude?`](#queryevent-exclude) - [`include?`](#queryevent-include) - [`threshold?`](#queryevent-threshold) - [`vector?`](#queryevent-vector) count? **Type** `number` **Default** `10` The number of results to return. ```js { count: 10 } ``` exclude? **Type** `Record` Exclude vectors with metadata that match the provided fields. Given this filter. ```js { include: { type: "movie", release: "2001" }, exclude: { name: "Spiderman" } } ``` This will match a vector with metadata: ```js { type: "movie", name: "A Beautiful Mind", release: "2001" } ``` But not a vector with the metadata: ```js { type: "book", name: "Spiderman", release: "2001" } ``` include? **Type** `Record` **Default** `{}` The metadata used to filter the vectors. Only vectors that match the provided fields will be returned. Given this filter. ```js { include: { type: "movie", release: "2001" } } ``` It will match a vector with the metadata: ```js { type: "movie", name: "Spiderman", release: "2001" } ``` But not a vector with this metadata: ```js { type: "book", name: "Spiderman", release: "2001" } ``` threshold? **Type** `number` **Default** `0` The threshold of similarity between the prompt and the queried vectors. Only vectors with a similarity score higher than the threshold will be returned. The similarity score is a value between 0 and 1. - `0` means the prompt and the queried vectors are completely different. - `1` means the prompt and the queried vectors are identical. ```js { threshold: 0.5 } ``` vector? **Type** `number[]` The vector used to query the database. When omitted, performs a metadata-only query and returns `score: 0` for each result. 
```js { vector: [32.4, 6.55, 11.2, 10.3, 87.9] } ``` ### QueryResponse **Type** `Object` - [`results`](#queryresponse-results) `Object[]` - [`metadata`](#queryresponse-results-metadata) - [`score`](#queryresponse-results-score) results **Type** `Object[]` List of results matching the query. metadata **Type** `Record` Metadata for the event that was provided when storing the vector. score **Type** `number` The similarity score between the prompt and the queried vector. ### RemoveEvent **Type** `Object` - [`include`](#removeevent-include) include **Type** `Record` The metadata used to filter the removal of vectors. Only vectors with metadata that match the provided fields will be removed. To remove vectors for movie with id `movie-123`: ```js { include: { id: "movie-123", } } ``` To remove vectors for all _movies_: ```js { include: { type: "movie", } } ``` ### VectorClientResponse **Type** `Object` - [`put`](#vectorclientresponse-put) - [`query`](#vectorclientresponse-query) - [`remove`](#vectorclientresponse-remove) put **Type** (event: [`PutEvent`](#putevent)) => `Promise` Store a vector into the database. ```ts title="src/lambda.ts" await client.put({ vector: [32.4, 6.55, 11.2, 10.3, 87.9], metadata: { type: "movie", genre: "comedy" }, }); ``` query **Type** (event: [`QueryEvent`](#queryevent)) => `Promise<`[`QueryResponse`](#queryresponse)`>` Query vectors that are similar to the given vector. Query by vector similarity. ```ts title="src/lambda.ts" const result = await client.query({ vector: [32.4, 6.55, 11.2, 10.3, 87.9], include: { type: "movie" }, exclude: { genre: "thriller" }, }); ``` Query by metadata only (no vector). Returns `score: 0` for each result. ```ts title="src/lambda.ts" const result = await client.query({ include: { type: "movie" }, }); ``` remove **Type** (event: [`RemoveEvent`](#removeevent)) => `Promise` Remove vectors from the database. 
```ts title="src/lambda.ts" await client.remove({ include: { type: "movie" }, }); ``` ## Methods ### static get ```ts Vector.get(name, clusterID) ``` #### Parameters - `name` `string` The name of the component. - `clusterID` `Input` The RDS cluster id of the existing Vector database. **Returns** [`Vector`](.) Reference an existing Vector database with the given name. This is useful when you create a Vector database in one stage and want to share it in another. It avoids having to create a new Vector database in the other stage. :::tip You can use the `static get` method to share Vector databases across stages. ::: Imagine you create a vector database in the `dev` stage. And in your personal stage `frank`, instead of creating a new database, you want to share the same database from `dev`. ```ts title="sst.config.ts" const vector = $app.stage === "frank" ? sst.aws.Vector.get("MyVectorDB", "app-dev-myvectordb") : new sst.aws.Vector("MyVectorDB", { dimension: 1536 }); ``` Here `app-dev-myvectordb` is the ID of the underlying Postgres cluster created in the `dev` stage. You can find this by outputting the cluster ID in the `dev` stage. ```ts title="sst.config.ts" return { cluster: vector.clusterID }; ``` :::note The Vector component creates a Postgres cluster and lambda functions for interfacing with the VectorDB. The `static get` method only shares the underlying Postgres cluster. Each stage will have its own lambda functions. ::: --- ## Vpc.v1 Reference doc for the `sst.aws.Vpc.v1` component. https://sst.dev/docs/component/aws/vpc-v1 The `Vpc` component lets you add a VPC to your app, but it has been deprecated because it does not support modifying the number of Availability Zones (AZs) after VPC creation. For existing usage, rename `sst.aws.Vpc` to `sst.aws.Vpc.v1`. For new VPCs, use the latest [`Vpc`](/docs/component/aws/vpc) component instead. :::caution This component has been deprecated. ::: This creates a VPC with 2 Availability Zones by default. 
It also creates the following resources: 1. A security group. 2. A public subnet in each AZ. 3. A private subnet in each AZ. 4. An Internet Gateway. All the traffic from the public subnets is routed through it. 5. A NAT Gateway in each AZ. All the traffic from the private subnets is routed to the NAT Gateway in the same AZ. :::note By default, this creates two NAT Gateways, one in each AZ. And it roughly costs $33 per NAT Gateway per month. ::: NAT Gateways are billed per hour and per gigabyte of data processed. Make sure to [review the pricing](https://aws.amazon.com/vpc/pricing/). #### Create a VPC ```ts title="sst.config.ts" new sst.aws.Vpc.v1("MyVPC"); ``` #### Create it with 3 Availability Zones ```ts title="sst.config.ts" {2} new sst.aws.Vpc.v1("MyVPC", { az: 3 }); ``` --- ## Constructor ```ts new Vpc.v1(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`VpcArgs`](#vpcargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## VpcArgs ### az? **Type** `Input` **Default** `2` Number of Availability Zones or AZs for the VPC. By default, it creates a VPC with 2 AZs since services like RDS and Fargate need at least 2 AZs. ```ts { az: 3 } ``` ### transform? **Type** `Object` - [`elasticIp?`](#transform-elasticip) - [`internetGateway?`](#transform-internetgateway) - [`natGateway?`](#transform-natgateway) - [`privateRouteTable?`](#transform-privateroutetable) - [`privateSubnet?`](#transform-privatesubnet) - [`publicRouteTable?`](#transform-publicroutetable) - [`publicSubnet?`](#transform-publicsubnet) - [`securityGroup?`](#transform-securitygroup) - [`vpc?`](#transform-vpc) [Transform](/docs/components#transform) how this component creates its underlying resources. elasticIp? 
**Type** [`EipArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/eip/#inputs)` | (args: `[`EipArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/eip/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 Elastic IP resource. internetGateway? **Type** [`InternetGatewayArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/internetgateway/#inputs)` | (args: `[`InternetGatewayArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/internetgateway/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 Internet Gateway resource. natGateway? **Type** [`NatGatewayArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/natgateway/#inputs)` | (args: `[`NatGatewayArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/natgateway/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 NAT Gateway resource. privateRouteTable? **Type** [`RouteTableArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/routetable/#inputs)` | (args: `[`RouteTableArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/routetable/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 route table resource for the private subnet. privateSubnet? **Type** [`SubnetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/subnet/#inputs)` | (args: `[`SubnetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/subnet/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 private subnet resource. publicRouteTable? 
**Type** [`RouteTableArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/routetable/#inputs)` | (args: `[`RouteTableArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/routetable/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 route table resource for the public subnet. publicSubnet? **Type** [`SubnetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/subnet/#inputs)` | (args: `[`SubnetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/subnet/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 public subnet resource. securityGroup? **Type** [`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)` | (args: `[`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 Security Group resource. vpc? **Type** [`VpcArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/vpc/#inputs)` | (args: `[`VpcArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/vpc/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 VPC resource. ## Properties ### id **Type** `Output<string>` The VPC ID.
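For example, the `id` output can be returned from your config so other stages or tools can reference it. A minimal sketch inside your config's `run` function:

```ts
// Inside your config's run() function
const vpc = new sst.aws.Vpc.v1("MyVPC");

// Expose the VPC ID as a stage output
return { vpc: vpc.id };
```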
### nodes **Type** `Object` - [`elasticIps`](#nodes-elasticips) - [`internetGateway`](#nodes-internetgateway) - [`natGateways`](#nodes-natgateways) - [`privateRouteTables`](#nodes-privateroutetables) - [`privateSubnets`](#nodes-privatesubnets) - [`publicRouteTables`](#nodes-publicroutetables) - [`publicSubnets`](#nodes-publicsubnets) - [`securityGroup`](#nodes-securitygroup) - [`vpc`](#nodes-vpc) The underlying [resources](/docs/components/#nodes) this component creates. elasticIps **Type** `Output<`[`Eip`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/eip/)`[]>` The Amazon EC2 Elastic IPs. internetGateway **Type** [`InternetGateway`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/internetgateway/) The Amazon EC2 Internet Gateway. natGateways **Type** `Output<`[`NatGateway`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/natgateway/)`[]>` The Amazon EC2 NAT Gateways. privateRouteTables **Type** `Output<`[`RouteTable`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/routetable/)`[]>` The Amazon EC2 route tables for the private subnets. privateSubnets **Type** `Output<`[`Subnet`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/subnet/)`[]>` The Amazon EC2 private subnets. publicRouteTables **Type** `Output<`[`RouteTable`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/routetable/)`[]>` The Amazon EC2 route tables for the public subnets. publicSubnets **Type** `Output<`[`Subnet`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/subnet/)`[]>` The Amazon EC2 public subnets. securityGroup **Type** [`SecurityGroup`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/) The Amazon EC2 Security Group. vpc **Type** [`Vpc`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/vpc/) The Amazon EC2 VPC. ### privateSubnets **Type** `Output<string[]>` A list of private subnet IDs in the VPC. ### publicSubnets **Type** `Output<string[]>` A list of public subnet IDs in the VPC.
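These subnet ID lists can be passed directly to resources that need to live in the VPC. A minimal sketch, assuming the preloaded AWS provider and a hypothetical `MySubnetGroup` name, that places an RDS subnet group in the private subnets:

```ts
const vpc = new sst.aws.Vpc.v1("MyVPC");

// RDS subnet groups take a list of subnet IDs
new aws.rds.SubnetGroup("MySubnetGroup", {
  subnetIds: vpc.privateSubnets,
});
```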
### securityGroups **Type** `Output<string[]>` A list of VPC security group IDs. ## Methods ### static get ```ts Vpc.get(name, vpcID) ``` #### Parameters - `name` `string` The name of the component. - `vpcID` `Input<string>` The ID of the existing VPC. **Returns** [`Vpc`](.) Reference an existing VPC with the given ID. This is useful when you create a VPC in one stage and want to share it in another stage. It avoids having to create a new VPC in the other stage. :::tip You can use the `static get` method to share VPCs across stages. ::: Imagine you create a VPC in the `dev` stage. And in your personal stage `frank`, instead of creating a new VPC, you want to share the VPC from `dev`. ```ts title="sst.config.ts" const vpc = $app.stage === "frank" ? sst.aws.Vpc.v1.get("MyVPC", "vpc-0be8fa4de860618bb") : new sst.aws.Vpc.v1("MyVPC"); ``` Here `vpc-0be8fa4de860618bb` is the ID of the VPC created in the `dev` stage. You can find this by outputting the VPC ID in the `dev` stage. ```ts title="sst.config.ts" return { vpc: vpc.id }; ``` --- ## Vpc Reference doc for the `sst.aws.Vpc` component. https://sst.dev/docs/component/aws/vpc The `Vpc` component lets you add a VPC to your app. It uses [Amazon VPC](https://docs.aws.amazon.com/vpc/). This is useful for services like RDS and Fargate that need to be hosted inside a VPC. This creates a VPC with 2 Availability Zones by default. It also creates the following resources: 1. A default security group blocking all incoming internet traffic. 2. A public subnet in each AZ. 3. A private subnet in each AZ. 4. An Internet Gateway. All the traffic from the public subnets is routed through it. 5. If `nat` is enabled, a NAT Gateway or NAT instance in each AZ. All the traffic from the private subnets is routed to the NAT in the same AZ. :::note By default, this does not create NAT Gateways or NAT instances.
::: #### Create a VPC ```ts title="sst.config.ts" new sst.aws.Vpc("MyVPC"); ``` #### Create it with 3 Availability Zones ```ts title="sst.config.ts" {2} new sst.aws.Vpc("MyVPC", { az: 3 }); ``` #### Enable NAT ```ts title="sst.config.ts" {2} new sst.aws.Vpc("MyVPC", { nat: "managed" }); ``` --- ### Cost By default, this component costs **$0.50 per month** for the CloudMap hosted zone used for service discovery. Following is the cost to enable the `nat` or `bastion` options. #### Managed NAT If you enable `nat` with the `managed` option, it uses a _NAT Gateway_ per `az` at $0.045 per hour, and $0.045 per GB processed per month. That works out to a minimum of $0.045 x 2 x 24 x 30 or **$65 per month**. Adjust this for the number of `az` and add $0.045 per GB processed per month. The above are rough estimates for _us-east-1_, check out the [NAT Gateway pricing](https://aws.amazon.com/vpc/pricing/) for more details. Standard [data transfer charges](https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer) apply. #### EC2 NAT If you enable `nat` with the `ec2` option, it uses a `t4g.nano` EC2 _On Demand_ instance per `az` at $0.0042 per hour, and $0.09 per GB processed per month for the first 10TB. That works out to a minimum of $0.0042 x 2 x 24 x 30 or **$6 per month**. Adjust this for the `nat.ec2.instance` you are using and add $0.09 per GB processed per month. The above are rough estimates for _us-east-1_, check out the [EC2 On-Demand pricing](https://aws.amazon.com/vpc/pricing/) and the [EC2 Data Transfer pricing](https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer) for more details. #### Bastion If you enable `bastion`, it uses a single `t4g.nano` EC2 _On Demand_ instance at $0.0042 per hour, and $0.09 per GB processed per month for the first 10TB. That works out to $0.0042 x 24 x 30 or **$3 per month**. Add $0.09 per GB processed per month. However, if `nat: "ec2"` is enabled, one of the NAT EC2 instances will be reused, making this **free**.
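As a sanity check, the arithmetic behind these monthly estimates can be reproduced directly. This reruns the hourly rates quoted above over a 720-hour month; it introduces no new pricing data:

```typescript
const hoursPerMonth = 24 * 30; // the 720-hour month used in the estimates above

// Managed NAT: one NAT Gateway per AZ at $0.045/hour, 2 AZs
const managedNat = 0.045 * 2 * hoursPerMonth; // 64.8, i.e. ~$65/month

// EC2 NAT: one t4g.nano per AZ at $0.0042/hour, 2 AZs
const ec2Nat = 0.0042 * 2 * hoursPerMonth; // 6.048, i.e. ~$6/month

// Bastion: a single t4g.nano at $0.0042/hour
const bastionCost = 0.0042 * hoursPerMonth; // 3.024, i.e. ~$3/month

console.log(Math.round(managedNat), Math.round(ec2Nat), Math.round(bastionCost));
```

Per-GB data processing charges come on top of these baselines.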
The above are rough estimates for _us-east-1_, check out the [EC2 On-Demand pricing](https://aws.amazon.com/vpc/pricing/) and the [EC2 Data Transfer pricing](https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer) for more details. --- ## Constructor ```ts new Vpc(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`VpcArgs`](#vpcargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## VpcArgs ### az? **Type** `Input<number | string[]>` **Default** `2` Specify the Availability Zones or AZs for the VPC. You can specify a number of AZs or a list of AZs. If you specify a number, it will look up the availability zones in the region and automatically select that number of AZs. If you specify a list of AZs, it will use that list. By default, it creates a VPC with 2 availability zones since services like RDS and Fargate need at least 2 AZs. Create a VPC with 3 AZs ```ts { az: 3 } ``` Create a VPC with specific AZs ```ts { az: ["us-east-1a", "us-east-1b"] } ``` ### bastion? **Type** `Input<boolean | Object>` - [`instanceProfile`](#bastion-instanceprofile) **Default** `false` Configures a bastion host that can be used to connect to resources in the VPC. When enabled, an EC2 instance of type `t4g.nano` with the bastion AMI will be launched in a public subnet. The instance will have AWS Session Manager (SSM) enabled for secure access without the need for an SSH key. You can optionally provide an existing IAM instance profile by name for the bastion. It costs roughly $3 per month to run the `t4g.nano` instance. :::note If `nat: "ec2"` is enabled, the bastion host will reuse the NAT EC2 instance. ::: In that case, no additional EC2 instance is created. If you are running `sst dev`, a tunnel will be automatically created to the bastion host. This uses a network interface to forward traffic from your local machine to the bastion host.
You can learn more about [`sst tunnel`](/docs/reference/cli#tunnel). ```ts { bastion: true } ``` Use an existing instance profile by name. Bastion is automatically enabled when you provide an instance profile. ```ts { bastion: { instanceProfile: "my-bastion-profile" } } ``` instanceProfile **Type** `Input<string>` The name of an existing IAM instance profile to use for the bastion. ### nat? **Type** `Input<"ec2" | "managed" | Object>` - [`ec2?`](#nat-ec2) `Input<Object>` - [`ami?`](#nat-ec2-ami) - [`instance`](#nat-ec2-instance) - [`role?`](#nat-ec2-role) - [`ip?`](#nat-ip) - [`type?`](#nat-type) **Default** NAT is disabled Configures NAT. Enabling NAT allows resources in private subnets to connect to the internet. There are two NAT options: 1. `"managed"` creates a [NAT Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) 2. `"ec2"` creates an [EC2 instance](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) with the [fck-nat](https://github.com/AndrewGuenther/fck-nat) AMI For `"managed"`, a NAT Gateway is created in each AZ. All the traffic from the private subnets is routed to the NAT Gateway in the same AZ. NAT Gateways are billed per hour and per gigabyte of data processed. A NAT Gateway for two AZs costs $65 per month. This is relatively expensive but it automatically scales based on the traffic. For `"ec2"`, an EC2 instance of type `t4g.nano` will be launched in each AZ with the [fck-nat](https://github.com/AndrewGuenther/fck-nat) AMI. All the traffic from the private subnets is routed to the Elastic Network Interface (ENI) of the EC2 instance in the same AZ. :::tip The `"ec2"` option uses fck-nat and is 10x cheaper than the `"managed"` NAT Gateway. ::: NAT EC2 instances are much cheaper than NAT Gateways; the `t4g.nano` instance type is around $3 per month. But you'll need to scale it up manually if you need more bandwidth. ```ts { nat: "managed" } ``` ec2?
**Type** `Input<Object>` **Default** `{instance: "t4g.nano"}` Configures the NAT EC2 instance. ```ts { nat: { ec2: { instance: "t4g.large" } } } ``` ami? **Type** `Input<string>` **Default** The latest `fck-nat` AMI The AMI to use for the NAT. By default, the latest public [`fck-nat`](https://github.com/AndrewGuenther/fck-nat) AMI is used. However, if the AMI is not available in the region you are deploying to or you want to use a custom AMI, you can specify a different AMI. ```ts { nat: { ec2: { ami: "ami-1234567890abcdef0" } } } ``` instance **Type** `Input<string>` **Default** `"t4g.nano"` The type of instance to use for the NAT. role? **Type** `Input<string>` **Default** A new IAM role is created The name of an existing IAM role to use for the NAT instance. By default, a new IAM role with SSM managed instance core permissions is created. Use this to provide a custom role with additional permissions or to comply with organizational policies. ```ts { nat: { ec2: { role: "my-nat-instance-role" } } } ``` ip? **Type** `Input<string[]>` A list of Elastic IP allocation IDs to use for the NAT Gateways or NAT instances. The number of allocation IDs must match the number of AZs. By default, new Elastic IP addresses are created. ```ts { nat: { ip: ["eipalloc-0123456789abcdef0", "eipalloc-0123456789abcdef1"] } } ``` type? **Type** `Input<"ec2" | "managed">` Configures the type of NAT to create. - If `nat.ec2` is provided, `nat.type` defaults to `"ec2"`. - Otherwise, `nat.type` must be explicitly specified. ### transform?
**Type** `Object` - [`bastionInstance?`](#transform-bastioninstance) - [`bastionSecurityGroup?`](#transform-bastionsecuritygroup) - [`elasticIp?`](#transform-elasticip) - [`internetGateway?`](#transform-internetgateway) - [`natGateway?`](#transform-natgateway) - [`natInstance?`](#transform-natinstance) - [`natSecurityGroup?`](#transform-natsecuritygroup) - [`privateRouteTable?`](#transform-privateroutetable) - [`privateSubnet?`](#transform-privatesubnet) - [`publicRouteTable?`](#transform-publicroutetable) - [`publicSubnet?`](#transform-publicsubnet) - [`securityGroup?`](#transform-securitygroup) - [`vpc?`](#transform-vpc) [Transform](/docs/components#transform) how this component creates its underlying resources. bastionInstance? **Type** [`InstanceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/instance/#inputs)` | (args: `[`InstanceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/instance/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 bastion instance resource. bastionSecurityGroup? **Type** [`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)` | (args: `[`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 bastion security group resource. elasticIp? **Type** [`EipArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/eip/#inputs)` | (args: `[`EipArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/eip/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 Elastic IP resource. internetGateway? 
**Type** [`InternetGatewayArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/internetgateway/#inputs)` | (args: `[`InternetGatewayArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/internetgateway/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 Internet Gateway resource. natGateway? **Type** [`NatGatewayArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/natgateway/#inputs)` | (args: `[`NatGatewayArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/natgateway/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 NAT Gateway resource. natInstance? **Type** [`InstanceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/instance/#inputs)` | (args: `[`InstanceArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/instance/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 NAT instance resource. natSecurityGroup? **Type** [`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)` | (args: `[`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 NAT security group resource. privateRouteTable? **Type** [`RouteTableArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/routetable/#inputs)` | (args: `[`RouteTableArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/routetable/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 route table resource for the private subnet. privateSubnet? 
**Type** [`SubnetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/subnet/#inputs)` | (args: `[`SubnetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/subnet/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 private subnet resource. publicRouteTable? **Type** [`RouteTableArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/routetable/#inputs)` | (args: `[`RouteTableArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/routetable/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 route table resource for the public subnet. publicSubnet? **Type** [`SubnetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/subnet/#inputs)` | (args: `[`SubnetArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/subnet/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 public subnet resource. securityGroup? **Type** [`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)` | (args: `[`SecurityGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 Security Group resource. vpc? **Type** [`VpcArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/vpc/#inputs)` | (args: `[`VpcArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/vpc/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the EC2 VPC resource. ## Properties ### bastion **Type** `Output<string>` The bastion instance ID. ### id **Type** `Output<string>` The VPC ID.
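Both properties are string outputs, so they can be returned from your config's `run` function. A minimal sketch, assuming `bastion` is enabled:

```ts
// Inside your config's run() function
const vpc = new sst.aws.Vpc("MyVPC", { bastion: true });

// Expose the VPC ID and the bastion instance ID as stage outputs
return { vpcId: vpc.id, bastionId: vpc.bastion };
```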
### nodes **Type** `Object` - [`bastionInstance`](#nodes-bastioninstance) - [`bastionSecurityGroup`](#nodes-bastionsecuritygroup) - [`cloudmapNamespace`](#nodes-cloudmapnamespace) - [`elasticIps`](#nodes-elasticips) - [`internetGateway`](#nodes-internetgateway) - [`natGateways`](#nodes-natgateways) - [`natInstances`](#nodes-natinstances) - [`natSecurityGroup`](#nodes-natsecuritygroup) - [`privateRouteTables`](#nodes-privateroutetables) - [`privateSubnets`](#nodes-privatesubnets) - [`publicRouteTables`](#nodes-publicroutetables) - [`publicSubnets`](#nodes-publicsubnets) - [`securityGroup`](#nodes-securitygroup) - [`vpc`](#nodes-vpc) The underlying [resources](/docs/components/#nodes) this component creates. bastionInstance **Type** `Output<`[`Instance`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/instance/)` | undefined>` The Amazon EC2 bastion instance. bastionSecurityGroup **Type** `Output<`[`SecurityGroup`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/)` | undefined>` The Amazon EC2 Security Group for the bastion instance. cloudmapNamespace **Type** [`PrivateDnsNamespace`](https://www.pulumi.com/registry/packages/aws/api-docs/servicediscovery/privatednsnamespace/) The AWS Cloudmap namespace. elasticIps **Type** `Output<`[`Eip`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/eip/)`[]>` The Amazon EC2 Elastic IPs. internetGateway **Type** [`InternetGateway`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/internetgateway/) The Amazon EC2 Internet Gateway. natGateways **Type** `Output<`[`NatGateway`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/natgateway/)`[]>` The Amazon EC2 NAT Gateways. natInstances **Type** `Output<`[`Instance`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/instance/)`[]>` The Amazon EC2 NAT instances. natSecurityGroup **Type** `Output<`[`SecurityGroup`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/)` | undefined>` The Amazon EC2 Security Group for the NAT instances. privateRouteTables **Type** `Output<`[`RouteTable`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/routetable/)`[]>` The Amazon EC2 route tables for the private subnets.
privateSubnets **Type** `Output<`[`Subnet`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/subnet/)`[]>` The Amazon EC2 private subnets. publicRouteTables **Type** `Output<`[`RouteTable`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/routetable/)`[]>` The Amazon EC2 route tables for the public subnets. publicSubnets **Type** `Output<`[`Subnet`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/subnet/)`[]>` The Amazon EC2 public subnets. securityGroup **Type** [`SecurityGroup`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/securitygroup/) The Amazon EC2 Security Group. vpc **Type** [`Vpc`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/vpc/) The Amazon EC2 VPC. ### privateSubnets **Type** `Output<string[]>` A list of private subnet IDs in the VPC. ### publicSubnets **Type** `Output<string[]>` A list of public subnet IDs in the VPC. ### securityGroups **Type** `Output<string[]>` A list of VPC security group IDs. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `bastion` `undefined | string` The bastion instance ID. ## Methods ### static get ```ts Vpc.get(name, vpcId, opts?) ``` #### Parameters - `name` `string` The name of the component. - `vpcId` `Input<string>` The ID of the existing VPC. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Vpc`](.) Reference an existing VPC with the given ID. This is useful when you create a VPC in one stage and want to share it in another stage. It avoids having to create a new VPC in the other stage. :::tip You can use the `static get` method to share VPCs across stages. ::: Imagine you create a VPC in the `dev` stage. And in your personal stage `frank`, instead of creating a new VPC, you want to share the VPC from `dev`. ```ts title="sst.config.ts" const vpc = $app.stage === "frank" ?
sst.aws.Vpc.get("MyVPC", "vpc-0be8fa4de860618bb") : new sst.aws.Vpc("MyVPC"); ``` Here `vpc-0be8fa4de860618bb` is the ID of the VPC created in the `dev` stage. You can find this by outputting the VPC ID in the `dev` stage. ```ts title="sst.config.ts" return { vpc: vpc.id }; ``` --- ## Workflow Reference doc for the `sst.aws.Workflow` component. https://sst.dev/docs/component/aws/workflow The `Workflow` component lets you add serverless workflows to your app using [AWS Lambda Durable Functions](https://docs.aws.amazon.com/lambda/latest/dg/durable-functions.html). It's a thin wrapper around the [`Function`](/docs/component/aws/function) component with durable execution enabled. It includes an [SDK](/docs/components/aws/workflow/#sdk) that wraps the AWS SDK with a simpler interface, adds helper methods, and makes it easier to integrate with other SST components. #### Minimal example ```ts title="sst.config.ts" new sst.aws.Workflow("MyWorkflow", { handler: "src/workflow.handler", }); ``` ```ts title="src/workflow.ts" const user = await ctx.step("load-user", async () => { return { id: "user_123", email: "alice@example.com" }; }); await ctx.wait("pause-before-email", "1 minute"); return ctx.step("send-email", async () => { return { sent: true, userId: user.id }; }); }); ``` #### Configure timeout and retention ```ts {3-7} title="sst.config.ts" new sst.aws.Workflow("MyWorkflow", { handler: "src/workflow.handler", retention: "30 days", timeout: { execution: "2 hours", invocation: "30 seconds", }, }); ``` #### Link resources ```ts {1,5} title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.Workflow("MyWorkflow", { handler: "src/workflow.handler", link: [bucket], }); ``` ```ts title="src/workflow.ts" return ctx.step("get-bucket-name", async () => { return Resource.MyBucket.name; }); }); ``` #### Trigger with a cron job ```ts {5-8} title="sst.config.ts" const workflow = new sst.aws.Workflow("MyWorkflow", { handler: "src/workflow.handler", }); new 
sst.aws.CronV2("MyCron", { schedule: "rate(1 minute)", function: workflow, }); ``` ```ts title="src/workflow.ts" await ctx.step("start", async ({ logger }) => { logger.info({ message: "Workflow invoked by cron" }); }); }); ``` [Check out the full example](/docs/examples/#aws-workflow-cron). #### Subscribe to a bus ```ts {6-9} title="sst.config.ts" const workflow = new sst.aws.Workflow("MyWorkflow", { handler: "src/workflow.handler", }); const bus = new sst.aws.Bus("MyBus"); bus.subscribe("MySubscriber", workflow, { pattern: { detailType: ["app.workflow.requested"], }, }); ``` ```ts title="src/workflow.ts" interface Event { "detail-type": string; detail: { properties: { message: string; requestId: string; }; }; } await ctx.step("start", async ({ logger }) => { logger.info({ message: "Workflow invoked by bus", requestId: event.detail.properties.requestId, }); }); }); ``` [Check out the full example](/docs/examples/#aws-workflow-bus). --- ### Limitations Durable workflows replay from the top on resume and retry. Keep the control flow deterministic, and move side effects like API calls, database writes, timestamps, and random ID generation inside durable operations like `ctx.step()`. :::caution Workflow handlers have versioning enabled. Deploying an update won't update existing running workflows. ::: Before using workflows in production, review the [AWS best practices for durable functions](https://docs.aws.amazon.com/lambda/latest/dg/durable-best-practices.html). --- ### Cost A workflow has no idle monthly cost. You pay the standard Lambda request and compute charges for each invocation. :::tip When a workflow is suspended in a `wait`, functions don't incur costs until execution resumes. ::: Lambda durable functions usage is billed separately. - Durable operations like starting an execution, completing a step, and creating a wait are billed at $8.00 per 1 million operations. - Data written by durable operations is billed at $0.25 per GB. 
- Retained execution state is billed at $0.15 per GB-month. For example, a workflow with two `step()` calls and one `wait()` uses four durable operations: one start, two steps, and one wait. That's about **$0.000032 per execution** for durable operations, before Lambda compute, requests, written data, and retention. These are rough _us-east-1_ estimates. Check out the [AWS Lambda pricing](https://aws.amazon.com/lambda/pricing/#Lambda_Durable_Functions_Pricing) for more details. --- ## Constructor ```ts new Workflow(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`WorkflowArgs`](#workflowargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## WorkflowArgs ### architecture? **Type** `Input<"x86_64" | "arm64">` **Default** `"x86_64"` The [architecture](https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html) of the Lambda function. ```js { architecture: "arm64" } ``` ### bundle? **Type** `Input<string>` Path to the source code directory for the function. By default, the handler is bundled with [esbuild](https://esbuild.github.io/). Use `bundle` to skip bundling. :::caution Use `bundle` only when you want to bundle the function yourself. ::: If the `bundle` option is specified, the `handler` needs to be in the root of the bundle. Here, the entire `packages/functions/src` directory is zipped. And the handler is in the `src` directory. ```js { bundle: "packages/functions/src", handler: "index.handler" } ``` ### copyFiles? **Type** `Input<Object[]>` - [`from`](#copyfiles-from) - [`to?`](#copyfiles-to) Add additional files to copy into the function package. Takes a list of objects with `from` and `to` paths. These will be copied over before the function package is zipped up. Copying over a single file from the `src` directory to the `src/` directory of the function package.
```js { copyFiles: [{ from: "src/index.js" }] } ``` Copying over a single file from the `src` directory to the `core/src` directory in the function package. ```js { copyFiles: [{ from: "src/index.js", to: "core/src/index.js" }] } ``` Copying over a couple of files. ```js { copyFiles: [ { from: "src/this.js", to: "core/src/this.js" }, { from: "src/that.js", to: "core/src/that.js" } ] } ``` from **Type** `Input<string>` Source path relative to the `sst.config.ts`. to? **Type** `Input<string>` **Default** The `from` path in the function package Destination path relative to function root in the package. By default, it creates the same directory structure as the `from` path and copies the file. ### description? **Type** `Input<string>` A description for the function. This is displayed in the AWS Console. ```js { description: "Handler function for my nightly cron job." } ``` ### dev? **Type** `Input<boolean>` **Default** `true` Disable running this function [_Live_](/docs/live/) in `sst dev`. By default, the functions in your app are run locally in `sst dev`. To do this, a _stub_ version of your function is deployed, instead of the real function. :::note In `sst dev` a _stub_ version of your function is deployed. ::: This shows under the **Functions** tab in the multiplexer sidebar where your invocations are logged. You can turn this off by setting `dev` to `false`. Read more about [Live](/docs/live/) and [`sst dev`](/docs/reference/cli/#dev). ```js { dev: false } ``` ### environment? **Type** `Input<Record<string, Input<string>>>` Key-value pairs of values that are set as [Lambda environment variables](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html). The keys need to: - Start with a letter - Be at least 2 characters long - Contain only letters, numbers, or underscores They can be accessed in your function using `process.env.`. :::note The total size of the environment variables cannot exceed 4 KB.
::: ```js { environment: { DEBUG: "true" } } ``` ### handler **Type** `Input<string>` Path to the handler for the function. - For Node.js this is in the format `{path}/{file}.{method}`. - For Python this is also `{path}/{file}.{method}`. - For Golang this is `{path}` to the Go module. - For Rust this is `{path}` to the Rust crate. ##### Node.js For example with Node.js you might have. ```js { handler: "packages/functions/src/main.handler" } ``` Where `packages/functions/src` is the path. And `main` is the file, where you might have a `main.ts` or `main.js`. And `handler` is the method exported in that file. :::note You don't need to specify the file extension. ::: If `bundle` is specified, the handler needs to be in the root of the bundle directory. ```js { bundle: "packages/functions/src", handler: "index.handler" } ``` ##### Python For Python, [uv](https://docs.astral.sh/uv/) is used to package the function. You need to have it installed. :::note You need uv installed for Python functions. ::: The functions need to be in a [uv workspace](https://docs.astral.sh/uv/concepts/projects/workspaces/#workspace-sources). ```js { handler: "functions/src/functions/api.handler" } ``` The project structure might look something like this. Where there is a `pyproject.toml` file in the root and the `functions/` directory is a uv workspace with its own `pyproject.toml`. ```txt ├── sst.config.ts ├── pyproject.toml └── functions ├── pyproject.toml └── src └── functions ├── __init__.py └── api.py ``` To make sure that the right runtime is used in `sst dev`, set the version of Python in your `pyproject.toml` to match the runtime you are using. ```toml title="functions/pyproject.toml" requires-python = "==3.11.*" ``` You can refer to [this example of deploying a Python function](/docs/examples/#aws-lambda-python). ##### Golang For Golang the handler looks like.
```js { handler: "packages/functions/go/some_module" } ``` Where `packages/functions/go/some_module` is the path to the Go module. This includes the name of the module in your `go.mod`. So in this case your `go.mod` might be in `packages/functions/go` and `some_module` is the name of the module. You can refer to [this example of deploying a Go function](/docs/examples/#aws-lambda-go). ##### Rust For Rust, the handler looks like. ```js { handler: "crates/api" } ``` Where `crates/api` is the path to the Rust crate. This means there is a `Cargo.toml` file in `crates/api`, and the `main()` function handles the Lambda invocation. ### hook? **Type** `Object` - [`postbuild`](#hook-postbuild) Hook into the Lambda function build process. postbuild ```ts postbuild(dir) ``` **Parameters** - `dir` `string` The directory where the function code is generated. **Returns** `Promise` Specify a callback that'll be run after the Lambda function is built. :::note This is not called in `sst dev`. ::: Useful for modifying the generated Lambda function code before it's deployed to AWS. It can also be used for uploading the generated sourcemaps to a service like Sentry. ### layers? **Type** `Input[]>` A list of Lambda layer ARNs to add to the function. :::note Layers are only added when the function is deployed. ::: These are only added when the function is deployed. In `sst dev`, your functions are run locally, so the layers are not used. Instead you should use a local version of what's in the layer. ```js { layers: ["arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1"] } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your function. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access them in your function using the [SDK](/docs/reference/sdk/). Takes a list of components to link to the function. ```js { link: [bucket, stripeKey] } ``` ### logging?
**Type** `false | Object` - [`format?`](#logging-format) - [`logGroup?`](#logging-loggroup) - [`retention?`](#logging-retention) **Default** `{retention: "1 month", format: "json"}` Configure the workflow logs in CloudWatch. Or pass in `false` to disable writing logs. The only supported log format is `json`. format? **Type** `Input<"json">` **Default** `"json"` The log format for the workflow. AWS Lambda durable functions require structured JSON logs, so `"json"` is the only supported value. logGroup? **Type** `Input` **Default** Creates a log group Assigns the given CloudWatch log group name to the workflow. This allows you to pass in a previously created log group. By default, the workflow creates a new log group when it's created. ```js { logging: { logGroup: "/existing/log-group" } } ``` retention? **Type** `Input<"1 day" | "3 days" | "5 days" | "1 week" | "2 weeks" | "1 month" | "2 months" | "3 months" | "4 months" | "5 months" | "6 months" | "1 year" | "13 months" | "18 months" | "2 years" | "3 years" | "5 years" | "6 years" | "7 years" | "8 years" | "9 years" | "10 years" | "forever">` **Default** `1 month` The duration the workflow logs are kept in CloudWatch. Not applicable when an existing log group is provided. ```js { logging: { retention: "forever" } } ``` ### memory? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"1024 MB"` The amount of memory allocated for the function. Takes values between 128 MB and 10240 MB in 1 MB increments. The amount of memory affects the amount of virtual CPU available to the function. :::tip While functions with less memory are cheaper, larger functions can process faster. And might end up being more [cost effective](https://docs.aws.amazon.com/lambda/latest/operatorguide/computing-power.html). ::: ```js { memory: "10240 MB" } ``` ### name? **Type** `Input` The name for the function. By default, the name is generated from the app name, stage name, and component name. 
This is displayed in the AWS Console for this function. :::caution To avoid the name thrashing, make sure that it includes the app and stage name. ::: If you are going to set the name, you need to make sure: 1. It's unique across your app. 2. It uses the app and stage name, so it doesn't thrash when you deploy to different stages. Also, changing the name after you've deployed it once will create a new function and delete the old one. ```js { name: `${$app.name}-${$app.stage}-my-function` } ``` ### nodejs? **Type** `Input` - [`banner?`](#nodejs-banner) - [`esbuild?`](#nodejs-esbuild) - [`format?`](#nodejs-format) - [`install?`](#nodejs-install) - [`loader?`](#nodejs-loader) - [`minify?`](#nodejs-minify) - [`sourcemap?`](#nodejs-sourcemap) - [`splitting?`](#nodejs-splitting) Configure how your function is bundled. By default, SST will bundle your function code using [esbuild](https://esbuild.github.io/). This tree shakes your code to only include what's used; reducing the size of your function package and improving cold starts. banner? **Type** `Input` Use this to insert a string at the beginning of the generated JS file. ```js { nodejs: { banner: "console.log('Function starting')" } } ``` esbuild? **Type** `Input` This allows you to customize the esbuild config that is used. :::tip Check out the _JS tab_ in the code snippets in the esbuild docs for the [`BuildOptions`](https://esbuild.github.io/api/#build). ::: format? **Type** `Input<"cjs" | "esm">` **Default** `"esm"` Configure the format of the generated JS code; ESM or CommonJS. ```js { nodejs: { format: "cjs" } } ``` install? **Type** `Input>` Dependencies that need to be excluded from the bundle. Certain npm packages cannot be bundled using esbuild. This allows you to exclude them from the bundle. Instead they'll be moved into a `node_modules/` directory in the function package. :::tip If esbuild is giving you an error about a package, try adding it to `install`.
::: This will allow your functions to be able to use these dependencies when deployed. They just won't be tree shaken. :::caution If you don't specify a version, the package still needs to be in your `package.json`. ::: Esbuild will ignore them while traversing the imports in your code. So these are the **package names as seen in the imports**. It also works on packages that are not directly imported by your code. ```js { nodejs: { install: { pg: "8.13.1" } } } ``` Passing `["packageName"]` is the same as passing `{ packageName: "*" }`. loader? **Type** `Input>` Configure additional esbuild loaders for other file extensions. This is useful when your code is importing non-JS files like `.png`, `.css`, etc. ```js { nodejs: { loader: { ".png": "file" } } } ``` minify? **Type** `Input` **Default** `true` Disable if the function code is minified when bundled. ```js { nodejs: { minify: false } } ``` sourcemap? **Type** `Input` **Default** `false` Configure if source maps are added to the function bundle when **deployed**. Since they increase payload size and potentially cold starts, they are not added by default. However, they are always generated during `sst dev`. :::tip[SST Console] For the [Console](/docs/console/), source maps are always generated and uploaded to your bootstrap bucket. These are then downloaded and used to display Issues in the console. ::: ```js { nodejs: { sourcemap: true } } ``` splitting? **Type** `Input` **Default** `false` If enabled, modules that are dynamically imported will be bundled in their own files with common dependencies placed in shared chunks. This can help reduce cold starts as your function grows in size. ```js { nodejs: { splitting: true } } ``` ### permissions? 
**Type** `Input` - [`actions`](#permissions-actions) - [`conditions?`](#permissions-conditions) `Input[]>` - [`test`](#permissions-conditions-test) - [`values`](#permissions-conditions-values) - [`variable`](#permissions-conditions-variable) - [`effect?`](#permissions-effect) - [`resources`](#permissions-resources) Permissions and the resources that the function needs to access. These permissions are used to create the function's IAM role. :::tip If you `link` the function to a resource, the permissions to access it are automatically added. ::: Allow the function to read and write to an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:GetObject", "s3:PutObject"], resources: ["arn:aws:s3:::my-bucket/*"] } ] } ``` Allow the function to perform all actions on an S3 bucket called `my-bucket`. ```js { permissions: [ { actions: ["s3:*"], resources: ["arn:aws:s3:::my-bucket/*"] } ] } ``` Granting the function permissions to access all resources. ```js { permissions: [ { actions: ["*"], resources: ["*"] } ] } ``` actions **Type** `string[]` The [IAM actions](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html#actions_table) that can be performed. ```js { actions: ["s3:*"] } ``` conditions? **Type** `Input[]>` Configure specific conditions for when the policy is in effect. ```js { conditions: [ { test: "StringEquals", variable: "s3:x-amz-server-side-encryption", values: ["AES256"] }, { test: "IpAddress", variable: "aws:SourceIp", values: ["10.0.0.0/16"] } ] } ``` test **Type** `Input` Name of the [IAM condition operator](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html) to evaluate. values **Type** `Input[]>` The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an "OR" boolean operation. 
variable **Type** `Input` Name of a [Context Variable](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#AvailableKeys) to apply the condition to. Context variables may either be standard AWS variables starting with `aws:` or service-specific variables prefixed with the service name. effect? **Type** `"allow" | "deny"` **Default** `"allow"` Configures whether the permission is allowed or denied. ```ts { effect: "deny" } ``` resources **Type** `Input[]>` The resources specified using the [IAM ARN format](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html). ```js { resources: ["arn:aws:s3:::my-bucket/*"] } ``` ### policies? **Type** `Input` Policies to attach to the function. These policies will be added to the function's IAM role. Attaching policies lets you grant a set of predefined permissions to the function without having to specify the permissions in the `permissions` prop. For example, allow the function to have read-only access to all resources. ```js { policies: ["arn:aws:iam::aws:policy/ReadOnlyAccess"] } ``` ### python? **Type** `Input` - [`container?`](#python-container) `Input` - [`cache?`](#python-container-cache) Configure how your Python function is packaged. container? **Type** `Input` **Default** `false` Set this to `true` if you want to deploy this function as a container image. There are a few reasons why you might want to do this. 1. The Lambda package size has an unzipped limit of 250MB, whereas the container image size has a limit of 10GB. 2. Even if you are below the 250MB limit, larger Lambda function packages have longer cold starts when compared to container images. 3. You might want to use a custom Dockerfile to handle complex builds. ```ts { python: { container: true } } ``` When you run `sst deploy`, it uses a built-in Dockerfile. It also needs the Docker daemon to be running. :::note This needs the Docker daemon to be running.
::: To use a custom Dockerfile, add one to the root of the uv workspace of the function. ```txt {5} ├── sst.config.ts ├── pyproject.toml └── function ├── pyproject.toml ├── Dockerfile └── src └── function └── api.py ``` You can refer to [this example of using a container image](/docs/examples/#aws-lambda-python-container). cache? **Type** `Input` **Default** `true` Controls whether the Docker build cache is enabled. Disabling it is useful for environments like LocalStack, where ECR cache export is not supported. ```js { python: { container: { cache: false } } } ``` ### retention? **Type** `Input<"$\{number\} day" | "$\{number\} days">` **Default** `"30 days"` Number of days to retain the workflow execution state. ### role? **Type** `Input` **Default** Creates a new role Assigns the given IAM role ARN to the function. This allows you to pass in a previously created role. :::caution When you pass in a role, the function will not update it if you add `permissions` or `link` resources. ::: By default, the function creates a new IAM role when it's created. It'll update this role if you add `permissions` or `link` resources. However, if you pass in a role, you'll need to update it manually if you add `permissions` or `link` resources. ```js { role: "arn:aws:iam::123456789012:role/my-role" } ``` ### runtime? **Type** `Input<"nodejs22.x" | "nodejs24.x" | "python3.13">` **Default** `"nodejs24.x"` The language runtime for the workflow. AWS Lambda durable functions currently support `"nodejs22.x"`, `"nodejs24.x"`, and `"python3.13"`. ```js { runtime: "python3.13" } ``` ### storage? **Type** `Input<"$\{number\} MB" | "$\{number\} GB">` **Default** `"512 MB"` The amount of ephemeral storage allocated for the function. This sets the ephemeral storage of the Lambda function (`/tmp`). Must be between "512 MB" and "10240 MB" ("10 GB") in 1 MB increments. ```js { storage: "5 GB" } ``` ### tags? **Type** `Input>>` A list of tags to add to the function.
```js { tags: { "my-tag": "my-value" } } ``` ### timeout? **Type** `Input` - [`execution?`](#timeout-execution) - [`invocation?`](#timeout-invocation) Configure timeout limits for the workflow execution and each underlying Lambda invocation. execution? **Type** `Input<`${number} minute` | `${number} minutes` | `${number} hour` | `${number} hours` | `${number} second` | `${number} seconds` | `${number} day` | `${number} days`> | undefined` **Default** `"14 days"` Maximum execution time for the entire workflow execution, from when it starts until it completes. This includes time spent across retries, replays, waits, and all durable invocations. invocation? **Type** `Input<`${number} minute` | `${number} minutes` | `${number} second` | `${number} seconds`> | undefined` **Default** `"5 minutes"` Maximum execution time for each underlying Lambda invocation. This is not a per-step timeout. A single invocation can run multiple steps before the workflow yields, waits, or replays. ### transform? **Type** `Object` - [`function?`](#transform-function) `Object` - [`eventInvokeConfig?`](#transform-function-eventinvokeconfig) - [`function?`](#transform-function-function) - [`logGroup?`](#transform-function-loggroup) - [`role?`](#transform-function-role) [Transform](/docs/components#transform) how this component creates its underlying resources. function? **Type** `Object` Transform the underlying SST Function component resources. eventInvokeConfig? **Type** [`FunctionEventInvokeConfigArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/functioneventinvokeconfig/#inputs)` | (args: `[`FunctionEventInvokeConfigArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/functioneventinvokeconfig/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Function Event Invoke Config resource. This is only created when the `retries` property is set. function? 
**Type** [`FunctionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/function/#inputs)` | (args: `[`FunctionArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/function/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Lambda Function resource. logGroup? **Type** [`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)` | (args: `[`LogGroupArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the CloudWatch LogGroup resource. role? **Type** [`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)` | (args: `[`RoleArgs`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the IAM Role resource. ### volume? **Type** `Input` - [`efs`](#volume-efs) - [`path?`](#volume-path) Mount an EFS file system to the function. Create an EFS file system. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const fileSystem = new sst.aws.Efs("MyFileSystem", { vpc }); ``` And pass it in. ```js { volume: { efs: fileSystem } } ``` By default, the file system will be mounted to `/mnt/efs`. You can change this by passing in the `path` property. ```js { volume: { efs: fileSystem, path: "/mnt/my-files" } } ``` To use an existing EFS, you can pass in an EFS access point ARN. ```js { volume: { efs: "arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-12345678", } } ``` efs **Type** `Input` The EFS file system to mount. Or an EFS access point ARN. path? **Type** `Input` **Default** `"/mnt/efs"` The path to mount the volume. ### vpc? 
**Type** [`Vpc`](/docs/component/aws/vpc)` | Input` - [`privateSubnets`](#vpc-privatesubnets) - [`securityGroups`](#vpc-securitygroups) Configure the function to connect to private subnets in a virtual private cloud or VPC. This allows your function to access private resources. Create a `Vpc` component. ```js title="sst.config.ts" const myVpc = new sst.aws.Vpc("MyVpc"); ``` Or reference an existing VPC. ```js title="sst.config.ts" const myVpc = sst.aws.Vpc.get("MyVpc", "vpc-12345678901234567"); ``` And pass it in. ```js { vpc: myVpc } ``` privateSubnets **Type** `Input[]>` A list of VPC subnet IDs. securityGroups **Type** `Input[]>` A list of VPC security group IDs. ## Properties ### arn **Type** `Output` The ARN of the Lambda function backing the workflow. ### name **Type** `Output` The name of the Lambda function backing the workflow. ### nodes **Type** `Object` - [`function`](#nodes-function) The underlying [resources](/docs/components/#nodes) this component creates. function **Type** [`Function`](/docs/component/aws/function) The SST Function component backing the workflow. ### qualifier **Type** `Output` The published version qualifier backing the workflow. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `name` `string` The name of the Lambda function backing the workflow. - `qualifier` `undefined | string` The published version qualifier backing the workflow. The `workflow` SDK is a thin wrapper around the [`@aws/durable-execution-sdk-js`](https://www.npmjs.com/package/@aws/durable-execution-sdk-js) package and the AWS Lambda durable execution APIs. SST also adds a few helpers on top, including `ctx.stepWithRollback()`, `ctx.rollbackAll()`, and `ctx.waitUntil()`. Use `stepWithRollback()` and `rollbackAll()` to register compensating actions.
```ts title="src/workflow.ts" export const handler = workflow.handler(async (_event, ctx) => { try { const order = await ctx.stepWithRollback("create-order", { run: async () => ({ orderId: "order_123" }), undo: async (error, result) => { await fetch(`https://example.com/orders/${result.orderId}`, { method: "DELETE", }); }, }); await ctx.step("charge-card", async () => { throw new Error("Card declined"); }); return order; } catch (error) { await ctx.rollbackAll(error); throw error; } }); ``` Use `waitUntil()` when you already know the exact time the workflow should resume. ```ts title="src/workflow.ts" export const handler = workflow.handler( async (_event, ctx) => { const resumeAt = new Date(); resumeAt.setMinutes(resumeAt.getMinutes() + 10); await ctx.waitUntil("wait-for-follow-up", resumeAt); return ctx.step("send-follow-up", async () => { return { delivered: true }; }); }, ); ``` --- ### describe ```ts workflow.describe(arn, options?) ``` #### Parameters - `arn` `string` - `options?` [`Options`](#options) **Returns** `Promise<`[`DescribeResponse`](#describeresponse)`>` Get the details for a single workflow execution. ### fail ```ts workflow.fail(token, input, options?) ``` #### Parameters - `token` `string` - `input` `FailInput` - `options?` [`Options`](#options) **Returns** `Promise` Send a failure result for a pending workflow callback. This is equivalent to calling [`SendDurableExecutionCallbackFailure`](https://docs.aws.amazon.com/lambda/latest/api/API_SendDurableExecutionCallbackFailure.html). ### handler ```ts workflow.handler(input, config?) ``` #### Parameters - `input` `Handler` - `config?` `DurableExecutionConfig` **Returns** `DurableLambdaHandler` Create a durable workflow handler. ```ts title="src/workflow.ts" export const handler = workflow.handler( async (_event, ctx) => { const user = await ctx.step("load-user", async () => { return { id: "user_123", email: "alice@example.com" }; }); await ctx.wait("pause-before-email", "1 minute"); return ctx.step("send-email", async () => { return { sent: true, userId: user.id }; }); }, ); ``` ### heartbeat ```ts workflow.heartbeat(token, options?)
``` #### Parameters - `token` `string` - `options?` [`Options`](#options) **Returns** `Promise` Send a heartbeat for a pending workflow callback. This is useful when the external system handling the callback is still doing work and needs to prevent the callback from timing out. This is equivalent to calling [`SendDurableExecutionCallbackHeartbeat`](https://docs.aws.amazon.com/lambda/latest/api/API_SendDurableExecutionCallbackHeartbeat.html). ### list ```ts workflow.list(resource, query, options?) ``` #### Parameters - `resource` [`Resource`](#resource) - `query` `ListQuery` - `options?` [`Options`](#options) **Returns** `Promise<`[`ListResponse`](#listresponse)`>` List workflow executions. The SDK returns only the first page of results. ### start ```ts workflow.start(resource, input, options?) ``` #### Parameters - `resource` [`Resource`](#resource) - `input` `StartInput` - `options?` [`Options`](#options) **Returns** `Promise<`[`StartResponse`](#startresponse)`>` Start a new workflow execution. This is equivalent to calling [`Invoke`](https://docs.aws.amazon.com/lambda/latest/api/API_Invoke.html) for a durable Lambda function, using the durable invocation flow described in [Invoking durable Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/durable-invoking.html). ### stop ```ts workflow.stop(arn, input?, options?) ``` #### Parameters - `arn` `string` - `input?` `StopInput` - `options?` [`Options`](#options) **Returns** `Promise<`[`StopResponse`](#stopresponse)`>` Stop a running workflow execution. ### succeed ```ts workflow.succeed(token, input?, options?) ``` #### Parameters - `token` `string` - `input?` `SucceedInput` - `options?` [`Options`](#options) **Returns** `Promise` Send a successful result for a pending workflow callback. This is equivalent to calling [`SendDurableExecutionCallbackSuccess`](https://docs.aws.amazon.com/lambda/latest/api/API_SendDurableExecutionCallbackSuccess.html).
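The `ctx.stepWithRollback()` and `ctx.rollbackAll()` helpers implement a compensation stack: each successful step registers an undo action, and rolling back runs the registered actions in reverse order. Here's a minimal, non-durable sketch of that pattern; the `CompensationStack` class is illustrative and not part of the SDK.

```ts
type Undo = (error: unknown) => Promise<void> | void;

// Illustrative, in-memory stand-in for the SDK's rollback bookkeeping.
class CompensationStack {
  private undos: { name: string; undo: Undo }[] = [];

  // Run a step; register its undo action only if the step succeeds.
  async stepWithRollback<T>(
    name: string,
    run: () => Promise<T> | T,
    undo: Undo
  ): Promise<T> {
    const result = await run();
    this.undos.push({ name, undo });
    return result;
  }

  // Execute all registered undo actions in reverse order, then clear them.
  async rollbackAll(error: unknown): Promise<void> {
    for (const { undo } of [...this.undos].reverse()) {
      await undo(error);
    }
    this.undos = [];
  }
}
```

Note that a step whose `run` throws never makes it onto the stack, which is the same behavior the SDK documents for `stepWithRollback()`.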
### Context **Type** `Object` Only showing custom SDK methods here. For the full API, see [the AWS Durable Execution SDK docs](https://docs.aws.amazon.com/durable-functions/sdk-reference/). - [`rollbackAll`](#context-rollbackall) - [`stepWithRollback`](#context-stepwithrollback) - [`waitUntil`](#context-waituntil) rollbackAll ```ts rollbackAll(error) ``` **Parameters** - `error` `unknown` **Returns** `Promise` Execute all registered rollback steps in reverse order. stepWithRollback ```ts stepWithRollback(name, handler, config?) ``` **Parameters** - `name` `string` - `handler` `StepWithRollbackHandler` - `config?` `StepConfig` **Returns** `DurablePromise` Execute a durable step and register a compensating rollback step if it succeeds. If `run` throws, nothing is added to the rollback stack for that step. waitUntil ```ts waitUntil(name, until) ``` **Parameters** - `name` `string` - `until` `Date` **Returns** `DurablePromise` Wait until the provided time. Delays are rounded up to the nearest second. ### DescribeResponse **Type** `Object` - [`arn`](#describeresponse-arn) - [`createdAt`](#describeresponse-createdat) - [`endedAt?`](#describeresponse-endedat) - [`functionArn`](#describeresponse-functionarn) - [`name`](#describeresponse-name) - [`status`](#describeresponse-status) - [`version?`](#describeresponse-version) arn **Type** `string` The ARN of the durable execution. createdAt **Type** `Date` When the execution started. endedAt? **Type** `Date` When the execution ended, if it has finished. functionArn **Type** `string` The ARN of the workflow function. name **Type** `string` The durable execution name. status **Type** `ExecutionStatus` The current execution status. version? **Type** `string` The version that started the execution.
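Since `createdAt` and `endedAt` are plain `Date` objects, you can derive things like a wall-clock duration from a describe result with ordinary date arithmetic. A small sketch; the `executionDurationMs` helper is illustrative and not part of the SDK.

```ts
// Illustrative shape matching the DescribeResponse fields used here.
interface ExecutionTimes {
  createdAt: Date;
  endedAt?: Date;
}

// Milliseconds the execution has run; falls back to "now" while still running.
function executionDurationMs(exec: ExecutionTimes, now: Date = new Date()): number {
  const end = exec.endedAt ?? now;
  return end.getTime() - exec.createdAt.getTime();
}
```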
### Execution **Type** `Object` - [`arn`](#execution-arn) - [`createdAt`](#execution-createdat) - [`endedAt?`](#execution-endedat) - [`functionArn`](#execution-functionarn) - [`name`](#execution-name) - [`status`](#execution-status) arn **Type** `string` The ARN of the durable execution. createdAt **Type** `Date` When the execution started. endedAt? **Type** `Date` When the execution ended, if it has finished. functionArn **Type** `string` The ARN of the workflow function. name **Type** `string` The durable execution name. status **Type** `ExecutionStatus` The current execution status. ### ListResponse **Type** `Object` - [`executions`](#listresponse-executions) executions **Type** [`Execution`](#execution)`[]` The matching executions. ### Options **Type** `Object` - [`aws?`](#options-aws) aws? **Type** `Object` Configure the options for the [aws4fetch](https://github.com/mhart/aws4fetch) [`AWSClient`](https://github.com/mhart/aws4fetch?tab=readme-ov-file#new-awsclientoptions) used internally by the SDK. ### Resource **Type** `Object` - [`name`](#resource-name) - [`qualifier`](#resource-qualifier) name **Type** `string` The name of the workflow function. qualifier **Type** `string` The version or alias qualifier to invoke. Linked `sst.aws.Workflow` resources include this automatically. ### StartResponse **Type** `Object` - [`arn?`](#startresponse-arn) - [`statusCode`](#startresponse-statuscode) - [`version?`](#startresponse-version) arn? **Type** `string` The ARN of the durable execution. statusCode **Type** `number` The HTTP status code from Lambda. version? **Type** `string` The function version that was executed. ### StopResponse **Type** `Object` - [`arn`](#stopresponse-arn) - [`status`](#stopresponse-status) - [`stoppedAt?`](#stopresponse-stoppedat) arn **Type** `string` The ARN of the durable execution. status **Type** `"STOPPED"` The execution status after the stop call. stoppedAt? **Type** `Date` When the execution was stopped.
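The `executions` array in a `ListResponse` is plain data, so you can post-process the first page with ordinary array work. For example, grouping executions by status; the `groupByStatus` helper is illustrative and not part of the SDK.

```ts
// Illustrative shape matching the Execution fields used here. The status
// values below are examples; the SDK's ExecutionStatus defines the real set.
interface ExecutionLike {
  name: string;
  status: string;
  endedAt?: Date;
}

// Group a page of executions by their status string.
function groupByStatus(executions: ExecutionLike[]): Map<string, ExecutionLike[]> {
  const groups = new Map<string, ExecutionLike[]>();
  for (const exec of executions) {
    const bucket = groups.get(exec.status) ?? [];
    bucket.push(exec);
    groups.set(exec.status, bucket);
  }
  return groups;
}
```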
--- ## Ai Reference doc for the `sst.cloudflare.Ai` component. https://sst.dev/docs/component/cloudflare/ai The `Ai` component lets you add a [Cloudflare Workers AI](https://developers.cloudflare.com/workers-ai/) binding to your app. #### Minimal example ```ts title="sst.config.ts" const ai = new sst.cloudflare.Ai("MyAi"); ``` #### Link to a worker You can link AI to a worker. ```ts {3} title="sst.config.ts" new sst.cloudflare.Worker("MyWorker", { handler: "./index.ts", link: [ai], url: true }); ``` Once linked, you can use the SDK to interact with the AI binding. ```ts title="index.ts" {3} const result = await Resource.MyAi.run("@cf/meta/llama-3-8b-instruct", { prompt: "What is the origin of the phrase 'Hello, World'" }); ``` --- ## Constructor ```ts new Ai(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`AiArgs`](#aiargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Bindings When you link an AI binding, it will be available to the worker and you can interact with it using its [API methods](https://developers.cloudflare.com/workers-ai/). ```ts title="index.ts" {3} const result = await Resource.MyAi.run("@cf/meta/llama-3-8b-instruct", { prompt: "What is the origin of the phrase 'Hello, World'" }); ``` --- ## Astro Reference doc for the `sst.cloudflare.Astro` component. https://sst.dev/docs/component/cloudflare/astro The `Astro` component lets you deploy an [Astro](https://astro.build) site to Cloudflare. :::caution Features like `sst dev` support and `sst/resource` bindings require Astro v6 or newer. ::: #### Minimal example Deploy the Astro site that's in the project root. ```js title="sst.config.ts" new sst.cloudflare.Astro("MyWeb"); ``` #### Change the path Deploys the Astro site in the `my-astro-app/` directory. 
```js {2} title="sst.config.ts" new sst.cloudflare.Astro("MyWeb", { path: "my-astro-app/" }); ``` #### Add a custom domain Set a custom domain for your Astro site. ```js {2} title="sst.config.ts" new sst.cloudflare.Astro("MyWeb", { domain: "my-app.com" }); ``` #### Link resources [Link resources](/docs/linking/) to your Astro site. This will grant permissions to the resources and allow you to access them in your site. ```ts {4} title="sst.config.ts" const bucket = new sst.cloudflare.Bucket("MyBucket"); new sst.cloudflare.Astro("MyWeb", { link: [bucket] }); ``` Add this to your `astro.config.mjs` for SST to work correctly. ```js title="astro.config.mjs" import { defineConfig } from "astro/config"; import cloudflare from "@astrojs/cloudflare"; export default defineConfig({ adapter: cloudflare({ configPath: process.env.SST_WRANGLER_PATH, }), }); ``` Use `sst/resource` for linked resources. ```astro title="src/pages/index.astro" --- import { Resource } from "sst/resource"; const files = await Resource.MyBucket.list(); --- ``` --- ## Constructor ```ts new Astro(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`AstroArgs`](#astroargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## AstroArgs ### buildCommand? **Type** `Input` **Default** `"npm run build"` The command used internally to build your Astro site. If you want to use a different build command. ```js { buildCommand: "yarn build" } ``` ### dev? **Type** `false | Object` - [`autostart?`](#dev-autostart) - [`command?`](#dev-command) - [`directory?`](#dev-directory) - [`title?`](#dev-title) - [`url?`](#dev-url) Configure how this component works in `sst dev`. :::note In `sst dev` your Astro site is run in dev mode; it's not deployed. ::: Instead of deploying your Astro site, this starts it in dev mode. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev). To disable dev mode, pass in `false`. autostart? **Type** `Input` **Default** `true` Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later.
command? **Type** `Input` **Default** `"npm run dev"` The command that `sst dev` runs to start this in dev mode. directory? **Type** `Input` **Default** Uses the `path` Change the directory from where the `command` is run. title? **Type** `Input` The title of the tab in the multiplexer. url? **Type** `Input` **Default** `"http://url-unavailable-in-dev.mode"` The `url` when this is running in dev mode. Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`. ### domain? **Type** `Input` Set a custom domain for your Astro site. ```js { domain: "my-app.com" } ``` ### environment? **Type** `Input>>` Set [environment variables](https://docs.astro.build/en/guides/environment-variables/) in your Astro site. :::tip You can also `link` resources to your Astro site and access them in a type-safe way with the [SDK](/docs/reference/sdk/). We recommend linking since it's more secure. ::: Recall that in Astro, you need to prefix your environment variables with `PUBLIC_` to access them on the client-side. [Read more here](https://docs.astro.build/en/guides/environment-variables/). ```js { environment: { API_URL: api.url, // Accessible on the client-side PUBLIC_STRIPE_PUBLISHABLE_KEY: "pk_test_123" } } ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your Astro site. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access them in your site using `sst/resource`. Takes a list of resources to link to the site. ```js { link: [bucket, stripeKey] } ``` Access linked resources in your site with [`sst/resource`](/docs/reference/sdk/#sstresource). This works in both `sst dev` and after deploy. ```astro --- import { Resource } from "sst/resource"; const files = await Resource.MyBucket.list(); --- ``` ### path?
**Type** `string` **Default** `"."` Path to the directory where your Astro site is located. This path is relative to your `sst.config.ts`. By default it assumes your Astro site is in the root of your SST app. If your Astro site is in a package in your monorepo. ```js { path: "packages/web" } ``` ### transform? **Type** `Object` - [`server?`](#transform-server) [Transform](/docs/components#transform) how this component creates its underlying resources. server? **Type** [`WorkerArgs`](/docs/component/cloudflare/worker#workerargs)` | (args: `[`WorkerArgs`](/docs/component/cloudflare/worker#workerargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Worker component used for handling the server-side rendering. ## Properties ### nodes **Type** `Object` - [`server`](#nodes-server) The underlying [resources](/docs/components/#nodes) this component creates. server **Type** `undefined | `[`Worker`](/docs/component/cloudflare/worker) The Cloudflare Worker that renders the site. ### url **Type** `Output` The URL of the Astro site. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated Worker URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `undefined | string` The URL of the Astro site. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated Worker URL. --- ## Cloudflare Linkable helper Reference doc for the `sst.cloudflare.binding` helper. https://sst.dev/docs/component/cloudflare/binding The Cloudflare Binding Linkable helper is used to define the Cloudflare bindings included with the [`sst.Linkable`](/docs/component/linkable/) component. 
```ts sst.cloudflare.binding({ type: "r2BucketBindings", properties: { bucketName: "my-bucket" } }) ``` --- ## Functions ### binding ```ts binding(input) ``` #### Parameters - `input` [`AiBinding`](#aibinding)` | `[`KvBinding`](#kvbinding)` | `[`SecretTextBinding`](#secrettextbinding)` | `[`ServiceBinding`](#servicebinding)` | `[`PlainTextBinding`](#plaintextbinding)` | `[`QueueBinding`](#queuebinding)` | `[`R2BucketBinding`](#r2bucketbinding)` | `[`D1DatabaseBinding`](#d1databasebinding)` | `[`HyperdriveBinding`](#hyperdrivebinding)` | `[`VersionMetadataBinding`](#versionmetadatabinding) **Returns** `Object` ## AiBinding ### properties **Type** `Record` ### type **Type** `"aiBindings"` ## D1DatabaseBinding ### properties **Type** `Object` - [`id`](#properties-id) id **Type** `Input` ### type **Type** `"d1DatabaseBindings"` ## HyperdriveBinding ### properties **Type** `Object` - [`id`](#properties-id-1) id **Type** `Input` ### type **Type** `"hyperdriveBindings"` ## KvBinding ### properties **Type** `Object` - [`namespaceId`](#properties-namespaceid) namespaceId **Type** `Input` ### type **Type** `"kvNamespaceBindings"` ## PlainTextBinding ### properties **Type** `Object` - [`text`](#properties-text) text **Type** `Input` ### type **Type** `"plainTextBindings"` ## QueueBinding ### properties **Type** `Object` - [`queueName`](#properties-queuename) queueName **Type** `Input` ### type **Type** `"queueBindings"` ## R2BucketBinding ### properties **Type** `Object` - [`bucketName`](#properties-bucketname) bucketName **Type** `Input` ### type **Type** `"r2BucketBindings"` ## SecretTextBinding ### properties **Type** `Object` - [`text`](#properties-text-1) text **Type** `Input` ### type **Type** `"secretTextBindings"` ## ServiceBinding ### properties **Type** `Object` - [`service`](#properties-service) service **Type** `Input` ### type **Type** `"serviceBindings"` ## VersionMetadataBinding ### properties **Type** `Record` ### type **Type** `"versionMetadataBindings"` --- 
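To put the helper in context, a binding is typically passed to the `include` option of the [`sst.Linkable`](/docs/component/linkable/) component. Below is a minimal sketch, assuming an R2 bucket that already exists outside the app; the bucket name `my-bucket` and the component names are placeholders, not part of the reference.

```typescript
// Sketch: expose an existing R2 bucket as a linkable resource.
// Workers that link it receive an R2 bucket binding at runtime.
const existing = new sst.Linkable("MyExistingBucket", {
  properties: { bucketName: "my-bucket" },
  include: [
    sst.cloudflare.binding({
      type: "r2BucketBindings",
      properties: { bucketName: "my-bucket" },
    }),
  ],
});

new sst.cloudflare.Worker("MyWorker", {
  handler: "./index.ts",
  // Linking grants the worker the binding defined above
  link: [existing],
});
```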
## Bucket Reference doc for the `sst.cloudflare.Bucket` component. https://sst.dev/docs/component/cloudflare/bucket The `Bucket` component lets you add a [Cloudflare R2 Bucket](https://developers.cloudflare.com/r2/) to your app. #### Minimal example ```ts title="sst.config.ts" const bucket = new sst.cloudflare.Bucket("MyBucket"); ``` #### Link to a worker You can link the bucket to a worker. ```ts {3} title="sst.config.ts" new sst.cloudflare.Worker("MyWorker", { handler: "./index.ts", link: [bucket], url: true }); ``` Once linked, you can use the SDK to interact with the bucket. ```ts title="index.ts" {3} await Resource.MyBucket.list(); ``` --- ## Constructor ```ts new Bucket(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`BucketArgs`](#bucketargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## BucketArgs ### transform? **Type** `Object` - [`bucket?`](#transform-bucket) [Transform](/docs/components/#transform) how this component creates its underlying resources. bucket? **Type** [`R2BucketArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/r2bucket/#inputs)` | (args: `[`R2BucketArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/r2bucket/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the R2 Bucket resource. ## Properties ### name **Type** `Output` The generated name of the R2 Bucket. ### nodes **Type** `Object` - [`bucket`](#nodes-bucket) The underlying [resources](/docs/components/#nodes) this component creates. bucket **Type** [`R2Bucket`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/r2bucket/) The Cloudflare R2 Bucket. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `name` `string` The generated name of the R2 Bucket. 
### Bindings

When you link a bucket to a worker, you can interact with it using these [Bucket methods](https://developers.cloudflare.com/r2/api/workers/workers-api-reference/#bucket-method-definitions).

```ts title="index.ts" {3}
import { Resource } from "sst";

await Resource.MyBucket.list();
```

---

## Cron

Reference doc for the `sst.cloudflare.Cron` component.

https://sst.dev/docs/component/cloudflare/cron

The `Cron` component lets you add cron jobs to your app using Cloudflare. It uses [Cloudflare Cron Triggers](https://developers.cloudflare.com/workers/configuration/cron-triggers/).

#### Minimal example

Create a worker file that exposes a `scheduled` handler:

```ts title="cron.ts"
export default {
  async scheduled() {
    console.log("Running on a schedule");
  },
};
```

Pass in the `schedules` and the `worker` that'll be executed.

```ts title="sst.config.ts"
new sst.cloudflare.Cron("MyCronJob", {
  worker: "cron.ts",
  schedules: ["* * * * *"]
});
```

#### Customize the worker

```js title="sst.config.ts"
new sst.cloudflare.Cron("MyCronJob", {
  schedules: ["* * * * *"],
  worker: {
    handler: "cron.ts",
    link: [bucket]
  }
});
```

---

## Constructor

```ts
new Cron(name, args, opts?)
```

#### Parameters

- `name` `string`
- `args` [`CronArgs`](#cronargs)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## CronArgs

### schedules

**Type** `Input`

The schedule for the cron job.

:::note
The cron job continues to run even after you exit `sst dev`.
:::

You can use a [cron expression](https://developers.cloudflare.com/workers/configuration/cron-triggers/#supported-cron-expressions).

```ts
{
  schedules: ["* * * * *"]
  // schedules: ["*/30 * * * *"]
  // schedules: ["45 * * * *"]
  // schedules: ["0 17 * * sun"]
  // schedules: ["10 7 * * mon-fri"]
  // schedules: ["0 15 1 * *"]
  // schedules: ["59 23 LW * *"]
}
```

### transform?

**Type** `Object`
- [`trigger?`](#transform-trigger)

[Transform](/docs/components/#transform) how this component creates its underlying resources.

trigger?
**Type** [`WorkerCronTriggerArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/workercrontrigger/#inputs)` | (args: `[`WorkerCronTriggerArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/workercrontrigger/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Worker Cron Trigger resource. ### worker? **Type** `Input` The worker that'll be executed when the cron job runs. ```ts { worker: "src/cron.ts" } ``` You can pass in the full worker props. ```ts { worker: { handler: "src/cron.ts", link: [bucket] } } ``` ## Properties ### nodes **Type** `Object` - [`trigger`](#nodes-trigger) - [`worker`](#nodes-worker) The underlying [resources](/docs/components/#nodes) this component creates. trigger **Type** `Output<`[`WorkerCronTrigger`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/workercrontrigger/)`>` The Cloudflare Worker Cron Trigger. worker **Type** `Output<`[`WorkerScript`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/workerscript/)`>` The Cloudflare Worker. --- ## D1 Reference doc for the `sst.cloudflare.D1` component. https://sst.dev/docs/component/cloudflare/d1 The `D1` component lets you add a [Cloudflare D1 database](https://developers.cloudflare.com/d1/) to your app. #### Minimal example ```ts title="sst.config.ts" const db = new sst.cloudflare.D1("MyDatabase"); ``` #### Link to a worker You can link the db to a worker. ```ts {3} title="sst.config.ts" new sst.cloudflare.Worker("MyWorker", { handler: "./index.ts", link: [db], url: true }); ``` Once linked, you can use the SDK to interact with the db. ```ts title="index.ts" {1} "Resource.MyDatabase.prepare" await Resource.MyDatabase.prepare( "SELECT id FROM todo ORDER BY id DESC LIMIT 1", ).first(); ``` --- ## Constructor ```ts new D1(name, args?, opts?) 
``` #### Parameters - `name` `string` - `args?` [`D1Args`](#d1args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## D1Args ### transform? **Type** `Object` - [`database?`](#transform-database) [Transform](/docs/components/#transform) how this component creates its underlying resources. database? **Type** [`D1DatabaseArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/d1database/#inputs)` | (args: `[`D1DatabaseArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/d1database/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the D1 resource. ## Properties ### databaseId **Type** `Output` The generated ID of the D1 database. ### nodes **Type** `Object` - [`database`](#nodes-database) The underlying [resources](/docs/components/#nodes) this component creates. database **Type** [`D1Database`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/d1database/) The Cloudflare D1 database. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `databaseId` `string` The generated ID of the D1 database. ### Bindings When you link a D1 database, the database will be available to the worker and you can query it using its [API methods](https://developers.cloudflare.com/d1/build-with-d1/d1-client-api/). ```ts title="index.ts" {1} "Resource.MyDatabase.prepare" await Resource.MyDatabase.prepare( "SELECT id FROM todo ORDER BY id DESC LIMIT 1", ).first(); ``` ## Methods ### static get ```ts D1.get(name, databaseId, opts?) ``` #### Parameters - `name` `string` The name of the component. - `databaseId` `string` The database ID of the existing D1 Database. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`D1`](.) 
Reference an existing D1 Database with the given database ID. This is useful when you create a D1 in one stage and want to share it in another. It avoids having to create a new D1 Database in the other stage.

:::tip
You can use the `static get` method to share D1 Databases across stages.
:::

Imagine you create a D1 Database in the `dev` stage. And in your personal stage `giorgio`, instead of creating a new database, you want to share the same database from `dev`.

```ts title="sst.config.ts"
const d1 = $app.stage === "giorgio"
  ? sst.cloudflare.D1.get("MyD1", "my-database-id")
  : new sst.cloudflare.D1("MyD1");
```

Here `my-database-id` is the ID of the D1 Database created in the `dev` stage. You can find it by outputting the D1 Database in the `dev` stage.

```ts title="sst.config.ts"
return {
  d1
};
```

---

## Cloudflare DNS Adapter

Reference doc for the `sst.cloudflare.dns` adapter.

https://sst.dev/docs/component/cloudflare/dns

The Cloudflare DNS Adapter is used to create DNS records to manage domains hosted on [Cloudflare DNS](https://developers.cloudflare.com/dns/).

:::note
You need to [add the Cloudflare provider](/docs/providers/#install) to use this adapter.
:::

To add the provider, run:

```bash
sst add cloudflare
```

This adapter is passed in as `domain.dns` when setting a custom domain, where `example.com` is hosted on Cloudflare.

```ts
{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}
```

Specify the zone ID.

```ts
{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns({
      zone: "415e6f4653b6d95b775d350f32119abb"
    })
  }
}
```

---

## Functions

### dns

```ts
dns(args?)
```

#### Parameters

- `args?` [`DnsArgs`](#dnsargs)

**Returns** `Object`

## DnsArgs

### proxy?

**Type** `Input`

**Default** `false`

Configure ALIAS DNS records as [proxy records](https://developers.cloudflare.com/learning-paths/get-started-free/onboarding/proxy-dns-records/).
:::tip
Proxied records help prevent DDoS attacks and allow you to use Cloudflare's global content delivery network (CDN) for caching.
:::

```js
{
  proxy: true
}
```

### transform?

**Type** `Object`
- [`record?`](#transform-record)

[Transform](/docs/components#transform) how this component creates its underlying resources.

record?

**Type** [`RecordArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/record/#inputs)` | (args: `[`RecordArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/record/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the Cloudflare record resource.

### zone?

**Type** `Input`

The ID of the Cloudflare zone to create the record in.

```js
{
  zone: "415e6f4653b6d95b775d350f32119abb"
}
```

---

## Kv

Reference doc for the `sst.cloudflare.Kv` component.

https://sst.dev/docs/component/cloudflare/kv

The `Kv` component lets you add a [Cloudflare KV storage namespace](https://developers.cloudflare.com/kv/) to your app.

#### Minimal example

```ts title="sst.config.ts"
const storage = new sst.cloudflare.Kv("MyStorage");
```

#### Link to a worker

You can link KV to a worker.

```ts {3} title="sst.config.ts"
new sst.cloudflare.Worker("MyWorker", {
  handler: "./index.ts",
  link: [storage],
  url: true
});
```

Once linked, you can use the SDK to interact with the KV namespace.

```ts title="index.ts" {3}
import { Resource } from "sst";

await Resource.MyStorage.get("someKey");
```

---

## Constructor

```ts
new Kv(name, args?, opts?)
```

#### Parameters

- `name` `string`
- `args?` [`KvArgs`](#kvargs)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## KvArgs

### transform?

**Type** `Object`
- [`namespace?`](#transform-namespace)

[Transform](/docs/components/#transform) how this component creates its underlying resources.

namespace?
**Type** [`WorkersKvNamespaceArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/workerskvnamespace/#inputs)` | (args: `[`WorkersKvNamespaceArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/workerskvnamespace/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the KV namespace resource. ## Properties ### namespaceId **Type** `Output` The generated ID of the KV namespace. ### nodes **Type** `Object` - [`namespace`](#nodes-namespace) The underlying [resources](/docs/components/#nodes) this component creates. namespace **Type** [`WorkersKvNamespace`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/workerskvnamespace/) The Cloudflare KV namespace. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `namespaceId` `string` The generated ID of the KV namespace. ### Bindings When you link a KV storage, the storage will be available to the worker and you can interact with it using its [API methods](https://developers.cloudflare.com/kv/api/). ```ts title="index.ts" {3} await Resource.MyStorage.get("someKey"); ``` ## Methods ### static get ```ts Kv.get(name, args, opts?) ``` #### Parameters - `name` `string` The name of the component. - `args` [`KvGetArgs`](#kvgetargs) The arguments to get the KV namespace. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) **Returns** [`Kv`](.) Reference an existing KV namespace with the given name. This is useful when you create a KV namespace in one stage and want to share it in another. :::tip You can use the `static get` method to share KV namespaces across stages. ::: Imagine you create a KV namespace in the `dev` stage. 
And in your personal stage `frank`, instead of creating a new namespace, you want to share the same one from `dev`. ```ts title="sst.config.ts" const storage = $app.stage === "frank" ? sst.cloudflare.Kv.get("MyStorage", { namespaceId: "a1b2c3d4e5f6", }) : new sst.cloudflare.Kv("MyStorage"); ``` ## KvGetArgs ### namespaceId **Type** `string` The ID of the existing KV namespace. --- ## QueueWorkerSubscriber Reference doc for the `sst.cloudflare.QueueWorkerSubscriber` component. https://sst.dev/docs/component/cloudflare/queue-worker-subscriber The `QueueWorkerSubscriber` component is internally used by the `Queue` component to add a consumer to [Cloudflare Queues](https://developers.cloudflare.com/queues/). :::note This component is not intended to be created directly. ::: You'll find this component returned by `Queue.subscribe()`. --- ## Constructor ```ts new QueueWorkerSubscriber(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`QueueWorkerSubscriberArgs`](#queueworkersubscriberargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## QueueWorkerSubscriberArgs ### batch? **Type** `Object` - [`size?`](#batch-size) - [`window?`](#batch-window) **Default** `10` The maximum number of messages to include in a batch. size? **Type** `Input` **Default** `10` The maximum number of events that will be processed together in a single invocation of the consumer function. Value must be between 1 and 100. :::note When `size` is set to a value greater than 10, `window` must be set to at least `1 second`. ::: window? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The maximum amount of time to wait for collecting events before sending the batch to the consumer function, even if the batch size hasn't been reached. Value must be between 0 seconds and 60 seconds. ### dlq? 
**Type** `Object` - [`queue`](#dlq-queue) - [`retry?`](#dlq-retry) - [`retryDelay?`](#dlq-retrydelay) The dead letter queue to send messages that fail processing. When `dlq` is configured, `dlq.queue` is required. queue **Type** `Input` The name of the dead letter queue. retry? **Type** `Input` **Default** `3` The number of times the main queue will retry the message before sending it to the dead-letter queue. retryDelay? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `0 seconds` The number of seconds to delay before making the message available for another attempt. ### maxConcurrency? **Type** `Input` **Default** `null` Maximum number of concurrent consumers that may consume from this Queue. ### queue **Type** `Input` - [`id`](#queue-id) The queue to use. id **Type** `Input` The ID of the queue. ### subscriber **Type** `Input` The subscriber worker. ### transform? **Type** `Object` - [`consumer?`](#transform-consumer) - [`worker?`](#transform-worker) [Transform](/docs/components/#transform) how this component creates its underlying resources. consumer? **Type** [`QueueConsumerArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/queueconsumer/#inputs)` | (args: `[`QueueConsumerArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/queueconsumer/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Consumer resource. worker? **Type** [`WorkerArgs`](/docs/component/cloudflare/worker#workerargs)` | (args: `[`WorkerArgs`](/docs/component/cloudflare/worker#workerargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Worker resource. ## Properties ### nodes **Type** `Object` - [`consumer`](#nodes-consumer) - [`worker`](#nodes-worker) The underlying [resources](/docs/components/#nodes) this component creates. 
consumer

**Type** [`QueueConsumer`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/queueconsumer/)

The Cloudflare Queue Consumer.

worker

**Type** `Output<`[`Worker`](/docs/component/cloudflare/worker)`>`

The Worker that'll process messages from the queue.

---

## Queue

Reference doc for the `sst.cloudflare.Queue` component.

https://sst.dev/docs/component/cloudflare/queue

The `Queue` component lets you add a [Cloudflare Queue](https://developers.cloudflare.com/queues/) to your app.

#### Create a Queue

```ts title="sst.config.ts"
const queue = new sst.cloudflare.Queue("MyQueue");
```

#### Subscribe to the Queue

Create a worker file that exposes a default handler for queue messages:

```ts title="consumer.ts"
export default {
  async queue(batch, env) {
    for (const message of batch.messages) {
      console.log("Processing message:", message.body);
    }
  },
};
```

Subscribe to the queue with a consumer worker.

```ts title="sst.config.ts"
queue.subscribe("consumer.ts");
```

#### Link to the Queue

You can link other workers to the queue.

```ts title="sst.config.ts"
new sst.cloudflare.Worker("MyWorker", {
  handler: "producer.ts",
  link: [queue],
  url: true,
});
```

#### Subscribe with full worker props

```ts title="sst.config.ts"
const bucket = new sst.cloudflare.Bucket("MyBucket");

queue.subscribe({
  handler: "consumer.ts",
  link: [bucket],
});
```

---

## Constructor

```ts
new Queue(name, args?, opts?)
```

#### Parameters

- `name` `string`
- `args?` [`QueueArgs`](#queueargs)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## QueueArgs

### dlq?

**Type** `Object`
- [`queue`](#dlq-queue)
- [`retry?`](#dlq-retry)
- [`retryDelay?`](#dlq-retrydelay)

The dead letter queue to send messages that fail processing. When `dlq` is configured, `dlq.queue` is required.

queue

**Type** `Input`

The name of the dead letter queue.

retry?

**Type** `Input`

**Default** `3`

The number of times the main queue will retry the message before sending it to the dead-letter queue.
retryDelay? **Type** `Input<"$\{number\} second" | "$\{number\} seconds">` **Default** `0 seconds` The number of seconds to delay before making the message available for another attempt. ### maxConcurrency? **Type** `Input` **Default** `null` Maximum number of concurrent consumers that may consume from this Queue. ### transform? **Type** `Object` - [`queue?`](#transform-queue) [Transform](/docs/components/#transform) how this component creates its underlying resources. queue? **Type** [`QueueArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/queue/#inputs)` | (args: `[`QueueArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/queue/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Queue resource. ## Properties ### id **Type** `Output` The generated id of the queue ### nodes **Type** `Object` - [`queue`](#nodes-queue) The underlying [resources](/docs/components/#nodes) this component creates. queue **Type** [`Queue`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/queue/) The Cloudflare queue. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Bindings ## Methods ### getSSTLink ```ts getSSTLink() ``` **Returns** `Object` ### subscribe ```ts subscribe(subscriber, args?, opts?) ``` #### Parameters - `subscriber` `Input` The worker that'll process messages from the queue. - `args?` [`QueueSubscribeArgs`](#queuesubscribeargs) Configure the subscription. - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) Component resource options. **Returns** [`QueueWorkerSubscriber`](/docs/component/cloudflare/queue-worker-subscriber) Subscribe to the queue with a worker. Subscribe to the queue with a worker file. ```ts title="sst.config.ts" queue.subscribe("consumer.ts"); ``` Pass in full worker props. 
```ts title="sst.config.ts" const bucket = new sst.cloudflare.Bucket("MyBucket"); queue.subscribe({ handler: "consumer.ts", link: [bucket], }); ``` Configure batch settings. ```ts title="sst.config.ts" queue.subscribe("consumer.ts", { batch: { size: 10, window: "20 seconds", }, }); ``` ## QueueSubscribeArgs ### batch? **Type** `Object` - [`size?`](#batch-size) - [`window?`](#batch-window) **Default** `10` The maximum number of messages to include in a batch. size? **Type** `Input` **Default** `10` The maximum number of events that will be processed together in a single invocation of the consumer function. Value must be between 1 and 100. :::note When `size` is set to a value greater than 10, `window` must be set to at least `1 second`. ::: Set batch size to 1. This will process events individually. ```js { batch: { size: 1 } } ``` window? **Type** `Input<"$\{number\} minute" | "$\{number\} minutes" | "$\{number\} second" | "$\{number\} seconds">` **Default** `"5 seconds"` The maximum amount of time to wait for collecting events before sending the batch to the consumer function, even if the batch size hasn't been reached. Value must be between 0 seconds and 60 seconds. ```js { batch: { window: "5 seconds" } } ``` ### transform? **Type** `Object` - [`consumer?`](#transform-consumer) - [`worker?`](#transform-worker) [Transform](/docs/components/#transform) how this component creates its underlying resources. consumer? **Type** [`QueueConsumerArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/queueconsumer/#inputs)` | (args: `[`QueueConsumerArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/queueconsumer/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Consumer resource. worker? 
**Type** [`WorkerArgs`](/docs/component/cloudflare/worker#workerargs)` | (args: `[`WorkerArgs`](/docs/component/cloudflare/worker#workerargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Worker resource. --- ## StaticSiteV2 Reference doc for the `sst.cloudflare.StaticSiteV2` component. https://sst.dev/docs/component/cloudflare/static-site-v2 The `StaticSiteV2` component lets you deploy a static website to Cloudflare. It uses [Cloudflare Workers](https://developers.cloudflare.com/workers/) with [static assets](https://developers.cloudflare.com/workers/static-assets/) to serve your files. It can also `build` your site by running your static site generator, like [Vite](https://vitejs.dev) and uploading the build output as static assets. #### Minimal example Simply uploads the current directory as a static site. ```js new sst.cloudflare.StaticSiteV2("MyWeb"); ``` #### Change the path Change the `path` that should be uploaded. ```js new sst.cloudflare.StaticSiteV2("MyWeb", { path: "path/to/site" }); ``` #### Deploy a Vite SPA Use [Vite](https://vitejs.dev) to deploy a React/Vue/Svelte/etc. SPA by specifying the `build` config. ```js new sst.cloudflare.StaticSiteV2("MyWeb", { build: { command: "npm run build", output: "dist" }, notFound: "single-page-application" }); ``` #### Deploy a Jekyll site Use [Jekyll](https://jekyllrb.com) to deploy a static site. ```js new sst.cloudflare.StaticSiteV2("MyWeb", { build: { command: "bundle exec jekyll build", output: "_site" } }); ``` #### Add a custom domain Set a custom domain for your site. ```js {2} new sst.cloudflare.StaticSiteV2("MyWeb", { domain: "my-app.com" }); ``` #### Set environment variables Set `environment` variables for the build process of your static site. These will be used locally and on deploy. 
For some static site generators like Vite, [environment variables](https://vitejs.dev/guide/env-and-mode) prefixed with `VITE_` can be accessed in the browser. ```ts {5-7} const bucket = new sst.cloudflare.Bucket("MyBucket"); new sst.cloudflare.StaticSiteV2("MyWeb", { environment: { BUCKET_NAME: bucket.name, // Accessible in the browser VITE_STRIPE_PUBLISHABLE_KEY: "pk_test_123" }, build: { command: "npm run build", output: "dist" } }); ``` --- ## Constructor ```ts new StaticSiteV2(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`StaticSiteV2Args`](#staticsitev2args) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## StaticSiteV2Args ### build? **Type** `Input` - [`command`](#build-command) - [`output`](#build-output) Configure if your static site needs to be built. This is useful if you are using a static site generator. The `build.output` directory will be uploaded as static assets. For a Vite project using npm this might look like this. ```js { build: { command: "npm run build", output: "dist" } } ``` command **Type** `Input` The command that builds the static site. It's run before your site is deployed. This is run at the root of your site, `path`. ```js { build: { command: "yarn build" } } ``` output **Type** `Input` The directory where the build output of your static site is generated. This will be uploaded. The path is relative to the root of your site, `path`. ```js { build: { output: "build" } } ``` ### dev? **Type** `false | Object` - [`autostart?`](#dev-autostart) - [`command?`](#dev-command) - [`directory?`](#dev-directory) - [`title?`](#dev-title) - [`url?`](#dev-url) Configure how this component works in `sst dev`. :::note In `sst dev` your static site is run in dev mode; it's not deployed. ::: Instead of deploying your static site, this starts it in dev mode. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev). 
To disable dev mode, pass in `false`.

Use a custom dev command.

```js
{
  dev: {
    command: "yarn dev"
  }
}
```

autostart?

**Type** `Input`

**Default** `true`

Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later.

command?

**Type** `Input`

**Default** `"npm run dev"`

The command that `sst dev` runs to start this in dev mode.

directory?

**Type** `Input`

**Default** Uses the `path`

Change the directory from where the `command` is run.

title?

**Type** `Input`

The title of the tab in the multiplexer.

url?

**Type** `Input`

**Default** `"http://url-unavailable-in-dev.mode"`

The `url` when this is running in dev mode.

Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`.

### domain?

**Type** `Input`

Set a custom domain for your static site. Supports domains hosted on Cloudflare.

:::tip
You can migrate an externally hosted domain to Cloudflare by [following this guide](https://developers.cloudflare.com/dns/zone-setups/full-setup/setup/).
:::

```js
{
  domain: "domain.com"
}
```

### environment?

**Type** `Input<Record<string, Input<string>>>`

Set environment variables for your static site. These are made available:

1. Locally while running your site through `sst dev`.
2. In the build process when running `build.command`.

```js
environment: {
  API_URL: api.url,
  STRIPE_PUBLISHABLE_KEY: "pk_test_123"
}
```

Some static site generators like Vite have their [concept of environment variables](https://vitejs.dev/guide/env-and-mode), and you can use this option to set them.

:::note
The types for the Vite environment variables are generated automatically. You can change their location through `vite.types`.
:::

These can be accessed as `import.meta.env` in your site. And only the ones prefixed with `VITE_` can be accessed in the browser.
```js
environment: {
  API_URL: api.url,
  // Accessible in the browser
  VITE_STRIPE_PUBLISHABLE_KEY: "pk_test_123"
}
```

### notFound?

**Type** `Input<"single-page-application" | "404">`

**Default** `"single-page-application"`

Configure the response when a request does not match a static asset.

- `"single-page-application"`: Serve `index.html` for unmatched routes (SPA mode)
- `"404"`: Serve the nearest `404.html` file with a `404` status

### path?

**Type** `Input<string>`

**Default** `"."`

Path to the directory where your static site is located. By default this assumes your static site is in the root of your SST app. This directory will be uploaded as static assets. The path is relative to your `sst.config.ts`.

:::note
If the `build` options are specified, `build.output` will be uploaded as static assets instead.
:::

If you are using a static site generator, like Vite, you'll need to configure the `build` options. When these are set, the `build.output` directory will be uploaded as static assets instead.

Change where your static site is located.

```js
{
  path: "packages/web"
}
```

### trailingSlash?

**Type** `"drop" | "auto" | "force"`

**Default** `"auto"`

Configure trailing slash behavior for HTML pages.

- `"auto"`: Individual files served without slash, folder indexes with slash
- `"force"`: All HTML pages served with trailing slash
- `"drop"`: All HTML pages served without trailing slash

#### Force trailing slashes

```js
{
  trailingSlash: "force"
}
```

### transform?

**Type** `Object`

- [`server?`](#transform-server)

[Transform](/docs/components#transform) how this component creates its underlying resources.

server?

**Type** [`WorkerArgs`](/docs/component/cloudflare/worker#workerargs)` | (args: `[`WorkerArgs`](/docs/component/cloudflare/worker#workerargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the Worker component used for serving the static site.
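As a sketch of how the function form of `transform.server` can be used. The specific override here, pinning the compatibility date of the underlying Worker, is illustrative; any [`WorkerArgs`](/docs/component/cloudflare/worker#workerargs) property can be set this way.

```ts title="sst.config.ts"
new sst.cloudflare.StaticSiteV2("MyWeb", {
  transform: {
    server: (args) => {
      // Illustrative: override the compatibility date of the
      // Worker that serves the static assets
      args.compatibility = { date: "2025-05-05" };
    }
  }
});
```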
## Properties ### nodes **Type** `Object` - [`server`](#nodes-server) The underlying [resources](/docs/components/#nodes) this component creates. server **Type** `undefined | `[`Worker`](/docs/component/cloudflare/worker) The worker that serves the requests. ### url **Type** `Output` The URL of the website. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated worker URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `undefined | string` The URL of the website. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated worker URL. --- ## StaticSite Reference doc for the `sst.cloudflare.StaticSite` component. https://sst.dev/docs/component/cloudflare/static-site The `StaticSite` component has been deprecated. Use [`StaticSiteV2`](/docs/component/cloudflare/static-site-v2) instead. :::caution This component has been deprecated. ::: The `StaticSite` component lets you deploy a static website to Cloudflare. It uses [Cloudflare KV storage](https://developers.cloudflare.com/kv/) to store your files and [Cloudflare Workers](https://developers.cloudflare.com/workers/) to serve them. It can also `build` your site by running your static site generator, like [Vite](https://vitejs.dev) and uploading the build output to Cloudflare KV. #### Minimal example Simply uploads the current directory as a static site. ```js new sst.cloudflare.StaticSite("MyWeb"); ``` #### Change the path Change the `path` that should be uploaded. ```js new sst.cloudflare.StaticSite("MyWeb", { path: "path/to/site" }); ``` #### Deploy a Vite SPA Use [Vite](https://vitejs.dev) to deploy a React/Vue/Svelte/etc. SPA by specifying the `build` config. 
```js new sst.cloudflare.StaticSite("MyWeb", { build: { command: "npm run build", output: "dist" } }); ``` #### Deploy a Jekyll site Use [Jekyll](https://jekyllrb.com) to deploy a static site. ```js new sst.cloudflare.StaticSite("MyWeb", { errorPage: "404.html", build: { command: "bundle exec jekyll build", output: "_site" } }); ``` #### Add a custom domain Set a custom domain for your site. ```js {2} new sst.cloudflare.StaticSite("MyWeb", { domain: "my-app.com" }); ``` #### Redirect www to apex domain Redirect `www.my-app.com` to `my-app.com`. ```js {4} new sst.cloudflare.StaticSite("MyWeb", { domain: { name: "my-app.com", redirects: ["www.my-app.com"] } }); ``` #### Set environment variables Set `environment` variables for the build process of your static site. These will be used locally and on deploy. :::tip For Vite, the types for the environment variables are also generated. This can be configured through the `vite` prop. ::: For some static site generators like Vite, [environment variables](https://vitejs.dev/guide/env-and-mode) prefixed with `VITE_` can be accessed in the browser. ```ts {5-7} const kv = new sst.cloudflare.Kv("MyKv"); new sst.cloudflare.StaticSite("MyWeb", { environment: { KV_NAMESPACE: kv.id, // Accessible in the browser VITE_STRIPE_PUBLISHABLE_KEY: "pk_test_123" }, build: { command: "npm run build", output: "dist" } }); ``` --- ## Constructor ```ts new StaticSite(name, args?, opts?) ``` #### Parameters - `name` `string` - `args?` [`StaticSiteArgs`](#staticsiteargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## StaticSiteArgs ### assets? 
**Type** `Object` - [`fileOptions?`](#assets-fileoptions) `Input` - [`cacheControl?`](#assets-fileoptions-cachecontrol) - [`contentType?`](#assets-fileoptions-contenttype) - [`files`](#assets-fileoptions-files) - [`ignore?`](#assets-fileoptions-ignore) - [`textEncoding?`](#assets-textencoding) **Default** `Object` Configure how the static site's assets are uploaded to KV. By default, this is set to the following. Read more about these options below. ```js { assets: { textEncoding: "utf-8", fileOptions: [ { files: ["**/*.css", "**/*.js"], cacheControl: "max-age=31536000,public,immutable" }, { files: "**/*.html", cacheControl: "max-age=0,no-cache,no-store,must-revalidate" } ] } } ``` fileOptions? **Type** `Input` **Default** `Object[]` Specify the `Content-Type` and `Cache-Control` headers for specific files. This allows you to override the default behavior for specific files using glob patterns. By default, this is set to cache CSS/JS files for 1 year and not cache HTML files. ```js { assets: { fileOptions: [ { files: ["**/*.css", "**/*.js"], cacheControl: "max-age=31536000,public,immutable" }, { files: "**/*.html", cacheControl: "max-age=0,no-cache,no-store,must-revalidate" } ] } } ``` You can change the default options. For example, apply `Cache-Control` and `Content-Type` to all zip files. ```js { assets: { fileOptions: [ { files: "**/*.zip", contentType: "application/zip", cacheControl: "private,no-cache,no-store,must-revalidate" }, ], } } ``` Apply `Cache-Control` to all CSS and JS files except for CSS files with `index-` prefix in the `main/` directory. ```js { assets: { fileOptions: [ { files: ["**/*.css", "**/*.js"], ignore: "main/index-*.css", cacheControl: "private,no-cache,no-store,must-revalidate" }, ], } } ``` cacheControl? **Type** `string` The `Cache-Control` header to apply to the matched files. contentType? **Type** `string` The `Content-Type` header to apply to the matched files. 
files **Type** `string | string[]` A glob pattern or array of glob patterns of files to apply these options to. ignore? **Type** `string | string[]` A glob pattern or array of glob patterns of files to exclude from the ones matched by the `files` glob pattern. textEncoding? **Type** `Input<"utf-8" | "iso-8859-1" | "windows-1252" | "ascii" | "none">` **Default** `"utf-8"` Character encoding for text based assets uploaded, like HTML, CSS, JS. This is used to set the `Content-Type` header when these files are served out. If set to `"none"`, then no charset will be returned in header. ```js { assets: { textEncoding: "iso-8859-1" } } ``` ### build? **Type** `Input` - [`command`](#build-command) - [`output`](#build-output) Configure if your static site needs to be built. This is useful if you are using a static site generator. The `build.output` directory will be uploaded to KV instead. For a Vite project using npm this might look like this. ```js { build: { command: "npm run build", output: "dist" } } ``` command **Type** `Input` The command that builds the static site. It's run before your site is deployed. This is run at the root of your site, `path`. ```js { build: { command: "yarn build" } } ``` output **Type** `Input` The directory where the build output of your static site is generated. This will be uploaded. The path is relative to the root of your site, `path`. ```js { build: { output: "build" } } ``` ### domain? **Type** `Input` Set a custom domain for your static site. Supports domains hosted on Cloudflare. :::tip You can migrate an externally hosted domain to Cloudflare by [following this guide](https://developers.cloudflare.com/dns/zone-setups/full-setup/setup/). ::: ```js { domain: "domain.com" } ``` ### environment? **Type** `Input>>` Set environment variables for your static site. These are made available: 1. Locally while running your site through `sst dev`. 2. In the build process when running `build.command`. 
```js
environment: {
  API_URL: api.url,
  STRIPE_PUBLISHABLE_KEY: "pk_test_123"
}
```

Some static site generators like Vite have their [concept of environment variables](https://vitejs.dev/guide/env-and-mode), and you can use this option to set them.

:::note
The types for the Vite environment variables are generated automatically. You can change their location through `vite.types`.
:::

These can be accessed as `import.meta.env` in your site. And only the ones prefixed with `VITE_` can be accessed in the browser.

```js
environment: {
  API_URL: api.url,
  // Accessible in the browser
  VITE_STRIPE_PUBLISHABLE_KEY: "pk_test_123"
}
```

### errorPage?

**Type** `Input<string>`

**Default** The `indexPage` of your site.

The error page to display on a 403 or 404 error. This is a path relative to the root of your site, or the `path`.

```js
{
  errorPage: "404.html"
}
```

### indexPage?

**Type** `string`

**Default** `"index.html"`

The name of the index page of the site. This is a path relative to the root of your site, or the `path`.

:::note
The index page only applies to the root of your site.
:::

By default this is set to `index.html`. So if a visitor goes to your site, let's say `example.com`, `example.com/index.html` will be served.

```js
{
  indexPage: "home.html"
}
```

### path?

**Type** `Input<string>`

**Default** `"."`

Path to the directory where your static site is located. By default this assumes your static site is in the root of your SST app. This directory will be uploaded to KV. The path is relative to your `sst.config.ts`.

:::note
If the `build` options are specified, `build.output` will be uploaded to KV instead.
:::

If you are using a static site generator, like Vite, you'll need to configure the `build` options. When these are set, the `build.output` directory will be uploaded to KV instead.

Change where your static site is located.

```js
{
  path: "packages/web"
}
```

### transform?
**Type** `Object` - [`assets?`](#transform-assets) [Transform](/docs/components#transform) how this component creates its underlying resources. assets? **Type** [`KvArgs`](/docs/component/cloudflare/kv#kvargs)` | (args: `[`KvArgs`](/docs/component/cloudflare/kv#kvargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Kv resource used for uploading the assets. ### vite? **Type** `Input` - [`types?`](#vite-types) Configure [Vite](https://vitejs.dev) related options. :::tip If a `vite.config.ts` or `vite.config.js` file is detected in the `path`, then these options will be used during the build and deploy process. ::: types? **Type** `string` **Default** `"src/sst-env.d.ts"` The path where the type definition for the `environment` variables are generated. This is relative to the `path`. [Read more](https://vitejs.dev/guide/env-and-mode#intellisense-for-typescript). ```js { vite: { types: "other/path/sst-env.d.ts" } } ``` ## Properties ### nodes **Type** `Object` - [`assets`](#nodes-assets) - [`router`](#nodes-router) The underlying [resources](/docs/components/#nodes) this component creates. assets **Type** [`Kv`](/docs/component/cloudflare/kv) The KV namespace that stores the assets. router **Type** [`Worker`](/docs/component/cloudflare/worker) The worker that serves the requests. ### url **Type** `Output` The URL of the website. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated worker URL. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `undefined | string` The URL of the website. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated worker URL. --- ## TanStackStart Reference doc for the `sst.cloudflare.TanStackStart` component. 
https://sst.dev/docs/component/cloudflare/tan-stack-start

The `TanStackStart` component lets you deploy a [TanStack Start](https://tanstack.com/start/latest) app to Cloudflare.

:::note
Create a Cloudflare-compatible app with `bunx @tanstack/cli@latest create --deployment cloudflare`.
:::

#### Minimal example

Deploy the TanStack Start app that's in the project root.

```js title="sst.config.ts"
new sst.cloudflare.TanStackStart("MyWeb");
```

#### Change the path

Deploys the TanStack Start app in the `my-app/` directory.

```js {2} title="sst.config.ts"
new sst.cloudflare.TanStackStart("MyWeb", {
  path: "my-app/"
});
```

#### Add a custom domain

Set a custom domain for your TanStack Start app.

```js {2} title="sst.config.ts"
new sst.cloudflare.TanStackStart("MyWeb", {
  domain: "my-app.com"
});
```

#### Link resources

[Link resources](/docs/linking/) to your TanStack Start app. This will grant permissions to the resources and allow you to access them in your app.

```ts {4} title="sst.config.ts"
const bucket = new sst.cloudflare.Bucket("MyBucket");

new sst.cloudflare.TanStackStart("MyWeb", {
  link: [bucket]
});
```

Add this to your `vite.config.ts` for SST to work correctly.

```ts title="vite.config.ts"
export default defineConfig({
  plugins: [
    cloudflare({
      viteEnvironment: { name: 'ssr' },
      configPath: process.env.SST_WRANGLER_PATH,
    }),
    tanstackStart(),
  ],
})
```

Use `sst/resource` for linked resources.

```ts title="src/routes/api.ts"
import { Resource } from "sst/resource";

const files = await Resource.MyBucket.list();
```

---

## Constructor

```ts
new TanStackStart(name, args?, opts?)
```

#### Parameters

- `name` `string`
- `args?` [`TanStackStartArgs`](#tanstackstartargs)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## TanStackStartArgs

### buildCommand?

**Type** `Input<string>`

**Default** `"npm run build"`

The command used internally to build your TanStack Start app.

If you want to use a different build command.

```js
{
  buildCommand: "yarn build"
}
```

### dev?
**Type** `false | Object` - [`autostart?`](#dev-autostart) - [`command?`](#dev-command) - [`directory?`](#dev-directory) - [`title?`](#dev-title) - [`url?`](#dev-url) Configure how this component works in `sst dev`. :::note In `sst dev` your TanStack Start app is run in dev mode; it's not deployed. ::: Instead of deploying your TanStack Start app, this starts it in dev mode. It's run as a separate process in the `sst dev` multiplexer. Read more about [`sst dev`](/docs/reference/cli/#dev). To disable dev mode, pass in `false`. autostart? **Type** `Input` **Default** `true` Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later. command? **Type** `Input` **Default** `"npm run dev"` The command that `sst dev` runs to start this in dev mode. directory? **Type** `Input` **Default** Uses the `path` Change the directory from where the `command` is run. title? **Type** `Input` The title of the tab in the multiplexer. url? **Type** `Input` **Default** `"http://url-unavailable-in-dev.mode"` The `url` when this is running in dev mode. Since this component is not deployed in `sst dev`, there is no real URL. But if you are using this component's `url` or linking to this component's `url`, it can be useful to have a placeholder URL. It avoids having to handle it being `undefined`. ### domain? **Type** `Input` Set a custom domain for your TanStack Start app. ```js { domain: "my-app.com" } ``` ### environment? **Type** `Input>>` Set environment variables in your TanStack Start app. These are made available: 1. In `vite build`, they are loaded into the build. 2. At runtime as Worker bindings. 3. Locally while running `vite dev` through `sst dev`. :::tip You can also `link` resources to your TanStack Start app and access them in a type-safe way with the [SDK](/docs/reference/sdk/). We recommend linking since it's more secure. 
::: ```js { environment: { API_URL: api.url, PUBLIC_STRIPE_PUBLISHABLE_KEY: "pk_test_123" } } ``` You can access the environment variables in your TanStack Start app as follows: ```ts const apiUrl = env.API_URL; ``` ### link? **Type** `Input` [Link resources](/docs/linking/) to your TanStack Start app. This will: 1. Grant the permissions needed to access the resources. 2. Allow you to access it in your app using the [SDK](/docs/reference/sdk/). Takes a list of resources to link to the app. ```js { link: [bucket, stripeKey] } ``` You can access the linked resources in your TanStack Start app. ```ts console.log(Resource.MyBucket.name); ``` ### path? **Type** `string` **Default** `"."` Path to the directory where your TanStack Start app is located. This path is relative to your `sst.config.ts`. By default it assumes your TanStack Start app is in the root of your SST app. If your TanStack Start app is in a package in your monorepo. ```js { path: "packages/web" } ``` ### transform? **Type** `Object` - [`server?`](#transform-server) [Transform](/docs/components#transform) how this component creates its underlying resources. server? **Type** [`WorkerArgs`](/docs/component/cloudflare/worker#workerargs)` | (args: `[`WorkerArgs`](/docs/component/cloudflare/worker#workerargs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Worker component used for handling the server-side rendering. ## Properties ### nodes **Type** `Object` - [`server`](#nodes-server) The underlying [resources](/docs/components/#nodes) this component creates. server **Type** `undefined | `[`Worker`](/docs/component/cloudflare/worker) The Cloudflare Worker that renders the site. ### url **Type** `Output` The URL of the TanStack Start app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated Worker URL. 
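Since `url` is an Output, it can be passed as an Input to other components. A sketch, where the `MyNotifier` worker and its `src/notifier.handler` file are hypothetical names used for illustration:

```ts title="sst.config.ts"
const web = new sst.cloudflare.TanStackStart("MyWeb");

// Hypothetical worker that needs to know the app's URL;
// web.url is an Output<string>, accepted anywhere an Input<string> is
new sst.cloudflare.Worker("MyNotifier", {
  handler: "src/notifier.handler",
  environment: {
    WEB_URL: web.url
  }
});
```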
## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `url` `undefined | string` The URL of the TanStack Start app. If the `domain` is set, this is the URL with the custom domain. Otherwise, it's the auto-generated Worker URL. --- ## Worker Reference doc for the `sst.cloudflare.Worker` component. https://sst.dev/docs/component/cloudflare/worker The `Worker` component lets you create a Cloudflare Worker. #### Minimal example ```ts title="sst.config.ts" new sst.cloudflare.Worker("MyWorker", { handler: "src/worker.handler" }); ``` #### Link resources [Link resources](/docs/linking/) to the Worker. This will handle the credentials and allow you to access it in your handler. ```ts {5} title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new sst.cloudflare.Worker("MyWorker", { handler: "src/worker.handler", link: [bucket] }); ``` You can use the [SDK](/docs/reference/sdk/) to access the linked resources in your handler. ```ts title="src/worker.ts" {3} console.log(Resource.MyBucket.name); ``` #### Enable URLs Enable worker URLs to invoke the worker over HTTP. ```ts {3} title="sst.config.ts" new sst.cloudflare.Worker("MyWorker", { handler: "src/worker.handler", url: true }); ``` #### Bundling Customize how SST uses [esbuild](https://esbuild.github.io/) to bundle your worker code with the `build` property. ```ts title="sst.config.ts" {3-5} new sst.cloudflare.Worker("MyWorker", { handler: "src/worker.handler", build: { install: ["pg"] } }); ``` --- ## Constructor ```ts new Worker(name, args, opts?) ``` #### Parameters - `name` `string` - `args` [`WorkerArgs`](#workerargs) - `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/) ## WorkerArgs ### build? 
**Type** `Input<Object>`

- [`banner?`](#build-banner)
- [`esbuild?`](#build-esbuild)
- [`loader?`](#build-loader)
- [`minify?`](#build-minify)

Configure how your worker is bundled. SST bundles your worker code using [esbuild](https://esbuild.github.io/). This tree shakes your code to only include what's used.

banner?

**Type** `Input<string>`

Use this to insert a string at the beginning of the generated JS file.

```js
{
  build: {
    banner: "console.log('Function starting')"
  }
}
```

esbuild?

**Type** `Input<BuildOptions>`

This allows you to customize the esbuild config that is used.

:::tip
Check out the _JS tab_ in the code snippets in the esbuild docs for the [`BuildOptions`](https://esbuild.github.io/api/#build).
:::

loader?

**Type** `Input<Record<string, Loader>>`

Configure additional esbuild loaders for other file extensions. This is useful when your code is importing non-JS files like `.png`, `.css`, etc.

```js
{
  build: {
    loader: {
      ".png": "file"
    }
  }
}
```

minify?

**Type** `Input<boolean>`

**Default** `true`

Whether the worker code should be minified when bundled. Set to `false` to disable minification.

```js
{
  build: {
    minify: false
  }
}
```

### compatibility?

**Type** `Input<Object>`

- [`date?`](#compatibility-date)
- [`flags?`](#compatibility-flags)

Configure Cloudflare compatibility for the Worker.

date?

**Type** `Input<string>`

**Default** `"2025-05-05"`

The Cloudflare compatibility date for the Worker. SST uses this for both the uploaded Worker and for deciding which native Node.js modules should stay external during bundling.

flags?

**Type** `Input<Input<string>[]>`

**Default** `["nodejs_compat"]`

The Cloudflare compatibility flags for the Worker. SST uses this for both the uploaded Worker and for deciding which native Node.js modules should stay external during bundling.

### domain?

**Type** `Input<string>`

Set a custom domain for your Worker. Supports domains hosted on Cloudflare.

:::tip
You can migrate an externally hosted domain to Cloudflare by [following this guide](https://developers.cloudflare.com/dns/zone-setups/full-setup/setup/).
:::

```js
{
  domain: "domain.com"
}
```

### environment?
**Type** `Input<Record<string, Input<string>>>`

Key-value pairs that are set as [Worker environment variables](https://developers.cloudflare.com/workers/configuration/environment-variables/). They can be accessed in your worker through the `env` object.

```js
{
  environment: {
    DEBUG: "true"
  }
}
```

### handler

**Type** `Input<string>`

Path to the handler file for the worker. The handler path is relative to the root of your repo or the `sst.config.ts`.

```js
{
  handler: "packages/functions/src/worker.ts"
}
```

### link?

**Type** `Input<any[]>`

[Link resources](/docs/linking/) to your worker. This will:

1. Handle the credentials needed to access the resources.
2. Allow you to access it in your site using the [SDK](/docs/reference/sdk/).

Takes a list of components to link to the function.

```js
{
  link: [bucket, stripeKey]
}
```

### placement?

**Type** `Input<Object>`

- [`host?`](#placement-host)
- [`hostname?`](#placement-hostname)
- [`mode?`](#placement-mode)
- [`region?`](#placement-region)

Configure [placement](https://developers.cloudflare.com/workers/configuration/placement/) for your Worker.

#### Smart Placement

```js
{
  placement: {
    mode: "smart"
  }
}
```

#### Explicit region

```js
{
  placement: {
    region: "aws:us-east-1"
  }
}
```

host?

**Type** `Input<string>`

hostname?

**Type** `Input<string>`

mode?

**Type** `Input<string>`

region?

**Type** `Input<string>`

### transform?

**Type** `Object`

- [`worker?`](#transform-worker)

[Transform](/docs/components/#transform) how this component creates its underlying resources.

worker?

**Type** [`WorkersScriptArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/workersscript/#inputs)` | (args: `[`WorkersScriptArgs`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/workersscript/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void`

Transform the Worker resource.

### url?

**Type** `Input<boolean>`

**Default** `false`

Enable a dedicated endpoint for your Worker.
## Properties

### nodes

**Type** `Object`

- [`worker`](#nodes-worker)

The underlying [resources](/docs/components/#nodes) this component creates.

worker

**Type** [`WorkersScript`](https://www.pulumi.com/registry/packages/cloudflare/api-docs/workersscript/)

The Cloudflare Worker script.

### url

**Type** `Output<string | undefined>`

The Worker URL if `url` is enabled.

## SDK

Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure.

---

### Links

This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links).

- `url` `undefined | string`

  The Worker URL if `url` is enabled.

### Bindings

When you link a worker, say WorkerA, to another worker, WorkerB, SST automatically creates a service binding between the workers. It allows WorkerA to call WorkerB without going through a publicly-accessible URL.

```ts title="index.ts"
await Resource.WorkerB.fetch(request);
```

Read more about [binding Workers](https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/).

---

## DevCommand

Reference doc for the `sst.experimental.DevCommand` component.

https://sst.dev/docs/component/experimental/dev-command

The `DevCommand` lets you run a command in a separate pane when you run `sst dev`.

:::note
This is an experimental feature and the API may change in the future.
:::

The `sst dev` CLI starts a multiplexer with panes for separate processes. This component allows you to add a process to it.

:::tip
This component does not do anything on deploy.
:::

This component only works in `sst dev`. It does not do anything in `sst deploy`.

#### Example

For example, you can use this to run Drizzle Studio locally.

```ts title="sst.config.ts"
new sst.x.DevCommand("Studio", {
  link: [rds],
  dev: {
    autostart: true,
    command: "npx drizzle-kit studio",
  },
});
```

Here `npx drizzle-kit studio` will be run in `sst dev` and will show up under the **Studio** tab. It'll also have access to the links from `rds`.
---

## Constructor

```ts
new DevCommand(name, args, opts?)
```

#### Parameters

- `name` `string`
- `args` [`DevCommandArgs`](#devcommandargs)
- `opts?` [`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)

## DevCommandArgs

### dev?

**Type** `Object`

- [`autostart?`](#dev-autostart)
- [`command?`](#dev-command)
- [`directory?`](#dev-directory)
- [`title?`](#dev-title)

autostart?

**Type** `Input<boolean>`

**Default** `true`

Configure if you want to automatically start this when `sst dev` starts. You can still start it manually later.

command?

**Type** `Input<string>`

**Default** `"npm run dev"`

The command that `sst dev` runs to start this in dev mode.

directory?

**Type** `Input<string>`

**Default** The project root.

Change the directory from where the `command` is run.

title?

**Type** `Input<string>`

**Default** The name of the component.

The title of the tab in the multiplexer.

### environment?

**Type** `Input<Record<string, Input<string>>>`

Set environment variables for this command.

```js
{
  environment: {
    API_URL: api.url,
    STRIPE_PUBLISHABLE_KEY: "pk_test_123"
  }
}
```

### link?

**Type** `Input<any[]>`

[Link resources](/docs/linking/) to your command. This will allow you to access it in your command using the [SDK](/docs/reference/sdk/).

Takes a list of resources to link.

```js
{
  link: [bucket, stripeKey]
}
```

---

## Linkable

Reference doc for the `sst.Linkable` component.

https://sst.dev/docs/component/linkable

The `Linkable` component and the `Linkable.wrap` method let you link any resource in your app, not just the built-in SST components. They also let you modify the links SST creates.

#### Linking any value

The `Linkable` component takes a list of properties that you want to link. These can be outputs from other resources or constants.

```ts title="sst.config.ts"
new sst.Linkable("MyLinkable", {
  properties: { foo: "bar" }
});
```

You can also use this to combine multiple resources into a single linkable resource. And optionally include permissions or bindings for the linked resource.
```ts title="sst.config.ts"
const bucketA = new sst.aws.Bucket("MyBucketA");
const bucketB = new sst.aws.Bucket("MyBucketB");

const storage = new sst.Linkable("MyStorage", {
  properties: {
    foo: "bar",
    bucketA: bucketA.name,
    bucketB: bucketB.name
  },
  include: [
    sst.aws.permission({
      actions: ["s3:*"],
      resources: [bucketA.arn, bucketB.arn]
    })
  ]
});
```

You can now link this resource to your frontend or a function.

```ts title="sst.config.ts" {3}
new sst.aws.Function("MyApi", {
  handler: "src/lambda.handler",
  link: [storage]
});
```

Then use the [SDK](/docs/reference/sdk/) to access it at runtime.

```js title="src/lambda.ts"
console.log(Resource.MyStorage.bucketA);
```

#### Linking any resource

You can also wrap any Pulumi Resource class to make it linkable.

```ts title="sst.config.ts"
sst.Linkable.wrap(aws.dynamodb.Table, (table) => ({
  properties: { tableName: table.name },
  include: [
    sst.aws.permission({
      actions: ["dynamodb:*"],
      resources: [table.arn]
    })
  ]
}));
```

Now you can create an instance of `aws.dynamodb.Table` and link it in your app like any other SST component.

```ts title="sst.config.ts" {7}
const table = new aws.dynamodb.Table("MyTable", {
  attributes: [{ name: "id", type: "S" }],
  hashKey: "id"
});

new sst.aws.Nextjs("MyWeb", {
  link: [table]
});
```

And use the [SDK](/docs/reference/sdk/) to access it at runtime.

```js title="app/page.tsx"
console.log(Resource.MyTable.tableName);
```

Your function will also have the permissions defined above.

#### Modify built-in links

You can also modify how SST creates links. For example, you might want to change the permissions of a linkable resource.

```ts title="sst.config.ts"
sst.Linkable.wrap(sst.aws.Bucket, (bucket) => ({
  properties: { name: bucket.name },
  include: [
    sst.aws.permission({
      actions: ["s3:GetObject"],
      resources: [bucket.arn]
    })
  ]
}));
```

This overrides the built-in link and lets you create your own.
#### Exposing links as env vars If you want to pass link env vars to compute not managed by SST, like an ECS task definition or a Kubernetes pod, use `Linkable.env()`. It returns an `Output` of `SST_RESOURCE_*` env vars that you can pass to any provider. [Check out an example](/docs/examples/#aws-linkable-env). --- ## Constructor ```ts new Linkable(name, definition) ``` #### Parameters - `name` `string` - `definition` [`Definition`](#definition) ## Properties ### name **Type** `Output` ### properties **Type** `Record` ## Methods ### static env ```ts Linkable.env(links) ``` #### Parameters - `links` `Input` Array of linkable resources. **Returns** `Output>` Convert an array of resources into `SST_RESOURCE_*` environment variables so that `Resource.MyResource` works at runtime inside containers or functions deployed through an external provider. If the provider accepts a flat `Record`, you can pass the output directly. ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new vercel.Project("BadDecision", { name: "bad-decision", environment: sst.Linkable.env([bucket]), }); ``` ### static wrap ```ts Linkable.wrap(cls, cb) ``` #### Parameters - `cls` `Constructor` The resource class to wrap. - `cb` (resource: `Resource`) => [`Definition`](#definition) A callback that returns the definition for the linkable resource. **Returns** `void` Wrap any resource class to make it linkable. Behind the scenes this modifies the prototype of the given class. :::tip Use `Linkable.wrap` to make any resource linkable. ::: Here we are wrapping the [`aws.dynamodb.Table`](https://www.pulumi.com/registry/packages/aws/api-docs/dynamodb/table/) class to make it linkable. 
```ts title="sst.config.ts" Linkable.wrap(aws.dynamodb.Table, (table) => ({ properties: { tableName: table.name }, include: [ sst.aws.permission({ actions: ["dynamodb:*"], resources: [table.arn] }) ] })); ``` This defines the properties that we want to make accessible at runtime and the permissions that the linked resource should have. Now you can link any `aws.dynamodb.Table` instances in your app just like any other SST component. ```ts title="sst.config.ts" {7} const table = new aws.dynamodb.Table("MyTable", { attributes: [{ name: "id", type: "S" }], hashKey: "id", }); new sst.aws.Nextjs("MyWeb", { link: [table] }); ``` Since this applies to any resource, you can also use it to wrap SST components and modify how they are linked. ```ts title="sst.config.ts" "sst.aws.Bucket" sst.Linkable.wrap(sst.aws.Bucket, (bucket) => ({ properties: { name: bucket.name }, include: [ sst.aws.permission({ actions: ["s3:GetObject"], resources: [bucket.arn] }) ] })); ``` This overrides the built-in link and lets you create your own. :::tip You can modify the permissions granted by a linked resource. ::: In the above example, we're modifying the permissions to access a linked `sst.aws.Bucket` in our app. ## Definition ### include? **Type** `(`[`sst.aws.permission`](/docs/component/aws/permission/)` | `[`sst.cloudflare.binding`](/docs/component/cloudflare/binding/)`)[]` Include AWS permissions or Cloudflare bindings for the linkable resource. The linked resource will have these permissions or bindings. Include AWS permissions. ```ts { include: [ sst.aws.permission({ actions: ["lambda:InvokeFunction"], resources: ["*"] }) ] } ``` Include Cloudflare bindings. ```ts { include: [ sst.cloudflare.binding({ type: "r2BucketBindings", properties: { bucketName: "my-bucket" } }) ] } ``` ### properties **Type** `Record<string, any>` Define values that the linked resource can access at runtime. These can be outputs from other resources or constants.
```ts { properties: { foo: "bar" } } ``` --- ## Secret Reference doc for the `sst.Secret` component. https://sst.dev/docs/component/secret The `Secret` component lets you create secrets in your app. Secrets are encrypted and stored in an S3 Bucket in your AWS account. If used in your app config, they'll be encrypted in your state file as well. If used in your function code, they are encrypted and included in the bundle. They are then decrypted synchronously by the SST SDK when your function starts up. #### Create a secret The name of a secret follows the same rules as a component name. It must start with a capital letter and contain only letters and numbers. :::note Secret names must start with a capital letter and contain only letters and numbers. ::: ```ts title="sst.config.ts" const secret = new sst.Secret("MySecret"); ``` #### Set a placeholder You can optionally set a `placeholder`. :::tip Useful for cases where you might use a secret for values that aren't sensitive, so you can just set them in code. ::: ```ts title="sst.config.ts" const secret = new sst.Secret("MySecret", "my-secret-placeholder-value"); ``` #### Set the value of the secret You can then set the value of a secret using the [CLI](/docs/reference/cli/). ```sh title="Terminal" sst secret set MySecret my-secret-value ``` :::note If you are not running `sst dev`, you'll need to `sst deploy` to apply the secret. ::: #### Set a fallback for the secret You can set a _fallback_ value for the secret with the `--fallback` flag. If the secret is not set for a stage, it'll use the fallback value instead. ```sh title="Terminal" sst secret set MySecret my-fallback-value --fallback ``` This is useful for PR environments that are auto-deployed. #### Use the secret in your app config You can now use the secret in your app config. ```ts title="sst.config.ts" console.log(secret.value); ``` This is an [Output](/docs/components#outputs) that can be used as an Input to other components.
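Since the value is an Output, it can also be passed as an Input elsewhere in your config. For example, here's a sketch that sets it as an environment variable on a function; it assumes the `environment` prop on `sst.aws.Function`, and `STRIPE_KEY` is a hypothetical variable name.

```ts title="sst.config.ts"
// STRIPE_KEY is a hypothetical name, used here for illustration.
new sst.aws.Function("MyApi", {
  handler: "src/lambda.handler",
  environment: {
    STRIPE_KEY: secret.value
  }
});
```

That said, linking the secret, as shown next, is usually preferable since it also gives you typesafe access through the SDK.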
#### Link the secret to a resource You can link the secret to other resources, like a function or your Next.js app. ```ts title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { link: [secret] }); ``` Once linked, you can use the secret in your app code. ```ts title="app/page.tsx" console.log(Resource.MySecret.value); ``` --- ## Constructor ```ts new Secret(name, placeholder?) ``` #### Parameters - `name` `string` - `placeholder?` `Input<string>` A placeholder value of the secret. This can be useful for cases where you might not be storing sensitive values. ## Properties ### name **Type** `Output<string>` The name of the secret. ### placeholder **Type** `undefined | Output<string>` The placeholder value of the secret. ### value **Type** `Output<string>` The value of the secret. It'll be `undefined` if the secret has not been set through the CLI or if the `placeholder` hasn't been set. ## SDK Use the [SDK](/docs/reference/sdk/) in your runtime to interact with your infrastructure. --- ### Links This is accessible through the `Resource` object in the [SDK](/docs/reference/sdk/#links). - `value` `string` The value of the secret. It'll be `undefined` if the secret has not been set through the CLI or if the `placeholder` hasn't been set. --- ## Vercel DNS Adapter Reference doc for the `sst.vercel.dns` adapter. https://sst.dev/docs/component/vercel/dns The Vercel DNS Adapter is used to create DNS records to manage domains hosted on [Vercel](https://vercel.com/docs/projects/domains/working-with-domains). :::note You need to [add the Vercel provider](/docs/all-providers#directory) to use this adapter. ::: This adapter is passed in as `domain.dns` when setting a custom domain, where `example.com` is hosted on Vercel. ```ts { domain: { name: "example.com", dns: sst.vercel.dns({ domain: "example.com" }) } } ``` #### Configure provider 1. To use this component, add the `vercel` provider to your app. ```bash sst add vercel ``` 2.
If you don't already have a Vercel Access Token, [follow this guide](https://vercel.com/guides/how-do-i-use-a-vercel-api-access-token#creating-an-access-token) to create one. 3. Add a `VERCEL_API_TOKEN` environment variable with the access token value. If the domain belongs to a team, also add a `VERCEL_TEAM_ID` environment variable with the Team ID. You can find your Team ID inside your team's general project settings in the Vercel dashboard. --- ## Functions ### dns ```ts dns(args) ``` #### Parameters - `args` [`DnsArgs`](#dnsargs) **Returns** `Object` ## DnsArgs ### domain **Type** `Input<string>` The domain name in your Vercel account to create the record in. ```js { domain: "example.com" } ``` ### transform? **Type** `Object` - [`record?`](#transform-record) [Transform](/docs/components#transform) how this component creates its underlying resources. record? **Type** [`DnsRecordArgs`](https://www.pulumi.com/registry/packages/vercel/api-docs/dnsrecord/#inputs)` | (args: `[`DnsRecordArgs`](https://www.pulumi.com/registry/packages/vercel/api-docs/dnsrecord/#inputs)`, opts: `[`ComponentResourceOptions`](https://www.pulumi.com/docs/concepts/options/)`, name: string) => void` Transform the Vercel record resource. --- ## Components Components are the building blocks of your app. https://sst.dev/docs/components Every SST app is made up of components. These are logical units that represent features in your app, like your frontends, APIs, databases, or queues. There are two types of components in SST: 1. Built-in components — High level components built by the SST team 2. Provider components — Low level components from the providers Let's look at them below. --- ## Background Most [providers](/docs/providers/) like AWS are made up of low level resources. And it takes quite a number of these to put together something like a frontend or an API. For example, it takes around 70 low level AWS resources to create a Next.js app on AWS.
As a result, Infrastructure as Code has traditionally only been used by DevOps or platform engineers. To fix this, SST has components that can help you with the most common features in your app. --- ## Built-in The built-in components in SST, the ones you see in the sidebar, are designed to make it really easy to create the various parts of your app. For example, you don't need to know a lot of AWS details to deploy your Next.js frontend: ```ts title="sst.config.ts" new sst.aws.Nextjs("MyWeb"); ``` And because this is all in code, it's straightforward to configure this further. ```ts title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { domain: "my-app.com", path: "packages/web", imageOptimization: { memory: "512 MB" }, buildCommand: "npm run build" }); ``` You can even take this a step further and completely transform how the low level resources are created. We'll look at this below. :::tip Aside from the built-in SST components, all the [Pulumi/Terraform providers](/docs/all-providers#directory) are supported as well. ::: Currently SST has built-in components for two cloud providers. --- ### AWS The AWS built-in components are designed to make it easy to work with AWS. :::tip SST's built-in components make it easy to build apps with AWS. ::: These components are namespaced under **`sst.aws.*`** and listed under AWS in the sidebar. Internally they use Pulumi's [AWS](https://www.pulumi.com/registry/packages/aws/) provider. --- ### Cloudflare These components are namespaced under **`sst.cloudflare.*`** and listed under Cloudflare in the sidebar. Internally they use Pulumi's [Cloudflare](https://www.pulumi.com/registry/packages/cloudflare/) provider. --- ## Constructor To add a component to your app, you create an instance of it by passing in a couple of args. For example, here's the signature of the [Function](/docs/component/aws/function) component.
```ts new sst.aws.Function(name: string, args: FunctionArgs, opts?: pulumi.ComponentResourceOptions) ``` Each component takes the following: - `name`: The name of the component. This needs to be unique across your entire app. - `args`: An object of properties that allows you to configure the component. - `opts?`: An optional object of properties that allows you to configure this component in Pulumi. Here's an example of creating a `Function` component: ```ts title="sst.config.ts" const myFunction = new sst.aws.Function("MyFunction", { handler: "src/lambda.handler" }); ``` --- ### Name There are two guidelines to follow when naming your components: 1. The names of SST's built-in components and components extended with [`Linkable.wrap`](/docs/component/linkable/#static-wrap) need to be globally unique across your entire app. This allows [Resource Linking](linking) to look these resources up at runtime. 2. Optionally, use PascalCase for the component name. For example, you might name your bucket `MyBucket` and use Resource Linking to look it up with `Resource.MyBucket`. However this is purely cosmetic. You can use kebab case. So `my-bucket`, and look it up using `Resource['my-bucket']`. --- ### Args Each component takes a set of args that allow you to configure it. These args are specific to each component. For example, the Function component takes [`FunctionArgs`](/docs/component/aws/function#functionargs). Most of these args are optional, meaning that most components need very little configuration to get started. Typically, the most common configuration options are lifted to the top-level. To further configure the component, you'll need to use the `transform` prop. Args usually take primitive types. However, they also take a special version of a primitive type. It'll look something like _`Input<string>`_. We'll look at this in detail below. --- ## Transform Most components take a `transform` prop as a part of their constructor or methods.
It's an object that takes callbacks that allow you to transform how that component's infrastructure is created. :::tip You can completely configure a component using the `transform` prop. ::: For example, here's what the `transform` prop looks like for the [Function](/docs/component/aws/function#transform) component: - `function`: A callback to transform the underlying Lambda function - `logGroup`: A callback to transform the Lambda's LogGroup resource - `role`: A callback to transform the role that the Lambda function assumes The type for these callbacks is similar. Here's what the `role` callback looks like: ```ts RoleArgs | (args: RoleArgs, opts: pulumi.ComponentResourceOptions, name: string) => void ``` This takes either: - A `RoleArgs` object. For example: ```ts { transform: { role: { name: "MyRole" } } } ``` This is **merged** with the original `RoleArgs` that were going to be passed to the component. - A function that takes `RoleArgs`. Here's the function signature: ```ts (args: RoleArgs, opts: pulumi.ComponentResourceOptions, name: string) => void ``` Where [`args`, `opts`, and `name`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/#constructor-syntax) are the arguments for the Role constructor passed to Pulumi. So you can pass in a callback that takes the current `RoleArgs` and mutates it. ```ts { transform: { role: (args, opts) => { args.name = `${args.name}-MyRole`; opts.retainOnDelete = true; } } } ``` --- ### `$transform` Similar to the component transform, we have the global `$transform`. This allows you to transform how a component of a given type is created. :::tip Set default props across all your components with `$transform`. ::: For example, set a default `runtime` for your functions. 
```ts title="sst.config.ts" $transform(sst.aws.Function, (args, opts) => { // Set the default if it's not set by the component args.runtime ??= "nodejs18.x"; }); ``` This sets the runtime for any `Function` component that'll be **created after this call**. The reason we do the check for `args.runtime` is to allow components to override the default. We do this by only setting the default if the component isn't specifying its own `runtime`. ```ts title="sst.config.ts" new sst.aws.Function("MyFunctionA", { handler: "src/lambdaA.handler" }); new sst.aws.Function("MyFunctionB", { handler: "src/lambdaB.handler", runtime: "nodejs20.x" }); ``` So given the above transform, `MyFunctionA` will have a runtime of `nodejs18.x` and `MyFunctionB` will have a runtime of `nodejs20.x`. :::note The `$transform` is only applied to components that are defined after it. ::: The `args` and `opts` in the `$transform` callback are what you'd pass to the `Function` component. Recall the signature of the `Function` component: ```ts title="sst.config.ts" new sst.aws.Function(name: string, args: FunctionArgs, opts?: pulumi.ComponentResourceOptions) ``` Read more about the global [`$transform`](/docs/reference/global/#transform). --- ## Properties An instance of a component exposes a set of properties. For example, the `Function` component exposes the following [properties](/docs/component/aws/function#properties) — `arn`, `name`, `url`, and `nodes`. ```ts const functionArn = myFunction.arn; ``` These can be used to output info about your app or can be used as args for other components. These are typically primitive types. However, they can also be a special version of a primitive type. It'll look something like _`Output<string>`_. We'll look at this in detail below. --- ### Links Some of these properties are also made available via [resource linking](/docs/linking/). This allows you to access them in your functions and frontends in a typesafe way.
For example, a Function exposes its `name` through its [links](/docs/component/aws/function/#links). --- ### Nodes The `nodes` property that a component exposes gives you access to the underlying infrastructure. This is an object that contains references to the underlying Pulumi components that are created. :::tip The nodes that are made available reflect the ones that can be configured using the `transform` prop. ::: For example, the `Function` component exposes the following [nodes](/docs/component/aws/function#nodes) — `function`, `logGroup`, and `role`. --- ## Outputs The properties of a component are typically of a special type that looks something like _`Output<string>`_. These are values that are not available yet and will be resolved as the deploy progresses. However, these outputs can be used as args in other components. This makes it so that parts of your app are not blocked and all your resources are deployed as concurrently as possible. For example, let's create a function with a url. ```ts title="sst.config.ts" const myFunction = new sst.aws.Function("MyFunction", { url: true, handler: "src/lambda.handler" }); ``` Here, `myFunction.url` is of type `Output<string>`. We want to use this function url as a route in our router. ```ts {3} title="sst.config.ts" new sst.aws.Router("MyRouter", { routes: { "/api": myFunction.url } }); ``` The route arg takes `Input<string>`, which means it can take a string or an output. This creates a dependency internally. So the router will be deployed after the function has been. However, other components that are not dependent on this function can be deployed concurrently. You can read more about [Input and Output types on the Pulumi docs](https://www.pulumi.com/docs/concepts/inputs-outputs/). --- ### Apply Since outputs are values that are yet to be resolved, you cannot use them in regular operations. You'll need to resolve them first. For example, let's take the function url from above. We cannot do the following.
```ts title="sst.config.ts" const newUrl = myFunction.url + "/foo"; ``` This is because the value of the output is not known at the time of this operation. We'll need to resolve it. The easiest way to work with an output is using `.apply`. It'll allow you to apply an operation on the output and return a new output. ```ts title="sst.config.ts" const newUrl = myFunction.url.apply((value) => value + "/foo"); ``` In this case, `newUrl` is also an `Output`. --- ### Helpers To make it a little easier to work with outputs, we have the following global helper functions. --- #### `$concat` This lets you do. ```ts title="sst.config.ts" const newUrl = $concat(myFunction.url, "/foo"); ``` Instead of the apply. ```ts title="sst.config.ts" const newUrl = myFunction.url.apply((value) => value + "/foo"); ``` Read more about [`$concat`](/docs/reference/global/#concat). --- #### `$interpolate` This lets you do. ```ts title="sst.config.ts" const newUrl = $interpolate`${myFunction.url}/foo`; ``` Instead of the apply. ```ts title="sst.config.ts" const newUrl = myFunction.url.apply((value) => value + "/foo"); ``` Read more about [`$interpolate`](/docs/reference/global/#interpolate). --- #### `$jsonParse` This is for outputs that are JSON strings. So instead of doing this. ```ts title="sst.config.ts" const policy = policyStr.apply((policy) => JSON.parse(policy) ); ``` You can. ```ts title="sst.config.ts" const policy = $jsonParse(policyStr); ``` Read more about [`$jsonParse`](/docs/reference/global/#jsonParse). --- #### `$jsonStringify` Similarly, for outputs that are JSON objects. Instead of doing a stringify after an apply. ```ts title="sst.config.ts" const policy = policyObj.apply((policy) => JSON.stringify(policy) ); ``` You can. ```ts title="sst.config.ts" const policy = $jsonStringify(policyObj); ``` Read more about [`$jsonStringify`](/docs/reference/global/#jsonStringify). 
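To build intuition for why `.apply` returns another output instead of a plain value, here's a minimal stand-in that models an output as a value resolving later. This is an illustration only, not Pulumi's actual implementation.

```ts
// FakeOutput is an illustrative model of an output: a value that resolves
// later, where .apply queues a transformation and returns a new "output".
class FakeOutput<T> {
  constructor(private value: Promise<T>) {}
  apply<U>(fn: (value: T) => U): FakeOutput<U> {
    return new FakeOutput(this.value.then(fn));
  }
  promise(): Promise<T> {
    return this.value;
  }
}

const url = new FakeOutput(Promise.resolve("https://example.com"));
const newUrl = url.apply((value) => value + "/foo");
newUrl.promise().then((value) => console.log(value)); // "https://example.com/foo"
```

Pulumi's real `Output` additionally tracks dependencies between resources, which is how SST knows to deploy the router above only after the function.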
--- #### `$resolve` And finally when you are working with a list of outputs and you want to resolve them all together. ```ts title="sst.config.ts" $resolve([bucket.name, worker.url]).apply(([bucketName, workerUrl]) => { console.log(`Bucket: ${bucketName}`); console.log(`Worker: ${workerUrl}`); }) ``` Read more about [`$resolve`](/docs/reference/global/#resolve). --- ## Versioning SST components evolve over time, sometimes introducing breaking changes. To maintain backwards compatibility, we implement a component versioning scheme. For example, we released a new version of the [`Vpc`](/docs/component/aws/vpc) that does not create a NAT Gateway by default. To roll this out, the previous version of the `Vpc` component was renamed to [`Vpc.v1`](/docs/component/aws/vpc-v1). Now if you were using the original `Vpc` component, update SST, and deploy; you'll get an error during the deploy saying that there's a new version of this component. This allows you to decide what you want to do with this component. --- #### Continue with the old version If you prefer to continue using the older version of a component, you can rename it. ```diff title="sst.config.ts" lang="ts" - const vpc = new sst.aws.Vpc("MyVpc"); + const vpc = new sst.aws.Vpc.v1("MyVpc"); ``` Now if you deploy again, SST knows that you want to stick with the old version and it won't error. --- #### Update to the latest version Instead, if you want to update to the latest version, you'll have to rename the component. ```diff title="sst.config.ts" lang="ts" - const vpc = new sst.aws.Vpc("MyVpc"); + const vpc = new sst.aws.Vpc("MyNewVpc"); ``` Now if you redeploy, it'll remove the previously created component and recreate it with the new name and the latest version. This is because from SST's perspective it looks like the `MyVpc` component was removed and a new component called `MyNewVpc` was added. :::caution Removing and recreating components may cause temporary downtime in your app.
::: Since these are being recreated, you have to be aware that there might be a period of time when that resource is not around. This might cause some downtime, depending on the resource. --- ## Configure a Router Create a shared CloudFront distribution for your entire app. https://sst.dev/docs/configure-a-router You can set [custom domains](/docs/custom-domains) on components like your frontends, APIs, or services. Each of these creates its own CloudFront distribution. But as your app grows you might: 1. Have multiple frontends, like a landing page, or a docs site, etc. 2. Want to serve resources from different paths of the same domain, like `/docs` or `/api`. 3. Want to set up preview environments on subdomains. Also since CloudFront distributions can take 15-20 minutes to deploy, creating new distributions for each of the components, and for each stage, can really impact how long it takes to deploy your app. :::tip The `Router` lets you create and share a single CloudFront distribution for your entire app. ::: The ideal setup here is to create a single CloudFront distribution for your entire app and share that across components and across stages. Let's look at how to do this with the `Router` component. --- #### A sample app To demo this, let's say you have the following components in your app. ```ts title="sst.config.ts" // Frontend const web = new sst.aws.Nextjs("MyWeb", { path: "packages/web" }); // API const api = new sst.aws.Function("MyApi", { url: true, handler: "packages/functions/api.handler" }); // Docs const docs = new sst.aws.Astro("MyDocs", { path: "packages/docs" }); ``` This has a frontend, a docs site, and an API. In production we'd like to have: - `example.com` serve `MyWeb` - `example.com/api` serve `MyApi` - `docs.example.com` serve `MyDocs` We'll create a Router for production.
In our dev stage we'd like to have: - `dev.example.com` serve `MyWeb` - `dev.example.com/api` serve `MyApi` - `docs.dev.example.com` serve `MyDocs` For our PR stages or preview environments we'd like to have: - `pr-123.dev.example.com` serve `MyWeb` - `pr-123.dev.example.com/api` serve `MyApi` - `docs-pr-123.dev.example.com` serve `MyDocs` We'll create a separate Router for the dev stage and share it across all the PR stages. We are doing `docs-pr-123.dev.` instead of `docs.pr-123.dev.` because of a limitation with custom domains in CloudFront that we'll look at below. Let's set this up. --- ## Add a router Instead of adding custom domains to each component, let's add a `Router` to our app with the domain we are going to use in production. ```ts title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: { name: "example.com", aliases: ["*.example.com"] } }); ``` The `*.example.com` alias is because we want to route to the `docs.` subdomain. And use that in our components. ```diff lang="ts" title="sst.config.ts" // Frontend const web = new sst.aws.Nextjs("MyWeb", { path: "packages/web", + router: { + instance: router + } }); // API const api = new sst.aws.Function("MyApi", { handler: "packages/functions/api.handler", + url: { + router: { + instance: router, + path: "/api" + } + } }); // Docs const docs = new sst.aws.Astro("MyDocs", { path: "packages/docs", + router: { + instance: router, + domain: "docs.example.com" + } }); ``` Next, let's configure the dev stage. --- ## Stage based domains Since we also want to configure domains for our dev stage, let's add a function that returns the domain we want, based on the stage. ```ts title="sst.config.ts" const domain = $app.stage === "production" ? "example.com" : $app.stage === "dev" ? "dev.example.com" : undefined; ``` Now when we deploy the dev stage, we'll create a new `Router` with our dev domain. 
```diff lang="ts" title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: { - name: "example.com", - aliases: ["*.example.com"] + name: domain, + aliases: [`*.${domain}`] } }); ``` And update the `MyDocs` component to use this. ```diff lang="ts" title="sst.config.ts" // Docs const docs = new sst.aws.Astro("MyDocs", { path: "packages/docs", router: { instance: router, - domain: "docs.example.com" + domain: `docs.${domain}` } }); ``` --- ## Preview environments Currently, we create a new CloudFront distribution for dev and production. But we want to **share the same distribution from dev** in our PR stages. --- ### Share the router To do that, let's modify how we create the `Router`. ```diff lang="ts" title="sst.config.ts" - const router = new sst.aws.Router("MyRouter", { + const router = isPermanentStage ? new sst.aws.Router("MyRouter", { domain: { name: domain, aliases: [`*.${domain}`] } }) + : sst.aws.Router.get("MyRouter", "A2WQRGCYGTFB7Z"); ``` The `A2WQRGCYGTFB7Z` is the ID of the Router distribution created in the dev stage. You can look this up in the SST Console or output it when you deploy your dev stage. ```ts title="sst.config.ts" return { router: router.distributionID }; ``` We are also defining `isPermanentStage`. This is set to `true` if the stage is `dev` or `production`. ```ts title="sst.config.ts" const isPermanentStage = ["production", "dev"].includes($app.stage); ``` Let's also update our `domain` helper. ```diff lang="ts" title="sst.config.ts" const domain = $app.stage === "production" ? "example.com" : $app.stage === "dev" ? "dev.example.com" - : undefined; + : `${$app.stage}.dev.example.com`; ``` Since the domain alias for the dev stage is set to `*.dev.example.com`, it can match `pr-123.dev.example.com`. But not `docs.pr-123.dev.example.com`. This is a limitation of CloudFront. --- ### Nested subdomains So we'll be using `docs-pr-123.dev.example.com` instead. :::note Nested wildcard domain patterns are not supported.
::: To do this, let's add a helper function. ```ts title="sst.config.ts" function subdomain(name: string) { if (isPermanentStage) return `${name}.${domain}`; return `${name}-${domain}`; } ``` This will add the `-` for our PR stages. Let's update our `MyDocs` component to use this. ```diff lang="ts" title="sst.config.ts" // Docs const docs = new sst.aws.Astro("MyDocs", { path: "packages/docs", router: { instance: router, - domain: `docs.${domain}` + domain: subdomain("docs") } }); ``` --- ## Wrapping up And that's it! We've now configured our router to serve our entire app. Here's what the final config looks like. ```ts title="sst.config.ts" const isPermanentStage = ["production", "dev"].includes($app.stage); const domain = $app.stage === "production" ? "example.com" : $app.stage === "dev" ? "dev.example.com" : `${$app.stage}.dev.example.com`; function subdomain(name: string) { if (isPermanentStage) return `${name}.${domain}`; return `${name}-${domain}`; } const router = isPermanentStage ? new sst.aws.Router("MyRouter", { domain: { name: domain, aliases: [`*.${domain}`] } }) : sst.aws.Router.get("MyRouter", "A2WQRGCYGTFB7Z"); // Frontend const web = new sst.aws.Nextjs("MyWeb", { path: "packages/web", router: { instance: router } }); // API const api = new sst.aws.Function("MyApi", { handler: "packages/functions/api.handler", url: { router: { instance: router, path: "/api" } } }); // Docs const docs = new sst.aws.Astro("MyDocs", { path: "packages/docs", router: { instance: router, domain: subdomain("docs") } }); ``` Our components are all sharing the same CloudFront distribution. We also have our PR stages sharing the same router as our dev stage. --- ## Console Manage and monitor your apps with the SST Console. 
https://sst.dev/docs/console The Console is a web-based dashboard to manage your SST apps — [**console.sst.dev**](https://console.sst.dev) With it, you and your team can see all your apps, their **resources** and **updates**, **view logs**, **get alerts** on any issues, and **_git push to deploy_** them. :::tip The Console is completely optional and comes with a free tier. ::: --- ## Get started Start by creating an account and connecting your AWS account. :::note Currently the Console only supports apps **deployed to AWS**. ::: 1. **Create an account with your email** It's better to use your work email so that you can invite your team to your workspace later — [**console.sst.dev**](https://console.sst.dev) 2. **Create a workspace** You can add your apps and invite your team to a workspace. A workspace can be for a personal project or for your team at work. You can create as many workspaces as you want. :::tip Create a workspace for your organization. You can use it to invite your team and connect all your AWS accounts. ::: 3. **Connect your AWS account** This will ask you to create a CloudFormation stack in your AWS account. Make sure that this stack is being added to **us-east-1**. Scroll down and click **Create stack**. :::caution The CloudFormation stack needs to be created in **us-east-1**. If you create it in the wrong region by mistake, remove it and create it again. ::: This stack will scan all the regions in your account for SST apps and subscribe to them. Once created, you'll see all your apps, stages, and the functions in the apps. If you are connecting a newly created AWS account, you might run into the following error while creating the stack. > Resource handler returned message: "Specified ReservedConcurrentExecutions for function decreases account's UnreservedConcurrentExecution below its minimum value" This happens because AWS has been limiting the concurrency of Lambda functions for new accounts.
It's a good idea to increase this limit before you go to production anyway. To do so, you can [request a quota increase](https://repost.aws/knowledge-center/lambda-concurrency-limit-increase) to the default value of 1000. You can also do the following to expedite the request.
If you want to expedite the request: 1. Submit the request. 2. Click the **Quota request history** link in the sidebar. 3. Click on **AWS Support Center Case** to open your request case details. 4. Hit the **Reply** button and select **Chat** to chat with an AWS representative to expedite it.
4. **Invite your team** Use the email address of your teammates to invite them. They just need to log in with the email you've used and they'll be able to join your workspace. --- ## How it works At a high level, here's how the Console works. - It's hosted on our side. It stores some metadata about what resources you have deployed. We'll have a version that can be self-hosted in the future. - You can view all your apps and stages. Once you've connected your AWS accounts, it'll deploy a separate CloudFormation stack and connect to any SST apps in it. And all your apps and stages will show up automatically. - It's open-source and built with SST. The Console is an SST app. You can view the [source on GitHub](https://github.com/sst/console). It's also auto-deployed using itself. --- ## Security The CloudFormation stack that the Console uses creates an IAM Role in your account to manage your resources. If this is a concern for your production environments, we have a couple of options. By default, this role is granted `AdministratorAccess`, but you can customize it to restrict access. We'll look at this below. Additionally, if you'd like us to sign a BAA, feel free to [contact us][contact-us]. There may be cases where you don't want any data leaving your AWS account. For this, we'll be supporting self-hosting the Console in the future. --- #### IAM permissions Permissions for the Console fall into two categories: read and write. - **Read Permissions**: The Console needs specific permissions to display information about resources within your SST apps. | Purpose | AWS IAM Action | |----------------------------------------|----------------------------------| | Fetch stack outputs | `cloudformation:DescribeStacks` | | Retrieve function runtime and size | `lambda:GetFunction` | | Access stack metadata | `ec2:DescribeRegions`<br/>`s3:GetObject`<br/>`s3:ListBucket` | | Display function logs | `logs:DescribeLogStreams`<br/>`logs:FilterLogEvents`<br/>`logs:GetLogEvents`<br/>`logs:StartQuery` | | Monitor invocation usage | `cloudwatch:GetMetricData` | Attach the `arn:aws:iam::aws:policy/ReadOnlyAccess` AWS managed policy to the IAM Role for comprehensive read access. - **Write Permissions**: The Console requires the following write permissions. | Purpose | AWS IAM Action | |-----------------------------------------------------|------------------------------------------------------------------------------| | Forward bootstrap bucket events to event bus | `s3:PutBucketNotification` | | Send events to Console | `events:PutRule`<br/>`events:PutTargets` | | Grant event bus access for Console | `iam:CreateRole`<br/>`iam:DeleteRole`<br/>`iam:DeleteRolePolicy`<br/>`iam:PassRole`<br/>`iam:PutRolePolicy` | | Enable Issues to subscribe logs | `logs:CreateLogGroup`<br/>`logs:PutSubscriptionFilter` | | Invoke Lambda functions and replay invocations | `lambda:InvokeFunction` | It's good practice to periodically review and update these policies. --- #### Customize policy To customize IAM permissions for the CloudFormation stack: 1. On the CloudFormation create stack page, download the default `template.json`. 2. Edit the template file with necessary changes.
_View the template changes_

```diff title="template.json"
 "SSTRole": {
   "Type": "AWS::IAM::Role",
   "Properties": {
     ...
     "ManagedPolicyArns": [
-      "arn:aws:iam::aws:policy/AdministratorAccess"
+      "arn:aws:iam::aws:policy/ReadOnlyAccess"
+    ],
+    "Policies": [
+      {
+        "PolicyName": "SSTPolicy",
+        "PolicyDocument": {
+          "Version": "2012-10-17",
+          "Statement": [
+            {
+              "Effect": "Allow",
+              "Action": [
+                "s3:PutBucketNotification"
+              ],
+              "Resource": [
+                "arn:aws:s3:::sstbootstrap-*"
+              ]
+            },
+            {
+              "Effect": "Allow",
+              "Action": [
+                "events:PutRule",
+                "events:PutTargets"
+              ],
+              "Resource": {
+                "Fn::Sub": "arn:aws:events:*:${AWS::AccountId}:rule/SSTConsole*"
+              }
+            },
+            {
+              "Effect": "Allow",
+              "Action": [
+                "iam:CreateRole",
+                "iam:DeleteRole",
+                "iam:DeleteRolePolicy",
+                "iam:PassRole",
+                "iam:PutRolePolicy"
+              ],
+              "Resource": {
+                "Fn::Sub": "arn:aws:iam::${AWS::AccountId}:role/SSTConsolePublisher*"
+              }
+            },
+            {
+              "Effect": "Allow",
+              "Action": [
+                "logs:CreateLogGroup",
+                "logs:PutSubscriptionFilter"
+              ],
+              "Resource": {
+                "Fn::Sub": "arn:aws:logs:*:${AWS::AccountId}:log-group:*"
+              }
+            },
+            {
+              "Effect": "Allow",
+              "Action": [
+                "lambda:InvokeFunction"
+              ],
+              "Resource": {
+                "Fn::Sub": "arn:aws:lambda:*:${AWS::AccountId}:function:*"
+              }
+            }
+          ]
+        }
+      }
     ]
   }
 }
```
3. Upload your edited `template.json` file to an S3 bucket.
4. Return to the CloudFormation create stack page and replace the template URL in the page URL.

---

## Pricing

[Starting Feb 1, 2025](/blog/console-pricing-update), the Console will be priced based on the number of active resources in your SST apps.

| Resources | Rate per resource |
|-----------|-------------------|
| First 2000 | $0.086 |
| 2000+ | $0.032 |

**Free Tier**: Workspaces with 350 active resources or fewer.

So for example, if you go over the free tier and have 351 active resources in a month, your bill will be 351 x $0.086 = $30.2.

A couple of things to note.

- These are calculated for a given workspace every month.
- A resource is what SST creates in your cloud provider. [Learn more below](#faq).
- You can always access personal stages, even if you're above the free tier.
- A resource is considered active if it comes from a stage:
  - That has been around for at least 2 weeks.
  - And, was updated during the month.
- For volume pricing, feel free to [contact us][contact-us].

[Learn more in the FAQ](#faq).

---

##### Active resources

A resource is considered active if it comes from a stage that has been around for at least 2 weeks. And, was updated during the month.

Let's look at a few different scenarios to see how this works.

- A stage that was created 5 months ago and was deployed this month, is active.
- A stage that was created 5 months ago but was not deployed this month, is not active.
- A stage that was created 12 days ago, is not active.
- A stage that was created 20 days ago and was removed 10 days ago, is not active.
- A stage that was created 5 months ago, deployed this month, then removed this month, is active.
- A stage created 5 months ago, was not deployed this month, and removed this month, is not active.

---

#### Old pricing

Previously, the Console pricing was based on the number of times the Lambda functions in your SST apps are invoked per month and it used the following tiers.
| Invocations | Rate (per invocation) |
|-------------|------|
| First 1M | Free |
| 1M - 10M | $0.00002 |
| 10M+ | $0.000002 |

- These are calculated for a given workspace on a monthly basis.
- This does not apply to personal stages, they'll be free forever.
- There's also a soft limit for Issues on all accounts.
- For volume pricing, feel free to [contact us][contact-us].

---

## Features

Here are a few of the things the Console does for you.

1. [**Logs**](#logs): View logs from any log group in your app
2. [**Issues**](#issues): Get real-time alerts for any errors in your app
3. [**Local logs**](#local-logs): View logs from your local `sst dev` session
4. [**Updates**](#updates): View the details of every update made to your app
5. [**Resources**](#resources): View all the resources in your app and their props
6. [**Autodeploy**](#autodeploy): Auto-deploy your app when you _git push_ to your repo

---

### Logs

With the Console, you don't need to go to CloudWatch to look at the logs for your functions, containers, and other log groups. You can:

- View recent logs
- Jump to a specific time
- Search for logs with a given string

---

### Issues

The Console will automatically show you any errors in your Node.js Lambda functions and containers in real-time. And notify you through Slack or email.

With Issues, there is:

- **Nothing to setup**, no code to instrument
- **Source maps** are supported **automatically**
- **No impact on performance**, since your code isn't modified

:::note
Issues works out of the box and has no impact on performance.
:::

Issues currently only supports Node.js functions and containers. Other languages and runtimes are on the roadmap.

---

#### Error detection

For the Console to automatically report your errors, you need to `console.error` an error object.

```js title="src/index.ts"
console.error(new Error("my-error"));
```

This works a little differently for containers and functions.
- **Containers**

  In container applications, your code needs to also import the [SST JS SDK](/docs/reference/sdk/).

  ```js title="src/index.ts" {1}
  import "sst";

  console.error(new Error("my-error"));
  ```

  This applies a polyfill to the `console` object to prepend the log lines with a marker that allows Issues to detect errors. [More on this below](#how-it-works-1).

  If you are already importing the SDK, you won't need to add an additional import.

- **Functions**

  In addition to errors logged using `console.error(new Error("my-error"))`, Issues also reports Lambda function failures.

  ```js title="src/lambda.ts"
  console.error(new Error("my-error"));
  ```

  In Lambda you don't need to import the SDK to polyfill the `console` object, since the Lambda runtime does this automatically for you.

---

#### How it works

Here's how Issues works behind the scenes.

1. When an app is deployed or when an account is first synced, we add a log subscriber to the CloudWatch Log groups in your SST apps.
   - This is added to your AWS account and includes a Lambda function. More on this below.
2. If the subscriber filter matches anything that looks like an error, it invokes the Lambda function.
   - In case of errors from a Lambda function, the Lambda runtime automatically adds a marker to the logs that the filter matches for.
   - For containers, the SST SDK polyfills the `console` object to add the marker.
3. The Lambda function tries to parse the error. If the error comes from a Lambda function, it fetches the source maps from the state bucket in your account.
4. It then hits an endpoint in the SST Console and passes in that error.
5. Finally, the Console groups similar looking errors together and displays them.

---

#### Log subscriber

The log subscriber also includes the following:

1. **Lambda function** that'll be invoked when a log with an error is matched.
   - This function has a max concurrency set to 10.
   - If it falls behind on processing by over 10 minutes, it'll discard the logs.
   - This prevents it from scaling indefinitely when there's a burst of errors.
   - This also means that if there are a lot of errors, the alerts might be delayed by up to 10 minutes.
2. **IAM role** that gives it access to query the logs and the state bucket for the source maps.
3. **Log group** with a 1 day retention.

These are added to **every region** in your AWS account that has a CloudWatch log group from your SST apps. It's deployed using a CloudFormation stack.

This process of adding a log subscriber might fail if we:

- Don't have enough permissions. In this case, update the permissions that you've granted to the Console.
- Hit the limit on the number of subscribers; there's a maximum of 2 subscribers. To fix this, you can remove one of the existing subscribers.

You can see these errors in the Issues tab. Once you've fixed these issues, you can hit **Retry** and it'll try attaching the subscriber again.

---

#### Costs

AWS will bill you for the Lambda function log subscriber that's in your account. This is usually fairly minimal.

Even if your apps are generating an infinite number of errors, the Lambda function is limited to a concurrency of 10. So the **maximum** you'll be charged is $43 x 10 = **$430 per month x # of regions** that are being monitored.

You can also disable Issues from your workspace settings, if you are using a separate service for monitoring.

[Learn more about Lambda pricing](https://aws.amazon.com/lambda/pricing/).

---

### Updates

Each update in your app also gets a unique URL, a **_permalink_**. This is printed out by the SST CLI.

```bash title="sst deploy"
↗  Permalink   https://sst.dev/u/318d3879
```

You can view these updates in the Console. Each update shows:

1. Full list of **all the resources** that were modified
2. Changes in their **inputs and outputs**
3. Any Docker or site **build logs**
4. **CLI command** that triggered the update
5.
**Git commit**, if it was an auto-deploy

The permalink is useful for sharing with your team and debugging any issues with your deploys.

The CLI updates your [state](/docs/state/) with the event log from each update and generates a globally unique id. If your AWS account is connected to the Console, it'll pull the state and event log to generate the details for the update permalink.

When you visit the permalink, the Console looks up the id of the update and redirects you to the right app in your workspace.

---

### Resources

The Console shows you the complete [state of the resources](/docs/state/) in your app. You can view:

1. Each resource in your app
2. The relation between resources
3. The outputs of a given resource

---

### Autodeploy

The Console can auto-deploy your apps when you _git push_ to your GitHub repo. Autodeploy uses [AWS CodeBuild](https://aws.amazon.com/codebuild/) in your account to run the build.

We designed Autodeploy to be a better fit for SST apps when compared to alternatives like GitHub Actions or CircleCI.

1. **Easy to get started**
   - Autodeploy supports the standard branch and PR workflow out of the box. You don't need a config file to get started.
   - There are no complicated steps in configuring your AWS credentials, since your AWS account is already connected to the Console.
2. **Configurable**
   - You can configure how Autodeploy works directly through your `sst.config.ts`.
   - It's typesafe and the callbacks let you customize how to respond to incoming git events.
3. **Runs in your AWS account**
   - The builds are run in your AWS account.
   - It can also be configured to run in your VPC. This is useful if your builds need to access private resources.
4. **Integrates with the Console**
   - You can see which resources were updated in a deploy.
   - Your resource updates will also show you the related git commit.

---

#### Setup

To get started with Autodeploy:

1.
**Enable the GitHub integration**

   Head over to your **Workspace settings** > **Integrations** and enable GitHub. This will ask you to log in to GitHub and pick the GitHub organization or user you want to link to.

   :::tip
   You can only associate your workspace with a single GitHub org.
   :::

   If you have multiple GitHub orgs, you can create multiple workspaces in the Console.

2. **Connect a repo**

   To auto-deploy an app, head over to the **App's Settings** > **Autodeploy** and select the repo for the app.

3. **Configure an environment**

   Next you can configure a branch or PR environment by selecting the **stage** you want deployed to an **AWS account**. You can optionally configure **environment variables** as well.

   :::note
   Stage names by default are generated based on the branch or PR.
   :::

   By default, stages are based on the branch name or PR. We'll look at this in detail below.

4. **Git push**

   Finally, _git push_ to the environment you configured and head over to your app's **Autodeploy** tab to see it in action.

   :::note
   PR stages are removed when the PR is closed while branch stages are not.
   :::

   For example, if you configure a branch environment for the stage `production`, any git pushes to the `production` branch will be auto-deployed. Similarly, if you create a new PR, say PR#12, the Console will auto-deploy a stage called `pr-12`.

   You can also manually trigger a deployment through the Console by passing in a Git ref and the stage you want to deploy to.

5. **Setup alerts**

   Once your deploys are working, you can set the Console to send alerts for your deploys. Head over to your **Workspace Settings** > **Alerts** and add a new alert to be notified on any Autodeploys, or only on Autodeploy errors.

:::tip
You can configure how Autodeploy works through your `sst.config.ts`.
:::

While Autodeploy supports the standard branch and PR workflow out of the box, it can be configured in depth through your `sst.config.ts`.
---

#### Configure

The above can be configured through the [`console.autodeploy`](/docs/reference/config/#console-autodeploy) option in the `sst.config.ts`.

```ts title="sst.config.ts" {7-15}
// Your app's config
app(input) { },
// Your app's resources
async run() { },
// Your app's Console config
console: {
  autodeploy: {
    target(event) {
      if (event.type === "branch" && event.branch === "main" && event.action === "pushed") {
        return { stage: "production" };
      }
    }
  }
}
});
```

In the above example we are using the `console.autodeploy.target` option to change the stage that's tied to a git event. Here, only git pushes to the `main` branch auto-deploy to the `production` stage.

This works because if `target` returns `undefined`, the deploy is skipped. And if you provide your own `target` callback, it overrides the default behavior.

:::tip
You can use the git events to configure how your app is auto-deployed.
:::

Through the `console.autodeploy.runner` option, you can configure the runner that's used. For example, if you wanted to increase the timeout to 2 hours, you can.

```ts title="sst.config.ts"
console: {
  autodeploy: {
    runner: { timeout: "2 hours" }
  }
}
```

This also takes the stage name, so you can set the runner config for a specific stage.

```ts title="sst.config.ts"
console: {
  autodeploy: {
    runner(stage) {
      if (stage === "production") return { timeout: "3 hours" };
    }
  }
}
```

You can also have your builds run inside your VPC.

```ts title="sst.config.ts"
console: {
  autodeploy: {
    runner: {
      vpc: {
        id: "vpc-0be8fa4de860618bb",
        securityGroups: ["sg-0399348378a4c256c"],
        subnets: ["subnet-0b6a2b73896dc8c4c", "subnet-021389ebee680c2f0"]
      }
    }
  }
}
```

Or specify files and directories to be cached.

```ts title="sst.config.ts"
console: {
  autodeploy: {
    runner: {
      cache: {
        paths: ["node_modules", "/path/to/cache"]
      }
    }
  }
}
```

Read more about the [`console.autodeploy`](/docs/reference/config/#console-autodeploy) config.
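Since `target` is just a function of the incoming git event, you can sketch and unit test your mapping logic on its own. Below is a hedged sketch: the branch event shape mirrors the example above, while the pull request shape (`type: "pull_request"` with a `number` field) is an assumption for illustration.

```typescript
// Minimal stand-ins for SST's git event types. The branch shape mirrors the
// config example above; the pull request shape is assumed for illustration.
type BranchEvent = { type: "branch"; branch: string; action: "pushed" | "removed" };
type PullRequestEvent = { type: "pull_request"; number: number };
type GitEvent = BranchEvent | PullRequestEvent;

// Maps a git event to the stage to deploy.
// Returning undefined skips the deploy, as described above.
function target(event: GitEvent): { stage: string } | undefined {
  if (event.type === "branch" && event.branch === "main" && event.action === "pushed") {
    return { stage: "production" };
  }
  if (event.type === "pull_request") {
    return { stage: `pr-${event.number}` };
  }
  return undefined;
}
```

With this, pushes to `main` deploy to `production`, PRs deploy to their own `pr-<number>` stages, and everything else is skipped.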
---

#### Environments

The Console needs to know which account it needs to autodeploy into. You configure this under the **App's Settings** > **Autodeploy**. Each environment takes:

1. **Stage**

   The stage that is being deployed. By default, the stage name comes from the name of the branch. Branch names are sanitized to only letters, numbers, and hyphens. So for example:

   - A push to a branch called `production` will deploy a stage called `production`.
   - A push to PR#12 will deploy to a stage called `pr-12`.

   As mentioned above, you can customize this through your `sst.config.ts`.

   :::tip
   You can specify a pattern to match the stage name in your environments.
   :::

   If multiple stages share the same environment, you can use a glob pattern. For example, `pr-*` matches all stages that start with `pr-`.

2. **AWS Account**

   The AWS account that you are deploying to.

3. **Environment Variables**

   Any environment variables you need for the build process. These are made available under `process.env.*` in your `sst.config.ts`.

---

#### How it works

When you _git push_ to a branch, pull request, or tag, the following happens:

1. The stage name is generated based on the `console.autodeploy.target` callback.
   1. If there is no callback, the stage name is a sanitized version of the branch or tag.
   2. If there is a callback but no stage is returned, the deploy is skipped.
2. The stage is matched against the environments in the Console to get the AWS account and any environment variables for the deploy.
3. The runner config is generated based on the `console.autodeploy.runner`. Or the defaults are used.
4. The deploy is run based on the above config.

This applies only to git events. If you trigger a deploy through the Console, you are asked to specify the stage you want to deploy to. So in this case, it skips step 1 from above and does not call `console.autodeploy.target`.

Both `target` and `runner` are optional and come with defaults, but they can be customized.
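To make the stage naming concrete, here's a sketch of sanitizing a branch name into a stage name and matching the result against an environment's glob pattern. Only "letters, numbers, and hyphens" and `pr-*` style patterns are stated above; the lowercasing, hyphen-collapsing, and exact matching rules here are assumptions for illustration.

```typescript
// Sketch: turn a branch name into a stage name by replacing any run of
// characters that aren't letters or numbers with a hyphen. The lowercasing
// and hyphen-trimming details are assumptions, not SST's exact rules.
function sanitizeStage(branch: string): string {
  return branch
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// Match a stage name against an environment pattern, where `*` matches
// any sequence of characters (e.g. "pr-*" matches "pr-12").
function matchesEnvironment(stage: string, pattern: string): boolean {
  const regex = new RegExp("^" + pattern.split("*").map(escapeRegExp).join(".*") + "$");
  return regex.test(stage);
}

// Escape regex metacharacters in the literal parts of the pattern.
function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}
```

So a push to a branch like `feature/Login_v2` would deploy a stage named `feature-login-v2` under these assumed rules, and a `pr-*` environment would match any `pr-12` style stage.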
---

#### Costs

AWS will bill you for the **CodeBuild build minutes** that are used to run your builds. [Learn more about CodeBuild pricing](https://aws.amazon.com/codebuild/pricing/).

---

### Local logs

When the Console starts up, it checks if you are running `sst dev` locally. If so, it'll show you real-time logs from your local terminal. This works by connecting to a local server that's run as a part of the SST CLI.

:::info
The local server only allows access from `localhost` and `console.sst.dev`.
:::

Local logs work in all browsers and environments. But for certain browsers like Safari or Brave, and for Gitpod, they need some additional configuration.

---

#### Safari & Brave

Certain browsers like Safari and Brave require the local connection between the browser and the `sst dev` CLI to be running on HTTPS. SST can automatically generate a locally-trusted certificate using the [`sst cert`](/docs/reference/cli#cert) command.

```bash
sst cert
```

You'll only need to **run this once** on your machine.

---

#### Gitpod

If you are using [Gitpod](https://www.gitpod.io/), you can use the Gitpod Local Companion app to connect to the `sst dev` process running inside your Gitpod workspace. To get started:

1. [Install the Gitpod Local Companion app](https://www.gitpod.io/blog/local-app#installation)
2. [Run the Companion app](https://www.gitpod.io/blog/local-app#running)
3. Navigate to the Console in the browser

The Companion app runs locally and creates a tunnelled connection to your Gitpod workspace.

---

## FAQ

Here are some frequently asked questions about the Console.

- Do I need to use the Console to use SST?

  You **don't need the Console** to use SST. It complements the CLI and has some features that help with managing your apps in production.

  That said, it is completely free to get started. You can create an account and invite your team, **without** having to add a **credit card**.

- Is there a free tier?
  If your workspace has 350 active resources or fewer for the month, it's considered to be in the free tier. This count also resets every month.

- What happens if I go over the free tier?

  You won't be able to access the _production_ or deployed stages until you add your billing details in the workspace settings.

  Note that you can continue to **access your personal stages**. Just make sure you have `sst dev` running locally. Otherwise the Console won't be able to detect that it's a personal stage.

- What counts as a resource?

  Resources are what SST creates in your cloud provider. This includes the resources created by both SST's built-in components, like `Function`, `Nextjs`, `Bucket`, and the ones created by any other Terraform/Pulumi provider.

  Some components, like `Nextjs` and `StaticSite`, create multiple resources. In general, the more complex the component, the more resources it'll create.

  You can see a [full list of resources](#resources) if you go to an app in your Console and navigate to a stage in it. For some context, the Console is itself a pretty large [SST app](https://github.com/sst/console) and it has around 320 resources.

- Do PR stages also count?

  A stage has to be around for at least 2 weeks before the resources in it are counted as active. So if a PR stage is created and removed within 2 weeks, it doesn't count.

  However, if you remove a stage and create a new one with the same name, it does not reset the 2 week initial period.

---

#### Old pricing FAQ

Here were some frequently asked questions about the old pricing plan for the Console.

- Do I need to switch to the new pricing?

  If you are currently on the old plan, you don't have to switch and you won't be automatically switched over either.

  You can go to the workspace settings and check out how much you'll be billed based on both the plans. To switch over, you can cancel your current plan and then subscribe to the new plan.

  At some point in the future, we'll remove the old plan.
  But there's no specific timeline for it yet.

- Which Lambda functions are included in the number of invocations?

  The number of invocations is only counted for the **Lambda functions in your SST apps**. Other Lambda functions in your AWS accounts are not included.

- Do the functions in my personal stages count as a part of the invocations?

  Lambda functions that are invoked **locally are not included**.

- My invocation volume is far higher than the listed tiers. Are there any other options?

  Feel free to [contact us][contact-us] and we can figure out a pricing plan that works for you.

If you have any further questions, feel free to [send us an email][contact-us].

[contact-us]: mailto:hello@sst.dev

---

## Custom Domains

Configure custom domains in your components.

https://sst.dev/docs/custom-domains

You can configure custom domains and subdomains for your frontends, APIs, services, or routers in SST.

:::note
SST currently supports configuring custom domains for AWS components.
:::

By default, these components auto-generate a URL. You can pass in the `domain` to use your custom domain.

**Frontend**

```ts title="sst.config.ts" {2}
new sst.aws.Nextjs("MyWeb", {
  domain: "example.com"
});
```

**API**

```ts title="sst.config.ts" {2}
new sst.aws.ApiGatewayV2("MyApi", {
  domain: "api.example.com"
});
```

**Service**

```ts title="sst.config.ts" {6}
const vpc = new sst.aws.Vpc("MyVpc");

new sst.aws.Cluster("MyCluster", {
  vpc,
  loadBalancer: {
    domain: "example.com"
  }
});
```

**Router**

```ts title="sst.config.ts" {2}
new sst.aws.Router("MyRouter", {
  domain: "example.com"
});
```

SST supports a couple of DNS providers automatically. These include AWS Route 53, Cloudflare, and Vercel. Other providers will need to be manually configured. We'll look at how it works below.

---

##### Redirect www to apex domain

A common use case is to redirect `www.example.com` to `example.com`.
You can do this by:

```ts title="sst.config.ts" {3,4}
new sst.aws.Router("MyRouter", {
  domain: {
    name: "example.com",
    redirects: ["www.example.com"]
  }
});
```

---

##### Add subdomains

You can add subdomains to your domain. This is useful if you want to use a `Router` to route a subdomain to a separate resource.

```ts title="sst.config.ts" {3,4,11}
const router = new sst.aws.Router("MyRouter", {
  domain: {
    name: "example.com",
    aliases: ["*.example.com"]
  }
});

new sst.aws.Nextjs("MyWeb", {
  router: {
    instance: router,
    domain: "docs.example.com"
  }
});
```

Here if a user visits `docs.example.com`, they'll be kept on the alias domain and be served the docs site.

:::tip
You can use the `Router` component to centrally manage domains and routing for your app. [Learn more](/docs/configure-a-router).
:::

However, this does not match `docs.dev.example.com`. For that, you'll need to add `*.dev.example.com` as an alias.

---

## How it works

Configuring a custom domain is a two step process.

1. Validate that you own the domain. For AWS you do this by [creating an ACM certificate](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) and validating it by:
   - Setting a DNS record with your domain provider.
   - Verifying through an email sent to the domain owner.
2. Add the DNS records to route your domain to your component.

SST can perform these steps automatically for the supported providers through a concept of _adapters_. These create the above DNS records on a given provider.

---

## Adapters

You can use a custom domain hosted on any provider. SST supports domains on AWS, Cloudflare, and Vercel automatically.

---

### AWS

By default, if you set a custom domain, SST assumes the domain is configured in AWS Route 53 in the same AWS account.

```js
{
  domain: {
    name: "example.com"
  }
}
```

This is the same as using the [`sst.aws.dns`](/docs/component/aws/dns/) adapter.
```js
{
  domain: {
    name: "example.com",
    dns: sst.aws.dns()
  }
}
```

If you have the same domain in multiple hosted zones in Route 53, you can specify the hosted zone.

```js {5}
{
  domain: {
    name: "example.com",
    dns: sst.aws.dns({
      zone: "Z2FDTNDATAQYW2"
    })
  }
}
```

If your domains are hosted on AWS but in a separate AWS account, you'll need to follow the [manual setup](#manual-setup).

---

### Vercel

If your domains are hosted on [Vercel](https://vercel.com), you'll need to do the following.

1. [Add the Vercel provider to your app](/docs/component/vercel/dns/#configure-provider).

   ```bash
   sst add vercel
   ```

2. Set the **`VERCEL_API_TOKEN`** in your environment. You might also need to set the `VERCEL_TEAM_ID` if the domain belongs to a team.

   ```bash
   export VERCEL_API_TOKEN=aaaaaaaa_aaaaaaaaaaaa_aaaaaaaa
   ```

3. Use the [`sst.vercel.dns`](/docs/component/vercel/dns/) adapter.

   ```js
   {
     domain: {
       name: "example.com",
       dns: sst.vercel.dns()
     }
   }
   ```

---

### Cloudflare

If your domains are hosted on [Cloudflare](https://developers.cloudflare.com/dns/), you'll need to do the following.

1. Add the Cloudflare provider to your app.

   ```bash
   sst add cloudflare
   ```

2. Set the **`CLOUDFLARE_API_TOKEN`** in your environment.

   ```bash
   export CLOUDFLARE_API_TOKEN=aaaaaaaa_aaaaaaaaaaaa_aaaaaaaa
   export CLOUDFLARE_DEFAULT_ACCOUNT_ID=aaaaaaaa_aaaaaaaaaaaa_aaaaaaaa
   ```

   To get your API tokens, head to the [API Tokens section](https://dash.cloudflare.com/profile/api-tokens) of your Cloudflare dashboard and create one with the **Edit zone DNS** policy.

   The Cloudflare providers need these credentials to deploy your app in the first place, which means they can't be set using the `sst secret` CLI. If you are auto-deploying your app through the [SST Console](/docs/console/#autodeploy) or through your CI, you'll need to set these as environment variables.

3. Use the [`sst.cloudflare.dns`](/docs/component/cloudflare/dns/) adapter.
```js
{
  domain: {
    name: "example.com",
    dns: sst.cloudflare.dns()
  }
}
```

---

## Manual setup

If your domain is on a provider that is not supported above, or is in a separate AWS account, you'll need to verify that you own the domain and set up the DNS records on your own.

To manually set up a domain on an unsupported provider, you'll need to:

1. [Validate that you own the domain](https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html) by creating an ACM certificate. You can either validate it by setting a DNS record or by verifying an email sent to the domain owner.

   :::note
   For CloudFront distributions, the certificate needs to be created in `us-east-1`.
   :::

   If you are configuring a custom domain for a CloudFront distribution, the ACM certificate that's used to prove that you own the domain needs to be created in the `us-east-1` region. For all other components, like ApiGatewayV2 or Cluster, the certificate can be created in any region.

2. Once validated, set the certificate ARN as the `cert` and set `dns` to `false`.

   ```js
   {
     domain: {
       name: "domain.com",
       dns: false,
       cert: "arn:aws:acm:us-east-1:112233445566:certificate/3a958790-8878-4cdc-a396-06d95064cf63"
     }
   }
   ```

3. Add the DNS records in your provider to point to the CloudFront distribution, API Gateway, or load balancer URL.

---

## Enterprise

Everything you need to use SST at your enterprise.

https://sst.dev/docs/enterprise

SST is great for larger teams that have complex policy, security, and compliance requirements.

1. It runs completely on your infrastructure.
2. It's designed to work with enterprise requirements by default.
3. You can transform any component if you have custom needs.
4. You can always drop down and use Pulumi or Terraform within SST if you need to.

Feel free to open an issue on GitHub if you are having any problems with enterprise requirements.

---

## Support

While we support our community through GitHub and Discord, your team might need dedicated support.
We're happy to:

- Set up a shared Slack or Discord channel with your team.
- Fix any critical issues your team is running into.
- Do a call with your team and answer any questions.
- Prioritize any feature requests from your team.
- Share details about the SST roadmap.
- And provide support SLAs.

**Contact us** for further details about our enterprise plans.

---

## Environment Variables

Manage the environment variables in your app.

https://sst.dev/docs/environment-variables

You can manage the environment variables for all the components in your app, across all your stages, through the `sst.config.ts`.

:::tip
You don't need to use `.env` files in SST.
:::

While SST automatically loads your environment variables and `.env` files, we don't recommend relying on them.

---

## Recommended

Typically, you'll use environment variables or `.env` files to share things like database URLs, secrets, or other config. To understand why we don't recommend `.env` files, let's look at each of these in detail.

---

### Links

A very common use case for `.env` is to share something like a database URL across your app. Instead, in SST, you can link the resources together.

```ts title="sst.config.ts" {4}
const rds = new sst.aws.Postgres("MyPostgres");

new sst.aws.Nextjs("MyWeb", {
  link: [rds]
});
```

You can then access the database in your Next.js app with the [JS SDK](/docs/reference/sdk/). For example, with Drizzle:

```ts title="app/page.tsx" {8-10}
import { Resource } from "sst";
import { drizzle } from "drizzle-orm/aws-data-api/pg";
import { RDSDataClient } from "@aws-sdk/client-rds-data";
import * as schema from "./schema"; // your Drizzle schema; path is illustrative

const db = drizzle(new RDSDataClient({}), {
  schema,
  database: Resource.MyPostgres.database,
  secretArn: Resource.MyPostgres.secretArn,
  resourceArn: Resource.MyPostgres.clusterArn
});
```

This has a few key advantages:

1. You don't have to deploy your database separately and then store the credentials in a `.env` file.
2. You don't need to update this for every stage.
3. You don't have to share these URLs with your teammates.

Anybody on your team can just run `sst deploy` on any stage and it'll deploy the app and link the resources.
:::tip
Your team can just `git checkout` and `sst deploy`, without the need for a separate `.env` file.
:::

You can learn more about [linking resources](/docs/linking/).

---

### Secrets

Another common use case for `.env` is to manage secrets across your app. SST has a built-in way to handle secrets.

```ts title="sst.config.ts" {4}
const secret = new sst.Secret("MySecret");

new sst.aws.Nextjs("MyWeb", {
  link: [secret]
});
```

You can set the secret using the `sst secret` CLI.

```bash title="Terminal"
sst secret set MySecret my-secret-value
```

This is far more secure than storing it in a `.env` file and accidentally committing it to Git. Learn more about [secrets](/docs/component/secret).

---

### Other config

Finally, people use `.env` files for some general config. These are often different across stages and are not really sensitive. For example, you might have your `SENTRY_DSN` that's different for dev and prod.

We recommend putting these directly in your `sst.config.ts` instead. And using the right one based on the stage.

```ts title="sst.config.ts"
const SENTRY_DSN = $app.stage !== "prod"
  ? "https://foo@sentry.io/bar"
  : "https://baz@sentry.io/qux";
```

You can also conditionally set it based on if you are running `sst dev` or `sst deploy`.

```ts title="sst.config.ts"
const SENTRY_DSN = $dev === true
  ? "https://foo@sentry.io/bar"
  : "https://baz@sentry.io/qux";
```

And you can pass this into your frontends and functions.

```ts title="sst.config.ts" {3}
new sst.aws.Nextjs("MyWeb", {
  environment: {
    SENTRY_DSN
  }
});
```

Learn more about [`$app`](/docs/reference/global#app) and [`$dev`](/docs/reference/global#dev).

---

## Traditional

As mentioned above, SST also supports the traditional approach. If you run `sst dev` or `sst deploy` with an environment variable:

```bash title="Terminal"
SOME_ENV_VAR=FOO sst deploy
```

You can access it using `process.env` in your `sst.config.ts`.
```ts title="sst.config.ts"
async run() {
  console.log(process.env.SOME_ENV_VAR); // FOO
}
```

However, this isn't automatically added to your frontends or functions. You'll need to add it manually.

```ts title="sst.config.ts" {3}
new sst.aws.Nextjs("MyWeb", {
  environment: {
    SOME_ENV_VAR: process.env.SOME_ENV_VAR ?? "fallback value",
  }
});
```

SST doesn't do this automatically because you might have multiple frontends or functions and you might not want to load it for all of them.

:::tip
Environment variables are not automatically added to your frontend or functions.
:::

Now you can access it in your frontend.

```ts title="app/page.tsx"
export default function Page() {
  return <p>Hello {process.env.SOME_ENV_VAR}</p>;
}
```

---

### .env

The same thing works if you have a `.env` file in your project root.

```bash title=".env"
SOME_ENV_VAR=FOO
```

It'll be loaded into `process.env` in your `sst.config.ts`.

```ts title="sst.config.ts"
async run() {
  console.log(process.env.SOME_ENV_VAR); // FOO
}
```

Or if you have a stage-specific `.env.dev` file.

```bash title=".env.dev"
SOME_ENV_VAR=BAR
```

And you run `sst deploy --stage dev`, it'll be loaded into `process.env` in your `sst.config.ts`.

```ts title="sst.config.ts"
async run() {
  console.log(process.env.SOME_ENV_VAR); // BAR
}
```

While the traditional approach works, we do not recommend it because it's both cumbersome and not secure.

---

## Examples

A collection of example apps for reference.

https://sst.dev/docs/examples

Below is a collection of example SST apps. These are available in the [`examples/`](https://github.com/sst/sst/tree/dev/examples) directory of the repo.

:::tip
This doc is best viewed through the site search or through the _AI_.
:::

The descriptions for these examples are generated using the comments in the `sst.config.ts` of the app.

#### Contributing

To contribute an example or to edit one, submit a PR to the [repo](https://github.com/sst/sst). Make sure to document the `sst.config.ts` in your example.
---

## API Gateway auth

Enable IAM and JWT authorizers for API Gateway routes.

```ts title="sst.config.ts"
const api = new sst.aws.ApiGatewayV2("MyApi", {
  domain: {
    name: "api.ion.sst.sh",
    path: "v1",
  },
});

api.route("GET /", {
  handler: "route.handler",
});

api.route("GET /foo", "route.handler", { auth: { iam: true } });

api.route("GET /bar", "route.handler", {
  auth: {
    jwt: {
      issuer: "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_Rq4d8zILG",
      audiences: ["user@example.com"],
    },
  },
});

api.route("$default", "route.handler");

return {
  api: api.url,
};
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-apig-auth).

---

## AWS API Gateway V1 streaming

An example of how to enable streaming for API Gateway REST API routes.

```ts title="sst.config.ts"
api.route("GET /", {
  handler: "index.handler",
  streaming: true,
});
```

The handler uses the native `awslambda.streamifyResponse` and `awslambda.HttpResponseStream.from` to stream responses through API Gateway.

```ts title="index.ts"
export const handler = awslambda.streamifyResponse(
  async (event, stream) => {
    stream = awslambda.HttpResponseStream.from(stream, {
      statusCode: 200,
      headers: {
        "Content-Type": "text/plain; charset=UTF-8",
        "X-Content-Type-Options": "nosniff",
      },
    });

    stream.write("Hello ");
    await new Promise((resolve) => setTimeout(resolve, 3000));
    stream.write("World");
    stream.end();
  },
);
```

```ts title="sst.config.ts"
const api = new sst.aws.ApiGatewayV1("MyApi");

api.route("GET /", {
  handler: "index.handler",
  streaming: true,
});

api.route("GET /hono", {
  handler: "hono.handler",
  streaming: true,
});

api.deploy();

return {
  api: api.url,
};
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-apigv1-stream).

---

## AWS Astro container with Redis

Creates a hit counter app with Astro and Redis.

This deploys Astro as a Fargate service to ECS and it's linked to Redis.
```ts title="sst.config.ts" {3}
new sst.aws.Service("MyService", {
  cluster,
  link: [redis],
  loadBalancer: {
    ports: [{ listen: "80/http", forward: "4321/http" }],
  },
  dev: {
    command: "npm run dev",
  },
});
```

Since our Redis cluster is in a VPC, we’ll need a tunnel to connect to it from our local machine.

```bash "sudo"
sudo npx sst tunnel install
```

This needs _sudo_ to create a network interface on your machine. You’ll only need to do this once on your machine.

To start your app locally run.

```bash
npx sst dev
```

Now if you go to `http://localhost:4321` you’ll see a counter update as you refresh the page.

Finally, you can deploy it by adding the `Dockerfile` that's included in this example and running `npx sst deploy --stage production`.

```ts title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc", { bastion: true });
const redis = new sst.aws.Redis("MyRedis", { vpc });
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

new sst.aws.Service("MyService", {
  cluster,
  link: [redis],
  loadBalancer: {
    ports: [{ listen: "80/http", forward: "4321/http" }],
  },
  dev: {
    command: "npm run dev",
  },
});
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-astro-redis).

---

## AWS Astro streaming

Follows the [Astro Streaming](https://docs.astro.build/en/recipes/streaming-improve-page-performance/) guide to create an app that streams HTML.

The `responseMode` in the [`astro-sst`](https://www.npmjs.com/package/astro-sst) adapter is set to enable streaming.

```ts title="astro.config.mjs"
adapter: aws({
  responseMode: "stream"
})
```

Now any components that return promises will be streamed.

```astro title="src/components/Friends.astro"
---
const friends: Character[] = await new Promise((resolve) =>
  setTimeout(() => {
    resolve([
      { name: "Patrick Star", image: "patrick.png" },
      { name: "Sandy Cheeks", image: "sandy.png" },
      { name: "Squidward Tentacles", image: "squidward.png" },
      { name: "Mr. Krabs", image: "mr-krabs.png" },
    ]);
  }, 3000)
);
---
{friends.map((friend) => (
  <div>
    <img src={friend.image} alt={friend.name} />
    <p>{friend.name}</p>
  </div>
))}
```

You should see the _friends_ section load after a 3 second delay.

:::note
Safari handles streaming differently than other browsers.
:::

Safari uses a [different heuristic](https://bugs.webkit.org/show_bug.cgi?id=252413) to determine when to stream data. You need to render _enough_ initial HTML to trigger streaming. This is typically only a problem for demo apps.

There's nothing to configure for streaming in the `Astro` component.

```ts title="sst.config.ts"
new sst.aws.Astro("MyWeb");
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-astro-stream).

---

## AWS Aurora local

In this example, we connect to a locally running Postgres instance in dev, while on deploy we use RDS Aurora.

We use the [`docker run`](https://docs.docker.com/reference/cli/docker/container/run/) CLI to start a local container with Postgres. You don't have to use Docker; you can use Postgres.app or any other way to run Postgres locally.

```bash
docker run \
  --rm \
  -p 5432:5432 \
  -v $(pwd)/.sst/storage/postgres:/var/lib/postgresql/data \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=password \
  -e POSTGRES_DB=local \
  postgres:16.4
```

The data is saved to the `.sst/storage` directory. So if you restart the dev server, the data will still be there.

We then configure the `dev` property of the `Aurora` component with the settings for the local Postgres instance.

```ts title="sst.config.ts"
dev: {
  username: "postgres",
  password: "password",
  database: "local",
  host: "localhost",
  port: 5432,
}
```

By providing the `dev` prop for Postgres, SST will use the local Postgres instance and not deploy a new RDS database when running `sst dev`. It also allows us to access the database through a Resource `link` without having to conditionally check if we are running locally.
```ts title="index.ts"
const pool = new Pool({
  host: Resource.MyPostgres.host,
  port: Resource.MyPostgres.port,
  user: Resource.MyPostgres.username,
  password: Resource.MyPostgres.password,
  database: Resource.MyPostgres.database,
});
```

The above will work in both `sst dev` and `sst deploy`.

```ts title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc", { nat: "ec2" });

const database = new sst.aws.Aurora("MyPostgres", {
  engine: "postgres",
  dev: {
    username: "postgres",
    password: "password",
    database: "local",
    host: "localhost",
    port: 5432,
  },
  vpc,
});

new sst.aws.Function("MyFunction", {
  vpc,
  url: true,
  link: [database],
  handler: "index.handler",
});
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-aurora-local).

---

## AWS Aurora MySQL

In this example, we deploy an Aurora MySQL database.

```ts title="sst.config.ts"
const mysql = new sst.aws.Aurora("MyDatabase", {
  engine: "mysql",
  vpc,
});
```

And link it to a Lambda function.

```ts title="sst.config.ts" {4}
new sst.aws.Function("MyApp", {
  handler: "index.handler",
  link: [mysql],
  url: true,
  vpc,
});
```

Now in the function we can access the database.

```ts title="index.ts"
const connection = await mysql.createConnection({
  database: Resource.MyDatabase.database,
  host: Resource.MyDatabase.host,
  port: Resource.MyDatabase.port,
  user: Resource.MyDatabase.username,
  password: Resource.MyDatabase.password,
});
```

We also enable the `bastion` option for the VPC. This allows us to connect to the database from our local machine with the `sst tunnel` CLI.

```bash "sudo"
sudo npx sst tunnel install
```

This needs _sudo_ to create a network interface on your machine. You’ll only need to do this once on your machine.

Now you can run `npx sst dev` and you can connect to the database from your local machine.
```ts title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc", {
  nat: "ec2",
  bastion: true,
});

const mysql = new sst.aws.Aurora("MyDatabase", {
  engine: "mysql",
  vpc,
});

new sst.aws.Function("MyApp", {
  handler: "index.handler",
  link: [mysql],
  url: true,
  vpc,
});

return {
  host: mysql.host,
  port: mysql.port,
  username: mysql.username,
  password: mysql.password,
  database: mysql.database,
};
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-aurora-mysql).

---

## AWS Aurora Postgres

In this example, we deploy an Aurora Postgres database.

```ts title="sst.config.ts"
const postgres = new sst.aws.Aurora("MyDatabase", {
  engine: "postgres",
  vpc,
});
```

And link it to a Lambda function.

```ts title="sst.config.ts" {4}
new sst.aws.Function("MyApp", {
  handler: "index.handler",
  link: [postgres],
  url: true,
  vpc,
});
```

In the function we use the [`postgres`](https://www.npmjs.com/package/postgres) package.

```ts title="index.ts"
const sql = postgres({
  username: Resource.MyDatabase.username,
  password: Resource.MyDatabase.password,
  database: Resource.MyDatabase.database,
  host: Resource.MyDatabase.host,
  port: Resource.MyDatabase.port,
});
```

We also enable the `bastion` option for the VPC. This allows us to connect to the database from our local machine with the `sst tunnel` CLI.

```bash "sudo"
sudo npx sst tunnel install
```

This needs _sudo_ to create a network interface on your machine. You’ll only need to do this once on your machine.

Now you can run `npx sst dev` and you can connect to the database from your local machine.
```ts title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc", {
  nat: "ec2",
  bastion: true,
});

const postgres = new sst.aws.Aurora("MyDatabase", {
  engine: "postgres",
  vpc,
});

new sst.aws.Function("MyApp", {
  handler: "index.handler",
  link: [postgres],
  url: true,
  vpc,
});

return {
  host: postgres.host,
  port: postgres.port,
  username: postgres.username,
  password: postgres.password,
  database: postgres.database,
};
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-aurora-postgres).

---

## AWS OpenAuth React SPA

This full-stack monorepo app shows the OpenAuth flow for a single-page app and an authenticated API. It has:

- React SPA built with Vite and the `StaticSite` component in the `packages/web` directory.

```ts title="infra/web.ts"
export const web = new sst.aws.StaticSite("MyWeb", {
  path: "packages/web",
  build: {
    output: "dist",
    command: "npm run build",
  },
  environment: {
    VITE_API_URL: api.url,
    VITE_AUTH_URL: auth.url,
  },
});
```

- API with Hono and the `Function` component in `packages/functions/src/api.ts`.

```ts title="infra/api.ts"
export const api = new sst.aws.Function("MyApi", {
  url: true,
  link: [auth],
  handler: "packages/functions/src/api.handler",
});
```

- OpenAuth with the `Auth` component in `packages/functions/src/auth.ts`.

```ts title="infra/auth.ts"
export const auth = new sst.aws.Auth("MyAuth", {
  issuer: "packages/functions/src/auth.handler",
});
```

The React frontend uses an `AuthContext` provider to manage the auth flow.

```tsx title="packages/web/src/AuthContext.tsx"
export function AuthProvider({ children }: { children: React.ReactNode }) {
  // ... auth state and actions elided
  return <AuthContext.Provider value={auth}>{children}</AuthContext.Provider>;
}
```

Now in `App.tsx`, we can use the `useAuth` hook.

```tsx title="packages/web/src/App.tsx"
const auth = useAuth();

return !auth.loaded ? (
  <div>Loading...</div>
) : (
  <div>
    {auth.loggedIn ? (
      <p>
        Logged in{auth.userId && <span> as {auth.userId}</span>}
      </p>
    ) : (
      <button onClick={auth.login}>Login</button>
    )}
  </div>
);
```

Once authenticated, we can call our authenticated API by passing in the access token.

```tsx title="packages/web/src/App.tsx" {3}
await fetch(`${import.meta.env.VITE_API_URL}me`, {
  headers: {
    Authorization: `Bearer ${await auth.getToken()}`,
  },
});
```

The API uses the OpenAuth client to verify the token.

```ts title="packages/functions/src/api.ts" {3}
const authHeader = c.req.header("Authorization");
const token = authHeader.split(" ")[1];
const verified = await client.verify(subjects, token);
```

The `sst.config.ts` dynamically imports all the `infra/` files.

```ts title="sst.config.ts"
await import("./infra/auth");
await import("./infra/api");
await import("./infra/web");
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-auth-react).

---

## Bucket lifecycle policies

Configure S3 bucket lifecycle policies to expire objects automatically.

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket", {
  lifecycle: [
    {
      expiresIn: "60 days",
    },
    {
      id: "expire-tmp-files",
      prefix: "tmp/",
      expiresIn: "30 days",
    },
    {
      prefix: "data/",
      expiresAt: "2028-12-31",
    },
  ],
});

return {
  bucket: bucket.name,
};
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-bucket-lifecycle-rules).

---

## Bucket policy

Create an S3 bucket and transform its bucket policy.

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket", {
  transform: {
    policy: (args) => {
      // Use the sst.aws.iamEdit helper function to manipulate an IAM policy
      // containing Output values from components
      args.policy = sst.aws.iamEdit(args.policy, (policy) => {
        policy.Statement.push({
          Effect: "Allow",
          Principal: { Service: "ses.amazonaws.com" },
          Action: "s3:PutObject",
          Resource: $interpolate`arn:aws:s3:::${args.bucket}/*`,
        });
      });
    },
  },
});

return {
  bucket: bucket.name,
};
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-bucket-policy).
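The transform above boils down to appending a statement to the bucket's policy document. Here's a plain-object sketch of that step, with a hypothetical bucket name; `sst.aws.iamEdit` does the same thing for component policies that contain Output values:

```typescript
// Sketch: append an allow statement for SES to an IAM policy document.
// The PolicyDoc shape and "my-bucket" name are illustrative.
type PolicyDoc = { Version: string; Statement: Array<Record<string, unknown>> };

function allowSesPut(policy: PolicyDoc, bucket: string): PolicyDoc {
  policy.Statement.push({
    Effect: "Allow",
    Principal: { Service: "ses.amazonaws.com" },
    Action: "s3:PutObject",
    Resource: `arn:aws:s3:::${bucket}/*`,
  });
  return policy;
}

const doc = allowSesPut({ Version: "2012-10-17", Statement: [] }, "my-bucket");
console.log(doc.Statement.length); // 1
```

The helper exists because component policies are Pulumi Outputs rather than plain objects, so they can't be mutated directly like this sketch does.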
--- ## Bucket queue notifications Create an S3 bucket and subscribe to its events with an SQS queue. ```ts title="sst.config.ts" const queue = new sst.aws.Queue("MyQueue"); queue.subscribe("subscriber.handler"); const bucket = new sst.aws.Bucket("MyBucket"); bucket.notify({ notifications: [ { name: "MySubscriber", queue, events: ["s3:ObjectCreated:*"], }, ], }); return { bucket: bucket.name, queue: queue.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-bucket-queue-subscriber). --- ## Bucket notifications Create an S3 bucket and subscribe to its events with a function. ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); bucket.notify({ notifications: [ { name: "MySubscriber", function: "subscriber.handler", events: ["s3:ObjectCreated:*"], }, ], }); return { bucket: bucket.name, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-bucket-subscriber). --- ## Bucket topic notifications Create an S3 bucket and subscribe to its events with an SNS topic. ```ts title="sst.config.ts" const topic = new sst.aws.SnsTopic("MyTopic"); topic.subscribe("MySubscriber", "subscriber.handler"); const bucket = new sst.aws.Bucket("MyBucket"); bucket.notify({ notifications: [ { name: "MySubscriber", topic, events: ["s3:ObjectCreated:*"], }, ], }); return { bucket: bucket.name, topic: topic.name, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-bucket-topic-subscriber). --- ## AWS Bun Elysia container Deploys a Bun [Elysia](https://elysiajs.com/) API to AWS. You can get started by running. ```bash bun create elysia aws-bun-elysia cd aws-bun-elysia bunx sst init ``` Now you can add a service. ```ts title="sst.config.ts" new sst.aws.Service("MyService", { cluster, loadBalancer: { ports: [{ listen: "80/http", forward: "3000/http" }], }, dev: { command: "bun dev", }, }); ``` Start your app locally. 
```bash
bun sst dev
```

This example lets you upload a file to S3 and then download it.

```bash
curl -F file=@elysia.png http://localhost:3000/
curl http://localhost:3000/latest
```

Finally, you can deploy it using `bun sst deploy --stage production`.

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");
const vpc = new sst.aws.Vpc("MyVpc");
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

new sst.aws.Service("MyService", {
  cluster,
  loadBalancer: {
    ports: [{ listen: "80/http", forward: "3000/http" }],
  },
  dev: {
    command: "bun dev",
  },
  link: [bucket],
});
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-bun-elysia).

---

## AWS Bun Redis

Creates a hit counter app with Bun and Redis.

This deploys Bun as a Fargate service to ECS and it's linked to Redis.

```ts title="sst.config.ts" {9}
new sst.aws.Service("MyService", {
  cluster,
  loadBalancer: {
    ports: [{ listen: "80/http", forward: "3000/http" }],
  },
  dev: {
    command: "bun dev",
  },
  link: [redis],
});
```

We also have a couple of scripts: a `dev` script with a watcher, and a `build` script that's used when we deploy to production.

```json title="package.json"
{
  "scripts": {
    "dev": "bun run --watch index.ts",
    "build": "bun build --target bun index.ts"
  }
}
```

Since our Redis cluster is in a VPC, we’ll need a tunnel to connect to it from our local machine.

```bash "sudo"
sudo bun sst tunnel install
```

This needs _sudo_ to create a network interface on your machine. You’ll only need to do this once on your machine.

To start your app locally run.

```bash
bun sst dev
```

Now if you go to `http://localhost:3000` you’ll see a counter update as you refresh the page.

Finally, you can deploy it with `bun sst deploy --stage production`, using the `Dockerfile` that's included in the example.
```ts title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc", { bastion: true });
const redis = new sst.aws.Redis("MyRedis", { vpc });
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

new sst.aws.Service("MyService", {
  cluster,
  link: [redis],
  loadBalancer: {
    ports: [{ listen: "80/http", forward: "3000/http" }],
  },
  dev: {
    command: "bun dev",
  },
});
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-bun-redis).

---

## AWS Bus subscriptions

Subscribe to bus events with AWS Lambda functions.

```ts title="sst.config.ts"
const bus = new sst.aws.Bus("Bus");

const publisher = new sst.aws.Function("Publisher", {
  handler: "./src/publisher.handler",
  url: true,
  link: [bus],
});

bus.subscribe("Example", "./src/receiver.handler");
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-bus).

---

## AWS Cluster custom autoscaling

In this example, we'll create a cluster that autoscales based on a custom metric; in this case, the number of messages in a queue.

We'll create a queue and two functions that'll seed and purge the queue. We'll also create two policies. One that scales it up.

```ts title="sst.config.ts"
const scaleUpPolicy = new aws.appautoscaling.Policy("ScaleUpPolicy", {
  serviceNamespace: service.nodes.autoScalingTarget.serviceNamespace,
  scalableDimension: service.nodes.autoScalingTarget.scalableDimension,
  resourceId: service.nodes.autoScalingTarget.resourceId,
  policyType: "StepScaling",
  stepScalingPolicyConfiguration: {
    adjustmentType: "ChangeInCapacity",
    cooldown: 5,
    stepAdjustments: [
      {
        metricIntervalLowerBound: "0",
        scalingAdjustment: 1,
      },
    ],
  },
});
```

And one that scales it down.
```ts title="sst.config.ts" const scaleDownPolicy = new aws.appautoscaling.Policy("ScaleDownPolicy", { serviceNamespace: service.nodes.autoScalingTarget.serviceNamespace, scalableDimension: service.nodes.autoScalingTarget.scalableDimension, resourceId: service.nodes.autoScalingTarget.resourceId, policyType: "StepScaling", stepScalingPolicyConfiguration: { adjustmentType: "ChangeInCapacity", cooldown: 5, stepAdjustments: [ { metricIntervalUpperBound: "0", scalingAdjustment: -1, }, ], }, }); ``` We'll add a CloudWatch metric alarm that triggers the scaling policies if the queue depth exceeds 3 messages. ```ts title="sst.config.ts" new aws.cloudwatch.MetricAlarm("QueueDepthAlarm", { comparisonOperator: "GreaterThanThreshold", evaluationPeriods: 1, metricName: "ApproximateNumberOfMessagesVisible", namespace: "AWS/SQS", period: 10, statistic: "Average", threshold: 3, dimensions: { QueueName: queue.nodes.queue.name, }, alarmDescription: "Scale up when queue depth exceeds 3 messages", alarmActions: [scaleUpPolicy.arn], okActions: [scaleDownPolicy.arn], }); ``` To test this example, first deploy your app then: 1. Invoke the `MyQueueSeeder` URL. This will cause the service to scale up to 5 instances in a few minutes. 2. Then invoke the `MyQueuePurger` URL. This will cause the service to scale down to 1 instance in a few minutes. 
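The two step-scaling policies adjust the task count by `+1` when the alarm fires and `-1` when it returns to OK, clamped to the service's `scaling.min` and `scaling.max`. A sketch of that arithmetic, assuming the 1 to 5 range used in this example:

```typescript
// Sketch: how a ChangeInCapacity step adjustment moves the desired task
// count, clamped to the scaling range. The 1..5 bounds match this example.
function nextCapacity(
  current: number,
  adjustment: number, // +1 from ScaleUpPolicy, -1 from ScaleDownPolicy
  min: number,
  max: number,
): number {
  return Math.min(max, Math.max(min, current + adjustment));
}

console.log(nextCapacity(1, +1, 1, 5)); // 2, scale up after the seeder fills the queue
console.log(nextCapacity(5, +1, 1, 5)); // 5, already at max
console.log(nextCapacity(2, -1, 1, 5)); // 1, scale down once the queue is purged
```

Each adjustment is followed by the policy's `cooldown` before the next one can fire, which is why the scale up and down take a few minutes.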
```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); // Create a queue and two functions to seed and purge the queue const queue = new sst.aws.Queue("MyQueue"); new sst.aws.Function("MyQueueSeeder", { handler: "queue.seeder", link: [queue], url: true, }); new sst.aws.Function("MyQueuePurger", { handler: "queue.purger", link: [queue], url: true, }); // Create a cluster and disable default scaling on CPU and memory utilization const cluster = new sst.aws.Cluster("MyCluster", { vpc }); const service = new sst.aws.Service("MyService", { cluster, loadBalancer: { ports: [{ listen: "80/http" }], }, scaling: { min: 1, max: 5, cpuUtilization: false, memoryUtilization: false, }, }); // Create a scale up policy that scales up by 1 instance at a time const scaleUpPolicy = new aws.appautoscaling.Policy("ScaleUpPolicy", { serviceNamespace: service.nodes.autoScalingTarget.serviceNamespace, scalableDimension: service.nodes.autoScalingTarget.scalableDimension, resourceId: service.nodes.autoScalingTarget.resourceId, policyType: "StepScaling", stepScalingPolicyConfiguration: { adjustmentType: "ChangeInCapacity", cooldown: 5, stepAdjustments: [ { metricIntervalLowerBound: "0", scalingAdjustment: 1, }, ], }, }); // Create a scale down policy that scales down by 1 instance at a time const scaleDownPolicy = new aws.appautoscaling.Policy("ScaleDownPolicy", { serviceNamespace: service.nodes.autoScalingTarget.serviceNamespace, scalableDimension: service.nodes.autoScalingTarget.scalableDimension, resourceId: service.nodes.autoScalingTarget.resourceId, policyType: "StepScaling", stepScalingPolicyConfiguration: { adjustmentType: "ChangeInCapacity", cooldown: 5, stepAdjustments: [ { metricIntervalUpperBound: "0", scalingAdjustment: -1, }, ], }, }); // Create an alarm that scales up when the queue depth exceeds 3 messages // and scales down when the queue depth is less than 3 messages new aws.cloudwatch.MetricAlarm("QueueDepthAlarm", { comparisonOperator: "GreaterThanThreshold", 
  evaluationPeriods: 1,
  metricName: "ApproximateNumberOfMessagesVisible",
  namespace: "AWS/SQS",
  period: 10,
  statistic: "Average",
  threshold: 3,
  dimensions: {
    QueueName: queue.nodes.queue.name,
  },
  alarmDescription: "Scale up when queue depth exceeds 3 messages",
  alarmActions: [scaleUpPolicy.arn],
  okActions: [scaleDownPolicy.arn],
});
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-cluster-autoscaling).

---

## AWS Cluster private service

Adds a private load balancer to a service by setting the `loadBalancer.public` prop to `false`.

This allows you to create internal services that can only be accessed inside a VPC.

```ts title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc", { bastion: true });
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

new sst.aws.Service("MyService", {
  cluster,
  loadBalancer: {
    public: false,
    ports: [{ listen: "80/http" }],
  },
});
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-cluster-internal).

---

## AWS Cluster Spot capacity

This example shows how to use the Fargate Spot capacity provider for your services. We have it set to use only Fargate Spot instances for all non-production stages. Learn more about the [`capacity`](/docs/component/aws/cluster#capacity) prop.

```ts title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc");
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

new sst.aws.Service("MyService", {
  cluster,
  loadBalancer: {
    ports: [{ listen: "80/http" }],
  },
  capacity: $app.stage === "production" ? undefined : "spot",
});
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-cluster-spot).

---

## AWS Cluster with API Gateway

Expose a service through API Gateway HTTP API using a VPC link. This is an alternative to using a load balancer. Since API Gateway is pay-per-request, it works out a lot cheaper for services that don't get a lot of traffic.

You need to specify which port in your service will be exposed through API Gateway.
```ts title="sst.config.ts" {4}
const service = new sst.aws.Service("MyService", {
  cluster,
  serviceRegistry: {
    port: 80,
  },
});
```

A couple of things to note:

1. Your API Gateway HTTP API also needs to be in the **same VPC** as the service.
2. You also need to verify that your VPC's [**availability zones support VPC link**](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vpc-links.html#http-api-vpc-link-availability). Run `aws ec2 describe-availability-zones` to get a list of AZs for your account, and only list the AZ IDs that support VPC link.

```ts title="sst.config.ts" {4}
vpc: {
  az: ["eu-west-3a", "eu-west-3c"]
}
```

If the VPC automatically picks an AZ that doesn't support VPC link, you'll get the following error:

```
operation error ApiGatewayV2: BadRequestException: Subnet is in Availability Zone 'euw3-az2' where service is not available
```

```ts title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc", {
  // Pick at least two AZs that support VPC link
  // az: ["eu-west-3a", "eu-west-3c"],
});
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

const service = new sst.aws.Service("MyService", {
  cluster,
  serviceRegistry: {
    port: 80,
  },
});

const api = new sst.aws.ApiGatewayV2("MyApi", { vpc });
api.routePrivate("$default", service.nodes.cloudmapService.arn);
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-cluster-vpclink).

---

## AWS Cognito User Pool

Create a Cognito User Pool with a hosted UI domain, client, and identity pool.
```ts title="sst.config.ts"
const userPool = new sst.aws.CognitoUserPool("MyUserPool", {
  domain: {
    prefix: `my-app-${$app.stage}`,
  },
  triggers: {
    preSignUp: {
      handler: "index.handler",
    },
  },
});

const client = userPool.addClient("Web", {
  callbackUrls: ["https://example.com/auth/callback"]
});

const identityPool = new sst.aws.CognitoIdentityPool("MyIdentityPool", {
  userPools: [
    {
      userPool: userPool.id,
      client: client.id,
    },
  ],
});

return {
  UserPool: userPool.id,
  Client: client.id,
  IdentityPool: identityPool.id,
  DomainUrl: userPool.domainUrl,
};
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-cognito).

---

## Subscribe to queues with dead-letter queue

Messages not processed successfully by the primary subscriber function will be sent to the dead-letter queue after the retry limit is reached.

```ts title="sst.config.ts"
// Create the dead-letter queue
const dlq = new sst.aws.Queue("DeadLetterQueue");
dlq.subscribe("subscriber.dlq");

// Create the main queue
const queue = new sst.aws.Queue("MyQueue", {
  dlq: dlq.arn,
});
queue.subscribe("subscriber.main");

const app = new sst.aws.Function("MyApp", {
  handler: "publisher.handler",
  link: [queue],
  url: true,
});

return {
  app: app.url,
  queue: queue.url,
  dlq: dlq.url,
};
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-dead-letter-queue).

---

## AWS Deno Redis

Creates a hit counter app with Deno and Redis.

This deploys Deno as a Fargate service to ECS and it's linked to Redis.

```ts title="sst.config.ts" {3}
new sst.aws.Service("MyService", {
  cluster,
  link: [redis],
  loadBalancer: {
    ports: [{ listen: "80/http", forward: "8000/http" }],
  },
  dev: {
    command: "deno task dev",
  },
});
```

Since our Redis cluster is in a VPC, we’ll need a tunnel to connect to it from our local machine.

```bash "sudo"
sudo sst tunnel install
```

This needs _sudo_ to create a network interface on your machine. You’ll only need to do this once on your machine.

To start your app locally run.
```bash
sst dev
```

Now if you go to `http://localhost:8000` you’ll see a counter update as you refresh the page.

Finally, you can deploy it with `sst deploy --stage production`, using the `Dockerfile` that's included in the example.

```ts title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc", { bastion: true });
const redis = new sst.aws.Redis("MyRedis", { vpc });
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

new sst.aws.Service("MyService", {
  cluster,
  link: [redis],
  loadBalancer: {
    ports: [{ listen: "80/http", forward: "8000/http" }],
  },
  dev: {
    command: "deno task dev",
  },
});
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-deno-redis).

---

## Drizzle migrations in CI/CD

An example of how to run Drizzle migrations as a part of your CI/CD.

Start by creating a function that runs migrations.

```ts title="sst.config.ts"
const migrator = new sst.aws.Function("DatabaseMigrator", {
  handler: "src/migrator.handler",
  link: [rds],
  vpc,
  copyFiles: [
    {
      from: "migrations",
      to: "./migrations",
    },
  ],
});
```

Where `src/migrator.ts` looks like.

```ts title="src/migrator.ts"
export const handler = async () => {
  await migrate(db, {
    migrationsFolder: "./migrations",
  });
};
```

And we can set it up to run on every deploy.

```ts title="sst.config.ts"
if (!$dev) {
  new aws.lambda.Invocation("DatabaseMigratorInvocation", {
    input: Date.now().toString(),
    functionName: migrator.name,
  });
}
```

We use the current time to make sure the function runs on every deploy.
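The `Date.now().toString()` input is what forces the migration each time: `aws.lambda.Invocation` only re-invokes the function when its `input` changes between deploys. A sketch of that comparison:

```typescript
// Sketch: an Invocation re-runs only when its input differs from the
// previous deploy's input, so a timestamp guarantees a fresh run.
function shouldReinvoke(previousInput: string, nextInput: string): boolean {
  return previousInput !== nextInput;
}

const deployOne = Date.now().toString();
const deployTwo = (Date.now() + 1).toString();

console.log(shouldReinvoke(deployOne, deployOne)); // false, unchanged input is a no-op
console.log(shouldReinvoke(deployOne, deployTwo)); // true, the migration runs again
```

If you used a fixed string as the `input` instead, the Invocation would run once on the first deploy and never again.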
```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc", { bastion: true, nat: "ec2" }); const rds = new sst.aws.Postgres("MyPostgres", { vpc, proxy: true }); new sst.aws.Function("MyApi", { vpc, url: true, link: [rds], handler: "src/api.handler", }); const migrator = new sst.aws.Function("DatabaseMigrator", { handler: "src/migrator.handler", link: [rds], vpc, copyFiles: [ { from: "migrations", to: "./migrations", }, ], }); if (!$dev) { new aws.lambda.Invocation("DatabaseMigratorInvocation", { input: Date.now().toString(), functionName: migrator.name, }); } new sst.x.DevCommand("Studio", { link: [rds], dev: { command: "npx drizzle-kit studio", }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-drizzle-migrations). --- ## AWS Aurora DSQL with Drizzle In this example, we use Drizzle ORM with an Aurora DSQL cluster. ```ts title="sst.config.ts" const cluster = new sst.aws.Dsql("MyCluster"); ``` And link it to a Lambda function. ```ts title="sst.config.ts" {4} new sst.aws.Function("MyApi", { handler: "src/api.handler", link: [cluster], url: true, }); ``` Push the Drizzle schema to the database. ```bash sst shell -- bun run push.ts ``` Now in the function we can connect to the cluster using Drizzle with the DSQL connector. Learn more about [DSQL Node.js connectors](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/SECTION_Node-js-connectors.html). ```ts title="src/drizzle.ts" const pool = new AuroraDSQLPool({ host: Resource.MyCluster.endpoint, user: "admin", }); ``` ```ts title="sst.config.ts" const cluster = new sst.aws.Dsql("MyCluster"); new sst.aws.Function("MyApi", { handler: "src/api.handler", link: [cluster], url: true, }); return { endpoint: cluster.endpoint, region: cluster.region, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-dsql-drizzle). 
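The linked cluster exposes its `endpoint` and `region`, which along with a user is all the connector needs. A sketch of assembling those options from the linked values, with a made-up endpoint (`AuroraDSQLPool` itself is shown in the example above):

```typescript
// Sketch: build connector options from a linked DSQL cluster's values.
// The DsqlLink shape mirrors the fields this example links; the endpoint
// below is made up.
type DsqlLink = { endpoint: string; region: string };

function poolOptions(cluster: DsqlLink, user = "admin") {
  // DSQL authenticates with IAM-generated tokens, so no password appears here
  return { host: cluster.endpoint, user };
}

const opts = poolOptions({
  endpoint: "abc123.dsql.us-east-1.on.aws",
  region: "us-east-1",
});
console.log(opts); // { host: "abc123.dsql.us-east-1.on.aws", user: "admin" }
```

Because the function is linked to the cluster, these values come from `Resource.MyCluster` at runtime rather than from a `.env` file.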
--- ## AWS Aurora DSQL Multi-Region In this example, we deploy a multi-region Aurora DSQL cluster and connect to both clusters from a Lambda function. :::note Multi-region with VPCs is not currently supported. ::: Create the cluster with a witness region and a peer region. The witness must differ from both cluster regions. ```ts title="sst.config.ts" const cluster = new sst.aws.Dsql("MultiRegion", { regions: { witness: "us-west-2", peer: "us-east-2", }, }); ``` Connect to both clusters from your function using the DSQL connector. Learn more about [DSQL Node.js connectors](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/SECTION_Node-js-connectors.html). ```ts title="lambda.ts" async function connectToCluster(endpoint: string) { const client = new AuroraDSQLClient({ host: endpoint, user: "admin" }); await client.connect(); return client; } // Cluster in us-east-1 const usEast1 = await connectToCluster(Resource.MultiRegion.endpoint); // Cluster in us-east-2 const usEast2 = await connectToCluster(Resource.MultiRegion.peer.endpoint); ``` ```ts title="sst.config.ts" const cluster = new sst.aws.Dsql("MultiRegion", { backup: true, regions: { witness: "us-west-2", peer: "us-east-2", }, }); const fn = new sst.aws.Function("MyFunction", { handler: "lambda.handler", link: [cluster], url: true, }); return { url: fn.url, arn: cluster.arn, endpoint: cluster.endpoint, region: cluster.region, peerArn: cluster.peer.arn, peerEndpoint: cluster.peer.endpoint, peerRegion: cluster.peer.region, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-dsql-multiregion). --- ## AWS Aurora DSQL in a VPC In this example, we connect to an Aurora DSQL cluster privately from a Lambda function using VPC endpoints, without routing traffic over the public internet. Create a VPC, then create the cluster with a connection endpoint inside it. 
```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Dsql("MyCluster", { vpc: { instance: vpc, endpoints: { connection: true }, }, }); ``` Link the cluster to a function that's also in the VPC. The linked `endpoint` will automatically resolve to the private VPC endpoint hostname instead of the public one. ```ts title="sst.config.ts" new sst.aws.Function("MyFunction", { handler: "lambda.handler", vpc, link: [cluster], }); ``` Connect from your function using the DSQL connector — no config changes needed. Learn more about [DSQL Node.js connectors](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/SECTION_Node-js-connectors.html). ```ts title="lambda.ts" const client = new AuroraDSQLClient({ host: Resource.MyCluster.endpoint, user: "admin", }); await client.connect(); const result = await client.query("SELECT NOW()"); await client.end(); ``` ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("singleClusterVpc"); const cluster = new sst.aws.Dsql("MyCluster", { vpc: { instance: vpc, endpoints: { connection: true, management: false, }, }, }); const fn = new sst.aws.Function("MyFunction", { handler: "lambda.handler", vpc, link: [cluster], url: true, }); return { endpoint: cluster.endpoint, region: cluster.region, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-dsql-vpc). --- ## AWS Aurora DSQL In this example, we deploy an Aurora DSQL cluster. ```ts title="sst.config.ts" const cluster = new sst.aws.Dsql("MyCluster"); ``` And link it to a Lambda function. ```ts title="sst.config.ts" {4} new sst.aws.Function("MyFunction", { handler: "lambda.handler", link: [cluster], url: true, }); ``` Now in the function we can connect to the cluster using the DSQL connector. Learn more about [DSQL Node.js connectors](https://docs.aws.amazon.com/aurora-dsql/latest/userguide/SECTION_Node-js-connectors.html). 
```ts title="lambda.ts" const client = new AuroraDSQLClient({ host: Resource.MyCluster.endpoint, user: "admin", }); await client.connect(); const result = await client.query("SELECT NOW()"); await client.end(); ``` ```ts title="sst.config.ts" const cluster = new sst.aws.Dsql("MyCluster", {}); const fn = new sst.aws.Function("MyFunction", { handler: "lambda.handler", link: [cluster], url: true, }); return { endpoint: cluster.endpoint, region: cluster.region, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-dsql). --- ## DynamoDB composite keys Create a DynamoDB table with multi-attribute composite keys in a global secondary index. ```ts title="sst.config.ts" const table = new sst.aws.Dynamo("MyTable", { fields: { userId: "string", noteId: "string", region: "string", category: "string", createdAt: "number", }, primaryIndex: { hashKey: "userId", rangeKey: "noteId" }, globalIndexes: { RegionCategoryIndex: { hashKey: ["region", "category"], rangeKey: "createdAt", }, }, }); const creator = new sst.aws.Function("MyCreator", { handler: "creator.handler", link: [table], url: true, }); const reader = new sst.aws.Function("MyReader", { handler: "reader.handler", link: [table], url: true, }); return { creator: creator.url, reader: reader.url, table: table.name, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-dynamo-composite-keys). --- ## DynamoDB streams Create a DynamoDB table, enable streams, and subscribe to it with a function. 
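The subscriber below uses event filtering, so it only fires for matching records. Here is a simplified, standalone model of how a filter pattern like `{ S: ["Hello"] }` matches a stream record. This is an illustration only; the real matching is done by the Lambda event source mapping, not in your code.

```typescript
// Simplified model of DynamoDB stream event filtering: a pattern
// like { S: ["Hello"] } matches when the attribute value is in the list.
type StreamRecord = {
  dynamodb: { NewImage: { message: { S: string } } };
};

const allowedValues = ["Hello"];

function matchesFilter(record: StreamRecord): boolean {
  return allowedValues.includes(record.dynamodb.NewImage.message.S);
}

console.log(matchesFilter({ dynamodb: { NewImage: { message: { S: "Hello" } } } })); // true
console.log(matchesFilter({ dynamodb: { NewImage: { message: { S: "Goodbye" } } } })); // false
```

Records that don't match the pattern are dropped before your subscriber is invoked, so you don't pay for invocations you don't need.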
```ts title="sst.config.ts" const table = new sst.aws.Dynamo("MyTable", { fields: { id: "string", }, primaryIndex: { hashKey: "id" }, stream: "new-and-old-images", }); table.subscribe("MySubscriber", "subscriber.handler", { filters: [ { dynamodb: { NewImage: { message: { S: ["Hello"], }, }, }, }, ], }); const creator = new sst.aws.Function("MyCreator", { handler: "creator.handler", link: [table], url: true, }); const reader = new sst.aws.Function("MyReader", { handler: "reader.handler", link: [table], url: true, }); return { creator: creator.url, reader: reader.url, table: table.name, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-dynamo). --- ## EC2 with Pulumi Use raw Pulumi resources to create an EC2 instance. ```ts title="sst.config.ts" // Notice you don't need to import pulumi, it is already part of sst. const securityGroup = new aws.ec2.SecurityGroup("web-secgrp", { ingress: [ { protocol: "tcp", fromPort: 80, toPort: 80, cidrBlocks: ["0.0.0.0/0"], }, ], }); // Find the latest Ubuntu AMI const ami = aws.ec2.getAmi({ filters: [ { name: "name", values: ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"], }, ], mostRecent: true, owners: ["099720109477"], // Canonical }); // User data to set up a simple web server const userData = `#!/bin/bash echo "Hello, World!" > index.html nohup python3 -m http.server 80 &`; // Create an EC2 instance const server = new aws.ec2.Instance("web-server", { instanceType: "t2.micro", ami: ami.then((ami) => ami.id), userData: userData, vpcSecurityGroupIds: [securityGroup.id], associatePublicIpAddress: true, }); return { app: server.publicIp, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-ec2-pulumi). --- ## AWS EFS with SQLite Mount an EFS file system to a function and write to a SQLite database. ```js title="index.ts" const db = sqlite3("/mnt/efs/mydb.sqlite"); ``` The file system is mounted to `/mnt/efs` in the function.
:::note Given the performance of EFS, it's not recommended to use it for databases. ::: This example is for demonstration purposes only. ```ts title="sst.config.ts" // NAT Gateways are required for Lambda functions const vpc = new sst.aws.Vpc("MyVpc", { nat: "managed" }); // Create an EFS file system to store the SQLite database const efs = new sst.aws.Efs("MyEfs", { vpc }); // Create a Lambda function that queries the database new sst.aws.Function("MyFunction", { vpc, url: true, volume: { efs, path: "/mnt/efs", }, handler: "index.handler", nodejs: { install: ["better-sqlite3"], }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-efs-sqlite). --- ## AWS EFS with SurrealDB We use the SurrealDB Docker image to run a server in a container and use EFS as the file system. ```ts title="sst.config.ts" const server = new sst.aws.Service("MyService", { cluster, architecture: "arm64", image: "surrealdb/surrealdb:v2.0.2", // ... volumes: [ { efs, path: "/data" }, ], }); ``` We then connect to the server from a Lambda function. ```js title="index.ts" const endpoint = `http://${Resource.MyConfig.host}:${Resource.MyConfig.port}`; const db = new Surreal(); await db.connect(endpoint); ``` This uses the SurrealDB client to connect to the server. :::note Given the performance of EFS, it's not recommended to use it for databases. ::: This example is for demonstration purposes only.
```ts title="sst.config.ts" const { RandomPassword } = await import("@pulumi/random"); // SurrealDB Credentials const PORT = 8080; const NAMESPACE = "test"; const DATABASE = "test"; const USERNAME = "root"; const PASSWORD = new RandomPassword("Password", { length: 32, }).result; // NAT Gateways are required for Lambda functions const vpc = new sst.aws.Vpc("MyVpc", { nat: "managed" }); // Store SurrealDB data in EFS const efs = new sst.aws.Efs("MyEfs", { vpc }); // Run SurrealDB server in a container const cluster = new sst.aws.Cluster("MyCluster", { vpc }); const server = new sst.aws.Service("MyService", { cluster, architecture: "arm64", image: "surrealdb/surrealdb:v2.0.2", command: [ "start", "--bind", $interpolate`0.0.0.0:${PORT}`, "--log", "info", "--user", USERNAME, "--pass", PASSWORD, "surrealkv://data/data.skv", "--allow-scripting", ], volumes: [{ efs, path: "/data" }], }); // Lambda client to connect to SurrealDB const config = new sst.Linkable("MyConfig", { properties: { username: USERNAME, password: PASSWORD, namespace: NAMESPACE, database: DATABASE, port: PORT, host: server.service, }, }); new sst.aws.Function("MyApp", { handler: "index.handler", link: [config], url: true, vpc, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-efs-surrealdb). --- ## AWS EFS Mount an EFS file system to a function and a container. This allows both your function and the container to access the same file system. Here they both update a counter that's stored in the file system. ```js title="common.mjs" await writeFile("/mnt/efs/counter", newValue.toString()); ``` The file system is mounted to `/mnt/efs` in both the function and the container. 
```ts title="sst.config.ts" // NAT Gateways are required for Lambda functions const vpc = new sst.aws.Vpc("MyVpc", { nat: "managed" }); // Create an EFS file system to store a counter const efs = new sst.aws.Efs("MyEfs", { vpc }); // Create a Lambda function that increments the counter new sst.aws.Function("MyFunction", { handler: "lambda.handler", url: true, vpc, volume: { efs, path: "/mnt/efs", }, }); // Create a service that increments the same counter const cluster = new sst.aws.Cluster("MyCluster", { vpc }); new sst.aws.Service("MyService", { cluster, loadBalancer: { ports: [{ listen: "80/http" }], }, volumes: [ { efs, path: "/mnt/efs", }, ], }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-efs). --- ## AWS Express Redis Creates a hit counter app with Express and Redis. This deploys Express as a Fargate service to ECS and links it to Redis. ```ts title="sst.config.ts" {9} new sst.aws.Service("MyService", { cluster, loadBalancer: { ports: [{ listen: "80/http" }], }, dev: { command: "node --watch index.mjs", }, link: [redis], }); ``` Since our Redis cluster is in a VPC, we’ll need a tunnel to connect to it from our local machine. ```bash "sudo" sudo npx sst tunnel install ``` This needs _sudo_ to create a network interface on your machine. You’ll only need to do this once on your machine. To start your app locally, run. ```bash npx sst dev ``` Now if you go to `http://localhost:80` you’ll see a counter update as you refresh the page. Finally, you can deploy it with `npx sst deploy --stage production`, using the `Dockerfile` that's included in the example.
```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc", { bastion: true }); const redis = new sst.aws.Redis("MyRedis", { vpc }); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); new sst.aws.Service("MyService", { cluster, link: [redis], loadBalancer: { ports: [{ listen: "80/http" }], }, dev: { command: "node --watch index.mjs", }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-express-redis). --- ## FastAPI Deploy a Python FastAPI app as a Lambda function with a linked value. ```ts title="sst.config.ts" const linkableValue = new sst.Linkable("MyLinkableValue", { properties: { foo: "Hello World", }, }); const fastapi = new sst.aws.Function("FastAPI", { handler: "functions/src/functions/api.handler", runtime: "python3.11", url: true, link: [linkableValue], }); return { fastapi: fastapi.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-fastapi). --- ## FFmpeg in Lambda Uses [FFmpeg](https://ffmpeg.org/) to process videos. In this example, it takes a `clip.mp4` and grabs a single frame from it. :::tip You don't need to use a Lambda layer to use FFmpeg. ::: We use the [`ffmpeg-static`](https://www.npmjs.com/package/ffmpeg-static) package that contains pre-built binaries for all architectures. ```ts title="index.ts" import ffmpeg from "ffmpeg-static"; ``` We can use this to spawn a child process and run FFmpeg. ```ts title="index.ts" spawnSync(ffmpeg, ffmpegParams, { stdio: "pipe" }); ``` We don't need a layer when we deploy this because SST will use the right binary for the target Lambda architecture, including `arm64`. ```ts title="sst.config.ts" { nodejs: { install: ["ffmpeg-static"] } } ``` All this is handled by [`nodejs.install`](/docs/component/aws/function#nodejs-install).
```ts title="sst.config.ts" const func = new sst.aws.Function("MyFunction", { url: true, memory: "2 GB", timeout: "15 minutes", handler: "index.handler", copyFiles: [{ from: "clip.mp4" }], nodejs: { install: ["ffmpeg-static"] }, }); return { url: func.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-ffmpeg). --- ## Flutter web Deploy a Flutter web app as a static site to S3 and CloudFront. ```ts title="sst.config.ts" new sst.aws.StaticSite("MySite", { build: { command: "flutter build web", output: "build/web", }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-flutter-web). --- ## AWS ApiGatewayV2 Go Uses [aws-lambda-go-api-proxy](https://github.com/awslabs/aws-lambda-go-api-proxy/tree/master) to allow you to run a Go API with API Gateway V2. :::tip We use the `aws-lambda-go-api-proxy` package to handle the API Gateway V2 event. ::: So you write your Go function as you normally would and then use the package to handle the API Gateway V2 event. ```go title="main.go" "github.com/aws/aws-lambda-go/lambda" "github.com/awslabs/aws-lambda-go-api-proxy/httpadapter" ) func router() *http.ServeMux { mux := http.NewServeMux() mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", "application/json") w.WriteHeader(http.StatusOK) w.Write([]byte(`{"message": "hello world"}`)) }) return mux } func main() { lambda.Start(httpadapter.NewV2(router()).ProxyWithContext) } ``` ```ts title="sst.config.ts" const api = new sst.aws.ApiGatewayV2("GoApi"); api.route("$default", { handler: "src/", runtime: "go", }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-go-api-gateway-v2). --- ## AWS Lambda Go S3 Presigned Generates a presigned URL for the linked S3 bucket in a Go Lambda function. Configure the S3 Client and the PresignedClient. 
```go title="main.go" cfg, err := config.LoadDefaultConfig(context.TODO()) if err != nil { panic(err) } client := s3.NewFromConfig(cfg) presignedClient := s3.NewPresignClient(client) ``` Generate the presigned URL. ```go title="main.go" bucketName, err := resource.Get("Bucket", "name") if err != nil { panic(err) } url, err := presignedClient.PresignPutObject(context.TODO(), &s3.PutObjectInput{ Bucket: aws.String(bucketName.(string)), Key: aws.String(key), }) ``` ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("Bucket"); const api = new sst.aws.ApiGatewayV2("Api"); api.route("GET /upload-url", { handler: "src/", runtime: "go", link: [bucket], }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-go-lambda-bucket-presigned-url). --- ## AWS Lambda Go DynamoDB An example on how to use a Go runtime Lambda with DynamoDB. You configure the DynamoDB client. ```go title="src/main.go" "github.com/sst/sst/v3/sdk/golang/resource" ) func main() { cfg, err := config.LoadDefaultConfig(context.Background()) if err != nil { panic(err) } client := dynamodb.NewFromConfig(cfg) tableName, err := resource.Get("Table", "name") if err != nil { panic(err) } } ``` And make a request to DynamoDB. ```go title="src/main.go" _, err = r.client.PutItem(ctx, &dynamodb.PutItemInput{ TableName: aws.String(tableName.(string)), Item: item, }) ``` ```ts title="sst.config.ts" const table = new sst.aws.Dynamo("Table", { fields: { PK: "string", SK: "string", }, primaryIndex: { hashKey: "PK", rangeKey: "SK" }, }); new sst.aws.Function("GoFunction", { url: true, runtime: "go", handler: "./src", link: [table], }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-go-lambda-dynamo). --- ## AWS Hono container with Redis Creates a hit counter app with Hono and Redis. This deploys the Hono API as a Fargate service to ECS and links it to Redis.
```ts title="sst.config.ts" {2} new sst.aws.Service("MyService", { cluster, link: [redis], loadBalancer: { ports: [{ listen: "80/http", forward: "3000/http" }], }, dev: { command: "npm run dev", }, }); ``` Since our Redis cluster is in a VPC, we’ll need a tunnel to connect to it from our local machine. ```bash "sudo" sudo npx sst tunnel install ``` This needs _sudo_ to create a network interface on your machine. You’ll only need to do this once on your machine. To start your app locally, run. ```bash npx sst dev ``` Now if you go to `http://localhost:3000` you’ll see a counter update as you refresh the page. Finally, you can deploy it by: 1. Using the `Dockerfile` that's included in this example. 2. This compiles our TypeScript file, so we'll need to add the following to the `tsconfig.json`. ```diff lang="json" title="tsconfig.json" {4,6} { "compilerOptions": { // ... + "outDir": "./dist" }, + "exclude": ["node_modules"] } ``` 3. Install TypeScript. ```bash npm install typescript --save-dev ``` 4. And add a `build` script to our `package.json`. ```diff lang="json" title="package.json" "scripts": { // ... + "build": "tsc" } ``` And finally, running `npx sst deploy --stage production`. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc", { bastion: true }); const redis = new sst.aws.Redis("MyRedis", { vpc }); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); new sst.aws.Service("MyService", { cluster, link: [redis], loadBalancer: { ports: [{ listen: "80/http", forward: "3000/http" }], }, dev: { command: "npm run dev", }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-hono-redis). --- ## AWS Hono streaming An example on how to enable streaming for Lambda functions using Hono. ```ts title="sst.config.ts" { streaming: true } ``` ```ts title="index.ts" // A minimal sketch of a streaming Hono handler using Hono's Lambda adapter import { Hono } from "hono"; import { streamHandle } from "hono/aws-lambda"; import { streamText } from "hono/streaming"; const app = new Hono(); app.get("/", (c) => streamText(c, async (stream) => { await stream.writeln("Hello"); await stream.sleep(3000); await stream.writeln("World"); }) ); export const handler = streamHandle(app); ``` To test this in your terminal, use the `curl` command with the `--no-buffer` option.
```bash "--no-buffer" curl --no-buffer https://u3dyblk457ghskwbmzrbylpxoi0ayrbb.lambda-url.us-east-1.on.aws ``` Note that streaming is not supported through API Gateway REST APIs; it requires the Lambda function URL. ```ts title="sst.config.ts" const hono = new sst.aws.Function("Hono", { url: true, streaming: true, timeout: "15 minutes", handler: "index.handler", }); return { api: hono.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-hono-stream). --- ## IAM permissions boundaries Use permissions boundaries to set the maximum permissions for all IAM roles that'll be created in your app. In this example, the Function has the `s3:ListAllMyBuckets` and `sqs:ListQueues` permissions. However, we create a permissions boundary that only allows `s3:ListAllMyBuckets`. And we apply it to all Roles in the app using the global [`$transform`](/docs/reference/global/#transform). As a result, the Function is only allowed to list S3 buckets. If you open the deployed URL, you'll see that the SQS list call fails. Learn more about [AWS IAM permissions boundaries](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html). ```ts title="sst.config.ts" // Create a permission boundary const permissionsBoundary = new aws.iam.Policy("MyPermissionsBoundary", { policy: aws.iam.getPolicyDocumentOutput({ statements: [ { actions: ["s3:ListAllMyBuckets"], resources: ["*"], }, ], }).json, }); // Apply the boundary to all roles $transform(aws.iam.Role, (args) => { args.permissionsBoundary = permissionsBoundary.arn; }); // The boundary automatically applies to this Function's role const app = new sst.aws.Function("MyApp", { handler: "index.handler", permissions: [ { actions: ["s3:ListAllMyBuckets", "sqs:ListQueues"], resources: ["*"], }, ], url: true, }); return { app: app.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-iam-permission-boundary).
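Conceptually, the effective permissions are the intersection of the role's identity policy and the boundary. Here is a simplified model; it ignores resource policies and the rest of IAM's evaluation logic:

```typescript
// Simplified model: an action is effectively allowed only if both the
// identity policy and the permissions boundary allow it.
const identityPolicy = ["s3:ListAllMyBuckets", "sqs:ListQueues"];
const boundary = ["s3:ListAllMyBuckets"];

const effective = identityPolicy.filter((action) => boundary.includes(action));
console.log(effective); // ["s3:ListAllMyBuckets"], so the SQS call is denied
```

This is why the deployed function can list buckets but fails to list queues, even though its own policy grants both.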
--- ## Import existing resource Import an existing AWS resource using the `transform` option with `opts.import`. ```ts title="sst.config.ts" new sst.aws.Bucket("MyBucket", { transform: { bucket(args, opts) { opts.import = "aws-import-my-bucket"; args.bucket = "aws-import-my-bucket"; args.forceDestroy = undefined; }, }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-import). --- ## Current AWS account You can use the `aws.getXXXXOutput()` provider functions to get info about the current AWS account. Learn more about [provider functions](/docs/providers/#functions). ```ts title="sst.config.ts" return { region: aws.getRegionOutput().region, account: aws.getCallerIdentityOutput({}).accountId, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-info). --- ## AWS JSX Email Uses [JSX Email](https://jsx.email) and the `Email` component to design and send emails. To test this example, change the `sst.config.ts` to use your own email address. ```ts title="sst.config.ts" sender: "email@example.com" ``` Then run. ```bash npm install npx sst dev ``` You'll get an email from AWS asking you to confirm your email address. Click the link to verify it. Next, go to the URL in the `sst dev` CLI output. You should now receive an email rendered using JSX Email. ```ts title="index.ts" await render(Template({ email: "spongebob@example.com", name: "Spongebob Squarepants" })) ``` Once you are ready to go to production, you can: - [Request production access](https://docs.aws.amazon.com/ses/latest/dg/request-production-access.html) for SES - And [use your domain](/docs/component/aws/email/) to send emails ```ts title="sst.config.ts" const email = new sst.aws.Email("MyEmail", { sender: "email@example.com", }); const api = new sst.aws.Function("MyApi", { handler: "index.handler", link: [email], url: true, }); return { api: api.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-jsx-email). 
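Once in production, the `sender` can be switched from a verified email address to your own domain. This is a sketch; `example.com` is a placeholder for a domain you control, set up as described in the linked docs.

```ts title="sst.config.ts"
// Send from your own domain instead of a single verified address.
const email = new sst.aws.Email("MyEmail", {
  sender: "example.com",
});
```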
--- ## Kinesis streams Create a Kinesis stream, and subscribe to it with a function. ```ts title="sst.config.ts" const stream = new sst.aws.KinesisStream("MyStream"); // Create a function subscribing to all events stream.subscribe("AllSub", "subscriber.all"); // Create a function subscribing to events of `bar` type stream.subscribe("FilteredSub", "subscriber.filtered", { filters: [ { data: { type: ["bar"], }, }, ], }); const app = new sst.aws.Function("MyApp", { handler: "publisher.handler", link: [stream], url: true, }); return { app: app.url, stream: stream.name, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-kinesis-stream). --- ## AWS Lambda AI streaming An example on how to stream AI responses from a Lambda function using the [AI SDK](https://ai-sdk.dev). Uses `streamText` from the AI SDK to stream a response through a Lambda function URL. ```ts title="sst.config.ts" { streaming: true } ``` The handler uses `awslambda.streamifyResponse` to stream the AI response back to the client as it's generated. ```ts title="index.ts" async (_event, responseStream) => { const result = streamText({ model: "amazon/nova-micro", prompt: "Write a poem about clouds that is twenty paragraphs long.", }); responseStream.setContentType("text/plain"); for await (const chunk of result.textStream) { responseStream.write(chunk); } responseStream.end(); }, ); ``` Set the API key for the AI gateway. ```bash sst secret set AiGatewayApiKey your-api-key-here ``` Use the "Run Client" dev command in the multiplexer to invoke the server and see the streamed response. 
```ts title="sst.config.ts" const server = new sst.aws.Function("Server", { url: true, streaming: true, timeout: "15 minutes", handler: "index.handler", environment: { AI_GATEWAY_API_KEY: new sst.Secret("AiGatewayApiKey").value, }, }); new sst.x.DevCommand("Client", { dev: { autostart: false, command: $interpolate`curl --no-buffer ${server.url}`, title: "Run Client", }, }); return { url: server.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-lambda-ai-stream). --- ## AWS Lambda Go This example shows how to use the [`go`](https://golang.org/) runtime in your Lambda functions. Our Go function is in the `src` directory and we point to it in our function. ```ts title="sst.config.ts" {5} new sst.aws.Function("MyFunction", { url: true, runtime: "go", link: [bucket], handler: "./src", }); ``` We are also linking it to an S3 bucket. We can reference the bucket in our function. ```go title="src/main.go" {2} func handler() (string, error) { bucket, err := resource.Get("MyBucket", "name") if err != nil { return "", err } return bucket.(string), nil } ``` The `resource.Get` function is from the SST Go SDK. ```go title="src/main.go" {2} "github.com/sst/sst/v3/sdk/golang/resource" ) ``` The `sst dev` CLI also supports running your Go function [_Live_](/docs/live). ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.Function("MyFunction", { url: true, runtime: "go", link: [bucket], handler: "./src", }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-lambda-golang). --- ## AWS Lambda build hook In this example we hook into the Lambda function build process with `hook.postbuild`. This is useful for modifying the generated Lambda function code before it's uploaded to AWS. It can also be used for uploading the generated sourcemaps to a service like Sentry. 
```ts title="sst.config.ts" new sst.aws.Function("MyFunction", { url: true, handler: "index.handler", hook: { async postbuild(dir) { console.log(`postbuild ------- ${dir}`); }, }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-lambda-hook). --- ## AWS Lambda retry with queues An example on how to retry Lambda invocations using SQS queues. Create an SQS retry queue and set it as the failure destination for the Lambda function. ```ts title="sst.config.ts" const retryQueue = new sst.aws.Queue("retryQueue"); const bus = new sst.aws.Bus("bus"); const busSubscriber = bus.subscribe("busSubscriber", { handler: "src/bus-subscriber.handler", environment: { RETRIES: "2", // set the number of retries }, link: [retryQueue], // so the function can send messages to the retry queue }); new aws.lambda.FunctionEventInvokeConfig("eventConfig", { functionName: $resolve([busSubscriber.nodes.function.name]).apply( ([name]) => name, ), maximumRetryAttempts: 2, // default is 2, must be between 0 and 2 destinationConfig: { onFailure: { destination: retryQueue.arn, }, }, }); ``` Then subscribe to the retry queue with a retrier function, and include a DLQ for messages that continue to fail. ```ts title="sst.config.ts" const dlq = new sst.aws.Queue("dlq"); retryQueue.subscribe({ handler: "src/retry.handler", link: [busSubscriber.nodes.function, retryQueue, dlq], timeout: "30 seconds", environment: { RETRIER_QUEUE_URL: retryQueue.url, }, permissions: [ { actions: ["lambda:GetFunction", "lambda:InvokeFunction"], resources: [ $interpolate`arn:aws:lambda:${aws.getRegionOutput().region}:${ aws.getCallerIdentityOutput().accountId }:function:*`, ], }, ], transform: { function: { deadLetterConfig: { targetArn: dlq.arn, }, }, }, }); ``` The retrier function will read messages and send them back to the queue to be retried with a backoff.
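The retrier delays each attempt with a capped exponential backoff, matching the `Math.min(Math.pow(2, attempt), 900)` logic in the handler below. As a standalone sketch of the delay calculation:

```typescript
// Capped exponential backoff: 2^attempt seconds, capped at 900
// seconds, the maximum DelaySeconds that SQS allows.
function retryDelaySeconds(attempt: number): number {
  return Math.min(Math.pow(2, attempt), 900);
}

console.log(retryDelaySeconds(1)); // 2
console.log(retryDelaySeconds(5)); // 32
console.log(retryDelaySeconds(10)); // 900
```

So early attempts retry quickly, while later attempts back off until they hit the SQS delay ceiling.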
```ts title="src/retry.ts" for (const record of evt.Records) { const parsed = JSON.parse(record.body); console.log("body", parsed); const functionName = parsed.requestContext.functionArn .replace(":$LATEST", "") .split(":") .pop(); if (parsed.responsePayload) { const attempt = (parsed.requestPayload.attempts || 0) + 1; const info = await lambda.send( new GetFunctionCommand({ FunctionName: functionName, }), ); const max = Number.parseInt( info.Configuration?.Environment?.Variables?.RETRIES || "", ) || 0; console.log("max retries", max); if (attempt > max) { console.log(`giving up after ${attempt} retries`); // send to dlq await sqs.send( new SendMessageCommand({ QueueUrl: Resource.dlq.url, MessageBody: JSON.stringify({ requestPayload: parsed.requestPayload, requestContext: parsed.requestContext, responsePayload: parsed.responsePayload, }), }), ); return; } const seconds = Math.min(Math.pow(2, attempt), 900); console.log( "delaying retry by ", seconds, "seconds for attempt", attempt, ); parsed.requestPayload.attempts = attempt; await sqs.send( new SendMessageCommand({ QueueUrl: Resource.retryQueue.url, DelaySeconds: seconds, MessageBody: JSON.stringify({ requestPayload: parsed.requestPayload, requestContext: parsed.requestContext, }), }), ); } if (!parsed.responsePayload) { console.log("triggering function"); try { await lambda.send( new InvokeCommand({ InvocationType: "Event", Payload: Buffer.from(JSON.stringify(parsed.requestPayload)), FunctionName: functionName, }), ); } catch (e) { if (e instanceof ResourceNotFoundException) { return; } throw e; } } } }; ``` ```ts title="sst.config.ts" const dlq = new sst.aws.Queue("dlq"); const retryQueue = new sst.aws.Queue("retryQueue"); const bus = new sst.aws.Bus("bus"); const busSubscriber = bus.subscribe("busSubscriber", { handler: "src/bus-subscriber.handler", environment: { RETRIES: "2", }, link: [retryQueue], // so the function can send messages to the queue }); const publisher = new sst.aws.Function("publisher", { 
handler: "src/publisher.handler", link: [bus], url: true, }); new aws.lambda.FunctionEventInvokeConfig("eventConfig", { functionName: $resolve([busSubscriber.nodes.function.name]).apply( ([name]) => name, ), maximumRetryAttempts: 1, destinationConfig: { onFailure: { destination: retryQueue.arn, }, }, }); retryQueue.subscribe({ handler: "src/retry.handler", link: [busSubscriber.nodes.function, retryQueue, dlq], timeout: "30 seconds", environment: { RETRIER_QUEUE_URL: retryQueue.url, }, permissions: [ { actions: ["lambda:GetFunction", "lambda:InvokeFunction"], resources: [ $interpolate`arn:aws:lambda:${aws.getRegionOutput().region}:${ aws.getCallerIdentityOutput().accountId }:function:*`, ], }, ], transform: { function: { deadLetterConfig: { targetArn: dlq.arn, }, }, }, }); return { publisher: publisher.url, dlq: dlq.url, retryQueue: retryQueue.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-lambda-retry-with-queues). --- ## AWS Lamda Rust multiple-binaries This example shows how to deploy multiple binary rust project to AWS Lambda. SST relies on the work of [cargo lambda](https://cargo-lambda) to build and deploy Rust Lambda functions. What is special about the following file is that we are defining multiple binaries using the `[[bin]]` section in the `Cargo.toml` file. ```toml title="Cargo.toml" {13,14,15,17,18,19} [package] name = "aws-lambda-rust-multi-bin" version = "0.1.0" edition = "2021" [dependencies] lambda_runtime = "0.13.0" serde = { version = "1.0.217", features = ["derive"] } serde_json = "1.0.138" tokio = { version = "1", features = ["macros"] } # -- please note ommited dependencies -- [[bin]] name = "push" path = "src/push.rs" [[bin]] name = "pop" path = "src/pop.rs" ``` We then utilise the . 
syntax to specify the handler binary. ```ts title="sst.config.ts" {5,11} new sst.aws.Function("push", { url: true, runtime: "rust", link: [bucket], handler: "./.push", }); new sst.aws.Function("pop", { url: true, runtime: "rust", link: [bucket], handler: "./.pop", }); ``` ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("Bucket"); const push = new sst.aws.Function("push", { runtime: "rust", handler: "./.push", url: true, architecture: "arm64", link: [bucket], }); const pop = new sst.aws.Function("pop", { runtime: "rust", handler: "./.pop", url: true, architecture: "arm64", link: [bucket], }); return { push_url: push.url, pop_url: pop.url }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-lambda-rust-multiple-binaries). --- ## AWS Lambda streaming An example of how to enable streaming for Lambda functions. ```ts title="sst.config.ts" { streaming: true } ``` Use the `awslambda.streamifyResponse` function to wrap your handler. The `awslambda` global is provided by the Lambda execution environment at runtime, and SST provides it automatically during `sst dev` as well. For TypeScript types, importing from `@types/aws-lambda` will augment the global namespace. ```ts title="index.ts" export const handler = awslambda.streamifyResponse( async (event, stream) => { stream = awslambda.HttpResponseStream.from(stream, { statusCode: 200, headers: { "Content-Type": "text/plain; charset=UTF-8", "X-Content-Type-Options": "nosniff", }, }); stream.write("Hello "); await new Promise((resolve) => setTimeout(resolve, 3000)); stream.write("World"); stream.end(); }, ); ``` ```ts title="sst.config.ts" const fn = new sst.aws.Function("MyFunction", { url: true, streaming: true, timeout: "15 minutes", handler: "index.handler", }); return { url: fn.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-lambda-stream). --- ## AWS Lambda tRPC streaming An example of how to use tRPC with Lambda streaming.
Uses `@trpc/server`'s `awsLambdaStreamingRequestHandler` adapter to handle streaming responses through Lambda function URLs. The `trpc-server` function defines a tRPC router and streams responses. The `trpc-client` function invokes the server using `httpBatchStreamLink`. Streaming is supported in both `sst dev` and `sst deploy`. ```ts title="sst.config.ts" const trpcServer = new sst.aws.Function('TrpcServer', { handler: 'trpc-server.handler', streaming: true, url: true, runtime: 'nodejs24.x', }); new sst.x.DevCommand('Client', { dev: { autostart: false, command: $interpolate`npx tsx trpc-client.ts`, title: 'Run Client', }, environment: { TRPC_SERVER_URL: trpcServer.url, }, }); return { serverUrl: trpcServer.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-lambda-trpc-stream). --- ## AWS Lambda in a VPC You can use SST to work locally on Lambda functions that are in a VPC. To do so, you'll need to enable `bastion` and `nat` on the `Vpc` component. ```ts title="sst.config.ts" new sst.aws.Vpc("MyVpc", { bastion: true, nat: "managed" }); ``` The NAT gateway allows your Lambda function to connect to the internet, while the bastion host allows your local machine to tunnel to the VPC. You'll need to install the tunnel if you haven't done this before. ```bash "sudo" sudo sst tunnel install ``` This needs _sudo_ to create the network interface on your machine. You'll only need to do this once. Now when you run `sst dev`, your function can access resources in the VPC. For example, here we are connecting to a Redis cluster. ```ts title="index.ts" const redis = new Cluster( [{ host: Resource.MyRedis.host, port: Resource.MyRedis.port }], { dnsLookup: (address, callback) => callback(null, address), redisOptions: { tls: {}, username: Resource.MyRedis.username, password: Resource.MyRedis.password, }, } ); ``` The `dnsLookup` override keeps ioredis from re-resolving the cluster node hostnames, so the connection keeps going through the tunnel. The Redis cluster is in the same VPC as the function.
```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc", { bastion: true, nat: "managed" }); const redis = new sst.aws.Redis("MyRedis", { vpc }); const api = new sst.aws.Function("MyFunction", { vpc, url: true, link: [redis], handler: "index.handler" }); return { url: api.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-lambda-vpc). --- ## Linkable env vars Pass SST link env vars to a native `aws.ecs.TaskDefinition` container using `sst.Linkable.env()`. This lets `Resource.MyResource` work at runtime in compute not managed by SST. ```ts title="sst.config.ts" // Create an SST bucket const bucket = new sst.aws.Bucket("MyBucket"); // Create a custom linkable const linkable = new sst.Linkable("MyLinkable", { properties: { foo: "bar", }, }); // Create VPC and ECS cluster using native AWS resources const vpc = new aws.ec2.Vpc("Vpc", { cidrBlock: "10.0.0.0/16" }); const subnet = new aws.ec2.Subnet("Subnet", { vpcId: vpc.id, cidrBlock: "10.0.0.0/24", }); const cluster = new aws.ecs.Cluster("Cluster"); // Linkable.env() returns a Record, but ECS expects // environment as an array of { name, value } objects const environment = sst.Linkable.env([bucket, linkable]).apply((env) => Object.entries(env).map(([name, value]) => ({ name, value })), ); const taskDefinition = new aws.ecs.TaskDefinition("TaskDefinition", { family: $interpolate`${$app.name}-${$app.stage}`, cpu: "256", memory: "512", networkMode: "awsvpc", requiresCompatibilities: ["FARGATE"], containerDefinitions: $jsonStringify([ { name: "app", image: "public.ecr.aws/docker/library/node:20-slim", essential: true, environment, }, ]), }); new aws.ecs.Service("Service", { cluster: cluster.arn, taskDefinition: taskDefinition.arn, desiredCount: 0, launchType: "FARGATE", networkConfiguration: { subnets: [subnet.id], }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-linkable-env). 
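For reference, each linked resource travels as a JSON payload in an `SST_RESOURCE_*` environment variable; that's what `Linkable.env()` emits and what the SDK's `Resource` proxy reads back at runtime. Here's a rough sketch of the mechanism with hypothetical values, not the actual SDK implementation:

```typescript
// Rough sketch: linked properties ride along as JSON in env vars.
// The variable content below is a hypothetical example.
const env: Record<string, string> = {
  SST_RESOURCE_MyLinkable: JSON.stringify({ foo: "bar" }),
};

// Roughly what a `Resource.MyLinkable` lookup does at runtime
function readResource(name: string): Record<string, any> {
  const raw = env[`SST_RESOURCE_${name}`];
  if (!raw) throw new Error(`"${name}" is not linked`);
  return JSON.parse(raw);
}

console.log(readResource("MyLinkable").foo); // "bar"
```

This is why the container above only needs the environment variables and the SST SDK on its `node` image, nothing else.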
--- ## AWS Load Balancer Web Application Firewall (WAF) Enable WAF for an AWS Load Balancer. The WAF is configured with a rate limiting rule and AWS managed rules. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); const service = cluster.addService("MyAppService", { image: { context: "./", dockerfile: "packages/server/Dockerfile", }, }); const rateLimitRule = { name: "RateLimitRule", statement: { rateBasedStatement: { limit: 200, aggregateKeyType: "IP", }, }, priority: 1, action: { block: {} }, visibilityConfig: { cloudwatchMetricsEnabled: true, sampledRequestsEnabled: true, metricName: "MyAppRateLimitRule", }, }; const awsManagedRules = { name: "AWSManagedRules", statement: { managedRuleGroupStatement: { name: "AWSManagedRulesCommonRuleSet", vendorName: "AWS", }, }, priority: 2, overrideAction: { none: {}, }, visibilityConfig: { cloudwatchMetricsEnabled: true, sampledRequestsEnabled: true, metricName: "MyAppAWSManagedRules", }, }; const webAcl = new aws.wafv2.WebAcl("AppAlbWebAcl", { defaultAction: { allow: {} }, scope: "REGIONAL", visibilityConfig: { cloudwatchMetricsEnabled: true, sampledRequestsEnabled: true, metricName: "AppAlbWebAcl", }, rules: [rateLimitRule, awsManagedRules], }); service.nodes.loadBalancer.arn.apply((arn) => { new aws.wafv2.WebAclAssociation("MyAppAlbWebAclAssociation", { resourceArn: arn, webAclArn: webAcl.arn, }); }); return {}; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-load-balancer-waf). --- ## AWS multi-region To deploy resources to multiple AWS regions, you can create a new provider for the region you want to deploy to. ```ts title="sst.config.ts" const provider = new aws.Provider("MyProvider", { region: "us-west-2" }); ``` And then pass that in to the resource.
```ts title="sst.config.ts" new sst.aws.Function("MyFunction", { handler: "index.handler" }, { provider }); ``` If no provider is passed in, the default provider will be used. And if no region is specified, the default region from your credentials will be used. ```ts title="sst.config.ts" const east = new sst.aws.Function("MyEastFunction", { url: true, handler: "index.handler", }); const provider = new aws.Provider("MyWestProvider", { region: "us-west-2" }); const west = new sst.aws.Function( "MyWestFunction", { url: true, handler: "index.handler", }, { provider } ); return { east: east.url, west: west.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-multi-region). --- ## AWS MySQL local In this example, we connect to a locally running MySQL instance for dev. While on deploy, we use RDS. We use the [`docker run`](https://docs.docker.com/reference/cli/docker/container/run/) CLI to start a local container with MySQL. You don't have to use Docker; you can run MySQL locally any other way. ```bash docker run \ --rm \ -p 3306:3306 \ -v $(pwd)/.sst/storage/mysql:/var/lib/mysql \ -e MYSQL_ROOT_PASSWORD=password \ -e MYSQL_DATABASE=local \ mysql:8.0 ``` The data is saved to the `.sst/storage` directory. So if you restart the dev server, the data will still be there. We then configure the `dev` property of the `Mysql` component with the settings for the local MySQL instance. ```ts title="sst.config.ts" dev: { username: "root", password: "password", database: "local", host: "localhost", port: 3306, } ``` By providing the `dev` prop for Mysql, SST will use the local MySQL instance and not deploy a new RDS database when running `sst dev`. It also allows us to access the database through a Resource `link` without having to conditionally check if we are running locally.
```ts title="index.ts" const pool = new Pool({ host: Resource.MyDatabase.host, port: Resource.MyDatabase.port, user: Resource.MyDatabase.username, password: Resource.MyDatabase.password, database: Resource.MyDatabase.database, }); ``` The above will work in both `sst dev` and `sst deploy`. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc", { nat: "ec2" }); const mysql = new sst.aws.Mysql("MyDatabase", { dev: { username: "root", password: "password", database: "local", host: "localhost", port: 3306, }, vpc, }); new sst.aws.Function("MyFunction", { vpc, url: true, link: [mysql], handler: "index.handler", }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-mysql-local). --- ## AWS MySQL In this example, we deploy an RDS MySQL database. ```ts title="sst.config.ts" const mysql = new sst.aws.Mysql("MyDatabase", { vpc, }); ``` And link it to a Lambda function. ```ts title="sst.config.ts" {3} new sst.aws.Function("MyApp", { handler: "index.handler", link: [mysql], url: true, vpc, }); ``` Now in the function we can access the database. ```ts title="index.ts" const connection = await mysql.createConnection({ database: Resource.MyDatabase.database, host: Resource.MyDatabase.host, port: Resource.MyDatabase.port, user: Resource.MyDatabase.username, password: Resource.MyDatabase.password, }); ``` We also enable the `bastion` option for the VPC. This allows us to connect to the database from our local machine with the `sst tunnel` CLI. ```bash "sudo" sudo npx sst tunnel install ``` This needs _sudo_ to create a network interface on your machine. You’ll only need to do this once on your machine. Now you can run `npx sst dev` and you can connect to the database from your local machine. 
```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc", { nat: "ec2", bastion: true }); const mysql = new sst.aws.Mysql("MyDatabase", { vpc, }); const app = new sst.aws.Function("MyApp", { handler: "index.handler", link: [mysql], url: true, vpc, }); return { app: app.url, host: mysql.host, port: mysql.port, username: mysql.username, password: mysql.password, database: mysql.database, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-mysql). --- ## AWS NestJS with Redis Creates a hit counter app with NestJS and Redis. :::note You need Node 22.12 or higher for this example to work. ::: If you're on an older version of Node, set the `--experimental-require-module` flag instead. This allows NestJS to import the SST SDK. This deploys NestJS as a Fargate service to ECS and it's linked to Redis. ```ts title="sst.config.ts" {3} new sst.aws.Service("MyService", { cluster, link: [redis], loadBalancer: { ports: [{ listen: "80/http", forward: "3000/http" }], }, dev: { command: "npm run start:dev", }, }); ``` Since our Redis cluster is in a VPC, we’ll need a tunnel to connect to it from our local machine. ```bash "sudo" sudo npx sst tunnel install ``` This needs _sudo_ to create a network interface on your machine. You’ll only need to do this once on your machine. To start your app locally run. ```bash npx sst dev ``` Now if you go to `http://localhost:3000` you’ll see a counter update as you refresh the page. Finally, you can deploy it with `npx sst deploy --stage production`, using the `Dockerfile` that's included in the example.
```ts title="sst.config.ts" const vpc = new sst.aws.Vpc('MyVpc', { bastion: true }); const redis = new sst.aws.Redis('MyRedis', { vpc }); const cluster = new sst.aws.Cluster('MyCluster', { vpc }); new sst.aws.Service('MyService', { cluster, link: [redis], loadBalancer: { ports: [{ listen: '80/http', forward: '3000/http' }], }, dev: { command: 'npm run start:dev', }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-nestjs-redis). --- ## AWS Next.js add behavior Here's how to add additional routes or cache behaviors to the CDN of a Next.js app deployed with OpenNext to AWS. Specify the path pattern that you want to forward to your new origin. For example, to forward all requests to the `/blog` path to a different origin. ```ts title="sst.config.ts" pathPattern: "/blog/*" ``` And then specify the domain of the new origin. ```ts title="sst.config.ts" domainName: "blog.example.com" ``` We use this to `transform` our site's CDN and add the additional behaviors. 
```ts title="sst.config.ts" const blogOrigin = { // The domain of the new origin domainName: "blog.example.com", originId: "blogCustomOrigin", customOriginConfig: { httpPort: 80, httpsPort: 443, originSslProtocols: ["TLSv1.2"], // If HTTPS is supported originProtocolPolicy: "https-only", }, }; const cacheBehavior = { // The path to forward to the new origin pathPattern: "/blog/*", targetOriginId: blogOrigin.originId, viewerProtocolPolicy: "redirect-to-https", allowedMethods: ["GET", "HEAD", "OPTIONS"], cachedMethods: ["GET", "HEAD"], forwardedValues: { queryString: true, cookies: { forward: "all", }, }, }; new sst.aws.Nextjs("MyWeb", { transform: { cdn: (options: sst.aws.CdnArgs) => { options.origins = $resolve(options.origins).apply(val => [...val, blogOrigin]); options.orderedCacheBehaviors = $resolve( options.orderedCacheBehaviors || [] ).apply(val => [...val, cacheBehavior]); }, }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-nextjs-add-behavior). --- ## AWS Next.js basic auth Deploys a simple Next.js app and adds basic auth to it. This is useful for dev environments where you want to share your app with your team but ensure that it's not publicly accessible. :::tip You can use this for all SSR sites, like Astro, Remix, SvelteKit, etc. ::: This works by injecting some code into a CloudFront function that checks the basic auth header and matches it against the `USERNAME` and `PASSWORD` secrets. ```ts title="sst.config.ts" { injection: $interpolate` if ( !event.request.headers.authorization || event.request.headers.authorization.value !== "Basic ${basicAuth}" ) { return { statusCode: 401, headers: { "www-authenticate": { value: "Basic" } } }; }`, } ``` To deploy this, you need to first set the `USERNAME` and `PASSWORD` secrets.
```bash sst secret set USERNAME my-username sst secret set PASSWORD my-password ``` If you are deploying this to preview environments, you might want to set the secrets using the [`--fallback`](/docs/reference/cli#secret) flag. ```ts title="sst.config.ts" const username = new sst.Secret("USERNAME"); const password = new sst.Secret("PASSWORD"); const basicAuth = $resolve([username.value, password.value]).apply( ([username, password]) => Buffer.from(`${username}:${password}`).toString("base64") ); new sst.aws.Nextjs("MyWeb", { server: { // Don't password protect prod edge: $app.stage !== "production" ? { viewerRequest: { injection: $interpolate` if ( !event.request.headers.authorization || event.request.headers.authorization.value !== "Basic ${basicAuth}" ) { return { statusCode: 401, headers: { "www-authenticate": { value: "Basic" } } }; }`, }, } : undefined, }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-nextjs-basic-auth). --- ## AWS Next.js container with Redis Creates a hit counter app with Next.js and Redis. This deploys Next.js as a Fargate service to ECS and it's linked to Redis. ```ts title="sst.config.ts" {3} new sst.aws.Service("MyService", { cluster, link: [redis], loadBalancer: { ports: [{ listen: "80/http", forward: "3000/http" }], }, dev: { command: "npm run dev", }, }); ``` Since our Redis cluster is in a VPC, we’ll need a tunnel to connect to it from our local machine. ```bash "sudo" sudo npx sst tunnel install ``` This needs _sudo_ to create a network interface on your machine. You’ll only need to do this once on your machine. To start your app locally run. ```bash npx sst dev ``` Now if you go to `http://localhost:3000` you’ll see a counter update as you refresh the page. Finally, you can deploy it by: 1. Setting `output: "standalone"` in your `next.config.mjs` file. 2. Adding a `Dockerfile` that's included in this example. 3. Running `npx sst deploy --stage production`. 
```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc", { bastion: true }); const redis = new sst.aws.Redis("MyRedis", { vpc }); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); new sst.aws.Service("MyService", { cluster, link: [redis], loadBalancer: { ports: [{ listen: "80/http", forward: "3000/http" }], }, dev: { command: "npm run dev", }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-nextjs-redis). --- ## AWS Next.js streaming An example of how to use streaming with Next.js RSC. Uses `Suspense` to stream an async component. ```tsx title="app/page.tsx" <Suspense fallback={<div>Loading...</div>}> <Friends /> </Suspense> ``` For this demo we also need to make sure the route is not statically built. ```ts title="app/page.tsx" export const dynamic = "force-dynamic"; ``` This is deployed with OpenNext, which needs a config to enable streaming. ```ts title="open-next.config.ts" {4} const config = { default: { override: { wrapper: "aws-lambda-streaming" } } }; export default config; ``` You should see the _friends_ section load after a 3 second delay. :::note Safari handles streaming differently than other browsers. ::: Safari uses a [different heuristic](https://bugs.webkit.org/show_bug.cgi?id=252413) to determine when to stream data. You need to render _enough_ initial HTML to trigger streaming. This is typically only a problem for demo apps. ```ts title="sst.config.ts" new sst.aws.Nextjs("MyWeb"); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-nextjs-stream). --- ## AWS Nuxt streaming An example of how to use streaming with Nuxt.js. Uses `createEventStream` to stream data from a server API. ```ts title="server/api/streaming.ts" export default defineEventHandler(async (event) => { const eventStream = createEventStream(event); eventStream.push("Start\n\n"); // Send a message every second const interval = setInterval(async () => { await eventStream.push(`Random: ${Math.random().toFixed(5)} `); }, 1000); // Stop after 10 seconds and close the stream setTimeout(async () => { clearInterval(interval); await eventStream.close(); }, 10000); return eventStream.send(); }); ``` The client uses the Fetch API to consume the stream. A minimal version might look something like this. ```vue title="app.vue" <script setup> // Minimal sketch: read the SSE-formatted chunks with the Fetch API const data = ref(""); onMounted(async () => { const response = await fetch("/api/streaming"); const reader = response.body.getReader(); const decoder = new TextDecoder(); while (true) { const { done, value } = await reader.read(); if (done) break; data.value += decoder.decode(value); } }); </script> <template> <pre>{{ data }}</pre> </template> ``` Make sure your `nuxt.config.ts` is set up to handle the streaming API correctly. ```ts title="nuxt.config.ts" {4-6} export default defineNuxtConfig({ nitro: { preset: 'aws-lambda', awsLambda: { streaming: true } } }); ``` You should see random numbers streamed to the page every second for 10 seconds. :::note Safari handles streaming differently than other browsers. ::: Safari uses a [different heuristic](https://bugs.webkit.org/show_bug.cgi?id=252413) to determine when to stream data. You need to render _enough_ initial HTML to trigger streaming. This is typically only a problem for demo apps.
```ts title="sst.config.ts" new sst.aws.Nuxt("MyWeb"); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-nuxt-stream). --- ## AWS OpenSearch local In this example, we connect to a locally running OpenSearch process for dev. While on deploy, we use AWS' OpenSearch Service. We use the [`docker run`](https://docs.docker.com/reference/cli/docker/container/run/) CLI to start a local container with OpenSearch. You don't have to use Docker, you can use any other way to run OpenSearch locally. ```bash docker run \ --rm \ -p 9200:9200 \ -v $(pwd)/.sst/storage/opensearch:/usr/share/opensearch/data \ -e discovery.type=single-node \ -e plugins.security.disabled=true \ -e OPENSEARCH_INITIAL_ADMIN_PASSWORD=^Passw0rd^ \ opensearchproject/opensearch:2.17.0 ``` The data is saved to the `.sst/storage` directory. So if you restart the dev server, the data will still be there. We then configure the `dev` property of the `OpenSearch` component with the settings for the local OpenSearch instance. ```ts title="sst.config.ts" dev: { url: "http://localhost:9200", username: "admin", password: "^Passw0rd^" } ``` By providing the `dev` prop for OpenSearch, SST will use the local OpenSearch process and not deploy a new OpenSearch domain when running `sst dev`. It also allows us to access the local process through a Resource `link` without having to conditionally check if we are running locally. ```ts title="index.ts" const client = new Client({ node: Resource.MySearch.url, auth: { username: Resource.MySearch.username, password: Resource.MySearch.password, }, }); ``` The above will work in both `sst dev` and `sst deploy`. ```ts title="sst.config.ts" const search = new sst.aws.OpenSearch("MySearch", { dev: { url: "http://localhost:9200", username: "admin", password: "^Passw0rd^", }, }); new sst.aws.Function("MyApp", { handler: "index.handler", url: true, link: [search], }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-open-search-local). 
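Since the client is the same in both modes, queries don't change between local and deployed. As a sketch, here's the shape of a `match` search you'd pass to `client.search`; the `movies` index and `title` field are hypothetical:

```typescript
// Hypothetical index and field names; the body is standard OpenSearch query DSL.
const searchRequest = {
  index: "movies",
  body: {
    query: {
      match: { title: "matrix" },
    },
  },
};

// e.g. const response = await client.search(searchRequest);
// matching documents come back under response.body.hits.hits
console.log(JSON.stringify(searchRequest));
```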
--- ## AWS OpenSearch In this example, we create a new OpenSearch domain, link it to a function, and then query it. Start by creating a new OpenSearch domain. ```ts title="sst.config.ts" const search = new sst.aws.OpenSearch("MySearch"); ``` Once linked to a function, we can connect to it. ```ts title="index.ts" const client = new Client({ node: Resource.MySearch.url, auth: { username: Resource.MySearch.username, password: Resource.MySearch.password } }); ``` This is using the [OpenSearch JS SDK](https://docs.opensearch.org/docs/latest/clients/javascript/index) to connect to the OpenSearch domain. ```ts title="sst.config.ts" const search = new sst.aws.OpenSearch("MySearch"); const app = new sst.aws.Function("MyApp", { handler: "index.handler", url: true, link: [search], }); return { app: app.url, url: search.url, username: search.username, password: search.password, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-open-search). --- ## AWS PlanetScale Drizzle MySQL In this example, we use PlanetScale with a branch-per-stage pattern. Every stage gets its own database branch — so each PR can have an isolated database. ```ts title="sst.config.ts" const db = planetscale.getDatabaseVitessOutput({ id: "mydb", organization: "myorg", }); const branch = $app.stage === "production" ? planetscale.getVitessBranchOutput({ id: db.defaultBranch, organization: db.organization, database: db.name, }) : new planetscale.VitessBranch("DatabaseBranch", { database: db.name, organization: db.organization, name: $app.stage, parentBranch: db.defaultBranch, }); ``` We then create a password and wrap it in a `Linkable` to link it to a function.
```ts title="sst.config.ts" {3} new sst.aws.Function("Api", { handler: "src/api.handler", link: [database], url: true, }); ``` You can push your Drizzle schema changes to PlanetScale with: ```bash bun run db:push ``` In the function we use [Drizzle ORM](https://orm.drizzle.team) with the [`Resource`](/docs/reference/sdk/#resource) helper. ```ts title="src/drizzle.ts" connection: { host: Resource.Database.host, username: Resource.Database.username, password: Resource.Database.password, }, }); ``` ```ts title="sst.config.ts" const db = planetscale.getDatabaseVitessOutput({ id: "example", organization: "vimtor", }); const branch = $app.stage === "production" ? planetscale.getVitessBranchOutput({ id: db.defaultBranch, organization: db.organization, database: db.name, }) : new planetscale.VitessBranch("DatabaseBranch", { database: db.name, organization: db.organization, name: $app.stage, parentBranch: db.defaultBranch, }); const password = new planetscale.VitessBranchPassword("DatabasePassword", { database: db.name, organization: db.organization, branch: branch.name, role: "admin", name: `${$app.name}-${$app.stage}`, }); const database = new sst.Linkable("Database", { properties: { host: password.accessHostUrl, username: password.username, password: password.plainText, database: db.name, port: 3306, }, }); const api = new sst.aws.Function("Api", { handler: "src/api.handler", link: [database], url: true, }); return { url: api.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-planetscale-drizzle-mysql). --- ## AWS PlanetScale Drizzle Postgres In this example, we use PlanetScale Postgres with a branch-per-stage pattern. Every stage gets its own database branch — so each PR can have an isolated database. ```ts title="sst.config.ts" const db = planetscale.getDatabasePostgresOutput({ id: "mydb", organization: "myorg", }); const branch = $app.stage === "production" ? 
planetscale.getPostgresBranchOutput({ id: db.defaultBranch, organization: db.organization, database: db.name, }) : new planetscale.PostgresBranch("DatabaseBranch", { database: db.name, organization: db.organization, name: $app.stage, parentBranch: db.defaultBranch, }); ``` We then create a role and wrap it in a `Linkable` to link it to a function. ```ts title="sst.config.ts" {3} new sst.aws.Function("Api", { handler: "src/api.handler", link: [database], url: true, }); ``` You can push your Drizzle schema changes to PlanetScale with: ```bash bun run db:push ``` In the function we use [Drizzle ORM](https://orm.drizzle.team) with the [`Resource`](/docs/reference/sdk/#resource) helper. ```ts title="src/drizzle.ts" const client = postgres({ host: Resource.Database.host, username: Resource.Database.username, password: Resource.Database.password, database: Resource.Database.database, }); ``` ```ts title="sst.config.ts" const db = planetscale.getDatabasePostgresOutput({ id: "mydb", organization: "myorg", }); const branch = $app.stage === "production" ? 
planetscale.getPostgresBranchOutput({ id: db.defaultBranch, organization: db.organization, database: db.name, }) : new planetscale.PostgresBranch("DatabaseBranch", { database: db.name, organization: db.organization, name: $app.stage, parentBranch: db.defaultBranch, }); const role = new planetscale.PostgresBranchRole("DatabaseRole", { database: db.name, organization: db.organization, branch: branch.name, name: `${$app.name}-${$app.stage}`, inheritedRoles: [ "pg_read_all_data", "pg_write_all_data", "postgres", // Only needed for pushing schema changes ], }); const database = new sst.Linkable("Database", { properties: { host: role.accessHostUrl, username: role.username, password: role.password, database: role.databaseName, port: 6432, // Use 5432 for direct connection instead of PgBouncer }, }); const api = new sst.aws.Function("Api", { handler: "src/api.handler", link: [database], url: true, }); return { url: api.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-planetscale-drizzle-postgres). --- ## Policy Pack Validation You can use Pulumi Policy Packs to enforce compliance and security policies on your infrastructure before deployment. Policies with an enforcement level of "mandatory" will block the deployment. This example shows how to use the `--policy` flag with `sst diff` and `sst deploy` to validate your infrastructure against a policy pack. Run the diff command with a policy pack to preview changes and check for violations: ```bash sst diff --policy ./policy-pack --stage prod ``` Deploy with policy validation: ```bash sst deploy --policy ./policy-pack --stage prod ``` To get started, you can create a new policy pack for AWS using: ```bash mkdir policy-pack cd policy-pack pulumi policy new aws-typescript ``` The example policy pack (check the full example) enforces that all IAM roles must have a permission boundary, blocking the deployment in this SST example.
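As a rough sketch, a rule like the one just described might look something like this. This is an assumed illustration of the pattern, not the exact pack from the example repo; the policy and pack names are hypothetical.

```typescript
// Hypothetical sketch: flag any IAM role without a permissions boundary as a
// mandatory violation, which blocks `sst deploy --policy`.
import * as aws from "@pulumi/aws";
import { PolicyPack, validateResourceOfType } from "@pulumi/policy";

new PolicyPack("aws-typescript", {
  policies: [
    {
      name: "iam-role-permissions-boundary",
      description: "IAM roles must have a permissions boundary attached.",
      enforcementLevel: "mandatory",
      validateResource: validateResourceOfType(
        aws.iam.Role,
        (role, args, reportViolation) => {
          if (!role.permissionsBoundary) {
            reportViolation("IAM role is missing a permissions boundary.");
          }
        },
      ),
    },
  ],
});
```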
```ts title="sst.config.ts" const role = new aws.iam.Role("ExampleRoleWithBoundary", { assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal({ Service: "lambda.amazonaws.com", }), // To make this compliant with the policy example, uncomment the following line: // permissionsBoundary: "arn:aws:iam::aws:policy/PowerUserAccess", }); new aws.iam.RolePolicy("S3GetItemPolicy", { role: role.id, policy: aws.iam.getPolicyDocumentOutput({ statements: [ { actions: ["s3:GetObject"], resources: ["*"], }, ], }).json, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-policy-pack). --- ## AWS Postgres local In this example, we connect to a locally running Postgres instance for dev. While on deploy, we use RDS. We use the [`docker run`](https://docs.docker.com/reference/cli/docker/container/run/) CLI to start a local container with Postgres. You don't have to use Docker, you can use Postgres.app or any other way to run Postgres locally. ```bash docker run \ --rm \ -p 5432:5432 \ -v $(pwd)/.sst/storage/postgres:/var/lib/postgresql/data \ -e POSTGRES_USER=postgres \ -e POSTGRES_PASSWORD=password \ -e POSTGRES_DB=local \ postgres:16.4 ``` The data is saved to the `.sst/storage` directory. So if you restart the dev server, the data will still be there. We then configure the `dev` property of the `Postgres` component with the settings for the local Postgres instance. ```ts title="sst.config.ts" dev: { username: "postgres", password: "password", database: "local", port: 5432, } ``` By providing the `dev` prop for Postgres, SST will use the local Postgres instance and not deploy a new RDS database when running `sst dev`. It also allows us to access the database through a Resource `link` without having to conditionally check if we are running locally. 
```ts title="index.ts" const pool = new Pool({ host: Resource.MyPostgres.host, port: Resource.MyPostgres.port, user: Resource.MyPostgres.username, password: Resource.MyPostgres.password, database: Resource.MyPostgres.database, }); ``` The above will work in both `sst dev` and `sst deploy`. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc", { nat: "ec2" }); const rds = new sst.aws.Postgres("MyPostgres", { dev: { username: "postgres", password: "password", database: "local", host: "localhost", port: 5432, }, vpc, }); new sst.aws.Function("MyFunction", { vpc, url: true, link: [rds], handler: "index.handler", }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-postgres-local). --- ## Prisma in Lambda To use Prisma in a Lambda function you need to - Generate the Prisma Client with the right architecture - Copy the generated client to the function - Run the function inside a VPC You can set the architecture using the `binaryTargets` option in `prisma/schema.prisma`. ```prisma title="prisma/schema.prisma" // For x86 binaryTargets = ["native", "rhel-openssl-3.0.x"] // For ARM // binaryTargets = ["native", "linux-arm64-openssl-3.0.x"] ``` You can also switch to ARM, just make sure to also change the function architecture in your `sst.config.ts`. ```ts title="sst.config.ts" { // For ARM architecture: "arm64" } ``` To generate the client, you need to run `prisma generate` when you make changes to the schema. Since this [needs to be done on every deploy](https://www.prisma.io/docs/orm/more/help-and-troubleshooting/help-articles/vercel-caching-issue#a-custom-postinstall-script), we add a `postinstall` script to the `package.json`. ```json title="package.json" "scripts": { "postinstall": "prisma generate" } ``` This runs the command on `npm install`. We then need to copy the generated client to the function when we deploy. 
```ts title="sst.config.ts"
{
  copyFiles: [{ from: "node_modules/.prisma/client/" }]
}
```

Our function also needs to run inside a VPC, since Prisma doesn't support the Data API.

```ts title="sst.config.ts"
{
  vpc
}
```

#### Prisma in serverless environments

Prisma is [not great in serverless environments](https://www.prisma.io/docs/orm/prisma-client/setup-and-configuration/databases-connections#serverless-environments-faas). For a few reasons:

1. It doesn't support the Data API, so you need to manage the connection pool on your own.
2. Without the Data API, your functions need to run inside a VPC.
   - You cannot use `sst dev` without [connecting to the VPC](/docs/live#using-a-vpc).
3. Due to the internal architecture of their client, it also has slower cold starts.

Instead, we recommend using [Drizzle](https://orm.drizzle.team). This example is here as a reference for people who are already using Prisma.

```ts title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc", { nat: "managed" });
const rds = new sst.aws.Postgres("MyPostgres", { vpc });

const api = new sst.aws.Function("MyApi", {
  vpc,
  url: true,
  link: [rds],
  // For ARM
  // architecture: "arm64",
  handler: "index.handler",
  copyFiles: [{ from: "node_modules/.prisma/client/" }],
});

return {
  api: api.url,
};
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-prisma-lambda).

---

## Puppeteer in Lambda

To use Puppeteer in a Lambda function you need:

1. [`puppeteer-core`](https://www.npmjs.com/package/puppeteer-core)
2. Chromium
   - In `sst dev`, we'll use a locally installed Chromium version.
   - In `sst deploy`, we'll use the [`@sparticuz/chromium`](https://github.com/sparticuz/chromium) package. It comes with a pre-built binary for Lambda.

#### Chromium version

Since Puppeteer has a preferred version of Chromium, we'll need to check the version of Chrome that a given version of Puppeteer supports.
Head over to [Puppeteer's Chromium support page](https://pptr.dev/chromium-support) and check which versions work together. For example, Puppeteer v23.1.1 supports Chrome for Testing 127.0.6533.119. So, we'll use v127 of `@sparticuz/chromium`.

```bash
npm install puppeteer-core@23.1.1 @sparticuz/chromium@127.0.0
```

#### Install Chromium locally

To use this locally, you'll need to install Chromium.

```bash
npx @puppeteer/browsers install chromium@latest --path /tmp/localChromium
```

Once installed you'll see the location of the Chromium binary, `/tmp/localChromium/chromium/mac_arm-1350406/chrome-mac/Chromium.app/Contents/MacOS/Chromium`. Update this in your Lambda function.

```ts title="index.ts"
// This is the path to the local Chromium binary
const YOUR_LOCAL_CHROMIUM_PATH = "/tmp/localChromium/chromium/mac_arm-1350406/chrome-mac/Chromium.app/Contents/MacOS/Chromium";
```

You'll notice we are picking the right binary with the `SST_DEV` environment variable.

```ts title="index.ts" {4-6}
const browser = await puppeteer.launch({
  args: chromium.args,
  defaultViewport: chromium.defaultViewport,
  executablePath: process.env.SST_DEV
    ? YOUR_LOCAL_CHROMIUM_PATH
    : await chromium.executablePath(),
  headless: chromium.headless,
});
```

#### Deploy

We don't need a layer to deploy this because `@sparticuz/chromium` comes with a pre-built binary for Lambda.

:::note
As of writing this, `arm64` is not supported by `@sparticuz/chromium`.
:::

We just need to set it in the [`nodejs.install`](/docs/component/aws/function#nodejs-install).

```ts title="sst.config.ts"
{
  nodejs: {
    install: ["@sparticuz/chromium"]
  }
}
```

And on deploy, SST will use the right binary.

:::tip
You don't need to use a Lambda layer to use Puppeteer.
:::

We are giving our function more memory and a longer timeout since running Puppeteer can take a while.
```ts title="sst.config.ts"
const api = new sst.aws.Function("MyFunction", {
  url: true,
  memory: "2 GB",
  timeout: "15 minutes",
  handler: "index.handler",
  nodejs: {
    install: ["@sparticuz/chromium"],
  },
});

return {
  url: api.url,
};
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-puppeteer).

---

## AWS Lambda Python container

Python Lambda functions that use large dependencies like `numpy` and `pandas` can hit the 250MB Lambda package limit. To work around this, you can deploy them as a container image to Lambda.

:::tip
Container images on Lambda have a limit of 10GB.
:::

In this example, we deploy two functions as container images.

```ts title="sst.config.ts" {2-4}
const base = new sst.aws.Function("PythonFn", {
  python: {
    container: true,
  },
  handler: "./functions/src/functions/api.handler",
  runtime: "python3.11",
  link: [linkableValue],
  url: true,
});
```

Now when you run `sst deploy`, it uses a built-in Dockerfile to build the image and deploy it.

:::note
You need to have the Docker daemon running locally.
:::

To use a custom Dockerfile, you can place a `Dockerfile` in the root of the uv workspace for your function.

```ts title="sst.config.ts" {5}
const custom = new sst.aws.Function("PythonFnCustom", {
  python: {
    container: true,
  },
  handler: "./custom_dockerfile/src/custom_dockerfile/api.handler",
  runtime: "python3.11",
  link: [linkableValue],
  url: true,
});
```

Here we have a `Dockerfile` in the `custom_dockerfile/` directory.

```dockerfile title="custom_dockerfile/Dockerfile"
# The python version to use is supplied as an arg from SST
ARG PYTHON_VERSION=3.11

# Use an official AWS Lambda base image for Python
FROM public.ecr.aws/lambda/python:${PYTHON_VERSION}

# ...
```

The project structure looks something like this.
```txt {5}
├── sst.config.ts
├── pyproject.toml
└── custom_dockerfile
    ├── pyproject.toml
    ├── Dockerfile
    └── src
        └── custom_dockerfile
            └── api.py
```

Locally, you want to set the Python version in your `pyproject.toml` to make sure that `sst dev` uses the same version as `sst deploy`.

```ts title="sst.config.ts"
const linkableValue = new sst.Linkable("MyLinkableValue", {
  properties: {
    foo: "Hello World",
  },
});

const base = new sst.aws.Function("PythonFn", {
  python: {
    container: true,
  },
  handler: "./functions/src/functions/api.handler",
  runtime: "python3.11",
  link: [linkableValue],
  url: true,
});

const custom = new sst.aws.Function("PythonFnCustom", {
  python: {
    container: true,
  },
  handler: "./custom_dockerfile/src/custom_dockerfile/api.handler",
  runtime: "python3.11",
  link: [linkableValue],
  url: true,
});

return {
  base: base.url,
  custom: custom.url,
};
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-python-container).

---

## AWS Lambda Python Hugging Face

Uses a Python Lambda container image to deploy a lightweight [Hugging Face](https://huggingface.co/) model. Uses the [transformers](https://github.com/huggingface/transformers) library to generate text using the [TinyStories-33M](https://huggingface.co/roneneldan/TinyStories-33M) model. The backend is the PyTorch CPU runtime.

:::note
This is not a production ready example.
:::

This example shows how to use custom index resolution to get dependencies from a private PyPI server, such as the PyTorch CPU index. It also shows how to use a custom Dockerfile to handle complex builds, such as installing PyTorch and pruning the build size.

```ts title="sst.config.ts"
new sst.aws.Function("MyPythonFunction", {
  python: {
    container: true,
  },
  handler: "functions/src/functions/api.handler",
  runtime: "python3.12",
  timeout: "60 seconds",
  url: true,
});
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-python-huggingface).
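For reference, custom index resolution with uv is configured in `pyproject.toml`. Here's a minimal sketch of pulling `torch` from the PyTorch CPU index; the index name and exact layout are assumptions, the example repo may differ.

```toml title="functions/pyproject.toml"
[project]
name = "functions"
version = "0.1.0"
requires-python = "==3.12.*"
dependencies = ["torch"]

# Assumed: resolve torch only from the PyTorch CPU index instead of PyPI
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[tool.uv.sources]
torch = { index = "pytorch-cpu" }
```

With `explicit = true`, uv only uses this index for packages that are pinned to it in `tool.uv.sources`, so the rest of your dependencies still come from PyPI.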
---

## AWS Lambda Python

SST uses [uv](https://docs.astral.sh/uv/) to manage your Python runtime, so make sure you have it [installed](https://docs.astral.sh/uv/getting-started/installation/).

Any [uv workspace](https://docs.astral.sh/uv/concepts/projects/workspaces/#workspace-sources) package can be built and deployed as a Lambda function using SST. Drop-in mode is currently not supported.

:::note
Builds currently do not tree-shake, so having lots of workspaces can make the build larger than necessary.
:::

In this example we deploy a handler from the `functions/` directory. It depends on shared code from another uv workspace in the `core/` directory.

```txt
├── sst.config.ts
├── pyproject.toml
├── core
│   ├── pyproject.toml
│   └── src
│       └── core
│           └── __init__.py
└── functions
    ├── pyproject.toml
    └── src
        └── functions
            ├── __init__.py
            └── api.py
```

The `handler` is the path to the handler file and the name of the handler function in it.

```ts title="sst.config.ts" {2}
new sst.aws.Function("MyPythonFunction", {
  handler: "functions/src/functions/api.handler",
  runtime: "python3.11",
  link: [linkableValue],
  url: true,
});
```

SST will traverse up from the handler path and look for the nearest `pyproject.toml`. It will throw an error if it can't find one.

To access linked resources, you can use the SST SDK.

```py title="functions/src/functions/api.py" {1}
from sst import Resource

def handler(event, context):
    print(Resource.MyLinkableValue.foo)
```

Where the `sst` package can be added to your `pyproject.toml`.

```toml title="functions/pyproject.toml"
[tool.uv.sources]
sst = { git = "https://github.com/sst/sst.git", subdirectory = "sdk/python", branch = "dev" }
```

You also want to set the Python version in your `pyproject.toml` to the same version as the one in Lambda.

```toml title="functions/pyproject.toml"
requires-python = "==3.11.*"
```

This makes sure that your functions work the same in `sst dev` as `sst deploy`.
```ts title="sst.config.ts"
const linkableValue = new sst.Linkable("MyLinkableValue", {
  properties: {
    foo: "Hello World",
  },
});

new sst.aws.Function("MyPythonFunction", {
  handler: "functions/src/functions/api.handler",
  runtime: "python3.11",
  link: [linkableValue],
  url: true,
});
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-python).

---

## Subscribe to queues

Create an SQS queue, subscribe to it, and publish to it from a function.

```ts title="sst.config.ts"
const queue = new sst.aws.Queue("MyQueue");
queue.subscribe("subscriber.handler");

const app = new sst.aws.Function("MyApp", {
  handler: "publisher.handler",
  link: [queue],
  url: true,
});

return {
  app: app.url,
  queue: queue.url,
};
```

The subscriber will read messages from the queue in batches. This array of messages exists on the `Records` property of the `SQSEvent`.

```ts title="subscriber.ts"
export const handler = async (event) => {
  for (const record of event.Records) {
    // Message bodies are always strings
    console.log(record.body);
  }
  return;
};
```

By default, all messages in the batch become visible in the queue again if an error occurs. This can lead to unnecessary extra processing and messages being processed more than once. The solution is to enable [partial batch responses](https://docs.aws.amazon.com/lambda/latest/dg/services-sqs-errorhandling.html#services-sqs-batchfailurereporting) and return which specific messages within the batch should be made visible again in the queue.

Update the queue subscriber.

```ts title="sst.config.ts"
queue.subscribe("subscriber.handler", {
  batch: {
    partialResponses: true,
  }
});
```

Then update the handler to return the failed items.
```ts title="subscriber.ts"
export const handler = async (event) => {
  const batchItemFailures = [];

  for (const record of event.Records) {
    try {
      console.log(record.body);
      if (Math.random() < 0.1) {
        throw new Error("An error occurred");
      }
    } catch (e) {
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }

  // Failed items will be made visible in the queue again
  return { batchItemFailures };
};
```

```ts title="sst.config.ts"
const queue = new sst.aws.Queue("MyQueue");
queue.subscribe("subscriber.handler", {
  batch: {
    partialResponses: true,
  }
});

const app = new sst.aws.Function("MyApp", {
  handler: "publisher.handler",
  link: [queue],
  url: true,
});

return {
  app: app.url,
  queue: queue.url,
};
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-queue).

---

## AWS Quota Increase

Use the Pulumi AWS provider to request an increase to an AWS service quota. In this example, we increase the Lambda concurrent executions quota.

You can find service and quota codes in the [AWS Service Quotas console](https://console.aws.amazon.com/servicequotas) or by running `aws service-quotas list-service-quotas --service-code `.

```ts title="sst.config.ts"
new aws.servicequotas.ServiceQuota("LambdaConcurrentExecutions", {
  serviceCode: "lambda",
  quotaCode: "L-B99A9384",
  value: 2000,
});
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-quota-increase).

---

## Rails container

Deploy a Ruby on Rails app in a container with a linked public S3 bucket.
```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket", { access: "public", }); const vpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); new sst.aws.Service("MyService", { cluster, loadBalancer: { ports: [{ listen: "80/http", forward: "3000/http" }], }, environment: { RAILS_MASTER_KEY: (await import("fs")).readFileSync( "config/master.key", "utf8" ), }, dev: { command: "bin/rails server", }, link: [bucket], }); return { vpc: vpc.id }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-rails). --- ## AWS RDS MySQL public Create a publicly accessible MySQL RDS instance with a security group that allows external connections. ```ts title="sst.config.ts" const MYSQL_PORT = 3306; const ALL_IPS = '0.0.0.0/0'; const publicSecurityGroup = new aws.ec2.SecurityGroup( 'MyPublicSecurityGroup', { ingress: [ { // Expose to public connection. Remove if not needed protocol: 'tcp', fromPort: MYSQL_PORT, toPort: MYSQL_PORT, cidrBlocks: [ALL_IPS], }, ], }, ); const identifier = 'my-db-instance'; const database = new aws.rds.Instance( 'MyDbInstanceMySQL', { identifier, engine: 'mysql', // free-tier instanceClass: 'db.t3.micro', allocatedStorage: 20, // free-tier 20GB // credentials username: 'dev-user', password: 'dev-password', dbName: 'dev-database', // settings tags: { Name: identifier }, skipFinalSnapshot: true, // allow public access vpcSecurityGroupIds: [publicSecurityGroup.id], publiclyAccessible: true, }, ); return { Database: database.address }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-rds-instance-mysql-public). --- ## AWS Redis local In this example, we connect to a local Docker Redis instance for dev. While on deploy, we use Redis ElastiCache. We use the [`docker run`](https://docs.docker.com/reference/cli/docker/container/run/) CLI to start a local Redis server. You don't have to use Docker, you can run it locally any way you want. 
```bash docker run \ --rm \ -p 6379:6379 \ -v $(pwd)/.sst/storage/redis:/data \ redis:latest ``` The data is persisted to the `.sst/storage` directory. So if you restart the dev server, the data will still be there. We then configure the `dev` property of the `Redis` component with the settings for the local Redis server. ```ts title="sst.config.ts" dev: { host: "localhost", port: 6379 } ``` By providing the `dev` prop for Redis, SST will use the local Redis server and not deploy a new Redis ElastiCache cluster when running `sst dev`. It also allows us to access Redis through a Resource `link`. ```ts title="index.ts" const client = Resource.MyRedis.host === "localhost" ? new Redis({ host: Resource.MyRedis.host, port: Resource.MyRedis.port, }) : new Cluster( [{ host: Resource.MyRedis.host, port: Resource.MyRedis.port, }], { redisOptions: { tls: { checkServerIdentity: () => undefined }, username: Resource.MyRedis.username, password: Resource.MyRedis.password, }, }, ); ``` The local Redis server is running in `standalone` mode, whereas on deploy it'll be in `cluster` mode. So our Lambda function needs to connect using the right config. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc", { nat: "managed" }); const redis = new sst.aws.Redis("MyRedis", { dev: { host: "localhost", port: 6379, }, vpc, }); new sst.aws.Function("MyApp", { vpc, url: true, link: [redis], handler: "index.handler", }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-redis-local). --- ## AWS Remix container with Redis Creates a hit counter app with Remix and Redis. This deploys Remix as a Fargate service to ECS and it's linked to Redis. ```ts title="sst.config.ts" {3} new sst.aws.Service("MyService", { cluster, link: [redis], loadBalancer: { ports: [{ listen: "80/http", forward: "3000/http" }], }, dev: { command: "npm run dev", }, }); ``` Since our Redis cluster is in a VPC, we’ll need a tunnel to connect to it from our local machine. 
```bash "sudo"
sudo npx sst tunnel install
```

This needs _sudo_ to create a network interface on your machine. You'll only need to do this once on your machine.

To start your app locally run.

```bash
npx sst dev
```

Now if you go to `http://localhost:5173` you'll see a counter update as you refresh the page.

Finally, you can deploy it by adding the `Dockerfile` that's included in this example and running `npx sst deploy --stage production`.

```ts title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc", { bastion: true });
const redis = new sst.aws.Redis("MyRedis", { vpc });
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

new sst.aws.Service("MyService", {
  cluster,
  link: [redis],
  loadBalancer: {
    ports: [{ listen: "80/http", forward: "3000/http" }],
  },
  dev: {
    command: "npm run dev",
  },
});
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-remix-redis).

---

## AWS Remix streaming

Follows the [Remix Streaming](https://remix.run/docs/en/main/guides/streaming) guide to create an app that streams data.

Uses the `defer` utility to stream data through the `loader` function.

```tsx title="app/routes/_index.tsx"
return defer({
  spongebob,
  friends: friendsPromise,
});
```

Then uses the `Suspense` and `Await` components to render the data.

```tsx title="app/routes/_index.tsx"
<Suspense fallback={<p>Loading...</p>}>
  <Await resolve={friends}>
    { /* ... */ }
  </Await>
</Suspense>
```

You should see the _friends_ section load after a 3 second delay.

:::note
Safari handles streaming differently than other browsers.
:::

Safari uses a [different heuristic](https://bugs.webkit.org/show_bug.cgi?id=252413) to determine when to stream data. You need to render _enough_ initial HTML to trigger streaming. This is typically only a problem for demo apps.

Streaming works out of the box with the `Remix` component.

```ts title="sst.config.ts"
new sst.aws.Remix("MyWeb");
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-remix-stream).
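The streaming pattern above boils down to returning a mix of resolved and still-pending values from the loader. Here's a plain TypeScript sketch of that idea, with illustrative data in place of the example's actual loader:

```typescript
// Sketch: the loader awaits the fast data, but passes the slow data
// through as a still-pending promise so the response can start
// streaming before the slow part resolves.
// (Data values here are illustrative, not from the example repo.)
async function loader() {
  const spongebob = { name: "SpongeBob" }; // fast: ready immediately
  const friends = new Promise<string[]>((resolve) =>
    // slow: resolves later (3 seconds in the example, shortened here)
    setTimeout(() => resolve(["Patrick", "Sandy"]), 300)
  );
  // With Remix you'd wrap this as: return defer({ spongebob, friends });
  return { spongebob, friends };
}
```

The `Suspense` and `Await` pair then renders `spongebob` right away and fills in `friends` once its promise settles.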
--- ## Router and bucket Creates a router that serves static files from the `public` folder of a given bucket. ```ts title="sst.config.ts" // Create a bucket that CloudFront can access const bucket = new sst.aws.Bucket("MyBucket", { access: "cloudfront", }); // Upload the image to the `public` folder new aws.s3.BucketObjectv2("MyImage", { bucket: bucket.name, key: "public/spongebob.svg", contentType: "image/svg+xml", source: $asset("spongebob.svg"), }); const router = new sst.aws.Router("MyRouter", { routes: { "/*": { bucket, rewrite: { regex: "^/(.*)$", to: "/public/$1" }, }, }, }); return { image: $interpolate`${router.url}/spongebob.svg`, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-router-bucket). --- ## Router protection with OAC Creates a router with Origin Access Control (OAC) to secure Lambda function URLs behind CloudFront. Direct access to the Lambda URL returns 403. ```ts title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { protection: "oac", }); const api = new sst.aws.Function("MyApi", { handler: "api.handler", url: { router: { instance: router, path: "/api" }, }, }); return { router: router.url, api: api.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-router-protection). --- ## Router with WAF Enable WAF (Web Application Firewall) for a Router to protect against common web exploits and bots. WAF includes rate limiting per IP, and AWS managed rules for core rule set, known bad inputs, and SQL injection protection. You can also enable WAF logging to CloudWatch to monitor requests. 
```ts title="sst.config.ts" const api = new sst.aws.Function("MyApi", { handler: "api.handler", url: true, }); const router = new sst.aws.Router("MyRouter", { routes: { "/*": api.url, }, waf: { rateLimitPerIp: 1000, managedRules: { coreRuleSet: true, knownBadInputs: true, sqlInjection: true, }, logging: true, }, }); return { url: router.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-router-waf). --- ## Router and function URL Creates a router that routes all requests to a function with a URL. ```ts title="sst.config.ts" const api = new sst.aws.Function("MyApi", { handler: "api.handler", url: true, }); const bucket = new sst.aws.Bucket("MyBucket", { access: "public", }); const router = new sst.aws.Router("MyRouter", { routes: { "/api/*": api.url, "/*": $interpolate`https://${bucket.domain}`, }, }); return { router: router.url, bucket: bucket.domain, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-router). --- ## Rust function Deploy a Rust Lambda function with a function URL and a linked S3 bucket. ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("Bucket"); const lambda = new sst.aws.Function("RustFunction", { runtime: "rust", handler: "./", url: true, architecture: "arm64", link: [bucket], }); return { url: lambda.url }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-rust-api). --- ## Rust container Deploy a Rust app in a container with a load balancer using a Dockerfile. 
```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc", { nat: "gateway" }); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); const service = new sst.aws.Service("MyService", { cluster, image: { context: "./", dockerfile: "Dockerfile", }, loadBalancer: { domain: "rust.dockerfile.dev.sst.dev", ports: [ { listen: "80/http" }, { listen: "443/https", forward: "80/http" }, ], }, }); return { url: service.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-rust-cluster). --- ## Rust Loco Deploy a Rust Loco app with a Postgres database, Redis, and a background worker service. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("LocoVpc", { bastion: true, }); const database = new sst.aws.Postgres("LocoDatabase", { vpc }); const redis = new sst.aws.Redis("LocoRedis", { vpc }); const DATABASE_URL = $interpolate`postgres://${ database.username }:${database.password.apply(encodeURIComponent)}@${database.host}:${ database.port }/${database.database}`; const REDIS_URL = $interpolate`redis://${ redis.username }:${redis.password.apply(encodeURIComponent)}@${redis.host}:${redis.port}`; const locoCluster = new sst.aws.Cluster("LocoCluster", { vpc }); // external facing http service const locoServer = new sst.aws.Service("LocoApp", { cluster: locoCluster, architecture: "x86_64", scaling: { min: 2, max: 4 }, command: ["start"], loadBalancer: { ports: [{ listen: "80/http", forward: "5150/http" }], }, environment: { DATABASE_URL, REDIS_URL, }, link: [database, redis], dev: { command: "cargo loco start", }, }); // add a worker that uses redis to process jobs off a queue new sst.aws.Service("LocoWorker", { cluster: locoCluster, architecture: "x86_64", command: ["start", "--worker"], environment: { DATABASE_URL, REDIS_URL, }, link: [database, redis], }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-rust-loco). 
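A note on the `apply(encodeURIComponent)` calls above: generated passwords can contain characters that are reserved in URLs, so they need to be percent-encoded before being interpolated into a connection string. A standalone TypeScript illustration, with made-up credentials:

```typescript
// Why the password is URL-encoded before building the connection string:
// reserved characters like "@", "/", or "#" would otherwise break parsing.
const username = "loco";
const password = "p@ss/word#1"; // made-up password with reserved characters
const host = "db.internal";

const url = `postgres://${username}:${encodeURIComponent(password)}@${host}:5432/app`;
// → "postgres://loco:p%40ss%2Fword%231@db.internal:5432/app"
```

Without the encoding, the `@` in the password would be read as the end of the credentials section.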
--- ## AWS Cluster Service Discovery In this example, we are connecting to a service running on a cluster using its AWS Cloud Map service host name. This is useful for service discovery. We are deploying a service to a cluster in a VPC. And we can access it within the VPC using the service's cloud map hostname. ```ts title="lambda.ts" const response = await fetch(`http://${Resource.MyService.service}`); ``` Here we are accessing it through a Lambda function that's linked to the service and is deployed to the same VPC. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc", { nat: "ec2" }); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); const service = new sst.aws.Service("MyService", { cluster }); new sst.aws.Function("MyFunction", { vpc, url: true, link: [service], handler: "lambda.handler", }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-service-discovery). --- ## AWS Shared ALB Creates a standalone ALB that is shared across stages. In dev, the ALB is referenced via `Alb.get()`. In production, it's created fresh. Uses the `$dev ? get : new` pattern to share infrastructure across stages. ```ts title="sst.config.ts" const vpc = $dev ? sst.aws.Vpc.get("MyVpc", "vpc-xxx") : new sst.aws.Vpc("MyVpc"); const cluster = $dev ? sst.aws.Cluster.get("MyCluster", { id: "cluster-xxx", vpc }) : new sst.aws.Cluster("MyCluster", { vpc }); const alb = $dev ? 
sst.aws.Alb.get("SharedAlb", "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/xxx") : new sst.aws.Alb("SharedAlb", { vpc, listeners: [ { port: 80, protocol: "http" }, ], }); if ($dev) { new sst.aws.Service("Web", { cluster, image: { context: "web/" }, loadBalancer: { instance: alb, rules: [ { listen: "80/http", forward: "3000/http", conditions: { path: "/app/*" }, priority: 200, }, ], }, }); } new sst.aws.Service("Api", { cluster, image: { context: "api/" }, loadBalancer: { instance: alb, rules: [ { listen: "80/http", forward: "3000/http", conditions: { path: "/api/*" }, priority: 100, }, ], }, }); return { url: alb.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-shared-alb-static). --- ## AWS Shared ALB Creates a standalone ALB shared across multiple services. Shows advanced routing with path conditions, header conditions, and health checks. ```ts title="sst.config.ts" const alb = new sst.aws.Alb("SharedAlb", { vpc, listeners: [ { port: 80, protocol: "http" }, ], }); ``` Services can use header-based routing in addition to path-based: ```ts title="sst.config.ts" new sst.aws.Service("InternalApi", { cluster, image: { context: "api/" }, loadBalancer: { instance: alb, rules: [ { listen: "80/http", forward: "3000/http", conditions: { path: "/api/*", header: { name: "x-internal", values: ["true"] }, }, priority: 50, }, ], }, }); ``` This example creates: - A shared ALB with an HTTP listener - An API service with path-based routing and custom health check - A Web service with path-based routing - Both services share the same ALB ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); // Create a shared ALB with an HTTP listener const alb = new sst.aws.Alb("SharedAlb", { vpc, listeners: [{ port: 80, protocol: "http" }], }); // API service — handles /api/* with a custom health check path new sst.aws.Service("Api", { cluster, image: { context: 
"api/" },
  loadBalancer: {
    instance: alb,
    rules: [
      {
        listen: "80/http",
        forward: "3000/http",
        conditions: { path: "/api/*" },
        priority: 100,
      },
    ],
    health: {
      "3000/http": {
        path: "/api/health",
        interval: "10 seconds",
        timeout: "5 seconds",
        healthyThreshold: 2,
        unhealthyThreshold: 3,
      },
    },
  },
});

// Web service — handles everything else under /app/*
new sst.aws.Service("Web", {
  cluster,
  image: { context: "web/" },
  loadBalancer: {
    instance: alb,
    rules: [
      {
        listen: "80/http",
        forward: "3000/http",
        conditions: { path: "/app/*" },
        priority: 200,
      },
    ],
    health: {
      "3000/http": {
        path: "/app/health",
        interval: "10 seconds",
        timeout: "5 seconds",
      },
    },
  },
});

return {
  url: alb.url,
};
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-shared-alb).

---

## Sharp in Lambda

Uses the [Sharp](https://sharp.pixelplumbing.com/) library to resize images. In this example, it resizes a `logo.png` local file to 100x100 pixels.

```ts title="sst.config.ts"
{
  nodejs: {
    install: ["sharp"]
  }
}
```

We don't need a layer to deploy this because `sharp` comes with a pre-built binary for Lambda. This is handled by [`nodejs.install`](/docs/component/aws/function#nodejs-install).

:::tip
You don't need to use a Lambda layer to use Sharp.
:::

In dev, this uses the sharp npm package locally.

```json title="package.json"
{
  "dependencies": {
    "sharp": "^0.33.5"
  }
}
```

On deploy, SST will use the right binary from the sharp package for the target Lambda architecture.

```ts title="sst.config.ts"
const func = new sst.aws.Function("MyFunction", {
  url: true,
  handler: "index.handler",
  nodejs: {
    install: ["sharp"]
  },
  copyFiles: [{ from: "logo.png" }],
});

return {
  url: func.url,
};
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-sharp).

---

## AWS SolidStart WebSocket endpoint

Deploys a SolidStart app with a [WebSocket endpoint](https://docs.solidjs.com/solid-start/advanced/websocket) in a container to AWS.

Uses the experimental WebSocket support in Nitro.
```ts title="app.config.ts" {4}
export default defineConfig({
  server: {
    experimental: {
      websocket: true,
    },
  },
}).addRouter({
  name: "ws",
  type: "http",
  handler: "./src/ws.ts",
  target: "server",
  base: "/ws",
});
```

Once deployed you can test the `/ws` endpoint and it'll send a message back after a 3s delay.

```ts title="sst.config.ts"
const vpc = new sst.aws.Vpc("MyVpc", { bastion: true });
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

new sst.aws.Service("MyService", {
  cluster,
  loadBalancer: {
    ports: [{ listen: "80/http", forward: "3000/http" }],
  },
  dev: {
    command: "npm run dev",
  },
});
```

View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-solid-container-ws).

---

## AWS static site basic auth

This deploys a simple static site and adds basic auth to it.

This is useful for dev environments where you want to share a static site with your team but ensure that it's not publicly accessible.

This works by injecting some code into a CloudFront function that checks the basic auth header and matches it against the `USERNAME` and `PASSWORD` secrets.

```ts title="sst.config.ts"
{
  injection: $interpolate`
    if (
      !event.request.headers.authorization
      || event.request.headers.authorization.value !== "Basic ${basicAuth}"
    ) {
      return {
        statusCode: 401,
        headers: {
          "www-authenticate": { value: "Basic" }
        }
      };
    }`,
}
```

To deploy this, you need to first set the `USERNAME` and `PASSWORD` secrets.

```bash
sst secret set USERNAME my-username
sst secret set PASSWORD my-password
```

If you are deploying this to preview environments, you might want to set the secrets using the [`--fallback`](/docs/reference/cli#secret) flag.
```ts title="sst.config.ts" const username = new sst.Secret("USERNAME"); const password = new sst.Secret("PASSWORD"); const basicAuth = $resolve([username.value, password.value]).apply( ([username, password]) => Buffer.from(`${username}:${password}`).toString("base64") ); new sst.aws.StaticSite("MySite", { path: "site", // Don't password protect prod edge: $app.stage !== "production" ? { viewerRequest: { injection: $interpolate` if ( !event.request.headers.authorization || event.request.headers.authorization.value !== "Basic ${basicAuth}" ) { return { statusCode: 401, headers: { "www-authenticate": { value: "Basic" } } }; }`, }, } : undefined, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-static-site-basic-auth). --- ## AWS static site Deploy a simple HTML file as a static site with S3 and CloudFront. The website is stored in the `site/` directory. ```ts title="sst.config.ts" new sst.aws.StaticSite("MySite", { path: "site", errorPage: "404.html", }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-static-site). --- ## AWS Step Functions task token Use Step Functions with task tokens to pause execution, send a message to an SQS queue, and resume after processing. 
```ts title="sst.config.ts" // Create a queue the state machine will send messages to const queue = new sst.aws.Queue("MyQueue"); // Define all the states of the state machine const sendMessage = sst.aws.StepFunctions.sqsSendMessage({ name: "SendMessage", integration: "token", queue, messageBody: { // Task token passed in the message body MyTaskToken: "{% $states.context.Task.Token %}", }, }); const success = sst.aws.StepFunctions.succeed({ name: "Succeed" }); // Create the state machine const stepFunction = new sst.aws.StepFunctions("MyStateMachine", { definition: sendMessage.next(success), }); // Create a function that will receive messages from the queue queue.subscribe({ handler: "index.handler", // Linking the state machine to grant permissions to call `SendTaskSuccess` link: [stepFunction], }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-step-functions-task-token). --- ## AWS SvelteKit container with Redis Creates a hit counter app with SvelteKit and Redis. This deploys SvelteKit as a Fargate service to ECS and it's linked to Redis. ```ts title="sst.config.ts" {3} new sst.aws.Service("MyService", { cluster, link: [redis], loadBalancer: { ports: [{ listen: "80/http", forward: "3000/http" }], }, dev: { command: "npm run dev", }, }); ``` Since our Redis cluster is in a VPC, we’ll need a tunnel to connect to it from our local machine. ```bash "sudo" sudo npx sst tunnel install ``` This needs _sudo_ to create a network interface on your machine. You’ll only need to do this once on your machine. To start your app locally run. ```bash npx sst dev ``` Now if you go to `http://localhost:5173` you’ll see a counter update as you refresh the page. Finally, you can deploy it by adding the `Dockerfile` that's included in this example and running `npx sst deploy --stage production`. 
```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc", { bastion: true }); const redis = new sst.aws.Redis("MyRedis", { vpc }); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); new sst.aws.Service("MyService", { cluster, link: [redis], loadBalancer: { ports: [{ listen: "80/http", forward: "3000/http" }], }, dev: { command: "npm run dev", }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-svelte-redis). --- ## Swift in Lambda Deploys a simple Swift application to Lambda using the `al2023` runtime. :::note Building this function requires Docker. ::: Check out the README in the repo for more details. ```ts title="sst.config.ts" const swift = new sst.aws.Function("Swift", { runtime: "provided.al2023", architecture: process.arch === "arm64" ? "arm64" : "x86_64", bundle: build("app"), handler: "bootstrap", url: true, }); const router = new sst.aws.Router("SwiftRouter", { routes: { "/*": swift.url, }, domain: "swift.dev.sst.dev", }); return { url: router.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-swift). --- ## T3 Stack in AWS Deploy [T3 stack](https://create.t3.gg) with Drizzle and Postgres to AWS. This example was created using `create-t3-app` and the following options: tRPC, Drizzle, no auth, Tailwind, Postgres, and the App Router. Instead of a local database, we'll be using an RDS Postgres database. ```ts title="src/server/db/index.ts" {2-6} const pool = new Pool({ host: Resource.MyPostgres.host, port: Resource.MyPostgres.port, user: Resource.MyPostgres.username, password: Resource.MyPostgres.password, database: Resource.MyPostgres.database, }); ``` Similarly, for Drizzle Kit. 
```ts title="drizzle.config.ts" {8-12} schema: "./src/server/db/schema.ts", dialect: "postgresql", dbCredentials: { ssl: { rejectUnauthorized: false, }, host: Resource.MyPostgres.host, port: Resource.MyPostgres.port, user: Resource.MyPostgres.username, password: Resource.MyPostgres.password, database: Resource.MyPostgres.database, }, tablesFilter: ["aws-t3_*"], } satisfies Config; ``` In our Next.js app we can access our Postgres database because we [link them](/docs/linking/) both. We don't need to use our `.env` files. ```ts title="sst.config.ts" {5} const rds = new sst.aws.Postgres("MyPostgres", { vpc, proxy: true }); new sst.aws.Nextjs("MyWeb", { vpc, link: [rds] }); ``` To run this in dev mode run: ```bash npm install npx sst dev ``` It'll take a few minutes to deploy the database and the VPC. This also starts a tunnel to let your local machine connect to the RDS Postgres database. Make sure you have the tunnel installed; you only need to do this once per machine. ```bash sudo npx sst tunnel install ``` Now in a new terminal you can run the database migrations. ```bash npm run db:push ``` We also have the Drizzle Studio start automatically in dev mode under the **Studio** tab. ```ts title="sst.config.ts" new sst.x.DevCommand("Studio", { link: [rds], dev: { command: "npx drizzle-kit studio", }, }); ``` And to make sure our credentials are available, we update our `package.json` with the [`sst shell`](/docs/reference/cli) CLI. ```json title="package.json" "db:generate": "sst shell drizzle-kit generate", "db:migrate": "sst shell drizzle-kit migrate", "db:push": "sst shell drizzle-kit push", "db:studio": "sst shell drizzle-kit studio", ``` So running `npm run db:push` will run Drizzle Kit with the right credentials. To deploy this to production run: ```bash npx sst deploy --stage production ``` Then run the migrations. ```bash npx sst shell --stage production npx drizzle-kit push ``` If you are running this locally, you'll need to have a tunnel running.
```bash npx sst tunnel --stage production ``` If you are doing this in a CI/CD pipeline, you'd want your build containers to be in the same VPC. ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc", { bastion: true, nat: "ec2" }); const rds = new sst.aws.Postgres("MyPostgres", { vpc, proxy: true }); new sst.aws.Nextjs("MyWeb", { vpc, link: [rds] }); new sst.x.DevCommand("Studio", { link: [rds], dev: { command: "npx drizzle-kit studio", }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-t3). --- ## AWS Task Cron Use the [`Task`](/docs/component/aws/task) and [`Cron`](/docs/component/aws/cron) components for long running background tasks. We have a node script that we want to run in `index.mjs`. It'll be deployed as a Docker container using `Dockerfile`. It'll be invoked by a cron job that runs every 2 minutes. ```ts title="sst.config.ts" new sst.aws.Cron("MyCron", { task, schedule: "rate(2 minutes)" }); ``` When this is run in `sst dev`, the task is executed locally using `dev.command`. ```ts title="sst.config.ts" dev: { command: "node index.mjs" } ``` To deploy, you need the Docker daemon running. ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); const vpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); const task = new sst.aws.Task("MyTask", { cluster, link: [bucket], dev: { command: "node index.mjs", }, }); new sst.aws.Cron("MyCron", { task, schedule: "rate(2 minutes)", }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-task-cron). --- ## AWS Task Use the [`Task`](/docs/component/aws/task) component to run background tasks. We have a node script that we want to run in `image/index.mjs`. It'll be deployed as a Docker container using `image/Dockerfile`. We also have a function that the task is linked to. It uses the [SDK](/docs/reference/sdk/) to start the task. 
```ts title="index.ts" {5} const ret = await task.run(Resource.MyTask); return { statusCode: 200, body: JSON.stringify(ret, null, 2), }; }; ``` When this is run in `sst dev`, the task is executed locally using `dev.command`. ```ts title="sst.config.ts" dev: { command: "node index.mjs" } ``` To deploy, you need the Docker daemon running. ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); const vpc = new sst.aws.Vpc("MyVpc", { nat: "ec2" }); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); const task = new sst.aws.Task("MyTask", { cluster, public: true, link: [bucket], image: { context: "image", }, dev: { command: "node index.mjs", }, }); new sst.aws.Function("MyApp", { vpc, url: true, link: [task], handler: "index.handler", }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-task). --- ## Subscribe to topics Create an SNS topic, publish to it from a function, and subscribe to it with a function and a queue. ```ts title="sst.config.ts" const queue = new sst.aws.Queue("MyQueue"); queue.subscribe("subscriber.handler"); const topic = new sst.aws.SnsTopic("MyTopic"); topic.subscribe("MySubscriber1", "subscriber.handler", {}); topic.subscribeQueue("MySubscriber2", queue.arn); const app = new sst.aws.Function("MyApp", { handler: "publisher.handler", link: [topic], url: true, }); return { app: app.url, topic: topic.name, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-topic). --- ## Vector search Store and search for vector data using the Vector component. Includes a seeder API that uses an LLM to generate embeddings for some movies and optionally their posters. Once seeded, you can call the search API to query the vector database. 
```ts title="sst.config.ts" const OpenAiApiKey = new sst.Secret("OpenAiApiKey"); const vector = new sst.aws.Vector("MyVectorDB", { dimension: 1536, }); const seeder = new sst.aws.Function("Seeder", { handler: "index.seeder", link: [OpenAiApiKey, vector], copyFiles: [ { from: "iron-man.jpg", to: "iron-man.jpg" }, { from: "black-widow.jpg", to: "black-widow.jpg", }, { from: "spider-man.jpg", to: "spider-man.jpg", }, { from: "thor.jpg", to: "thor.jpg" }, { from: "captain-america.jpg", to: "captain-america.jpg", }, ], url: true, }); const app = new sst.aws.Function("MyApp", { handler: "index.app", link: [OpenAiApiKey, vector], url: true, }); return { seeder: seeder.url, app: app.url }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-vector). --- ## React SPA with Vite Deploy a React single-page app (SPA) with Vite to S3 and CloudFront. ```ts title="sst.config.ts" new sst.aws.StaticSite("Web", { build: { command: "pnpm run build", output: "dist", }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-vite). --- ## AWS Workflow Bus Creates an [AWS Lambda durable workflow](https://docs.aws.amazon.com/lambda/latest/dg/durable-functions.html) and triggers it with a [`Bus`](/docs/component/aws/bus). ```ts title="sst.config.ts" const workflow = new sst.aws.Workflow("MyWorkflow", { handler: "src/workflow.handler", }); const bus = new sst.aws.Bus("Bus"); bus.subscribe("Workflow", workflow, { pattern: { detailType: ["app.workflow.requested"], }, }); const publisher = new sst.aws.Function("Publisher", { handler: "src/publisher.handler", url: true, link: [bus], }); return { bus: bus.name, publisher: publisher.url, workflow: workflow.name, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-workflow-bus). 
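The `Publisher` function in the Workflow Bus example has to put an event on the bus whose detail-type matches the subscription pattern. A minimal sketch of how that event entry might be assembled — the helper below is hypothetical; in the real `src/publisher.handler` the bus name would come from the linked resource and the entry would be sent with the EventBridge `PutEvents` API:

```typescript
// Hypothetical helper for src/publisher.handler: builds the EventBridge
// entry that matches the `app.workflow.requested` subscription above.
type EventBridgeEntry = {
  EventBusName: string;
  Source: string;
  DetailType: string;
  Detail: string;
};

export function workflowRequested(busName: string, payload: unknown): EventBridgeEntry {
  return {
    EventBusName: busName,
    Source: "app",
    // Must match the subscription's `detailType` pattern exactly
    DetailType: "app.workflow.requested",
    Detail: JSON.stringify(payload),
  };
}
```

Since the bus is linked to the publisher, no ARNs need to be hardcoded — the bus name is available at runtime through the SDK.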
--- ## AWS Workflow Cron Creates an [AWS Lambda durable workflow](https://docs.aws.amazon.com/lambda/latest/dg/durable-functions.html) and triggers it on a schedule using [`CronV2`](/docs/component/aws/cron-v2). Since `CronV2` accepts a `Workflow`, the setup is just: ```ts title="sst.config.ts" const workflow = new sst.aws.Workflow("MyWorkflow", { handler: "src/workflow.handler", }); const cron = new sst.aws.CronV2("MyCron", { schedule: "rate(1 minute)", function: workflow, }); return { schedule: cron.nodes.schedule.name, workflow: workflow.name, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-workflow-cron). --- ## AWS Workflow Python Uses the [`Workflow`](/docs/component/aws/workflow) component to create an [AWS Lambda durable workflow](https://docs.aws.amazon.com/lambda/latest/dg/durable-functions.html) using the Python runtime. Hit the `Invoker` URL to start the workflow. The workflow logs a callback URL with a `token` query parameter. Open that URL to resume the waiting step. ```ts title="sst.config.ts" const workflow = new sst.aws.Workflow("Workflow", { handler: "workflow/main.handler", runtime: "python3.13", }); const resolver = new sst.aws.Function("Resolver", { handler: "resolver/main.handler", runtime: "python3.13", url: true, link: [workflow], }); const invoker = new sst.aws.Function("Invoker", { handler: "invoker/main.handler", runtime: "python3.13", url: true, link: [workflow, resolver], }); return { workflow: workflow.name, invoker: invoker.url, resolver: resolver.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-workflow-python).
--- ## AWS Workflow Uses the [`Workflow`](/docs/component/aws/workflow) component to create an [AWS Lambda durable workflow](https://docs.aws.amazon.com/lambda/latest/dg/durable-functions.html). Hit the `Invoker` URL to start the workflow. The workflow logs a callback URL with a `token` query parameter. Open that URL to resume the waiting step. ```ts title="sst.config.ts" const workflow = new sst.aws.Workflow("Workflow", { handler: "src/workflow.handler", }); const resolver = new sst.aws.Function("Resolver", { handler: "src/resolver.handler", url: true, link: [workflow], }); const invoker = new sst.aws.Function("Invoker", { handler: "src/invoker.handler", url: true, link: [workflow, resolver], }); return { workflow: workflow.name, invoker: invoker.url, resolver: resolver.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-workflow). --- ## Zero sync engine Deploy the Zero sync engine with a Postgres database configured for logical replication in a VPC cluster. 
```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("Vpc", { bastion: true, }); const db = new sst.aws.Postgres("Database", { vpc, transform: { parameterGroup: { parameters: [ { name: "rds.logical_replication", value: "1", applyMethod: "pending-reboot", }, { name: "rds.force_ssl", value: "0", applyMethod: "pending-reboot", }, { name: "max_connections", value: "1000", applyMethod: "pending-reboot", }, ], }, }, }); const cluster = new sst.aws.Cluster("Cluster", { vpc }); const connection = $interpolate`postgres://${db.username}:${db.password}@${db.host}:${db.port}`; new sst.aws.Service("Zero", { cluster, image: "rocicorp/zero", dev: { command: "npx zero-cache", }, loadBalancer: { ports: [{ listen: "80/http", forward: "4848/http" }], }, environment: { ZERO_UPSTREAM_DB: $interpolate`${connection}/${db.database}`, ZERO_CVR_DB: $interpolate`${connection}/${db.database}_cvr`, ZERO_CHANGE_DB: $interpolate`${connection}/${db.database}_change`, ZERO_REPLICA_FILE: "zero.db", ZERO_NUM_SYNC_WORKERS: "1", }, }); return { connection: $interpolate`${connection}/${db.database}`, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/aws-zero). --- ## Cloudflare Cron This example creates a Cloudflare Worker that runs on a schedule. ```ts title="sst.config.ts" const cron = new sst.cloudflare.Cron("Cron", { job: "index.ts", schedules: ["* * * * *"] }); return {}; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/cloudflare-cron). --- ## Cloudflare KV This example creates a Cloudflare KV namespace and links it to a worker. Now you can use the SDK to interact with the KV namespace in your worker. ```ts title="sst.config.ts" const storage = new sst.cloudflare.Kv("MyStorage"); const worker = new sst.cloudflare.Worker("Worker", { url: true, link: [storage], handler: "index.ts", }); return { url: worker.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/cloudflare-kv). 
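In the Cloudflare KV example, the worker reads and writes the linked namespace through the SDK's `Resource` object (`Resource.MyStorage` here). A minimal sketch of a hit-counter handler, with the KV binding reduced to a simplified interface so the logic stands on its own — the actual binding would come from `import { Resource } from "sst"` inside the worker:

```typescript
// Simplified view of a KV namespace binding; in the worker this would
// be Resource.MyStorage from the sst SDK.
interface KvNamespace {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

// Increment a counter on every request and return the new value
export async function countHit(storage: KvNamespace): Promise<string> {
  const current = Number((await storage.get("hits")) ?? "0");
  const next = current + 1;
  await storage.put("hits", String(next));
  return `hits: ${next}`;
}
```

Inside the worker's `fetch` handler you'd then return something like `new Response(await countHit(Resource.MyStorage))`.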
--- ## Cloudflare Queue This example creates a Cloudflare Queue with a producer and consumer worker. ```ts title="sst.config.ts" const queue = new sst.cloudflare.Queue("MyQueue"); queue.subscribe("consumer.ts"); const producer = new sst.cloudflare.Worker("Producer", { handler: "producer.ts", link: [queue], url: true, }); return { url: producer.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/cloudflare-queue). --- ## Cloudflare SPA with Vite Deploy a single-page app (SPA) with Vite to Cloudflare. ```ts title="sst.config.ts" new sst.cloudflare.StaticSiteV2("Vite", { notFound: "single-page-application", build: { command: "pnpm run build", output: "dist", }, }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/cloudflare-vite). --- ## Link multiple secrets You might have multiple secrets that need to be used across your app. It can be tedious to create a new secret and link it to each function or resource. A common pattern to address this is to create an object with all your secrets and then link them all at once. Now when you have a new secret, you can add it to the object and it will automatically be available to all your resources. ```ts title="sst.config.ts" // Manage all secrets together const secrets = { secret1: new sst.Secret("Secret1", "some-secret-value-1"), secret2: new sst.Secret("Secret2", "some-secret-value-2"), }; const allSecrets = Object.values(secrets); const bucket = new sst.aws.Bucket("MyBucket"); const api = new sst.aws.Function("MyApi", { link: [bucket, ...allSecrets], handler: "index.handler", url: true, }); return { url: api.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/secret-link-all). --- ## Default function props Set default props for all the functions in your app using the global [`$transform`](/docs/reference/global/#transform).
```ts title="sst.config.ts" $transform(sst.aws.Function, (args) => { args.runtime = "nodejs14.x"; args.environment = { FOO: "BAR", }; }); new sst.aws.Function("MyFunction", { handler: "index.ts", }); ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/sst-transform). --- ## Vercel domains Creates a router that uses domains purchased through and hosted in your Vercel account. Ensure the `VERCEL_API_TOKEN` and `VERCEL_TEAM_ID` environment variables are set. ```ts title="sst.config.ts" const router = new sst.aws.Router("MyRouter", { domain: { name: "ion.sst.moe", dns: sst.vercel.dns({ domain: "sst.moe" }), }, routes: { "/*": "https://sst.dev", }, }); return { router: router.url, }; ``` View the [full example](https://github.com/sst/sst/tree/dev/examples/vercel-domain). --- ## IAM Credentials Configure the IAM credentials that are used to deploy your app. https://sst.dev/docs/iam-credentials SST deploys your AWS resources using your AWS credentials. In this guide we'll look at how to set these credentials, the basic set of permissions SST needs, and how to customize them. --- ## Credentials There are a couple of different ways to set the credentials that your app will use. The simplest is using a credentials file. However, if you're still figuring out how to configure your AWS account, we recommend [following our guide on it](/docs/aws-accounts). --- #### From a file By default, your AWS credentials are in a file: - `~/.aws/credentials` on Linux, Unix, macOS - `C:\Users\USER_NAME\.aws\credentials` on Windows If the credentials file does not exist on your machine: 1. Follow this to [create an IAM user](https://sst.dev/chapters/create-an-iam-user.html) 2. And then use this to [configure the credentials](https://sst.dev/chapters/configure-the-aws-cli.html) Below we'll look at how to customize the permissions that are granted to this user.
--- Your credentials file might look like: ```bash title="~/.aws/credentials" [default] aws_access_key_id = aws_secret_access_key = ``` Where `default` is the name of the credentials profile. And if you have multiple credentials, it might look like: ```bash title="~/.aws/credentials" [default] aws_access_key_id = aws_secret_access_key = [staging] aws_access_key_id = aws_secret_access_key = [production] aws_access_key_id = aws_secret_access_key = ``` By default, SST uses the credentials for the `default` profile. To use one of the other profiles, set the `profile` in your `sst.config.ts`. ```ts title="sst.config.ts" { providers: { aws: { profile: "staging" } } } ``` You can customize this for the stage your app is being deployed to. ```ts title="sst.config.ts" app(input) { return { // ... providers: { aws: { profile: input?.stage === "staging" ? "staging" : "default" } } }; }, ``` If you've previously configured AWS credentials through the `AWS_PROFILE` environment variable or through a `.env` file, that profile will override the one set in your `sst.config.ts`. So make sure to remove any references to `AWS_PROFILE`. --- #### From environment variables SST can also detect AWS credentials in your environment and use them to deploy. - `AWS_ACCESS_KEY_ID` - `AWS_SECRET_ACCESS_KEY` If you are using temporary credentials, you can also set the `AWS_SESSION_TOKEN`. This is useful when you are deploying through a CI environment and there are no credential files around. --- ### Precedence If you have AWS credentials set in multiple places, SST will first look at: 1. Environment variables This includes `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, and `AWS_PROFILE`. This also includes environment variables set in a `.env` file. 2. SST config Then it'll check for the credentials or `profile` in your `sst.config.ts`. 3. AWS config It'll then check for the `[default]` profile in your `~/.aws/config` or `C:\Users\USER_NAME\.aws\config`. 4.
Credential files Finally, it'll look for any static credentials in your `~/.aws/credentials` or `C:\Users\USER_NAME\.aws\credentials`. --- ## IAM permissions The credentials above are for an IAM user and they come with an IAM policy. This defines what resources the given user has access to. By default, we are using `AdministratorAccess`. This gives your user complete access. However, if you are using SST at your company, you'll want to lock down these permissions. Here we'll look at exactly what SST needs and how you can go about customizing it. --- Let's start with an IAM policy you can _copy and paste_.
```json title="iam-policy.json" { "Version": "2012-10-17", "Statement": [ { "Sid": "ManageBootstrapStateBucket", "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:PutBucketVersioning", "s3:PutBucketNotification", "s3:PutBucketPolicy", "s3:DeleteObject", "s3:GetObject", "s3:ListBucket", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::sst-state-*" ] }, { "Sid": "ManageBootstrapAssetBucket", "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:PutBucketVersioning", "s3:PutBucketNotification", "s3:PutBucketPolicy", "s3:DeleteObject", "s3:GetObject", "s3:ListBucket", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::sst-asset-*" ] }, { "Sid": "ManageBootstrapECRRepo", "Effect": "Allow", "Action": [ "ecr:CreateRepository", "ecr:DescribeRepositories" ], "Resource": [ "arn:aws:ecr:REGION:ACCOUNT:repository/sst-asset" ] }, { "Sid": "ManageBootstrapSSMParameter", "Effect": "Allow", "Action": [ "ssm:GetParameters", "ssm:PutParameter" ], "Resource": [ "arn:aws:ssm:REGION:ACCOUNT:parameter/sst/passphrase/*", "arn:aws:ssm:REGION:ACCOUNT:parameter/sst/bootstrap" ] }, { "Sid": "Deployments", "Effect": "Allow", "Action": [ "*" ], "Resource": [ "*" ] }, { "Sid": "ManageSecrets", "Effect": "Allow", "Action": [ "ssm:DeleteParameter", "ssm:GetParameter", "ssm:GetParameters", "ssm:GetParametersByPath", "ssm:PutParameter", "ssm:AddTagsToResource", "ssm:ListTagsForResource" ], "Resource": [ "arn:aws:ssm:REGION:ACCOUNT:parameter/sst/*" ] }, { "Sid": "LiveLambdaSocketConnection", "Effect": "Allow", "Action": [ "appsync:EventSubscribe", "appsync:EventPublish", "appsync:EventConnect" ], "Resource": [ "*" ] } ] } ```
This list roughly breaks down into the following: 1. Permissions needed to bootstrap SST in your AWS account 2. Permissions needed to deploy your app 3. Permissions needed by the CLI Let's look at them in detail. --- ### Bootstrap SST needs to [bootstrap](/docs/state/#bootstrap) each AWS account, in each region, once. This happens automatically when you run `sst deploy` or `sst dev`. There are a couple of different things being bootstrapped and these are the permissions they need: - Permissions to create the bootstrap bucket for storing state. ```json { "Sid": "ManageBootstrapStateBucket", "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:PutBucketVersioning", "s3:PutBucketNotification", "s3:DeleteObject", "s3:GetObject", "s3:ListBucket", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::sst-state-*" ] } ``` - Permissions to create the bootstrap bucket for storing the assets in your app. These include the Lambda function bundles and static assets in your frontends. ```json { "Sid": "ManageBootstrapAssetBucket", "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:PutBucketVersioning", "s3:DeleteObject", "s3:GetObject", "s3:ListBucket", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::sst-asset-*" ] } ``` - Permissions to create the bootstrap ECR repository for hosting the Docker images in your app. ```json { "Sid": "ManageBootstrapECRRepo", "Effect": "Allow", "Action": [ "ecr:CreateRepository", "ecr:DescribeRepositories" ], "Resource": [ "arn:aws:ecr:REGION:ACCOUNT:repository/sst-asset" ] } ``` - Permissions to create the bootstrap SSM parameter. This parameter stores information about the deployed bootstrap resources. 
```json { "Sid": "ManageBootstrapSSMParameter", "Effect": "Allow", "Action": [ "ssm:GetParameters", "ssm:PutParameter" ], "Resource": [ "arn:aws:ssm:REGION:ACCOUNT:parameter/sst/passphrase/*", "arn:aws:ssm:REGION:ACCOUNT:parameter/sst/bootstrap" ] } ``` --- ### Deploy The permissions that SST needs to deploy the resources in your app depend on what you have in your app. The following block is placed as a template in the IAM policy above for you to customize. ```json { "Sid": "Deployments", "Effect": "Allow", "Action": [ "*" ], "Resource": [ "*" ] } ``` Below we'll look at how you can customize this. --- ### CLI The SST CLI also makes some AWS SDK calls to your account. Here are the IAM permissions it needs. - Permissions to manage your [secrets](/docs/component/secret). ```json { "Sid": "ManageSecrets", "Effect": "Allow", "Action": [ "ssm:DeleteParameter", "ssm:GetParameter", "ssm:GetParameters", "ssm:GetParametersByPath", "ssm:PutParameter" ], "Resource": [ "arn:aws:ssm:REGION:ACCOUNT:parameter/sst/*" ] } ``` - And permissions to connect to the IoT endpoint in `sst dev` to run your functions [_Live_](/docs/live). ```json { "Sid": "LiveLambdaSocketConnection", "Effect": "Allow", "Action": [ "iot:DescribeEndpoint", "iot:Connect", "iot:Subscribe", "iot:Publish", "iot:Receive" ], "Resource": [ "*" ] } ``` --- ## Minimize permissions Editing the above policy based on the resources you are adding to your app can be tedious. Here's an approach to consider. - Sandbox accounts Start by creating separate AWS accounts for your teammates for their dev usage. In these sandbox accounts, you can grant `AdministratorAccess`. This avoids having to modify their permissions every time they make some changes. - IAM Access Analyzer For your staging accounts, you can start by granting a broad permissions policy. Then, after deploying your app, allow it to run for a period of time.
You can then use CloudTrail events to identify the actions and services used by that IAM user. The [IAM Access Analyzer](https://aws.amazon.com/iam/access-analyzer/) can then generate an IAM policy based on this activity, which you can use to replace the original policy. You can now use this for your production accounts. Learn more about how to use the [IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-generation.html). In general, you should audit the IAM permissions you are granting on a regular basis. --- ## Import Resources Import previously created resources into your app. https://sst.dev/docs/import-resources Importing is the process of bringing some previously created resources into your SST app. This'll allow SST to manage them moving forward. This is useful when you are migrating to SST or when you've manually created some resources in the past. --- ## How it works SST keeps a [state](/docs/state/) of your app. It contains all the resources that are managed by your app. :::note Once you import a resource it's managed by SST moving forward. ::: When you import a resource, it gets added to this state. This means that if you remove the resource from your code, SST will remove the underlying resource as well. It's as if this resource had been created by your app. --- #### When not to import This is fine for most cases. But these resources might be managed by another team. Or by a different IaC tool. Meaning that you don't want to manage them in your app. :::caution Do not import resources that are being managed by another team or a different IaC tool. ::: In these cases you should not be importing these resources. You are probably looking to [reference these resources](/docs/reference-resources/). --- ## How to import You import a resource by passing in an identifying property of that resource.
The property you import with differs across resources. We'll look at this below. If you are importing into an SST component, you'll need to use a [`transform`](/docs/components/#transform) to pass it into the underlying resource. So let's look at two examples. 1. Importing into an SST component 2. Importing into a Pulumi resource --- ### SST component Let's start with an existing S3 Bucket with the following name. ```txt mybucket-xnbmhcvd ``` We want to import this bucket into the [`Bucket`](/docs/component/aws/bucket/) component. 1. Start by adding the `import` option in the `transform`. ```ts title="sst.config.ts" {4} new sst.aws.Bucket("MyBucket", { transform: { bucket: (args, opts) => { opts.import = "mybucket-xnbmhcvd"; } } }); ``` The `transform.bucket` is telling this component that instead of creating a new underlying S3 Bucket resource, we want to import an existing bucket. Let's deploy this. ```bash frame="none" sst deploy ``` This will give you an error that looks something like this. ```txt frame="none" ✕ Failed inputs to import do not match the existing resource Set the following in your transform: - `args.bucket = "mybucket-xnbmhcvd";` - `args.forceDestroy = undefined;` ``` This is telling us that the resource that the `Bucket` component is trying to create does not match the one you are trying to import. This makes sense because you might've previously created this with a configuration that's different from what SST creates by default. 2. Update the `args` The above error tells us exactly what we need to do next. Add the given lines to your `transform`. ```ts title="sst.config.ts" {4,5} new sst.aws.Bucket("MyBucket", { transform: { bucket: (args, opts) => { args.bucket = "mybucket-xnbmhcvd"; args.forceDestroy = undefined; opts.import = "mybucket-xnbmhcvd"; } } }); ``` Now if you deploy this again. ```bash frame="none" sst deploy ``` You'll notice that the bucket has been imported.
```bash frame="none" | Imported MyBucket aws:s3:BucketV2 ``` 3. Finally, to clean things up we can remove the `import` line. ```diff lang="ts" title="sst.config.ts" new sst.aws.Bucket("MyBucket", { transform: { bucket: (args, opts) => { args.bucket = "mybucket-xnbmhcvd"; args.forceDestroy = undefined; - opts.import = "mybucket-xnbmhcvd"; } } }); ``` This bucket is now managed by your app and you can deploy as usual. You **do not want to remove** the `args` changes. This matters for the `args.bucket` prop because the name is generated by SST. So if you remove this, SST will generate a new bucket name and remove the old one! --- ### Pulumi resource You might also want to import resources into your SST app that don't have a built-in SST component. In these cases, you can import them into a low-level Pulumi resource. Let's take the same S3 Bucket example. Say you have an existing bucket with the following name. ```txt mybucket-xnbmhcvd ``` We want to import this bucket into the [`aws.s3.BucketV2`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketv2/) resource. 1. Start by adding the `import` option. ```ts title="sst.config.ts" {6} new aws.s3.BucketV2("MyBucket", { objectLockEnabled: undefined }, { import: "mybucket-xnbmhcvd" } ); ``` The `objectLockEnabled` prop here is for illustrative purposes. We are trying to demonstrate a case where you are importing a resource in a way that's different from how it was created. Let's deploy this. ```bash frame="none" sst deploy ``` This will give you an error that looks something like this. ```txt frame="none" ✕ Failed inputs to import do not match the existing resource Set the following: - `objectLockEnabled: undefined,` ``` This is telling us that the resource that the `BucketV2` component is trying to create does not match the one you are trying to import. This makes sense because you might've previously created this with a configuration that's different from what you are defining.
Recall the `objectLockEnabled` prop we had added above. 2. Update the `args` The above error tells us exactly what we need to do next. Add the given lines to your `args`. ```ts title="sst.config.ts" {3} new aws.s3.BucketV2("MyBucket", { objectLockEnabled: undefined }, { import: "mybucket-xnbmhcvd" } ); ``` Now if you deploy this again. ```bash frame="none" sst deploy ``` You'll notice that the bucket has been imported. ```bash frame="none" | Imported MyBucket aws:s3:BucketV2 ``` 3. Finally, to clean things up we can remove the `import` line. ```diff lang="ts" title="sst.config.ts" new aws.s3.BucketV2("MyBucket", { objectLockEnabled: undefined }, - { - import: "mybucket-xnbmhcvd" - } ); ``` This bucket is now managed by your app and you can deploy as usual. --- ## Import properties In the above examples we are importing a bucket using the bucket name. We need the bucket name because that's what AWS internally uses to do a lookup. But this is different for different resources. So we've compiled a list of the most common resources you might import, along with the **property to import them with**. You can look this up by going to the **Import** section of a resource's doc. For example, here's the one for an [`aws.s3.BucketV2`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketv2/#import). --- The following table lists the properties you need to pass in to the `import` prop of the given resource to be able to import it. For example, for `aws.s3.BucketV2`, the property is _bucket name_ and it looks something like `some-unique-bucket-name`.
| Resource | Property | Example | |----------|----------|---------| | [`aws.ec2.Vpc`](https://www.pulumi.com/registry/packages/aws/api-docs/ec2/vpc/) | VPC ID | `vpc-a01106c2` | | [`aws.iam.Role`](https://www.pulumi.com/registry/packages/aws/api-docs/iam/role/) | Role name | `role-name` | | [`aws.sqs.Queue`](https://www.pulumi.com/registry/packages/aws/api-docs/sqs/queue/) | Queue URL | `https://queue.amazonaws.com/80398EXAMPLE/MyQueue` | | [`aws.sns.Topic`](https://www.pulumi.com/registry/packages/aws/api-docs/sns/topic/) | Topic ARN | `arn:aws:sns:us-west-2:0123456789012:my-topic` | | [`aws.rds.Cluster`](https://www.pulumi.com/registry/packages/aws/api-docs/rds/cluster/) | Cluster identifier | `aurora-prod-cluster` | | [`aws.ecs.Service`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/service/) | Cluster and service name | `cluster-name/service-name` | | [`aws.ecs.Cluster`](https://www.pulumi.com/registry/packages/aws/api-docs/ecs/cluster/) | Cluster name | `cluster-name` | | [`aws.s3.BucketV2`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketv2/) | Bucket name | `bucket-name` | | [`aws.kinesis.Stream`](https://www.pulumi.com/registry/packages/aws/api-docs/kinesis/stream/) | Stream name | `my-kinesis-stream` | | [`aws.dynamodb.Table`](https://www.pulumi.com/registry/packages/aws/api-docs/dynamodb/table/) | Table name | `table-name` | | [`aws.lambda.Function`](https://www.pulumi.com/registry/packages/aws/api-docs/lambda/function/) | Function name | `function-name` | | [`aws.apigatewayv2.Api`](https://www.pulumi.com/registry/packages/aws/api-docs/apigatewayv2/api/) | API ID | `12345abcde` | | [`aws.cognito.UserPool`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/userpool/) | User Pool ID | `us-east-1_abc123` | | [`aws.apigateway.RestApi`](https://www.pulumi.com/registry/packages/aws/api-docs/apigateway/restapi/) | REST API ID | `12345abcde` | | 
[`aws.cloudwatch.LogGroup`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudwatch/loggroup/) | Log Group name | `my-log-group` | | [`aws.cognito.IdentityPool`](https://www.pulumi.com/registry/packages/aws/api-docs/cognito/identitypool/) | Identity Pool ID | `us-east-1:1a234567-8901-234b-5cde-f6789g01h2i3` | | [`aws.cloudfront.Distribution`](https://www.pulumi.com/registry/packages/aws/api-docs/cloudfront/distribution/) | Distribution ID | `E74FTE3EXAMPLE` | Feel free to _Edit this page_ and submit a PR if you want to add to this list. --- ## What is SST Build full-stack apps on your own infrastructure. https://sst.dev/docs/index SST is a framework that makes it easy to build modern full-stack applications on your own infrastructure. :::note SST supports over 150 providers. Check out the [full list](/docs/all-providers#directory). ::: What makes SST different is that your _entire_ app is **defined in code** — in a single `sst.config.ts` file. This includes databases, buckets, queues, Stripe webhooks, or any one of **150+ providers**. With SST, **everything is automated**. --- ## Components You start by defining parts of your app, _**in code**_. For example, you can add your frontend and set the domain you want to use. **Next.js** ```ts title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { domain: "my-app.com" }); ``` **Remix** ```ts title="sst.config.ts" new sst.aws.Remix("MyWeb", { domain: "my-app.com" }); ``` **Astro** ```ts title="sst.config.ts" new sst.aws.Astro("MyWeb", { domain: "my-app.com" }); ``` **Svelte** ```ts title="sst.config.ts" new sst.aws.SvelteKit("MyWeb", { domain: "my-app.com" }); ``` **Solid** ```ts title="sst.config.ts" new sst.aws.SolidStart("MyWeb", { domain: "my-app.com" }); ``` Just like the frontend, you can configure backend features _in code_. Like your API deployed in a container. Or any Lambda functions, Postgres databases, S3 Buckets, or cron jobs. 
**Containers**

```ts title="sst.config.ts"
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

new sst.aws.Service("MyService", {
  cluster,
  loadBalancer: {
    ports: [{ listen: "80/http" }]
  }
});
```

**Functions**

```ts title="sst.config.ts"
new sst.aws.Function("MyFunction", {
  handler: "src/lambda.handler"
});
```

**Postgres**

```ts title="sst.config.ts"
new sst.aws.Postgres("MyDatabase", { vpc });
```

**Bucket**

```ts title="sst.config.ts"
new sst.aws.Bucket("MyBucket");
```

**Cron**

```ts title="sst.config.ts"
new sst.aws.Cron("MyCronJob", {
  job: "src/cron.handler",
  schedule: "rate(1 minute)"
});
```

You can even set up your Stripe products in code.

```ts title="sst.config.ts"
new stripe.Product("MyStripeProduct", {
  name: "SST Paid Plan",
  description: "This is how SST makes money",
});
```

You can check out the full list of components in the sidebar.

---

## Infrastructure

The above are called **Components**. They are a way of defining the features of your application in code. You can define any feature of your application with them.

In the above examples, they create the necessary infrastructure in your AWS account. All without using the AWS Console.

Learn more about [Components](/docs/components/).

---

### Configure

SST's components come with sensible defaults designed to get you started. But they can also be completely configured.

For example, the `sst.aws.Function` can be configured with all the common Lambda function options.

```ts {3,4} title="sst.config.ts"
new sst.aws.Function("MyFunction", {
  handler: "src/lambda.handler",
  timeout: "3 minutes",
  memory: "1024 MB"
});
```

But with SST you can take it a step further and transform how the Function component creates its low-level resources. For example, the Function component also creates an IAM Role. You can transform the IAM Role using the `transform` prop.
```ts {3-7} title="sst.config.ts" new sst.aws.Function("MyFunction", { handler: "src/lambda.handler", transform: { role: (args) => ({ name: `${args.name}-MyRole` }) } }); ``` Learn more about [transforms](/docs/components#transforms). --- ### Providers SST has built-in components for AWS and Cloudflare that make these services easier to use. However it also supports components from any one of the **150+ Pulumi/Terraform providers**. For example, you can use Vercel for your frontends. ```ts title="sst.config.ts" new vercel.Project("MyFrontend", { name: "my-nextjs-app" }); ``` Learn more about [Providers](/docs/providers) and check out the full list in the [Directory](/docs/all-providers#directory). --- ## Link resources Once you've added a couple of features, SST can help you link them together. This is great because you **won't need to hardcode** anything in your app. Let's say you are deploying an Express app in a container and you want to upload files to an S3 bucket. You can `link` the bucket to your container. ```ts title="sst.config.ts" {6} const bucket = new sst.aws.Bucket("MyBucket"); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); new sst.aws.Service("MyService", { cluster, link: [bucket], loadBalancer: { ports: [{ listen: "80/http" }] } }); ``` You can then use SST's [SDK](/docs/reference/sdk/) to access the S3 bucket in your Express app. ```ts title="index.mjs" "Resource.MyBucket.name" console.log(Resource.MyBucket.name); ``` Learn more about [resource linking](/docs/linking/). --- ## Project structure We've looked at a couple of different types of files. Let's take a step back and see what an SST app looks like in practice. --- ### Drop-in mode The simplest way to run SST is to use it as a part of your app. This is called _drop-in mode_. For example, if you are building a Next.js app, you can add a `sst.config.ts` file to the root. 
```txt {3}
my-nextjs-app
├─ next.config.js
├─ sst.config.ts
├─ package.json
├─ app
├─ lib
└─ public
```

View an example Next.js app using SST in drop-in mode.

---

### Monorepo

Alternatively, you might use SST in a monorepo. This is useful because most projects have a frontend, a backend, and some functions.

In this case, the `sst.config.ts` is still in the root but you can split it up into parts in the `infra/` directory.

```txt {2,9}
my-sst-app
├─ sst.config.ts
├─ package.json
├─ packages
│  ├─ functions
│  ├─ frontend
│  ├─ backend
│  └─ core
└─ infra
```

Learn more about our [monorepo setup](/docs/set-up-a-monorepo/).

---

## CLI

To make this all work, SST comes with a [CLI](/docs/reference/cli/). You can install it as a part of your Node project.

```bash
npm install sst
```

Or if you are not using Node, you can install it globally.

```bash
curl -fsSL https://sst.dev/install | bash
```

Learn more about the [CLI](/docs/reference/cli/).

---

### Dev

The CLI includes a `dev` command that starts a local development environment.

```bash
sst dev
```

This brings up a _multiplexer_ that:

1. Starts a watcher that deploys any infrastructure changes.
2. Runs your functions [_Live_](/docs/live/), letting you make and test changes without having to redeploy them.
3. Creates a [_tunnel_](/docs/reference/cli#tunnel) to connect your local machine to any resources in a VPC.
4. Starts your frontend and container services in dev mode and links them to your infrastructure.

The `sst dev` CLI makes it so that you won't have to start your frontend or container applications separately.

Learn more about [`sst dev`](/docs/reference/cli/#dev).

---

### Deploy

When you're ready to deploy your app, you can use the `deploy` command.

```bash
sst deploy --stage production
```

---

#### Stages

The stage name is used to namespace different environments of your app. So you can create one for dev.

```bash
sst deploy --stage dev
```

Or for a pull request.
```bash sst deploy --stage pr-123 ``` Learn more about [stages](/docs/reference/cli#stage). --- ## Console Once you are ready to go to production, you can use the [SST Console](/docs/console/) to **auto-deploy** your app, create **preview environments**, and **monitor** for any issues. ![SST Console](../../../assets/docs/sst-console-home.png) Learn more about the [Console](/docs/console/). --- ## FAQ Here are some questions that we frequently get. --- **Is SST open-source if it's based on Pulumi and Terraform?** SST uses Pulumi behind the scenes for the providers and the deployment engine. And Terraform's providers are _bridged_ through Pulumi. SST only relies on the open-source parts of Pulumi and Terraform. It does not require a Pulumi account and all the data about your app and its resources stay on your side. --- **How does SST compare to CDK for Terraform or Pulumi?** Both CDKTF and Pulumi allow you to define your infrastructure using a programming language like TypeScript. SST is also built on top of Pulumi. So you might wonder how SST compares to them and why you would use SST instead of them. In a nutshell, SST is for developers, while CDKTF and Pulumi are primarily for DevOps engineers. There are 3 big things SST does for developers: 1. Higher-level components SST's built-in components like [`Nextjs`](/docs/component/aws/nextjs/) or [`Email`](/docs/component/aws/email/) make it easy for developers to add features to their app. You can use these without having to figure out how to work with the underlying Terraform resources. 2. Linking resources SST makes it easy to link your infrastructure to your application and access them at runtime in your code. 3. Dev mode Finally, SST features a unified local developer environment that deploys your app through a watcher, runs your functions [_Live_](/docs/live/), creates a [_tunnel_](/docs/reference/cli#tunnel) to your VPC, starts your frontend and backend, all together. 
---

**How does SST make money?**

While SST is open-source and free to use, we also have the [Console](/docs/console/) that can auto-deploy your apps and monitor for any issues. It's optional and includes a free tier but it's a SaaS service. It's used by a large number of teams in our community, including ours.

---

#### Next steps

1. [Learn the SST basics](/docs/basics/)
2. Create your first SST app
   - [Build a Next.js app in AWS](/docs/start/aws/nextjs/)
   - [Deploy Bun in a container to AWS](/docs/start/aws/bun/)
   - [Build a Hono API with Cloudflare Workers](/docs/start/cloudflare/hono/)

---

## Linking

Link resources together and access them in a typesafe and secure way.

https://sst.dev/docs/linking

Resource Linking allows you to access your **infrastructure** in your **runtime code** in a typesafe and secure way.

1. Create a resource that you want to link to. For example, a bucket.

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");
```

2. Link it to your function or frontend, using the `link` prop.

**Next.js**

```ts title="sst.config.ts" {2}
new sst.aws.Nextjs("MyWeb", {
  link: [bucket]
});
```

**Remix**

```ts title="sst.config.ts" {2}
new sst.aws.Remix("MyWeb", {
  link: [bucket]
});
```

**Astro**

```ts title="sst.config.ts" {2}
new sst.aws.Astro("MyWeb", {
  link: [bucket]
});
```

**Function**

```ts title="sst.config.ts" {3}
new sst.aws.Function("MyFunction", {
  handler: "src/lambda.handler",
  link: [bucket]
});
```

3. Use the [SDK](/docs/reference/sdk/) to access the linked resource in your runtime in a typesafe way.
**Next.js**

```js title="app/page.tsx"
import { Resource } from "sst";

console.log(Resource.MyBucket.name);
```

**Remix**

```js title="app/routes/_index.tsx"
import { Resource } from "sst";

console.log(Resource.MyBucket.name);
```

**Astro**

```astro title="src/pages/index.astro"
---
import { Resource } from "sst";

console.log(Resource.MyBucket.name);
---
```

**Function**

```js title="src/lambda.ts"
import { Resource } from "sst";

console.log(Resource.MyBucket.name);
```

:::tip
The SDK currently supports JS/TS, Python, Golang, and Rust.
:::

Learn how to use the SDK in [Python](/docs/reference/sdk/#python), [Golang](/docs/reference/sdk/#golang), and [Rust](/docs/reference/sdk/#rust).

---

### Working locally

The above applies to your app deployed through `sst deploy`. To access linked resources locally, you'll need to be running `sst dev`.

By default, the `sst dev` CLI runs a multiplexer that also starts your frontend for you. This loads all your linked resources in the environment. Read more about [`sst dev`](/docs/reference/cli/#dev).

However, if you are not using the multiplexer.

```bash frame="none"
sst dev --mode=basic
```

You'll need to wrap your frontend's dev command with the `sst dev` command.

**Next.js**

```bash
sst dev next dev
```

**Remix**

```bash
sst dev remix dev
```

**Astro**

```bash
sst dev astro dev
```

**Function**

```bash
sst dev
```

---

## How it works

At a high level, when you link a resource to a function or frontend, the following happens:

1. The _links_ that the resource exposes are injected into the function package.

:::tip
The links a component exposes are listed in its API reference. For example, you can [view a Bucket's links here](/docs/component/aws/bucket/#links).
:::

2. The types to access these links are generated.

3. The function is given permission to access the linked resource.

---

### Injecting links

Resource links are injected into your function or frontend package when you run `sst dev` or `sst deploy`.
But this is done in a slightly different way for both these cases.

#### Functions

The functions in SST are tree shaken and bundled using [esbuild](https://esbuild.github.io/). While bundling, SST injects the resource links into [`globalThis`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/globalThis). These are encrypted and added to the function bundle. And they are synchronously decrypted on load by the SST SDK.

#### Frontends

The frontends are not bundled by SST. Instead, when they are built, SST injects the resource links into the `process.env` object using the prefix `SST_RESOURCE_`.

This is why when you are running your frontend locally, it needs to be wrapped in the `sst dev` command.

:::note
Links are only available on the server of your frontend.
:::

Resource links are only available on the server-side of your frontend. If you want to access them in your client components, you'll need to pass them in explicitly.

---

### Generating types

When you run `sst dev` or `sst deploy`, it generates the types to access the linked resources. These are generated as:

1. A `sst-env.d.ts` file in the project root with types for **all** the linked resources in the app.
2. A `sst-env.d.ts` file in the same directory as the nearest `package.json` of the function or frontend that's _receiving_ the links. This references the root `sst-env.d.ts` file.

You can check the generated `sst-env.d.ts` types into source control. This will let your teammates see the types without having to run `sst dev` when they pull your changes.

---

## Extending linking

The examples above are built into SST's components. You might want to modify the permissions that are granted as a part of these links. Or you might want to link other resources from the Pulumi/Terraform ecosystem. Or link a different set of outputs than what SST exposes.

You can do this using the [`sst.Linkable`](/docs/component/linkable/) component.
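As a rough illustration of the environment-variable injection described in the frontends section above, here's a hypothetical sketch (not the SST SDK's actual implementation) of how `SST_RESOURCE_`-prefixed variables could be collected into a `Resource`-style object. The JSON serialization and the sample variable contents are assumptions for the example.

```typescript
// Hypothetical sketch only. This is NOT the SST SDK's implementation; it just
// illustrates turning SST_RESOURCE_-prefixed env vars into a Resource object.

const PREFIX = "SST_RESOURCE_";

// Stand-in for process.env. Assume the build injected one linked bucket,
// serialized as JSON (the serialization format here is an assumption).
const env: Record<string, string> = {
  SST_RESOURCE_MyBucket: JSON.stringify({ name: "mybucket-xnbmhcvd" }),
  NODE_ENV: "production",
};

function loadLinkedResources(env: Record<string, string>): Record<string, any> {
  const resources: Record<string, any> = {};
  for (const [key, value] of Object.entries(env)) {
    // Only SST_RESOURCE_-prefixed variables are links; everything else is
    // a regular environment variable and is skipped.
    if (key.startsWith(PREFIX)) {
      resources[key.slice(PREFIX.length)] = JSON.parse(value);
    }
  }
  return resources;
}

const Resource = loadLinkedResources(env);

console.log(Resource.MyBucket.name); // "mybucket-xnbmhcvd"
```

This also shows why links are server-only for frontends: anything read from the server's environment at build or request time never reaches client components unless you pass it along explicitly.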
---

### Link any value

The `Linkable` component takes a list of properties that you want to link. These can be outputs from other resources or constants.

```ts title="sst.config.ts"
const myLinkable = new sst.Linkable("MyLinkable", {
  properties: { foo: "bar" }
});
```

You can optionally include permissions or bindings for the linked resource.

Now you can link this resource to your frontend or a function.

```ts title="sst.config.ts" {3}
new sst.aws.Function("MyApi", {
  handler: "src/lambda.handler",
  link: [myLinkable]
});
```

Then use the [SDK](/docs/reference/sdk/) to access it at runtime.

```js title="src/lambda.ts"
console.log(Resource.MyLinkable.foo);
```

Read more about [`sst.Linkable`](/docs/component/linkable/).

---

### Link any resource

You can also wrap any resource class to make it linkable with the `Linkable.wrap` static method.

```ts title="sst.config.ts"
Linkable.wrap(aws.dynamodb.Table, (table) => ({
  properties: { tableName: table.name }
}));
```

Now you can create an instance of `aws.dynamodb.Table` and link it in your app like any other SST component.

```ts title="sst.config.ts" {7}
const table = new aws.dynamodb.Table("MyTable", {
  attributes: [{ name: "id", type: "S" }],
  hashKey: "id"
});

new sst.aws.Nextjs("MyWeb", {
  link: [table]
});
```

And use the [SDK](/docs/reference/sdk/) to access it at runtime.

```js title="app/page.tsx"
console.log(Resource.MyTable.tableName);
```

---

### Modify built-in links

You can also modify the links SST creates. For example, you might want to change the permissions of a linkable resource.

```ts title="sst.config.ts" "sst.aws.Bucket"
sst.Linkable.wrap(sst.aws.Bucket, (bucket) => ({
  properties: { name: bucket.name },
  include: [
    sst.aws.permission({
      actions: ["s3:GetObject"],
      resources: [bucket.arn]
    })
  ]
}));
```

This overrides the existing link and lets you create your own.

Read more about [`sst.Linkable.wrap`](/docs/component/linkable/#static-wrap).
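To build intuition for what a wrap-style API is doing, here's a simplified, hypothetical sketch (not SST's actual implementation): a registry maps a resource class to a function that produces the link's properties, and a resource is resolved by walking its prototype chain. The `Table`, `wrap`, and `linkProperties` names below are made up for the example.

```typescript
// Hypothetical sketch of a Linkable.wrap-style registry. This is NOT SST's
// implementation; it only illustrates mapping a class to a link definition.

type LinkDefinition<T> = (resource: T) => {
  properties: Record<string, unknown>;
};

// Registry mapping a constructor to the function that builds its link.
const registry = new Map<Function, LinkDefinition<any>>();

function wrap<T>(cls: new (...args: any[]) => T, def: LinkDefinition<T>) {
  registry.set(cls, def);
}

function linkProperties(resource: object): Record<string, unknown> {
  // Walk the prototype chain so subclasses of a wrapped class are linkable too.
  for (
    let proto = Object.getPrototypeOf(resource);
    proto !== null;
    proto = Object.getPrototypeOf(proto)
  ) {
    const def = registry.get(proto.constructor);
    if (def) return def(resource).properties;
  }
  throw new Error("Resource class has not been wrapped");
}

// A stand-in for a provider resource like aws.dynamodb.Table.
class Table {
  constructor(public tableName: string) {}
}

wrap(Table, (table: Table) => ({
  properties: { tableName: table.tableName }
}));

console.log(linkProperties(new Table("MyTable"))); // { tableName: 'MyTable' }
```

The prototype-chain walk is the interesting design choice: registering a definition once on a base class makes every subclass linkable with the same properties.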
---

### Link integration with external providers

If you want to pass links to compute not managed by SST, like a native ECS task definition or a Kubernetes pod, use `Linkable.env()`. It converts your linked resources into `SST_RESOURCE_*` environment variables so `Resource.MyResource` works at runtime.

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");

const environment = sst.Linkable.env([bucket]);
```

This returns an `Output<Record<string, string>>` that you can pass to any provider that accepts environment variables.

```ts title="sst.config.ts"
new kubernetes.apps.v1.Deployment("MyDeployment", {
  spec: {
    template: {
      spec: {
        containers: [{
          name: "app",
          image: "my-image",
          env: sst.Linkable.env([bucket]).apply((env) =>
            // Kubernetes requires environment variables to be an array of objects
            Object.entries(env).map(([name, value]) => ({
              name,
              value: String(value)
            })),
          ),
        }],
      },
    },
  },
});
```

Read more about [`sst.Linkable.env`](/docs/component/linkable/#static-env). [Check out an example](/docs/examples/#aws-linkable-env).

---

## Live

Make changes to your Lambda functions in milliseconds.

https://sst.dev/docs/live

Live is a feature of SST that lets you test changes made to your AWS Lambda functions in milliseconds. Your changes work without having to redeploy. And they can be invoked remotely.

:::tip
By default, `sst dev` will run all the functions in your app _"live"_.
:::

It works by proxying requests from AWS to your local machine, executing the function locally, and proxying the response back.

---

## Advantages

This setup of running your functions locally and proxying the results back allows you to do a couple of things:

- Your changes are **reloaded in under 10ms**.
- You can set **breakpoints to debug** your function in your favorite IDE.
- Functions can be invoked remotely. For example, say `https://my-api.com/hello` is your API endpoint. Hitting that will run the local version of that function.
- This applies to more than just APIs.
Any cron job or async event that gets invoked remotely will also run your local function.
- It allows you to very easily debug and **test webhooks**, since you can just give the webhook your API endpoint.
- Supports all function triggers, so there's no need to mock an event.
- Uses the **right IAM permissions**, so if a Lambda fails on AWS due to the lack of IAM permissions, it would fail locally as well.

---

## How it works

Live uses [AWS AppSync Events](https://docs.aws.amazon.com/appsync/latest/eventapi/event-api-welcome.html) to communicate between your local machine and the remote Lambda function.

When you run `sst dev`, it [bootstraps](/docs/state#bootstrap) a new AppSync Events API for the region you are using.

This is roughly what the flow looks like:

1. When you run `sst dev`, it deploys your app and replaces the Lambda functions with a _stub_ version.
2. It also starts up a local WebSocket client and connects to the AppSync API endpoint.
3. When a Lambda function in your app is invoked, it publishes an event, where the payload is the Lambda function request.
4. Your local WebSocket client receives this event. It publishes an event acknowledging that it received the request.
5. Next, it runs the local version of the function and publishes an event with the function response as the payload. The local version is run as a Node.js Worker.
6. Finally, the stub Lambda function receives the event and responds with the payload.

---

### Quirks

There are a couple of quirks with this setup that are worth noting.

1. **Runtime change**

The stub function that's deployed uses a **different runtime** than your Lambda function. You might run into this when you change the runtime in your config but the runtime of the Lambda function in the AWS Console doesn't change.

:::tip
The _stub_ function that's deployed uses a different runtime than the actual function.
:::

We use a different runtime because we want the function to be as fast as possible at proxying requests.

2.
**Live mode persists**

If you kill the `sst dev` CLI, your functions are not run locally anymore but the stub functions in AWS are still there. This means that they'll attempt to proxy requests to your machine and time out.

:::tip
Only use `sst dev` in your personal stage.
:::

You can fix this by running `sst deploy` and it'll deploy the real version of your app. But the next time you run `sst dev` it'll need to deploy the stub back. This'll take a couple of minutes.

So we recommend only using your personal stages for `sst dev`. And avoid flipping back and forth between `dev` and `deploy`.

---

### Live mode

When a function is running live, it sets the `SST_DEV` environment variable to `true`. So in your Node.js functions you can access it using `process.env.SST_DEV`.

```js title="src/lambda.js" "process.env.SST_DEV"
export async function handler(event) {
  const body = process.env.SST_DEV ? "Hello, Live!" : "Hello, World!";

  return {
    body,
    statusCode: 200,
  };
}
```

This is useful if you want to access some resources locally.

---

#### Connect to a local DB

For example, when running locally you might want to connect to a local database. You can do that with the `SST_DEV` environment variable.

```js
const dbHost = process.env.SST_DEV
  ? "localhost"
  : "amazon-string.rds.amazonaws.com";
```

---

## Cost

AWS AppSync Events that powers Live is **completely serverless**. So you don't get charged when it's not in use.

It's also pretty cheap. It's roughly $1.00 per million messages and $0.08 per million connection minutes. You can [check out the details here](https://aws.amazon.com/appsync/pricing/#AppSync_Events_).

This approach has been economical even for large teams with dozens of developers.

---

## Privacy

All the data stays between your local machine and your AWS account. There are **no 3rd party services** that are used.

Live also supports connecting to AWS resources inside a VPC.

---

### Using a VPC

By default, your local functions cannot connect to resources in a VPC.
You can fix this by either setting up a VPN connection or creating a tunnel.

---

#### Creating a tunnel

To create a tunnel, you'll need to:

1. Enable the `bastion` host in your VPC.

```ts title="sst.config.ts" "{ bastion: true }"
new sst.aws.Vpc("MyVpc", { bastion: true });
```

2. Install the tunnel.

```bash "sudo"
sudo sst tunnel install
```

This needs _sudo_ to create the network interface on your machine. You only need to do this once.

:::note
For NixOS users, you will also need to declare the following sudo configuration to complete the setup:

```nix
security.sudo.extraRules = [
  {
    users = ["jay"]; # Your user
    commands = [
      {
        command = "/opt/sst/tunnel tunnel start *";
        options = ["NOPASSWD" "SETENV"];
      }
    ];
  }
];
```
:::

3. Run `sst dev`.

```bash
sst dev
```

This starts the tunnel automatically; notice the **Tunnel** tab on the left.

Now your local environment can connect to resources in your VPC.

---

#### Setting up a VPN connection

To set up a VPN connection, you'll need to:

1. Set up a VPN connection from your local machine to your VPC network. You can use the AWS Client VPN service to set it up. [Follow the Mutual authentication section in this doc](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/client-authentication.html#mutual) to set up the certificates and import them into AWS Certificate Manager.
2. Then [create a Client VPN endpoint](https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-client-vpn-to-securely-access-aws-and-on-premises-resources/), and associate it with your VPC.
3. And finally, install [Tunnelblick](https://tunnelblick.net) locally to establish the VPN connection.

Note that the AWS Client VPN service is billed on an hourly basis but it's fairly inexpensive. [Read more on the pricing here](https://aws.amazon.com/vpn/pricing/).

---

## Breakpoints

Since Live runs your functions locally, you can set breakpoints and debug your functions in your favorite IDE.
![VS Code Enable Auto Attach](../../../assets/docs/live/vs-code-enable-auto-attach.png)

For VS Code, you'll need to enable Auto Attach from the Command Palette. Hit `Ctrl+Shift+P` or `Cmd+Shift+P`, type in **Debug: Toggle Auto Attach**, and select **Always**.

:::note
You need to start a new terminal **in VS Code** after enabling Auto Attach.
:::

Now open a new terminal in VS Code, run `sst dev`, set a breakpoint in a function, and invoke the function.

---

## Changelog

Live is a feature that SST created when it first launched back in 2021. It's gone through a few different iterations since then.

| SST Version | Change |
| --- | --- |
| **v0.5.0** | Then called _Live Lambda_, used an API Gateway WebSocket API and a DynamoDB table. |
| **v2.0.0** | Switched to using AWS IoT, which was roughly 2-3x faster. |
| **v3.3.1** | Switched to using AWS AppSync Events, which is even faster and handles larger payloads better. |

---

## Migrate From v2

Migrate your SST v2 apps to v3.

https://sst.dev/docs/migrate-from-v2

This guide will help you migrate your SST v2 apps to v3.

We look at the major differences between v2 and v3 below. But to get a quick intro, we recommend reading the [What is SST](/docs/) and [Basics](/docs/basics/) docs.

:::tip
We recently [migrated our demo notes app](https://github.com/sst/demo-notes-app/pull/8/files) from v2 to v3. You can use these changes as a reference.
:::

We'll then go over a migration plan that you can use. The exact details of this will be different from team to team depending on the resources in it, and the sensitivity to downtime.

---

#### Getting help

SST v3 has been around for a few months with a pretty sizeable community on Discord. We've created a channel for folks looking to migrate. Join `#migrate-from-v2` on Discord.

---

#### Not supported

While the goal with v3 is to support most of what's in v2, there are a few things that aren't supported yet.
There are also a couple of things that are currently in beta and will be released in the near future.

| Construct | GitHub Issue |
|----------|-------|
| `Script` | [#4323](https://github.com/sst/sst/issues/4323) |
| `Function` non-Node.js runtimes | [Container](https://github.com/sst/sst/issues/4462), [Custom](https://github.com/sst/sst/issues/4826) |

Feel free to let us know via the linked GitHub issues if these are blockers for you. It'll help us prioritize this list.

---

## Major changes

If you are coming from SST v2, it's worth starting with the big differences between v2 and v3. It'll help you understand the types of changes you'll need to make as you migrate.

---

#### No CloudFormation

Let's start with the obvious. SST v3 moves away from CloudFormation and CDK; [we've written in detail about why we decided to do this](https://sst.dev/blog/moving-away-from-cdk.html).

No CloudFormation means a couple of things:

1. There are no stacks, all the resources are defined through the same function in the `sst.config.ts`.
2. The outputs of constructs or _components_ are different. These used to be tokens that would get replaced on deploy. Now they are something called [_Outputs_](/docs/components/#outputs).
3. The state of your app is stored locally and backed up to S3. Learn more about [State](/docs/state/).

---

#### No CDK

And moving away from CDK means:

1. You cannot fall back to CDK constructs if something isn't supported by SST. Instead there is the [AWS](https://www.pulumi.com/registry/packages/aws/) provider from Pulumi that's built on Terraform. There are also 150+ other providers that allow you to build on any cloud. Check out the [Directory](/docs/all-providers#directory).

If you are using a lot of higher-level CDK constructs in your v2 app, it's going to be really hard to migrate to v3. The Pulumi/Terraform ecosystem is fairly complete but it's mainly made up of low-level resources. You might not have ready replacements for your CDK constructs.

2.
Since the constructs or _components_ are no longer built on CDK, they don't have a `cdk` prop. Instead, there's a `transform` prop that lets you modify the props that a component sends to its underlying resources.

Learn more about the [`transform`](/docs/components/#transform) prop.

---

#### sst.config.ts

The `sst.config.ts` is similar in v3 but there are some changes. Here's a comparison of the general structure; we look at this in detail in a [section below](#sstconfigts-1).

**v3**

```ts title="sst.config.ts"
export default $config({
  // Your app's config
  app(input) {
    return {
      name: "my-sst-app",
      home: "aws"
    };
  },
  // Your app's resources
  async run() { }
});
```

**v2**

```ts title="sst.config.ts"
export default {
  // Your app's config
  config(_input) {
    return {
      name: "my-sst-app",
      region: "us-east-1"
    };
  },
  // Your app's resources
  stacks(app) { }
} satisfies SSTConfig;
```

Learn more about the new [`sst.config.ts`](/docs/reference/config/).

---

#### sst dev

The `sst dev` CLI has been completely reworked. It now runs a _multiplexer_ that deploys your app and runs your frontends together. So you don't need to:

- Start your frontend separately
- Wrap your frontend `dev` script with `sst bind`

Learn more about [`sst dev`](/docs/reference/cli/#dev).

---

#### sst build

There is no `sst build` CLI. Instead you can run `sst diff` to see what changes will be deployed, without doing an actual deploy.

Learn more about [`sst diff`](/docs/reference/cli/#diff).

---

#### Resource binding

Resource binding is now called resource linking, and the `bind` prop has been renamed to `link`. The Node.js client or _JS SDK_ has been reworked so that all linked resources are now available through the `Resource` object. We'll look at this in [detail below](#clients).

The client handlers and hooks are not supported yet.

Learn more about [Resource linking](/docs/linking/).

---

#### Secrets

Secrets are no longer stored in SSM. Instead, they are encrypted and stored in your state file.
It's encrypted using a passphrase that's stored in SSM. Loading secrets in your functions no longer needs a top-level await. --- ## Migration plan Say you have a v2 app in a git repo that's currently deployed to production. Here's how we recommend carrying out the migration. 1. Use the steps below to migrate your app over to a non-prod stage. You don't need to import any resources; just recreate them. 2. Test the non-prod version of your v3 app. 3. Then for your prod stage, follow the steps below and make the import, domain, and subscriber changes. 4. Once the prod version of your v3 app is running, clean up some of the v2 prod resources. :::caution These are recommendations and the specific details depend on the type of resources you have. ::: The general idea here is to have the v2 app hand over control of the underlying resources to the v3 version of the app. --- ### Setup 1. Start by setting the removal policy to `retain` in the v2 app for the production stages. This ensures resources don't get accidentally removed. ```ts app.setDefaultRemovalPolicy("retain"); ``` :::caution You'll want to deploy your app once after setting this. ::: 2. Create a new branch in your repo for the upcoming changes. 3. For the prod version of the v3 app, pick a different stage name. Say your prod stage in v2 is called `production`. Maybe use `prod`, `main`, or `live` for your v3 app. Or vice versa. This isn't strictly necessary, but we recommend doing this because you don't want to change the wrong resources by mistake. --- ### Init v3 Now let's set up our new v3 app in the root of your project. 1. Update SST to v3. Or set the version by hand in your `package.json`. Make sure to do this across all the packages in your repo. ```bash frame="none" npm update sst ``` Ensure v3 is installed. ```bash frame="none" npx sst version ``` 2. Back up the v2 config. ```bash frame="none" mv sst.config.ts sst.config.ts.bk ``` 3. Init a v3 app.
```bash frame="none" npx sst init ``` :::caution Make sure to use the same app name. ::: 4. Set the removal policy to `retain`. Similar to `setDefaultRemovalPolicy` in v2, you can configure the removal policy in `sst.config.ts` in v3. ```ts title="sst.config.ts" {4} app(input) { return { name: "my-sst-app", removal: input?.stage === "production" ? "retain" : "remove" }; } ``` By default, v3 sets the removal policy to `retain` for the `production` stage, and `remove` for other stages. 5. Deploy an empty app to ensure the app is configured correctly. ```bash frame="none" npx sst deploy ``` 6. Update the dev scripts for your frontend. Remove the `sst bind` from the `dev` script in your `package.json`. For example, for a Next.js app. ```diff lang="js" title="package.json" - "dev": "sst bind next dev", + "dev": "next dev", ``` 7. Remove any CDK-related packages from your `package.json`. --- ### Migrate stacks Before we start making changes to our constructs, note that you might have some stacks code in your `sst.config.ts`. Take a look at the [**list below**](#sstconfigts-1) and apply the changes that matter to you. --- #### Restructure Since you don't have to import the constructs and there are no stacks, you'll need to change how your constructs are structured. For example, in the [monorepo notes app](https://github.com/sst/demo-notes-app/pull/8) we made these changes. **v3** ```ts title="sst.config.ts" export default $config({ // ...
async run() { await import("./infra/api"); await import("./infra/storage"); await import("./infra/frontend"); const auth = await import("./infra/auth"); return { UserPool: auth.userPool.id, Region: aws.getRegionOutput().region, IdentityPool: auth.identityPool.id, UserPoolClient: auth.userPoolClient.id, }; } } ``` **v2** ```ts title="sst.config.ts" import { SSTConfig } from "sst"; import { ApiStack } from "./stacks/ApiStack"; import { AuthStack } from "./stacks/AuthStack"; import { StorageStack } from "./stacks/StorageStack"; import { FrontendStack } from "./stacks/FrontendStack"; export default { // ... stacks(app) { app .stack(StorageStack) .stack(ApiStack) .stack(AuthStack) .stack(FrontendStack); } } satisfies SSTConfig; ``` We store our infrastructure files in the `infra/` directory in v3. You can refer to the [demo notes app](https://github.com/sst/demo-notes-app) to see how these are structured. --- ### Migrate runtime For your runtime code, that is, your functions and frontend, the changes are fairly minimal. The clients or the _JS SDK_ have been reorganized. You can make these changes now or as you are migrating each construct. [**Check out**](#clients) the steps below. --- ### Migrate constructs Constructs in v2 have their equivalent _components_ in v3. Constructs fall into roughly these 4 categories: 1. Transient — these don't contain data, like `Function`, `Topic`, or `Queue`. 2. Data — these contain application data, like `RDS`, `Table`, or `Bucket`. 3. Custom domains — these have custom domains configured, like `Api`, `StaticSite`, or `NextjsSite`. 4. Subscribers — these are constructs that subscribe to other constructs, like the `Bucket`, `Queue`, or `Table` subscribers. We'll go over each of these types and copy our v2 constructs over as v3 components. --- #### Transient constructs Constructs like `Function`, `Cron`, `Topic`, `Queue`, and `KinesisStream` do not contain data. They can be recreated in the v3 app.
Simply copy them over using the [**reference below**](#constructs). --- #### Data constructs Constructs like `RDS`, `Table`, `Bucket`, and `Cognito` contain data. If you do not need to keep the data, you can recreate them as you did above. This is often the case for non-production stages. However, for production stages, you need to import the underlying AWS resource into the v3 app. For example, here are the steps for importing an S3 bucket named `app-prod-MyBucket`. 1. **Import the resource** Say the bucket is defined in SST v2, and the bucket name is `app-prod-MyBucket`. ```ts title="v2" const bucket = new Bucket(stack, "MyBucket"); ``` You can use the `import` and `transform` props to import it. ```ts title="v3" const bucket = new sst.aws.Bucket("MyBucket", { transform: { bucket: (args, opts) => { args.bucket = "app-prod-MyBucket"; opts.import = "app-prod-MyBucket"; }, cors: (args, opts) => { opts.import = "app-prod-MyBucket"; }, policy: (args, opts) => { opts.import = "app-prod-MyBucket"; }, publicAccessBlock: (args, opts) => { opts.import = "app-prod-MyBucket"; } } }); ``` Importing is the process of bringing previously created resources into your SST app and letting it manage them moving forward. Learn more about [importing resources](/docs/import-resources/). 2. **Deploy** You'll get an error if the resource configuration in your code does not match the exact configuration of the bucket in your AWS account. This is good because we don't want to change our old resource. 3. **Update props** In the error message, you'll see the props you need to change. Add them to the corresponding `transform` block. And deploy again. :::caution Make sure the v2 app is set to `retain` to avoid accidentally removing imported resources. ::: After the bucket has been imported, the v2 app can still make changes to the resource. If you try to remove the v2 app or remove the bucket from the v2 app, the S3 bucket will get removed.
To prevent this, ensure that the removal policy in the v2 app is set to `retain`. --- #### Constructs with custom domains Constructs like the following have custom domains. - Frontends like `StaticSite`, `NextjsSite`, `SvelteKitSite`, `RemixSite`, `AstroSite`, `SolidStartSite` - APIs like `Api`, `ApiGatewayv1`, `AppSyncApi`, `WebSocketApi` - `Service` For non-prod stages you can just recreate them. However, if they have a custom domain, you need to deploy them in steps to avoid any downtime. 1. First, create the resource in v3 without a custom domain. For example, for the `Nextjs` component. **v3** ```ts title="sst.config.ts" new sst.aws.Nextjs("MySite"); ``` **v2** ```ts title="sst.config.ts" new NextjsSite(stack, "MySite", { customDomain: "domain.com" }); ``` 2. Deploy your v3 app. 3. When you are ready, flip the domain using the `override` prop. ```ts title="sst.config.ts" {4} new sst.aws.Nextjs("MySite", { domain: { name: "domain.com", dns: sst.aws.dns({ override: true }) } }); ``` This updates the DNS record to point to your new Next.js app. And deploy again. :::caution Make sure the v2 app is set to `retain` to avoid accidentally removing resources. ::: After the DNS record has been overridden, the v2 app can still make changes to it. If you try to remove the v2 app, the record will get removed. To prevent this, ensure that the removal policy in the v2 app is set to `retain`. --- #### Subscriber constructs Many constructs have subscribers that help with async processing. For example, the `Queue` construct has a consumer, `Bucket` has notifications, and `Table` has streams. You can recreate the constructs in your v3 app. However, recreating the subscribers for a production stage with an imported resource is not straightforward: - Recreating the consumer for an imported Queue will fail because a `Queue` can only have 1 consumer. - And recreating the consumer for an imported DynamoDB Table will result in double processing.
As in, an event will be processed both in your v2 and v3 app. Here's how we recommend getting around this. 1. Deploy the v3 app without the subscribers, either by commenting out the `.subscribe()` call or by returning early in the subscriber function. 2. When you are ready to flip, remove the subscribers in the v2 app and deploy. 3. Add the subscribers to the v3 app and deploy. This ensures that the same subscriber is only attached once to a resource. --- ### Clean up Now that your v3 app is handling production traffic, we can optionally clean up a few things from the v2 app. :::tip We recommend doing a cleanup after your v3 app has been in production for a good amount of time. ::: The resources that were recreated in v3, the ones that were not imported, can now be removed. However, since we have the v2 app set to `retain`, this is going to be a manual process. You can go to the CloudFormation console, look at the list of resources in your v2 app's stacks, and remove them manually. Finally, when you run `sst remove` for your v2 app, it'll remove the CloudFormation stacks as well. --- ### CI/CD You probably have _git push to deploy_ or CI/CD set up for your apps. If you are using GitHub Actions, there shouldn't be much of a difference between v2 and v3. If you are using [**_Seed_**](https://seed.run) to deploy your v2 app, you'll want to migrate to using [Autodeploy](/docs/console/#autodeploy) on the [SST Console](/docs/console/). We are currently [not planning to support v3 on Seed](https://seed.run/blog/seed-and-sst-v3). There are a couple of key reasons to use Autodeploy through the Console: - The builds are run in your AWS account. - You can configure your workflow through your `sst.config.ts`. - And you can see which resources were updated as a part of the deploy. To enable Autodeploy on the Console, you'll need to: 1. Create a new account on the Console — console.sst.dev 2. Link your AWS account 3. Connect your repo 4. Configure your environments 5.
And _git push_ Learn more about [Console](/docs/console/) and [Autodeploy](/docs/console/#autodeploy). --- ## sst.config.ts Listed below are some of the changes to your [`sst.config.ts`](/docs/reference/config/) in general. --- #### No imports All the constructs or _components_ are available in the global context. So there's no need to import anything. Your app's `package.json` only needs the `sst` package. There are no extra CDK or infrastructure-related packages. --- #### Globals There are a couple of global variables, `$app` and `$dev`, that replace the `app` argument that's passed into the `stacks()` method of your `sst.config.ts`. 1. `$app.name` gives you the name of your app. Used to be `app.name`. 2. `$app.stage` gives you the name of the stage. Used to be `app.stage`. 3. `$dev === true` tells you if you are in dev mode. Used to be `app.mode === "dev"`. 4. `$dev === false` tells you if it's being deployed. Used to be `app.mode === "deploy"`. 5. There is no `app.mode === "remove"` replacement since your components are not evaluated on `sst remove`. 6. There is no `app.region` since in v3 you can deploy resources to different regions or AWS profiles or _providers_. To get the region of the default AWS provider, you can use `aws.getRegionOutput().region`. --- #### No stacks Also, since there are no stacks, you don't have access to the `stack` argument inside your stack function, and there's no `stack.addOutputs({})` method. You can still group your constructs or _components_ in files. But to output something, you return it from the `run` function in your config.
```ts title="sst.config.ts" async run() { const auth = await import("./infra/auth"); return { UserPool: auth.userPool.id, IdentityPool: auth.identityPool.id, UserPoolClient: auth.userPoolClient.id }; } ``` --- #### Defaults The set of methods that applied defaults to all the functions in your app, like `addDefaultFunctionBinding`, `addDefaultFunctionEnv`, `addDefaultFunctionPermissions`, and `setDefaultFunctionProps`, can be replaced with the global `$transform`. ```ts title="sst.config.ts" $transform(sst.aws.Function, (args, opts) => { // Set the default if it's not set by the component if (args.runtime === undefined) { args.runtime = "nodejs18.x"; } }) ``` Learn more about [`$transform`](/docs/reference/global/#transform). --- ## Clients The Node.js client, now called the [JS SDK](/docs/reference/sdk/), has a couple of minor changes. Update `sst` to the latest version in your `package.json`. If you have a monorepo, make sure to update `sst` in all your packages. --- ### Bind In SST v3, you access all bound or _linked_ resources through the `Resource` module. Say you link a bucket to a function. **v3** ```ts title="sst.config.ts" {5} const bucket = new sst.aws.Bucket("MyBucket"); new sst.aws.Function("MyFunction", { handler: "src/lambda.handler", link: [bucket] }); ``` **v2** ```ts title="sst.config.ts" {5} const bucket = new Bucket(stack, "MyBucket"); new Function(stack, "MyFunction", { handler: "src/lambda.handler", bind: [bucket] }); ``` In your function you would access it like so. **v3** ```ts title="src/lambda.ts" "Resource.MyBucket.name" "sst" import { Resource } from "sst"; console.log(Resource.MyBucket.name); ``` **v2** ```ts title="src/lambda.ts" "Bucket.MyBucket.bucketName" "sst/node/bucket" import { Bucket } from "sst/node/bucket"; console.log(Bucket.MyBucket.bucketName); ``` --- ### Config The same applies to `Config` as well. Let's look at a secret.
**v3** ```ts title="sst.config.ts" {5} const secret = new sst.Secret("MySecret"); new sst.aws.Function("MyFunction", { handler: "src/lambda.handler", link: [secret] }); ``` **v2** ```ts title="sst.config.ts" {5} const secret = new Config.Secret(stack, "MySecret"); new Function(stack, "MyFunction", { handler: "src/lambda.handler", bind: [secret] }); ``` And in your function you access it in the same way. **v3** ```ts title="src/lambda.ts" "Resource.MySecret.value" "sst" import { Resource } from "sst"; console.log(Resource.MySecret.value); ``` **v2** ```ts title="src/lambda.ts" "Config.MySecret" "sst/node/config" import { Config } from "sst/node/config"; console.log(Config.MySecret); ``` --- ### Handlers In v2, some modules in the Node client had [handlers and hooks](https://v2.sst.dev/clients#handlers). For example, the `ApiHandler` from `sst/node/api`. ```ts title="v2" import { ApiHandler } from "sst/node/api"; export const handler = ApiHandler(async (evt) => { /* ... */ }); ``` These were experimental and are not currently supported in v3. To continue using them, first add v2 to your `package.json` under an alias. ```diff lang="json" title="package.json" { + "sstv2": "npm:sst@^2", "sst": "^3" } ``` This means that you have both v2 and v3 installed in your project. Since they both have an `sst` binary, you want to make sure v3 takes precedence. So v3 should be listed **after** v2. :::caution Make sure v3 is listed after v2 in your `package.json`. ::: And then import them via the `sstv2` alias. ```ts title="v3" import { ApiHandler } from "sstv2/node/api"; ``` --- ## Constructs Below are the v3 component equivalents of the v2 constructs. --- ### Api **v3** ```ts title="sst.config.ts" const api = new sst.aws.ApiGatewayV2("MyApi", { domain: "api.example.com" }); api.route("GET /", "src/get.handler"); api.route("POST /", "src/post.handler"); ``` **v2** ```ts title="sst.config.ts" const api = new Api(stack, "MyApi", { customDomain: "api.example.com" }); api.addRoutes(stack, { "GET /": "src/get.handler", "POST /": "src/post.handler" }); ``` --- ### Job The `Task` component that replaces `Job` is based on AWS Fargate. It runs a container task in the background.
**v3** ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); new sst.aws.Task("MyTask", { cluster, image: { context: "./src", dockerfile: "Dockerfile" } }); ``` **v2** ```ts title="sst.config.ts" new Job(stack, "MyJob", { handler: "src/job.main" }); ``` There are some key differences between `Job` and `Task`. 1. `Task` is based on AWS Fargate. `Job` used a combination of AWS CodeBuild and Lambda. 2. Since `Task` is natively based on Fargate, you can use the AWS SDK to interact with it, even in runtimes the SST SDK doesn't support. 3. In dev mode, `Task` uses Fargate only, whereas `Job` used Lambda. 4. While CodeBuild is billed per minute, Fargate works out a lot cheaper; roughly **$0.02 per hour** vs **$0.30 per hour** on x86 machines. Learn more about [`Task`](/blog/tasks-in-v3). --- ### RDS **v3** ```ts title="sst.config.ts" const vpc = new sst.aws.Vpc("MyVpc"); new sst.aws.Aurora("MyDatabase", { vpc, engine: "postgres", version: "15.5", databaseName: "acme" }); ``` **v2** ```ts title="sst.config.ts" new RDS(stack, "MyDatabase", { engine: "postgresql15.5", defaultDatabaseName: "acme", migrations: "path/to/migration/scripts" }); ``` The `Aurora` component uses [Amazon Aurora Serverless v2](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.html). For migrations, we recommend using [Drizzle Kit](https://orm.drizzle.team/kit-docs/overview). Check out our [Drizzle example](/docs/start/aws/drizzle/).
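To sketch what that might look like, here's a minimal Drizzle Kit config for a Postgres database. The schema path, the migrations output folder, and the `DATABASE_URL` environment variable are placeholder assumptions for illustration, not something SST generates for you.

```ts title="drizzle.config.ts"
// A minimal sketch of a Drizzle Kit config for a Postgres database.
// The schema path, output folder, and DATABASE_URL are placeholders.
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "postgresql",
  schema: "./src/schema.ts",
  out: "./migrations",
  dbCredentials: {
    url: process.env.DATABASE_URL!,
  },
});
```

With this in place, you'd typically run `npx drizzle-kit generate` to create migration files and `npx drizzle-kit migrate` to apply them.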
--- ### Cron **v3** ```ts title="sst.config.ts" new sst.aws.Cron("MyCronJob", { schedule: "rate(1 minute)", function: "src/cron.handler" }); ``` **v2** ```ts title="sst.config.ts" new Cron(stack, "MyCronJob", { schedule: "rate(1 minute)", job: "src/cron.handler" }); ``` --- ### Table **v3** ```ts title="sst.config.ts" const table = new sst.aws.Dynamo("MyTable", { fields: { id: "string" }, primaryIndex: { hashKey: "id" } }); table.subscribe("MySubscriber", "src/subscriber.handler"); ``` **v2** ```ts title="sst.config.ts" const table = new Table(stack, "MyTable", { fields: { id: "string" }, primaryIndex: { partitionKey: "id" } }); table.addConsumers(stack, { consumer: "src/subscriber.handler" }); ``` --- ### Topic **v3** ```ts title="sst.config.ts" const topic = new sst.aws.SnsTopic("MyTopic"); topic.subscribe("MySubscriber", "src/subscriber.handler"); ``` **v2** ```ts title="sst.config.ts" const topic = new Topic(stack, "MyTopic"); topic.addSubscribers(stack, { subscriber: "src/subscriber.handler" }); ``` --- ### Queue **v3** ```ts title="sst.config.ts" const queue = new sst.aws.Queue("MyQueue"); queue.subscribe("src/subscriber.handler"); ``` **v2** ```ts title="sst.config.ts" const queue = new Queue(stack, "MyQueue"); queue.addConsumer(stack, "src/subscriber.handler"); ``` --- ### Config The `Config` construct is now broken into a `Secret` component and v3 has a separate way to bind any value or _parameter_. --- #### Secret **v3** ```ts title="sst.config.ts" const secret = new sst.Secret("MySecret"); ``` **v2** ```ts title="sst.config.ts" const secret = new Config.Secret(stack, "MySecret"); ``` There's also a slight change to the CLI for setting secrets. **v3** ```bash "secret" npx sst secret set MySecret sk_test_abc123 ``` **v2** ```bash "secrets" npx sst secrets set MySecret sk_test_abc123 ``` --- #### Parameter The `Linkable` component lets you bind or _link_ any value. 
**v3** ```ts title="sst.config.ts" const param = new sst.Linkable("MyParameter", { properties: { version: "1.2.0" } }); ``` **v2** ```ts title="sst.config.ts" const param = new Config.Parameter(stack, "MyParameter", { value: "1.2.0" }); ``` In your function, you would access it like so. **v3** ```ts title="src/lambda.ts" import { Resource } from "sst"; console.log(Resource.MyParameter.version); ``` **v2** ```ts title="src/lambda.ts" import { Config } from "sst/node/config"; console.log(Config.MyParameter); ``` --- ### Bucket **v3** ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket"); bucket.subscribe("src/subscriber.handler"); ``` **v2** ```ts title="sst.config.ts" const bucket = new Bucket(stack, "MyBucket"); bucket.addNotifications(stack, { notification: "src/subscriber.handler" }); ``` --- ### Service **v3** ```ts title="sst.config.ts" const cluster = new sst.aws.Cluster("MyCluster", { vpc: { id: "vpc-0d19d2b8ca2b268a1", securityGroups: ["sg-0399348378a4c256c"], publicSubnets: ["subnet-0b6a2b73896dc8c4c", "subnet-021389ebee680c2f0"], privateSubnets: ["subnet-0db7376a7ad4db5fd", "subnet-06fc7ee8319b2c0ce"] } }); new sst.aws.Service("MyService", { cluster, loadBalancer: { domain: "my-app.com", ports: [ { listen: "80/http" } ] } }); ``` **v2** ```ts title="sst.config.ts" new Service(stack, "MyService", { customDomain: "my-app.com", path: "./service", port: 80, cdk: { vpc: Vpc.fromLookup(stack, "VPC", { vpcId: "vpc-0d19d2b8ca2b268a1" }) } }); ``` --- ### Cognito **v3** ```ts title="sst.config.ts" const userPool = new sst.aws.CognitoUserPool("MyUserPool"); const client = userPool.addClient("MyClient"); new sst.aws.CognitoIdentityPool("MyIdentityPool", { userPools: [{ userPool: userPool.id, client: client.id }] }); ``` **v2** ```ts title="sst.config.ts" new Cognito(stack, "MyAuth"); ``` --- ### Function **v3** ```ts title="sst.config.ts" new sst.aws.Function("MyFunction", { handler: "src/lambda.handler" }); ``` **v2** ```ts title="sst.config.ts" new
Function(stack, "MyFunction", { handler: "src/lambda.handler" }); ``` --- ### AstroSite **v3** ```ts title="sst.config.ts" new sst.aws.Astro("MyWeb", { domain: "my-app.com" }); ``` **v2** ```ts title="sst.config.ts" new AstroSite(stack, "MyWeb", { customDomain: "my-app.com" }); ``` --- ### EventBus **v3** ```ts title="sst.config.ts" const bus = new sst.aws.Bus("Bus"); bus.subscribe("MySubscriber1", "src/function1.handler", { pattern: { source: ["myevent"] } }); bus.subscribe("MySubscriber2", "src/function2.handler", { pattern: { source: ["myevent"] } }); ``` **v2** ```ts title="sst.config.ts" new EventBus(stack, "Bus", { rules: { myRule: { pattern: { source: ["myevent"] }, targets: { myTarget1: "src/function1.handler", myTarget2: "src/function2.handler" } } } }); ``` --- ### StaticSite **v3** ```ts title="sst.config.ts" new sst.aws.StaticSite("MyWeb", { domain: "my-app.com" }); ``` **v2** ```ts title="sst.config.ts" new StaticSite(stack, "MyWeb", { customDomain: "my-app.com" }); ``` --- ### RemixSite **v3** ```ts title="sst.config.ts" new sst.aws.Remix("MyWeb", { domain: "my-app.com" }); ``` **v2** ```ts title="sst.config.ts" new RemixSite(stack, "MyWeb", { customDomain: "my-app.com" }); ``` --- ### NextjsSite **v3** ```ts title="sst.config.ts" new sst.aws.Nextjs("MyWeb", { domain: "my-app.com" }); ``` **v2** ```ts title="sst.config.ts" new NextjsSite(stack, "MyWeb", { customDomain: "my-app.com" }); ``` --- ### AppSyncApi **v3** ```ts title="sst.config.ts" const api = new sst.aws.AppSync("MyApi", { schema: "schema.graphql", domain: "api.domain.com" }); const lambdaDS = api.addDataSource({ name: "lambdaDS", lambda: "src/lambda.handler" }); api.addResolver("Query user", { dataSource: lambdaDS.name }); ``` **v2** ```ts title="sst.config.ts" const api = new AppSyncApi(stack, "MyApi", { schema: "graphql/schema.graphql", customDomain: "api.example.com" }); api.addDataSources(stack, { lambdaDS: "src/lambda.handler" }); api.addResolvers(stack, { "Query user": "lambdaDS" 
}); ``` --- ### SvelteKitSite **v3** ```ts title="sst.config.ts" new sst.aws.SvelteKit("MyWeb", { domain: "my-app.com" }); ``` **v2** ```ts title="sst.config.ts" new SvelteKitSite(stack, "MyWeb", { customDomain: "my-app.com" }); ``` --- ### SolidStartSite **v3** ```ts title="sst.config.ts" new sst.aws.SolidStart("MyWeb", { domain: "my-app.com" }); ``` **v2** ```ts title="sst.config.ts" new SolidStartSite(stack, "MyWeb", { customDomain: "my-app.com" }); ``` --- ### WebSocketApi **v3** ```ts title="sst.config.ts" const api = new sst.aws.ApiGatewayWebSocket("MyApi", { domain: "api.example.com" }); api.route("$connect", "src/connect.handler"); api.route("$disconnect", "src/disconnect.handler"); ``` **v2** ```ts title="sst.config.ts" const api = new WebSocketApi(stack, "MyApi", { customDomain: "api.example.com" }); api.addRoutes(stack, { $connect: "src/connect.handler", $disconnect: "src/disconnect.handler" }); ``` --- ### KinesisStream **v3** ```ts title="sst.config.ts" const stream = new sst.aws.KinesisStream("MyStream"); stream.subscribe("MySubscriber", "src/subscriber.handler"); ``` **v2** ```ts title="sst.config.ts" const stream = new KinesisStream(stack, "MyStream"); stream.addConsumers(stack, { consumer: "src/subscriber.handler" }); ``` --- ### ApiGatewayV1Api **v3** ```ts title="sst.config.ts" const api = new sst.aws.ApiGatewayV1("MyApi", { domain: "api.example.com" }); api.route("GET /", "src/get.handler"); api.route("POST /", "src/post.handler"); api.deploy(); ``` **v2** ```ts title="sst.config.ts" const api = new ApiGatewayV1Api(stack, "MyApi", { customDomain: "api.example.com" }); api.addRoutes(stack, { "GET /": "src/get.handler", "POST /": "src/post.handler" }); ``` --- Congrats on getting through the migration. If you find any errors or if you'd like to add some details to this guide, feel free to _Edit this page_ and submit a PR. --- ## Migrate From v3 Migrate your SST v3 apps to v4. 
https://sst.dev/docs/migrate-from-v3 SST v4 upgrades the underlying [Pulumi AWS provider](https://www.pulumi.com/registry/packages/aws/) from v6 to v7. This guide covers migrating your SST v3 apps to v4. :::tip SST components are already updated for v7. Most users only need to run `sst refresh` and deploy. ::: For the full list of upstream changes, refer to the [Pulumi AWS v7 migration guide](https://www.pulumi.com/registry/packages/aws/how-to-guides/7-0-migration). --- ## Breaking changes No code changes are needed if you are only using SST components without transforms or direct `@pulumi/aws` usage. Otherwise, refer to the [Pulumi AWS v7 migration guide](https://www.pulumi.com/registry/packages/aws/how-to-guides/7-0-migration) for the full list of breaking changes. The internal changes to SST components were minimal: mostly S3 resource renames (dropping the `V2` suffix) and switching from `tags` to `tagsAll`. You can see the full list of changes in the [upgrade PR](https://github.com/anomalyco/sst/pull/6259/). --- ## Migration steps When you update SST and the AWS provider version changes, `sst deploy` will be blocked until you migrate your state: 1. **Install the latest v4** Make sure you're on the latest v4 version before running any commands. 2. **Update your config** If you have any breaking changes from above, update your `sst.config.ts` accordingly. 3. **Review changes** Run `sst diff` to preview what will change. This is a **one-way migration**, so it's worth reviewing first. ```bash frame="none" sst diff ``` 4. **Migrate state** Run `sst refresh` to migrate your state. :::caution Do not use the `--target` flag during migration. The state refresh must cover all resources to ensure a consistent migration. ::: ```bash frame="none" sst refresh ``` Repeat for each stage. ```bash frame="none" sst refresh --stage production ``` If the stage was deployed using `sst dev`, use the `--dev` flag. 
```bash frame="none" sst refresh --dev ``` If you [share resources across stages](/docs/share-across-stages), run `sst refresh` on the stage where the resource is created first — not the stage that references it via `.get()`. 5. **Deploy** Once refreshed, deploy as usual. ```bash frame="none" sst deploy ``` That's it — once refreshed and deployed, your app is fully migrated to v4. --- ## Upgrade testimonials Upgrading your infra framework can feel scary. Here's what teams have shared in Discord after migrating: --- ## PlanetScale Learn how to use SST with PlanetScale https://sst.dev/docs/planetscale [PlanetScale](https://planetscale.com) is a cloud database platform built for speed and scalability. This guide covers how to set it up with SST. ## Install Add the PlanetScale provider to your SST app. Learn more about [providers](/docs/providers). ```bash sst add planetscale ``` This adds the provider to your `sst.config.ts`. ```ts title="sst.config.ts" {3} { providers: { planetscale: "1.0.0", }, } ``` Then set the `PLANETSCALE_SERVICE_TOKEN` and `PLANETSCALE_SERVICE_TOKEN_ID` environment variables. You can create a service token in the PlanetScale dashboard under [Settings > Service tokens](https://app.planetscale.com/~/settings/service-tokens). The token needs at least the following permissions for the upcoming examples: - `connect_branch` - `connect_production_branch` - `create_branch` - `delete_branch` - `delete_branch_password` - `read_branch` - `read_database` --- ## Reference a database Reference an existing PlanetScale Vitess database directly from your SST config. ```ts title="sst.config.ts" const db = planetscale.getDatabaseVitessOutput({ id: "mydb", organization: "myorg", }); ``` :::note This guide uses the Vitess resources. PlanetScale also supports Postgres with equivalent resources and functions: `PostgresBranch`, `PostgresBranchRole`, `getDatabasePostgresOutput`, and `getPostgresBranchOutput`. 
::: --- ## Branch per stage PlanetScale supports database branching natively. Combined with SST, every stage and every pull request can get its own isolated database branch automatically. Conditionally create or reference a branch based on the current stage. ```ts title="sst.config.ts" const branch = $app.stage === "production" ? planetscale.getVitessBranchOutput({ id: db.defaultBranch, organization: db.organization, database: db.name, }) : new planetscale.VitessBranch("DatabaseBranch", { database: db.name, organization: db.organization, name: $app.stage, parentBranch: db.defaultBranch, }); ``` In production, it references the existing default branch. In any other stage, it creates a new branch from it. So `sst deploy --stage pr-42` creates a `pr-42` branch in PlanetScale. :::tip Configure [Autodeploy](/docs/console/#autodeploy) in the SST Console to do this automatically on every pull request. ::: This is a pattern used in production by [terminal.shop](https://github.com/terminaldotshop/terminal) and [OpenCode](https://github.com/anomalyco/opencode). --- ## Link to your app Create a password for the branch and wrap it in an `sst.Linkable`. ```ts title="sst.config.ts" const password = new planetscale.VitessBranchPassword("DatabasePassword", { database: db.name, organization: db.organization, branch: branch.name, role: "admin", name: `${$app.name}-${$app.stage}`, }); const database = new sst.Linkable("Database", { properties: { host: password.accessHostUrl, username: password.username, password: password.plainText, database: db.name, port: 3306, }, }); ``` The [`Linkable`](/docs/component/linkable) component lets you wrap arbitrary values and make them available to any function or service you link it to. Learn more about [linking resources](/docs/linking). --- ## Connect to the database Link the database to any function or service. ```ts title="sst.config.ts" new sst.aws.Function("Api", { handler: "src/api.handler", link: [database], }); ``` Then access the credentials in a type-safe way through `Resource`.
For example, with [Drizzle ORM](https://orm.drizzle.team), here using its PlanetScale serverless driver adapter. ```ts title="src/drizzle.ts" import { Resource } from "sst"; import { drizzle } from "drizzle-orm/planetscale-serverless"; export const db = drizzle({ connection: { host: Resource.Database.host, username: Resource.Database.username, password: Resource.Database.password, }, }); ``` You can use any other ORM or database driver the same way. --- ## Examples Check out the full examples: - [PlanetScale with Drizzle and MySQL](/docs/examples/#aws-planetscale-drizzle-mysql) - [PlanetScale with Drizzle and Postgres](/docs/examples/#aws-planetscale-drizzle-postgres) --- ## Policy Packs Enforce compliance and security rules on your infrastructure before deploying. https://sst.dev/docs/policy-packs Policy packs let you enforce rules on your infrastructure before deploying. They use [Pulumi Policy Packs](https://www.pulumi.com/docs/iac/packages-and-automation/crossguard/) under the hood, and work with the `sst deploy`, `sst diff`, and `sst dev` commands. --- ## Quick start Say you want to require permission boundaries on all IAM roles. Start by creating a policy pack. ```bash frame="none" mkdir policy-pack && cd policy-pack pulumi policy new aws-typescript ``` This creates a `PulumiPolicy.yaml`, `index.ts`, and `package.json`. Update the `index.ts` with your policy. ```ts title="policy-pack/index.ts" import * as aws from "@pulumi/aws"; import { PolicyPack, validateResourceOfType } from "@pulumi/policy"; new PolicyPack("aws-policies", { policies: [ { name: "iam-role-requires-permission-boundary", description: "IAM roles must have a permission boundary.", enforcementLevel: "mandatory", validateResource: validateResourceOfType( aws.iam.Role, (role, _args, reportViolation) => { if (!role.permissionsBoundary) { reportViolation( "IAM roles must have a permission boundary." ); } } ), }, ], }); ``` Then deploy with the `--policy` flag. ```bash frame="none" sst deploy --policy ./policy-pack ``` If any resource violates a mandatory policy, the deploy is blocked. --- ## Enforcement levels Each policy has an `enforcementLevel` that controls what happens when a resource violates it. - **`mandatory`** — blocks the deploy.
The resource must be fixed before it can be deployed. - **`advisory`** — prints a warning but allows the deploy to continue. ```ts title="policy-pack/index.ts" { name: "no-wildcard-resources", description: "Avoid wildcard resources in IAM policies.", enforcementLevel: "advisory", validateResource: validateResourceOfType( aws.iam.RolePolicy, (policy, _args, reportViolation) => { // ... } ), } ``` --- ## Writing a policy pack A policy pack is a directory with three files: - `PulumiPolicy.yaml` — metadata and runtime config - `index.ts` — your policies - `package.json` — dependencies The `PulumiPolicy.yaml` looks like this. ```yaml title="policy-pack/PulumiPolicy.yaml" description: A minimal Policy Pack for AWS using TypeScript. runtime: nodejs main: dist/index.js ``` And the `package.json` needs the `@pulumi/policy` package, plus any provider packages your policies check against. ```json title="policy-pack/package.json" { "dependencies": { "@pulumi/aws": "^6.0.0", "@pulumi/policy": "^1.20.0" } } ``` You can check the [full example on GitHub](https://github.com/sst/sst/tree/dev/examples/aws-policy-pack) and the [Pulumi Policy Pack docs](https://www.pulumi.com/docs/iac/packages-and-automation/crossguard/) for more details. --- ## Providers Providers allow you to interact with cloud services. https://sst.dev/docs/providers A provider is what allows SST to interact with the APIs of various cloud services. These are packages that can be installed through your `sst.config.ts`. SST is built on Pulumi/Terraform and **supports 150+ providers**. This includes the major clouds like AWS, Azure, and GCP; but also services like Cloudflare, Stripe, Vercel, Auth0, etc. Check out the full list in the [Directory](/docs/all-providers#directory). --- ## Install To add a provider to your app run. ```bash sst add ``` This command adds the provider to your config, installs the packages, and adds the namespace of the provider to your globals. 
:::caution
Do not `import` the provider packages in your `sst.config.ts`.
:::

SST manages these packages internally and you do not need to import the package in your `sst.config.ts`.

:::tip
Your app can have multiple providers.
:::

The name of a provider comes from the **name of the package** in the [Directory](/docs/all-providers#directory). For example, `sst add planetscale` will add the following to your `sst.config.ts`.

```ts title="sst.config.ts"
{
  providers: {
    planetscale: "0.0.7"
  }
}
```

You can add multiple providers to your app.

```ts title="sst.config.ts"
{
  providers: {
    aws: "6.27.0",
    cloudflare: "5.37.1"
  }
}
```

Read more about the [`sst add`](/docs/reference/cli/#add) command.

---

## Configure

You can configure a provider in your `sst.config.ts`. For example, to change the region for AWS.

```ts title="sst.config.ts"
{
  providers: {
    aws: {
      region: "us-west-2"
    }
  }
}
```

You can check out the available list of options that you can configure for a provider over on the provider's docs. For example, here are the ones for [AWS](https://www.pulumi.com/registry/packages/aws/api-docs/provider/#inputs) and [Cloudflare](https://www.pulumi.com/registry/packages/cloudflare/api-docs/provider/#inputs).

---

### Versions

By default, SST installs the latest version. If you want to use a specific version, you can change it in your config.

```ts title="sst.config.ts"
{
  providers: {
    aws: {
      version: "6.27.0"
    }
  }
}
```

If you make any changes to the `providers` in your config, you'll need to run `sst install`.

:::tip
You'll need to run `sst install` if you update the `providers` in your config.
:::

The version of the provider is always pinned to what's in the `sst.config.ts` and does not auto-update. This is the case even if there is no version set. This is to make sure that the providers don't update in the middle of your dev workflow.

:::note
Providers don't auto-update. They stick to the version that was installed initially.
:::

So if you want to update it, you'll need to change it manually and run `sst install`.

---

### Credentials

Most providers will read your credentials from the environment. For example, for Cloudflare you might set your token like so.

```bash
export CLOUDFLARE_API_TOKEN=aaaaaaaa_aaaaaaaaaaaa_aaaaaaaa
```

Follow the [Cloudflare guide](/docs/cloudflare/) for the recommended setup. However, some providers also allow you to pass in the credentials through the config.

```ts title="sst.config.ts"
{
  providers: {
    cloudflare: {
      apiToken: "aaaaaaaa_aaaaaaaaaaaa_aaaaaaaa"
    }
  }
}
```

Read more about [configuring providers](/docs/reference/config/#providers).

---

## Components

The provider packages come with components that you can use in your app. For example, running `sst add aws` will allow you to use all the components under the `aws` namespace.

```ts title="sst.config.ts"
new aws.s3.BucketV2("b", {
  bucket: "mybucket",
  tags: {
    Name: "My bucket"
  }
});
```

Aside from components in the providers, SST also has a list of built-in components. These are typically higher level components that make it easy to add features to your app. You can check these out in the sidebar. Read more about [Components](/docs/components/).

---

## Functions

Aside from the components, there are a collection of functions that are exposed by a provider. These are listed in the Pulumi docs as `getXXXXXX` on the sidebar. For example, to get the AWS account being used in your app.

```ts title="sst.config.ts"
const current = await aws.getCallerIdentity({});
const accountId = current.accountId;
const callerArn = current.arn;
const callerUser = current.userId;
```

Or to get the current region.

```ts title="sst.config.ts"
const current = await aws.getRegion({});
const region = current.name;
```

---

#### Output versions

The above are _async_ methods that return promises. That means that if you call these in your app, they'll block the deployment of any resources that are defined after it.

:::tip
Outputs don't block your deployments.
:::

So we instead recommend using the _Output_ version of these functions. For example, if we wanted to set the above as environment variables in a function, we would do something like this.

```ts title="sst.config.ts"
new sst.aws.Function("MyFunction", {
  handler: "src/lambda.handler",
  environment: {
    ACCOUNT: aws.getCallerIdentityOutput({}).accountId,
    REGION: aws.getRegionOutput().region
  }
});
```

The `aws.getXXXXOutput` functions typically return an object of type _`Output`_. Read more about [Outputs](/docs/components/#outputs).

---

## Instances

You can create multiple instances of a provider that's in your config. By default, SST creates one instance of each provider in your `sst.config.ts` with the defaults. But you can create multiple instances in your app.

```ts title="sst.config.ts"
const useast1 = new aws.Provider("AnotherAWS");
```

This is useful for multi-region or multi-account deployments.

---

### Multi-region

You might want to create multiple providers in cases where some resources in your app need to go to one region, while others need to go to a separate region.

Let's look at an example. Assume your app is normally deployed to `us-west-1`. But you need to create an ACM certificate that needs to be deployed to `us-east-1`.

```ts {1} title="sst.config.ts" "{ provider: useast1 }"
const useast1 = new aws.Provider("useast1", { region: "us-east-1" });

new sst.aws.Function("MyFunction", {
  handler: "src/lambda.handler"
});

new aws.acm.Certificate("cert", {
  domainName: "foo.com",
  validationMethod: "EMAIL",
}, { provider: useast1 });
```

Here the function is created in your default region, `us-west-1`. While the certificate is created in `us-east-1`.

---

## Reference Resources

Reference externally created resources in your app.

https://sst.dev/docs/reference-resources

Referencing is the process of _using_ some externally created resources in your SST app, without having SST manage them.
This is for when you have some resources that are either managed by a different team or a different IaC tool. Typically these are low-level resources and not SST's built-in components. There are a few different ways this shows up in SST.

---

## Lookup a resource

Let's say you have an existing resource that you want to use in your SST app. You can look it up by passing in a property of the resource. For example, to look up a previously created S3 Bucket with the following name.

```txt
mybucket-xnbmhcvd
```

We can use the [`static aws.s3.BucketV2.get`](https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketv2/#look-up) method.

```ts title="sst.config.ts"
const bucket = aws.s3.BucketV2.get("MyBucket", "mybucket-xnbmhcvd");
```

This gives you the same bucket object that you'd get if you had created this resource in your app.

Here we are assuming the bucket wasn't created through an SST app. This is why we are using the low-level `aws.s3.BucketV2`. If this was created in an SST app or in another stage in the same app, there's a similar `static sst.aws.Bucket.get` method. Learn more about [sharing across stages](/docs/share-across-stages).

---

#### How it works

When you create a resource in your SST app, two things happen. First, the resource is created by making a bunch of calls to your cloud provider. Second, SST makes a call to _get_ the resource from the cloud provider. The data that it gets back is stored in your [state](/docs/state/).

:::note
When you look up a resource or create it, you get the same type of object.
:::

When you look up a resource, it skips the creation step and just gets the resource. It does this every time you deploy. But the object you get in both cases is the same.

---

#### Lookup properties

The properties used to do a lookup are the same ones that you'd use if you were trying to import them.

:::tip
You can look up a resource with its [Import Property](/docs/import-resources/#import-properties).
:::

We've compiled a list of the most commonly looked-up low-level resources and their [Import Properties](/docs/import-resources/#import-properties). Most low-level resources come with a `static get` method that uses this property to look up the resource.

---

#### Make it linkable

Let's take it a step further. You can use the [`sst.Linkable`](/docs/component/linkable/) component to link any property of this resource.

```ts title="sst.config.ts" {3}
const storage = new sst.Linkable("MyStorage", {
  properties: {
    domain: bucket.bucketDomainName
  }
});
```

Here we are using the domain name of the bucket as an example.

---

#### Link to it

And link it to a function.

```ts title="sst.config.ts" {3}
new sst.aws.Function("MyFunction", {
  handler: "src/lambda.handler",
  link: [storage]
});
```

Now you can use the [SDK](/docs/reference/sdk/) to access them at runtime.

```js title="src/lambda.ts"
console.log(Resource.MyStorage.domain);
```

---

## Pass in a resource

Aside from looking up resources, you can also pass existing resources in to the built-in SST components. This is typically when you are trying to create a new resource and it takes another resource as a part of it.

For example, let's say you want to add a previously created function as a subscriber to a queue. You can do so by passing its ARN.

```ts title="sst.config.ts" {3}
const queue = new sst.aws.Queue("MyQueue");

queue.subscribe("arn:aws:lambda:us-east-1:123456789012:function:my-function");
```

---

#### How it works

SST is simply asking for the props the underlying resource needs. In this case, it only needs the function ARN. However, for more complicated resources like VPCs, you might have to pass in a lot more. Here we are creating a new function in an existing VPC.
```ts title="sst.config.ts"
new sst.aws.Function("MyFunction", {
  handler: "src/lambda.handler",
  vpc: {
    subnets: ["subnet-0be8fa4de860618bb"],
    securityGroups: ["sg-0be8fa4de860618bb"]
  }
});
```

Whereas, for the `Cluster` component, you might need to pass in a lot more.

```ts title="sst.config.ts"
new sst.aws.Cluster("MyCluster", {
  vpc: {
    id: "vpc-0be8fa4de860618bb",
    securityGroups: ["sg-0be8fa4de860618bb"],
    containerSubnets: ["subnet-0be8fa4de860618bb"],
    loadBalancerSubnets: ["subnet-8be8fa4de850618ff"]
  }
});
```

These are listed under the `vpc` prop of the `Cluster` component.

---

## Attach to a resource

Referencing existing resources also comes in handy when you are attaching to an existing resource. For example, to add a subscriber to an externally created queue:

```ts title="sst.config.ts"
sst.aws.Queue.subscribe("arn:aws:sqs:us-east-1:123456789012:MyQueue", "src/subscriber.handler");
```

Here we are using the `static subscribe` method of the `Queue` component. And it takes the queue ARN that you are trying to attach to.

There are a few other built-in SST components that have `static` methods like this.

- `Bus`
- `Dynamo`
- `SnsTopic`
- `KinesisStream`

With this you can continue to manage the root resource outside of SST, while still being able to attach to it.

---

## CLI

Reference doc for the SST CLI.

https://sst.dev/docs/reference/cli

The CLI helps you manage your SST apps. If you are using SST as a part of your Node project, we recommend installing it locally.

```bash
npm install sst
```

---

If you are not using Node, you can install the CLI globally.

```bash
curl -fsSL https://sst.dev/install | bash
```

:::note
The CLI currently supports macOS, Linux, and WSL. Windows support is in beta.
:::

To install a specific version.

```bash "VERSION=0.0.403"
curl -fsSL https://sst.dev/install | VERSION=0.0.403 bash
```

---

#### With a package manager

You can also use a package manager to install the CLI.
- **macOS**

  The CLI is available via a Homebrew Tap, and as downloadable binary in the [releases](https://github.com/sst/sst/releases/latest).

  ```bash
  brew install sst/tap/sst

  # Upgrade
  brew upgrade sst
  ```

  You might have to run `brew upgrade sst` to get the latest version.

- **Linux**

  The CLI is available as downloadable binaries in the [releases](https://github.com/sst/sst/releases/latest). Download the `.deb` or `.rpm` and install with `sudo dpkg -i` or `sudo rpm -i` respectively. For Arch Linux, it's available in the [aur](https://aur.archlinux.org/packages/sst-bin).

---

#### Usage

Once installed, you can run the commands using:

```bash
sst [command]
```

The CLI takes a few global flags. For example, the deploy command takes the `--stage` flag.

```bash
sst deploy --stage production
```

---

#### Environment variables

You can access any environment variables set in the CLI in your `sst.config.ts` file. For example, running:

```bash
ENV_VAR=123 sst deploy
```

will let you access `ENV_VAR` through `process.env.ENV_VAR`.

---

## Global Flags

### stage

**Type** `string`

Set the stage the CLI is running on.

```bash frame="none"
sst [command] --stage production
```

The stage is a string that is used to prefix the resources in your app. This allows you to have multiple _environments_ of your app running in the same account.

:::tip
Changing the stage will redeploy your app to a new stage with new resources. The old resources will still be around in the old stage.
:::

You can also use the `SST_STAGE` environment variable.

```bash frame="none"
SST_STAGE=dev sst [command]
```

This can also be declared in a `.env` file or in the CLI session.

If the stage is not passed in, then the CLI will:

1. Use the username on the local machine.
   - If the username is `root`, `admin`, `prod`, `dev`, `production`, then it will prompt for a stage name.
2. Store this in the `.sst/stage` file and read from it in the future.

This stored stage is called your **personal stage**.
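That resolution order can be sketched as a small helper. This is a hypothetical `resolveStage` function mirroring the steps above, not SST's actual implementation:

```typescript
// Hypothetical sketch of the stage resolution order described above.
// Not SST's actual implementation.
const RESERVED = new Set(["root", "admin", "prod", "dev", "production"]);

function resolveStage(opts: {
  flag?: string;    // --stage flag
  env?: string;     // SST_STAGE environment variable
  stored?: string;  // contents of the .sst/stage file
  username: string; // username on the local machine
}): string {
  // An explicitly passed-in stage always wins.
  if (opts.flag) return opts.flag;
  if (opts.env) return opts.env;
  // A previously stored personal stage is reused on later runs.
  if (opts.stored) return opts.stored;
  // Otherwise fall back to the username, unless it's a reserved
  // name, in which case the CLI would prompt for a stage name.
  return RESERVED.has(opts.username) ? "prompt" : opts.username;
}
```

So running the CLI with no stage on a machine with username `jane` would resolve to the `jane` personal stage.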
### verbose

**Type** `boolean`

Prints extra information to the log files in the `.sst/` directory.

```bash
sst [command] --verbose
```

To also view this on the screen, use the `--print-logs` flag.

### print-logs

**Type** `boolean`

Print the logs to the screen. These are logs that are written to the `.sst/` directory.

```bash
sst [command] --print-logs
```

It can also be set using the `SST_PRINT_LOGS` environment variable.

```bash
SST_PRINT_LOGS=1 sst [command]
```

This is useful when running in a CI environment.

### config

**Type** `string`

Optionally, pass in a path to the SST config file. This defaults to `sst.config.ts` in the current directory.

```bash
sst --config path/to/config.ts [command]
```

This is useful when your monorepo has multiple SST apps in it. You can run the SST CLI for a specific app by passing in the path to its config file.

### help

**Type** `boolean`

Prints help for the given command.

```sh frame="none"
sst [command] --help
```

Or the global help.

```sh frame="none"
sst --help
```

## Commands

### init

```sh frame="none"
sst init
```

#### Flags

- `yes` `boolean` Skip interactive confirmation for detected framework.

Initialize a new project in the current directory. This will create a `sst.config.ts` and `sst install` your providers.

If this is run in a Next.js, Remix, Astro, or SvelteKit project, it'll init SST in drop-in mode. To skip the interactive confirmation after detecting the framework.

```bash frame="none"
sst init --yes
```

### dev

```sh frame="none"
sst dev [command]
```

#### Args

- `command?` The command to run

#### Flags

- `mode` `string` Defaults to using `multi` mode. Use `mono` to get a single stream of all child process logs or `basic` to not spawn any child processes.
- `policy` `string` Run policy pack validation against the preview changes.

Run your app in dev mode. By default, this starts a multiplexer with processes that deploy your app, run your functions, and start your frontend.
:::note
The tabbed terminal UI is only available on Linux/macOS and WSL.
:::

Each process is run in a separate tab that you can click on in the sidebar.

![sst dev multiplexer mode](../../../../assets/docs/cli/sst-dev-multiplexer-mode.png)

The multiplexer makes it so that you won't have to start your frontend or your container applications separately.

Here's what happens when you run `sst dev`.

- Deploy most of your resources as-is.
- Except for components that have a `dev` prop.
  - `Function` components are run [_Live_](/docs/live/) in the **Functions** tab.
  - `Task` components have their _stub_ versions deployed that proxy the task and run their `dev.command` in the **Tasks** tab.
  - Frontends like `Nextjs`, `Remix`, `Astro`, `StaticSite`, etc. have their dev servers started in a separate tab and are not deployed.
  - `Service` components are not deployed, and instead their `dev.command` is started in a separate tab.
  - `Postgres`, `Aurora`, and `Redis` link to a local database if the `dev` prop is set.
- Start an [`sst tunnel`](#tunnel) session in a new tab if your app has a `Vpc` with `bastion` enabled.
- Load any [linked resources](/docs/linking) in the environment.
- Start a watcher for your `sst.config.ts` and redeploy any changes.

:::note
The `Service` component and the frontends like `Nextjs` or `StaticSite` are not deployed by `sst dev`.
:::

Optionally, you can disable the multiplexer and not spawn any child processes by running `sst dev` in basic mode.

```bash frame="none"
sst dev --mode=basic
```

This will only deploy your app and run your functions. If you are coming from SST v2, this is how `sst dev` used to work.

However, in `basic` mode you'll need to start your frontend separately. You can do this by running `sst dev` in a separate terminal session and passing in the command. For example:

```bash frame="none"
sst dev next dev
```

By wrapping your command, it'll load your [linked resources](/docs/linking) in the environment. To pass in a flag to the command, use `--`.
```bash frame="none"
sst dev -- next dev --turbo
```

You can also disable the tabbed terminal UI by running `sst dev` in mono mode.

```bash frame="none"
sst dev --mode=mono
```

Unlike `basic` mode, this'll spawn child processes. But instead of a tabbed UI it'll show their outputs in a single stream. This is used by default on Windows.

### deploy

```sh frame="none"
sst deploy
```

#### Flags

- `target` `string` Only run it for the given component.
- `exclude` `string` Exclude the specified component from the operation.
- `continue` `boolean` Continue on error and try to deploy as many resources as possible.
- `dev` `boolean` Deploy resources like `sst dev` would.
- `policy` `string` Run policy pack validation against the preview changes.

Deploy your application. By default, it deploys to your personal stage. You typically want to deploy it to a specific stage.

```bash frame="none"
sst deploy --stage production
```

Optionally, deploy a specific component by passing in the name of the component from your `sst.config.ts`.

```bash frame="none"
sst deploy --target MyComponent
```

Alternatively, exclude a specific component from the deploy.

```bash frame="none"
sst deploy --exclude MyComponent
```

All the resources are deployed as concurrently as possible, based on their dependencies. For resources like your container images, sites, and functions, it first builds them and then deploys the generated assets.

:::tip
Configure the concurrency if your CI builds are running out of memory.
:::

Since the build processes for some of these resources take a lot of memory, their concurrency is limited by default. However, this can be configured.

| Resource | Concurrency | Flag |
| -------- | ----------- | ---- |
| Sites | 1 | `SST_BUILD_CONCURRENCY_SITE` |
| Functions | 4 | `SST_BUILD_CONCURRENCY_FUNCTION` |
| Containers | 1 | `SST_BUILD_CONCURRENCY_CONTAINER` |

So only one site is built at a time, 4 functions are built at a time, and only 1 container is built at a time.
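This kind of limiting can be pictured with a minimal sketch. `createLimiter` here is a hypothetical helper seeded with the defaults from the table, not SST's actual build code:

```typescript
// Minimal promise-based concurrency limiter. Hypothetical sketch,
// not SST's actual build pipeline.
function createLimiter(limit: number) {
  let active = 0;
  const waiters: (() => void)[] = [];
  return async function run<T>(task: () => Promise<T>): Promise<T> {
    // Wait for a free slot if we're at the limit.
    while (active >= limit) {
      await new Promise<void>((resolve) => waiters.push(resolve));
    }
    active++;
    try {
      return await task();
    } finally {
      active--;
      waiters.shift()?.(); // wake one queued build
    }
  };
}

// Defaults from the table above, overridable via the env vars.
const buildSite = createLimiter(Number(process.env.SST_BUILD_CONCURRENCY_SITE ?? 1));
const buildFunction = createLimiter(Number(process.env.SST_BUILD_CONCURRENCY_FUNCTION ?? 4));
const buildContainer = createLimiter(Number(process.env.SST_BUILD_CONCURRENCY_CONTAINER ?? 1));
```

So if many sites queue up, only one `buildSite` task runs at a time, while up to four `buildFunction` tasks can run concurrently.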
You can set the above environment variables to change this when you run `sst deploy`. This is useful for CI environments where you want to control this based on how much memory your CI machine has. For example, to build a maximum of 2 sites concurrently.

```bash frame="none"
SST_BUILD_CONCURRENCY_SITE=2 sst deploy
```

Or to configure all these together.

```bash frame="none"
SST_BUILD_CONCURRENCY_SITE=2 SST_BUILD_CONCURRENCY_CONTAINER=2 SST_BUILD_CONCURRENCY_FUNCTION=8 sst deploy
```

Typically, this command exits when there's an error deploying a resource. But sometimes you want to be able to `--continue` deploying as many resources as possible.

```bash frame="none"
sst deploy --continue
```

This is useful when deploying a new stage with a lot of resources. You want to be able to deploy as many resources as possible and then come back and fix the errors.

The `sst dev` command deploys your resources a little differently. It skips deploying resources that are going to be run locally. Sometimes you want to deploy a personal stage without starting `sst dev`.

```bash frame="none"
sst deploy --dev
```

The `--dev` flag will deploy your resources as if you were running `sst dev`.

### diff

```sh frame="none"
sst diff
```

#### Flags

- `target` `string` Only run it for the given component.
- `exclude` `string` Exclude the specified component from the operation.
- `dev` `boolean` Compare to the dev version of this stage.
- `policy` `string` Run policy pack validation against the preview changes.
- `json` `boolean` Output the diff result as JSON to stdout. Useful for CI pipelines and scripting.

Builds your app to see what changes will be made when you deploy it. It displays a list of resources that will be created, updated, or deleted. For each of these resources, it'll also show the properties that are changing.

:::tip
Run `sst diff` to see what changes will be made when you deploy your app.
:::

This is useful for cases when you pull some changes from a teammate and want to see what will be deployed, before doing the actual deploy.

Optionally, you can diff a specific component by passing in the name of the component from your `sst.config.ts`.

```bash frame="none"
sst diff --target MyComponent
```

Alternatively, exclude a specific component from the diff.

```bash frame="none"
sst diff --exclude MyComponent
```

By default, this compares to the last deploy of the given stage as it would be deployed using `sst deploy`. But if you are working in dev mode using `sst dev`, you can use the `--dev` flag.

```bash frame="none"
sst diff --dev
```

This is useful because in dev mode, your app is deployed a little differently.

### add

```sh frame="none"
sst add
```

#### Args

- `provider` The provider to add.

Adds and installs the given provider. For example,

```bash frame="none"
sst add aws
```

This command will:

1. Install the package for the AWS provider.
2. Add `aws` to the globals in your `sst.config.ts`.
3. And add it to your `providers`.

```ts title="sst.config.ts"
{
  providers: {
    aws: {
      package: "@pulumi/aws",
      version: "6.27.0"
    }
  }
}
```

You can use any provider listed in the [Directory](/docs/all-providers#directory).

:::note
Running `sst add aws` above is the same as manually adding the provider to your config and running `sst install`.
:::

By default, the latest version of the provider is installed. If you want to use a specific version, you can change it in your config.

```ts title="sst.config.ts"
{
  providers: {
    aws: {
      package: "@pulumi/aws",
      version: "6.26.0"
    }
  }
}
```

You'll need to run `sst install` if you update the `providers` in your config.

By default, these packages are fetched from the NPM registry. If you want to use a different registry, you can set the `NPM_REGISTRY` environment variable.

```bash frame="none"
NPM_REGISTRY=https://my-registry.com sst add aws
```

You can also set the registry in your `.npmrc` file.
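For example, an `.npmrc` pointing at a custom registry might look like this, using the same placeholder URL from above.

```ini
registry=https://my-registry.com
```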
If your registry requires authentication, SST supports `_authToken`, `_auth`, and `username`/`_password` from `.npmrc`.

### install

```sh frame="none"
sst install
```

Installs the providers in your `sst.config.ts`.

You'll need this command when:

1. You add a new provider to the `providers` or `home` in your config.
2. Or, when you want to install new providers after you `git pull` some changes.

:::tip
The `sst install` command is similar to `npm install`.
:::

Behind the scenes, it installs the packages for your providers and adds the providers to your globals. If you don't have a version specified for your providers in your `sst.config.ts`, it'll install their latest versions.

### secret

#### Flags

- `fallback` `boolean` Manage the fallback values of secrets.

#### Subcommands

- [`set`](#secret-set)
- [`remove`](#secret-remove)
- [`load`](#secret-load)
- [`list`](#secret-list)

Manage the secrets in your app defined with `sst.Secret`. The `--fallback` flag can be used to manage the fallback values of a secret. Applies to all the sub-commands in `sst secret`.

set

```sh frame="none"
sst secret set <name> [value]
```

#### Args

- `name` The name of the secret.
- `value` The value of the secret.

Set the value of the secret. The secrets are encrypted and stored in an S3 Bucket in your AWS account. They are also stored in the package of the functions using the secret.

:::tip
If you are not running `sst dev`, you'll need to `sst deploy` to apply the secret.
:::

For example, set the `sst.Secret` called `StripeSecret` to `dev_123456789`.

```bash frame="none"
sst secret set StripeSecret dev_123456789
```

Optionally, set the secret in a specific stage.

```bash frame="none"
sst secret set StripeSecret prod_123456789 --stage production
```

You can also set a _fallback_ value for a secret with `--fallback`.

```bash frame="none"
sst secret set StripeSecret dev_123456789 --fallback
```

So if the secret is not set for a specific stage, it'll use the fallback instead.
This only works for stages that are in the same AWS account.

:::tip
Set fallback values for your PR stages.
:::

This is useful for preview environments that are automatically deployed. You won't have to set the secret for the stage after it's deployed.

To set something like an RSA key, you can first save it to a file.

```bash frame="none"
cat > tmp.txt <<EOF
...
EOF
```

Then set the secret from the file.

```bash frame="none"
sst secret set StripeSecret "$(cat tmp.txt)"
```

remove

```sh frame="none"
sst secret remove <name>
```

#### Args

- `name` The name of the secret.

Remove a secret. For example, remove the `sst.Secret` called `StripeSecret`.

```bash frame="none"
sst secret remove StripeSecret
```

Optionally, remove a secret in a specific stage.

```bash frame="none"
sst secret remove StripeSecret --stage production
```

Remove the fallback value of the secret.

```bash frame="none"
sst secret remove StripeSecret --fallback
```

load

```sh frame="none"
sst secret load <file>
```

#### Args

- `file` The file to load the secrets from.

Load all the secrets from a file and set them.

```bash frame="none"
sst secret load ./secrets.env
```

The file needs to be in the _dotenv_ or bash format of key-value pairs.

```sh title="secrets.env"
KEY_1=VALUE1
KEY_2=VALUE2
```

Optionally, set the secrets in a specific stage.

```bash frame="none"
sst secret load --stage production ./prod.env
```

Set these secrets as _fallback_ values.

```bash frame="none"
sst secret load ./secrets.env --fallback
```

This command can be paired with the `secret list` command to get all the secrets from one stage and load them into another.

```bash frame="none"
sst secret list > ./secrets.env
sst secret load --stage production ./secrets.env
```

This works because `secret list` outputs the secrets in the right format.

list

```sh frame="none"
sst secret list
```

Lists all the secrets. Optionally, list the secrets in a specific stage.

```bash frame="none"
sst secret list --stage production
```

List only the fallback secrets.
```bash frame="none"
sst secret list --fallback
```

### shell

```sh frame="none"
sst shell [command]
```

#### Args

- `command?` A command to run.

#### Flags

- `target` `string` Only run it for the given component.

Run a command with **all the resources linked** to the environment. This is useful for running scripts against your infrastructure. For example, let's say you have the following resources in your app.

```js title="sst.config.ts"
new sst.aws.Bucket("MyMainBucket");

new sst.aws.Bucket("MyAdminBucket");
```

We can now write a script that can access both these resources with the [JS SDK](/docs/reference/sdk/).

```js title="my-script.js" "Resource.MyMainBucket.name" "Resource.MyAdminBucket.name"
import { Resource } from "sst";

console.log(Resource.MyMainBucket.name, Resource.MyAdminBucket.name);
```

And run the script with `sst shell`.

```bash frame="none"
sst shell node my-script.js
```

This'll have access to all the buckets from above.

:::tip
Run the command with `--` to pass arguments to it.
:::

To pass arguments into the script, you'll need to prefix it using `--`.

```bash frame="none" /--(?!a)/
sst shell -- node my-script.js --arg1 --arg2
```

If no command is passed in, it opens a shell session with the linked resources.

```bash frame="none"
sst shell
```

This is useful if you want to run multiple commands, all while accessing the resources in your app.

Optionally, you can run this for a specific component by passing in the name of the component.

```bash frame="none"
sst shell --target MyComponent
```

Here the linked resources for `MyComponent` and its environment variables are available.

### remove

```sh frame="none"
sst remove
```

#### Flags

- `target` `string` Only run it for the given component.

Removes your application. By default, it removes your personal stage.

:::tip
The resources in your app are removed based on the `removal` setting in your `sst.config.ts`.
:::

This does not remove the SST _state_ and _bootstrap_ resources in your account as these might still be in use by other apps. You can remove them manually if you want to reset your account, [learn more](/docs/state/#reset).

Optionally, remove your app from a specific stage.

```bash frame="none"
sst remove --stage production
```

You can also remove a specific component by passing in the name of the component from your `sst.config.ts`.

```bash frame="none"
sst remove --target MyComponent
```

### unlock

```sh frame="none"
sst unlock
```

When you run `sst deploy`, it acquires a lock on your state file to prevent concurrent deploys. However, if something unexpectedly kills the `sst deploy` process, or if you manage to run `sst deploy` concurrently, the lock might not be released.

This should not usually happen, but it can prevent you from deploying. You can run `sst unlock` to release the lock.

### version

```sh frame="none"
sst version
```

Prints the current version of the CLI.

### upgrade

```sh frame="none"
sst upgrade [version]
```

#### Args

- `version?` A version to upgrade to.

Upgrade the CLI to the latest version. Or optionally, pass in a version to upgrade to.

```bash frame="none"
sst upgrade 0.10
```

### telemetry

#### Subcommands

- [`enable`](#telemetry-enable)
- [`disable`](#telemetry-disable)

Manage telemetry settings.

SST collects completely anonymous telemetry data about general usage. We track:

- Version of SST in use
- Command invoked, `sst dev`, `sst deploy`, etc.
- General machine information, like the number of CPUs, OS, CI/CD environment, etc.

This is completely optional and can be disabled at any time. You can also opt out by setting an environment variable: `SST_TELEMETRY_DISABLED=1` or `DO_NOT_TRACK=1`.

enable

```sh frame="none"
sst telemetry enable
```

Enable telemetry.

disable

```sh frame="none"
sst telemetry disable
```

Disable telemetry.
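The environment variable opt-out can be sketched as a small check. `telemetryDisabled` is a hypothetical helper, not SST's actual implementation; the variable names are the ones documented above:

```typescript
// Hypothetical sketch of the telemetry opt-out check described above.
// Either documented variable disables telemetry.
function telemetryDisabled(env: Record<string, string | undefined>): boolean {
  return env.SST_TELEMETRY_DISABLED === "1" || env.DO_NOT_TRACK === "1";
}

// e.g. telemetryDisabled(process.env)
```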
### refresh ```sh frame="none" sst refresh ``` #### Flags - `target` `string` Only run it for the given component. - `exclude` `string` Exclude the specified component from the operation. - `dev` `boolean` Refresh the dev version of this stage. Compares your local state with the state of the resources in the cloud provider. Any changes that are found are adopted into your local state. It will: 1. Go through every single resource in your state. 2. Make a call to the cloud provider to check the resource. - If the configs are different, it'll **update the state** to reflect the change. - If the resource doesn't exist anymore, it'll **remove it from the state**. :::note The `sst refresh` command does not make changes to the resources in the cloud provider. ::: By default, this refreshes the stage as it would be deployed using `sst deploy`. If the stage was deployed using `sst dev`, use the `--dev` flag. ```bash frame="none" sst refresh --dev ``` You can also refresh a specific component by passing in the name of the component. ```bash frame="none" sst refresh --target MyComponent ``` Alternatively, exclude a specific component from the refresh. ```bash frame="none" sst refresh --exclude MyComponent ``` This is useful for cases where you want to ensure that your local state is in sync with your cloud provider. [Learn more about how state works](/docs/providers/#how-state-works). ### state #### Subcommands - [`edit`](#state-edit) - [`export`](#state-export) - [`list`](#state-list) - [`remove`](#state-remove) - [`repair`](#state-repair) Manage the state of your app. edit ```sh frame="none" sst state edit ``` Edit the raw state of your app directly. This opens your state file in your local editor (`$EDITOR`, or `vim` by default). When you save and exit, SST pushes those changes back to your backend. :::danger This command is dangerous. If you make an invalid change, you can corrupt your state and break deploys.
Only use this if you understand the state format and know exactly what you are changing. Consider using safer commands like `sst state remove` or `sst state repair` first. ::: export ```sh frame="none" sst state export ``` #### Flags - `decrypt` Decrypt the state before printing it out. Prints the state of your app. This pulls the state of your app from the cloud provider and then prints it out. You can write this to a file or view it directly in your terminal. This can be run for specific stages as well. ```bash frame="none" sst state export --stage production ``` By default, it runs on your personal stage. list ```sh frame="none" sst state list ``` Lists all the stages of your app for the current set of credentials. :::note This does not list the stages that are deployed in other accounts. ::: This pulls the state of your app from the cloud provider and then prints out all the stages that are listed in the state. remove ```sh frame="none" sst state remove ``` #### Args - `target` The name of the resource to remove. Removes the reference for the given resource from the state. :::note This does not remove the resource itself. ::: This does not remove the resource itself; it only edits the state of your app. ```bash frame="none" sst state remove MyBucket ``` Here, `MyBucket` is the name of the resource as defined in your `sst.config.ts`. ```ts title="sst.config.ts" new sst.aws.Bucket("MyBucket"); ``` This command will: 1. Find the resource with the given name in the state. 2. Remove that from the state. It does not remove the children of this resource. 3. Run a `repair` to remove any dependencies on this resource. You can run this for specific stages as well. ```bash frame="none" sst state remove MyBucket --stage production ``` By default, it runs on your personal stage. repair ```sh frame="none" sst state repair ``` Repairs the state of your app if it's corrupted.
Sometimes, if something goes wrong with your app, or if the state was directly edited, the state can become corrupted. This will cause your `sst deploy` command to fail. This command looks for the following issues and fixes them. 1. Since the state is a list of resources, if one resource depends on another, it needs to be listed after the one it depends on. This command finds resources that depend on each other but are not ordered correctly and **reorders them**. 2. If resource B depends on resource A, but resource A is not listed in the state, it'll **remove the dependency**. This command does this by going through all the resources in the state, fixing the issues and updating the state. You can run this for specific stages as well. ```bash frame="none" sst state repair --stage production ``` By default, it runs on your personal stage. ### cert ```sh frame="none" sst cert ``` Generate a locally-trusted certificate to connect to the Console. The Console can show you local logs from `sst dev` by connecting to your CLI. Certain browsers like Safari and Brave require the local connection to be running on HTTPS. This command uses [mkcert](https://github.com/FiloSottile/mkcert) internally to generate a locally-trusted certificate for `localhost` and `127.0.0.1`. You'll only need to do this once on your machine. ### tunnel #### Subcommands - [`install`](#tunnel-install) Start a tunnel. ```bash frame="none" sst tunnel ``` If your app has a VPC with `bastion` enabled, you can use this to connect to it. This will forward traffic from the following ranges over SSH: - `10.0.4.0/22` - `10.0.12.0/22` - `10.0.0.0/22` - `10.0.8.0/22` The tunnel allows your local machine to access resources that are in the VPC. :::note The tunnel is only available for apps that have a VPC with `bastion` enabled. ::: If you are running `sst dev`, this tunnel will be started automatically under the _Tunnel_ tab in the sidebar. :::tip This is automatically started when you run `sst dev`. 
::: You can start this manually if you want to connect to a different stage. ```bash frame="none" sst tunnel --stage production ``` This needs a network interface on your local machine. You can create this with the `sst tunnel install` command. install ```sh frame="none" sst tunnel install ``` Install the tunnel. To be able to create a tunnel, SST needs to create a network interface on your local machine. This needs _sudo_ access. ```bash sudo sst tunnel install ``` You only need to run this once on your machine. ### diagnostic ```sh frame="none" sst diagnostic ``` Generates a diagnostic report based on the last command that was run. This takes the state of your app, its log files, and generates a zip file in the `.sst/` directory. This is for debugging purposes. --- ## Config Reference doc for the `sst.config.ts`. https://sst.dev/docs/reference/config The `sst.config.ts` file is used to configure your SST app and its resources. ```ts $config(input: Config): Config ``` You specify it using the `$config` function. This takes an object of type [`Config`](#config). ```ts title="sst.config.ts" export default $config({ // Your app's config app(input) { return { name: "my-sst-app", home: "aws" }; }, // Your app's resources async run() { const bucket = new sst.aws.Bucket("MyBucket"); // Your app's outputs return { bucket: bucket.name }; }, // Optionally, your app's Console config console: { autodeploy: { runner: { compute: "large" } } } }); ``` The `Config` object takes: 1. [`app`](#app-2) — Your config 2. [`run`](#run) — Your resources 3. [`console`](#console) — Optionally, your app's Console config The `app` function is evaluated right when your app loads. It's used to define the app config and its providers. :::note You need TypeScript 5 to see the types in your config. ::: You can only add Pulumi code in the `run` function, not in the `app` function. The `run` function is where you define your resources using SST or Pulumi components.
The `run` function also has access to a list of [Global](/docs/reference/global/) `$` variables and functions. These serve as the context for your app config. :::caution Do not `import` the provider packages in your `sst.config.ts`. ::: Since SST manages importing your provider packages, it's recommended not to add any imports in your `sst.config.ts`. --- #### .env Your `.env` and `.env.<stage>` files are loaded as environment variables in your config. They need to be in the same directory as your `sst.config.ts`. ```bash title=".env" MY_ENV_VAR=hello ``` And are available as `process.env` in both your `app` and `run` functions. ```ts title="sst.config.ts" process.env.MY_ENV_VAR ``` The `.env` file takes precedence over `.env.<stage>`. So if you have a `.env` and a `.env.dev` file, the values in the `.env` file will be used. :::note You need to restart `sst dev` for changes in your `.env` files to take effect. ::: Make sure the stage name in your `.env.<stage>` matches the stage your app is running on. --- ## Config ### console? **Type** `Object` - [`autodeploy`](#console-autodeploy) `Object` - [`runner?`](#console-autodeploy-runner) - [`target?`](#console-autodeploy-target) - [`workflow?`](#console-autodeploy-workflow) Configure how your app works with the SST Console. autodeploy **Type** `Object` **Default** Auto-deploys branches and PRs. Auto-deploys your app when you _git push_ to your repo. Uses [AWS CodeBuild](https://aws.amazon.com/codebuild/) in your account to run the build. To get started, first [make sure to set up Autodeploy](/docs/console#setup). Specifically, you need to configure an environment with the stage and AWS account you want to auto-deploy to. Now when you _git push_ to a branch, pull request, or tag, the following happens: 1. The stage name is generated based on the `autodeploy.target` callback. 1. If there is no callback, the stage name is a sanitized version of the branch or tag. 2. If there is a callback but no stage is returned, the deploy is skipped. 2.
The runner config is generated based on the `autodeploy.runner`. Or the defaults are used. 3. The stage is matched against the environments in the Console to get the AWS account and any environment variables for the deploy. 4. The deploy is run based on the above config. This only applies to git events. If you trigger a deploy through the Console, you are asked to specify the stage you want to deploy to. So in this case, it skips step 1 from above and does not call `autodeploy.target`. You can further configure Autodeploy through the `autodeploy` prop. ```ts title="sst.config.ts" console: { autodeploy: { target(event) {}, // Customize the target stage runner(stage) {}, // Customize the runner async workflow({ $, event }) {} // Customize the workflow } } ``` Here, `target`, `runner`, and `workflow` are all optional and come with defaults, so you don't need to configure anything. But you can customize them. ```ts { autodeploy: { target(event) { if ( event.type === "branch" && event.branch === "main" && event.action === "pushed" ) { return { stage: "production" }; } }, runner(stage) { if (stage === "production") return { timeout: "3 hours" }; } } } ``` For example, here we are only auto-deploying to the `production` stage when you git push to the `main` branch. We are also setting the timeout to 3 hours for the `production` stage. You can read more about the `target` and `runner` props below. Finally, if you want to configure exactly what happens in the build, you can pass in a `workflow` function. ```ts { autodeploy: { async workflow({ $, event }) { await $`npm i -g pnpm`; await $`pnpm i`; event.action === "removed" ? await $`pnpm sst remove` : await $`pnpm sst deploy`; } } } ``` You can read more about the `workflow` prop below. runner? **Type** [`Runner`](#runner) | (input: [`RunnerInput`](#runnerinput)) => [`Runner`](#runner) Configure the runner that will run the build.
By default, it uses the following config: ```ts { runner: { engine: "codebuild", architecture: "x86_64", compute: "medium", timeout: "1 hour" } } ``` Most of these are optional and come with defaults. But you can configure them. ```ts { runner: { timeout: "3 hours" } } ``` You can also configure it based on the stage that's being deployed. Let's say you want to use the defaults for all stages except for `production`. ```ts { runner(stage) { if (stage === "production") return { timeout: "3 hours" }; } } ``` Aside from the above, you can also have the deploys run inside a VPC. ```ts { runner: { vpc: { id: "vpc-0be8fa4de860618bb", securityGroups: ["sg-0399348378a4c256c"], subnets: ["subnet-0b6a2b73896dc8c4c", "subnet-021389ebee680c2f0"] } } } ``` Or configure files or directories to be cached. ```ts { runner: { cache: { paths: ["node_modules", "/path/to/cache"] } } } ``` A _runner_ is an [AWS CodeBuild](https://aws.amazon.com/codebuild/) project and an IAM Role. This is created in **your account**. Once a runner is created, it can be used to run multiple builds of the same machine config concurrently. Runners are also shared across all apps in the same account and region. :::note You are only charged for the number of build minutes that you use. ::: If a runner with a given config has been previously created, it'll be reused. The Console will also automatically remove runners that have not been used for more than 7 days. You are not charged for the number of runners you have, only for the number of build minutes that you use. The pricing is based on the machine config used. [Learn more about CodeBuild pricing](https://aws.amazon.com/codebuild/pricing/). target? ```ts target(input) ``` **Parameters** - `input` [`BranchEvent`](#branchevent)` | `[`TagEvent`](#tagevent)` | `[`PullRequestEvent`](#pullrequestevent) **Returns** `undefined | `[`Target`](#target) Defines the stage or a list of stages the app will be auto-deployed to.
When a git event is received, Autodeploy will run the `target` function with the git event. This function should return the stage or a list of stages the app will be deployed to. Or `undefined` if the deploy should be skipped. :::tip Return `undefined` to skip the deploy. ::: The stage that is returned is then compared to the environments set in the [app settings in the Console](/docs/console/#setup). If the stage matches an environment, the stage will be deployed to that environment. If no matching environment is found, the deploy will be skipped. :::note You need to configure an environment in the Console to be able to deploy to it. ::: Currently, only git events for **branches**, **pull requests**, and **tags** are supported. :::tip This is not called when you manually trigger a deploy through the Console. ::: This config only applies to git events. If you trigger a deploy through the Console, you are asked to specify the stage you want to deploy to. In this case, and when you redeploy a manual deploy, the `target` function is not called. By default, this is what the `target` function looks like: ```ts { target(event) { if (event.type === "branch" && event.action === "pushed") { return { stage: event.branch .replace(/[^a-zA-Z0-9-]/g, "-") .replace(/-+/g, "-") .replace(/^-/g, "") .replace(/-$/g, "") }; } if (event.type === "pull_request") { return { stage: `pr-${event.number}` }; } } } ``` So for a: - **branch**: The stage name is a sanitized version of the branch name. When a branch is removed, the stage is **not removed**. - **pull request**: The stage name is `pr-<number>`. When a pull request is closed, the stage **is removed**. :::tip Git events to tags are not auto-deployed by default. ::: Git events to tags are not auto-deployed by default. You can change this by adding it to your config.
```ts { target(event) { if (event.type === "tag" && event.action === "pushed") { return { stage: "tag-" + event.tag .replace(/[^a-zA-Z0-9-]/g, "-") .replace(/-+/g, "-") .replace(/^-/g, "") .replace(/-$/g, "") }; } } } ``` Here, similar to the branch event, we are sanitizing the tag name to generate the stage. Just make sure to configure the environment for these tag stages in the Console. If you don't want to auto-deploy for a given event, you can return `undefined`. For example, to skip any deploys to the `staging` stage. ```ts {3} { target(event) { if (event.type === "branch" && event.branch === "staging") return; if ( event.type === "branch" && event.branch === "main" && event.action === "pushed" ) { return { stage: "production" }; } } } ``` workflow? ```ts workflow(input) ``` **Parameters** - `input` [`WorkflowInput`](#workflowinput) **Returns** `Promise<void>` Customize the commands that are run during the build process. This is useful for running tests, or completely customizing the build process. The default workflow automatically figures out the package manager you are using, installs the dependencies, and runs `sst deploy` or `sst remove` based on the event. For example, if you are using pnpm, the following is equivalent to the default workflow. ```ts { async workflow({ $, event }) { await $`npm i -g pnpm`; await $`pnpm i`; event.action === "removed" ? await $`pnpm sst remove` : await $`pnpm sst deploy`; } } ``` The workflow function is run inside a Bun process. It passes in `$` as the [Bun Shell](https://bun.sh/docs/runtime/shell). This makes _bash-like_ scripting easier. :::tip Use the Bun Shell to make running commands easier. ::: For example, here's how you can run tests before deploying. ```ts {5} { async workflow({ $, event }) { await $`npm i -g pnpm`; await $`pnpm i`; await $`pnpm test`; event.action === "removed" ?
await $`pnpm sst remove` : await $`pnpm sst deploy`; } } ``` When you pass in a `workflow`, you are effectively taking control of what runs in your build. :::caution If you don't run `sst deploy`, your app won't be deployed. ::: This means that if you don't run `sst deploy`, your app won't be deployed. :::tip Throwing an error will fail the build and display the error in the Console. ::: If you throw an error in the workflow, the deploy will fail and the error will be displayed in the Autodeploy logs. Here's a more detailed example of using the Bun Shell to handle failures. ```ts {6,9} { async workflow({ $, event }) { await $`npm i -g pnpm`; await $`pnpm i`; const { exitCode } = await $`pnpm test`.nothrow(); if (exitCode !== 0) { // Process the test report and then fail the build throw new Error("Failed to run tests"); } event.action === "removed" ? await $`pnpm sst remove` : await $`pnpm sst deploy`; } } ``` You'll notice we are not passing in `--stage` to the SST commands. This is because the `SST_STAGE` environment variable is already set in the build process. :::tip You don't need to pass in `--stage` to the SST commands. ::: The build process is run inside an [Amazon Linux 2](https://aws.amazon.com/amazon-linux-2/) machine based on the `architecture` used. ### app ```ts app(input) ``` #### Parameters - `input` [`AppInput`](#appinput) **Returns** [`App`](#app)` | Promise<`[`App`](#app)`>` The config for your app. It needs to return an object of type [`App`](#app-1). The `app` function is evaluated when your app loads. :::caution You cannot define any components or resources in the `app` function. ::: Here's an example of a simple `app` function. ```ts title="sst.config.ts" app(input) { return { name: "my-sst-app", home: "aws", providers: { aws: true, cloudflare: { accountId: "6fef9ed9089bb15de3e4198618385de2" } }, removal: input.stage === "production" ? 
"retain" : "remove" }; }, ``` ### run ```ts run() ``` **Returns** `Promise<Record<string, any> | void>` An async function that lets you define the resources in your app. :::note You can use SST and Pulumi components only in the `run` function. ::: You can optionally return an object that'll be displayed as the output in the CLI. For example, here we return the name of the bucket we created. ```ts title="sst.config.ts" async run() { const bucket = new sst.aws.Bucket("MyBucket"); return { bucket: bucket.name }; } ``` This will display the following in the CLI on `sst deploy` and `sst dev`. ```bash frame="none" bucket: bucket-jOaikGu4rla ``` These outputs are also written to a `.sst/outputs.json` file after every successful deploy. It contains the above outputs in JSON. ```json title=".sst/outputs.json" {"bucket": "bucket-jOaikGu4rla"} ``` ## App ### home **Type** `"aws" | "cloudflare" | "local"` The provider SST will use to store the state for your app. The state keeps track of all your resources and secrets. The state is generated locally and backed up in your cloud provider. Currently supports AWS, Cloudflare and local. :::tip SST uses the `home` provider to store the state for your app. If you use the local provider it will be saved on your machine. You can see where by running `sst version`. ::: If you want to configure the aws or cloudflare home provider, you can: ```ts { home: "aws", providers: { aws: { region: "us-west-2" } } } ``` ### name **Type** `string` The name of the app. This is used to prefix the names of the resources in your app. :::caution If you change the name of your app, it'll redeploy your app with new resources. The old resources will be orphaned. ::: This means you shouldn't change the name of your app without first removing the old resources. ```ts { name: "my-sst-app" } ``` ### protect? **Type** `boolean` If set to `true`, the `sst remove` CLI will not run and will error out.
This is useful for preventing cases where you run `sst remove --stage <stage>` for the wrong stage. :::tip Protect your production stages from being accidentally removed. ::: For example, prevent the _production_ stage from being removed. ```ts { protect: input.stage === "production" } ``` However, this only applies to `sst remove` for stages. If you accidentally remove a resource from the `sst.config.ts` and run `sst deploy` or `sst dev`, it'll still get removed. To avoid this, check out the `removal` prop. ### providers? **Type** `Record<string, any>` **Default** The `home` provider. The providers that are being used in this app. This allows you to use the resources from these providers in your app. ```ts { providers: { aws: "6.27.0", cloudflare: "5.37.1" } } ``` Check out the full list in the [Directory](/docs/all-providers#directory). :::tip You'll need to run `sst install` after you update the `providers` in your config. ::: If you don't set a `provider` it uses your `home` provider with the default config. So if you set `home` to `aws`, it's the same as doing: ```ts { home: "aws", providers: { aws: "6.27.0" } } ``` You can also configure the provider props. Here's the config for some common providers: - [AWS](https://www.pulumi.com/registry/packages/aws/api-docs/provider/#inputs) - [Cloudflare](https://www.pulumi.com/registry/packages/cloudflare/api-docs/provider/#inputs) For example, to change the region for AWS. ```ts { providers: { aws: { region: "us-west-2" } } } ``` ### removal? **Type** `"remove" | "retain" | "retain-all"` **Default** `"retain"` Configure how your resources are handled when they have to be removed. - `remove`: Removes the underlying resource. - `retain`: Retains resources like S3 buckets and DynamoDB tables. Removes everything else. - `retain-all`: Retains all resources. :::tip If you change your removal policy, you'll need to deploy your app once for it to take effect.
::: For example, retain resources if it's the _production_ stage, otherwise remove all resources. ```ts { removal: input.stage === "production" ? "retain" : "remove" } ``` This applies to not just the `sst remove` command but also cases where you remove a resource from the `sst.config.ts` and run `sst dev` or `sst deploy`. To control how a stage is handled on `sst remove`, check out the `protect` prop. ### version? **Type** `string` **Default** The latest version of SST. The version of SST supported by the app. The CLI will fail any commands if the version does not match. :::tip Useful in CI where you don't want it to automatically deploy with a new version of SST. ::: Takes a specific version. ```ts version: "3.2.49" ``` Also supports semver ranges. ```ts version: ">= 3.2.49" ``` ### watch? **Type** `string[]` Configure which directories should be watched for changes when running `sst dev`. By default, all directories are watched (except node_modules and hidden directories). ```ts { watch: ["packages/www", "packages/api"] } ``` This will only watch the `packages/www` and `packages/api` directories. The paths are relative to the project root. ## AppInput ### stage **Type** `string` The stage this app is running on. This is a string that can be passed in through the CLI. :::caution Changing the stage will redeploy your app to a new stage with new resources. The old resources will still be around in the old stage. ::: If not passed in, it'll use the username of your local machine, or prompt you for it. ## BranchEvent A git event for when a branch is updated or deleted. For example: ```js { type: "branch", action: "pushed", repo: { id: 1296269, owner: "octocat", repo: "Hello-World" }, branch: "main", commit: { id: "b7e7c4c559e0e5b4bc6f8d98e0e5e5e5e5e5e5e5", message: "Update the README with new information" }, sender: { id: 1, username: "octocat" } } ``` ### action **Type** `"pushed" | "removed"` The type of the git action. 
- `pushed` is when you git push to a branch - `removed` is when a branch is removed ### branch **Type** `string` The name of the branch the event is coming from. ### commit **Type** `Object` - [`id`](#commit-id) - [`message`](#commit-message) Info about the commit in the event. This might look like: ```js { id: "b7e7c4c559e0e5b4bc6f8d98e0e5e5e5e5e5e5e5", message: "Update the README with new information" } ``` id **Type** `string` The ID of the commit. message **Type** `string` The commit message. ### repo **Type** `Object` - [`id`](#repo-id) - [`owner`](#repo-owner) - [`repo`](#repo-repo) The Git repository the event is coming from. This might look like: ```js { id: 1296269, owner: "octocat", repo: "Hello-World" } ``` id **Type** `number` The ID of the repo. This is usually a number. owner **Type** `string` The name of the owner or org that the repo belongs to. repo **Type** `string` The name of the repo. ### sender **Type** `Object` - [`id`](#sender-id) - [`username`](#sender-username) The user that generated the event. For example: ```js { id: 1, username: "octocat" } ``` id **Type** `number` The ID of the user. username **Type** `string` The username of the user. ### type **Type** `"branch"` The git event type, for the `BranchEvent` it's `branch`. ## PullRequestEvent A git event for when a pull request is updated or deleted. For example: ```js { type: "pull_request", action: "pushed", repo: { id: 1296269, owner: "octocat", repo: "Hello-World" }, number: 1347, base: "main", head: "feature", commit: { id: "b7e7c4c559e0e5b4bc6f8d98e0e5e5e5e5e5e5e5", message: "Update the README with new information" }, sender: { id: 1, username: "octocat" } } ``` ### action **Type** `"pushed" | "removed"` The type of the git action. - `pushed` is when you git push to the base branch of the PR - `removed` is when the PR is closed or merged ### base **Type** `string` The base branch of the PR. This is the branch the code is being merged into.
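As a sketch of how these fields can be used together, here's a hypothetical `target` callback that only auto-deploys pull requests opened against `main`, using the `base` and `number` fields from this event. ```ts { target(event) { // Only deploy PRs that are being merged into main. if (event.type === "pull_request" && event.base === "main") { return { stage: `pr-${event.number}` }; } // Returning undefined skips the deploy. } } ```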
### commit **Type** `Object` - [`id`](#commit-id-1) - [`message`](#commit-message-1) Info about the commit in the event. This might look like: ```js { id: "b7e7c4c559e0e5b4bc6f8d98e0e5e5e5e5e5e5e5", message: "Update the README with new information" } ``` id **Type** `string` The ID of the commit. message **Type** `string` The commit message. ### head **Type** `string` The head branch of the PR. This is the branch the code is coming from. ### number **Type** `number` The pull request number. ### repo **Type** `Object` - [`id`](#repo-id-1) - [`owner`](#repo-owner-1) - [`repo`](#repo-repo-1) The Git repository the event is coming from. This might look like: ```js { id: 1296269, owner: "octocat", repo: "Hello-World" } ``` id **Type** `number` The ID of the repo. This is usually a number. owner **Type** `string` The name of the owner or org that the repo belongs to. repo **Type** `string` The name of the repo. ### sender **Type** `Object` - [`id`](#sender-id-1) - [`username`](#sender-username-1) The user that generated the event. For example: ```js { id: 1, username: "octocat" } ``` id **Type** `number` The ID of the user. username **Type** `string` The username of the user. ### title **Type** `string` The title of the pull request. ### type **Type** `"pull_request"` The git event type, for the `PullRequestEvent` it's `pull_request`. ## Runner ### architecture? **Type** `"x86_64" | "arm64"` **Default** `x86_64` The architecture of the build machine. The `x86_64` machine uses the [`al/standard/5.0`](https://github.com/aws/aws-codebuild-docker-images/tree/master/al/x86_64/standard/5.0) build image. While `arm64` uses the [`al/aarch64/standard/3.0`](https://github.com/aws/aws-codebuild-docker-images/tree/master/al/aarch64/standard/3.0) image instead. You can also configure what's used in the image: - **Node** To specify the version of Node you want to use in your build, you can use the `.node-version`, `.nvmrc`, or use the `engines` field in your `package.json`.
**package.json** ```js title="package.json" { engines: { node: "20.15.1" } } ``` **node-version** ```bash title=".node-version" 20.15.1 ``` **nvmrc** ```bash title=".nvmrc" 20.15.1 ``` - **Package manager** To specify the package manager you want to use, you can configure it through your `package.json`. **pnpm** ```js title="package.json" { packageManager: "pnpm@8.6.3" } ``` **bun** ```js title="package.json" { packageManager: "bun@1.2.0" } ``` Feel free to get in touch if you want to use your own build image or configure what's used in the build image. ### cache? **Type** `Object` - [`paths`](#cache-paths) Paths to cache as a part of the build. By default the `.git` directory is cached. The given list of files and directories will be saved to the cache at the end of the build. And they will be restored at the start of the build process. ```ts { cache: { paths: ["node_modules", "/path/to/cache"] } } ``` The relative paths are for caching files inside your repo, while absolute paths are for any global caches. To clear the cache, you can trigger a new deploy using the **Force** deploy option in the Console. paths **Type** `string[]` The paths to cache. These are relative to the root of the repository. By default, the `.git` directory is always cached. ### compute? **Type** `"small" | "medium" | "large" | "xlarge" | "2xlarge"` **Default** `medium` The compute size of the build environment. For `x86_64`, the following compute sizes are supported: - `small`: 3 GB, 2 vCPUs - `medium`: 7 GB, 4 vCPUs - `large`: 15 GB, 8 vCPUs - `xlarge`: 70 GB, 36 vCPUs - `2xlarge`: 145 GB, 72 vCPUs For `arm64` architecture, the following compute sizes are supported: - `small`: 4 GB, 2 vCPUs - `medium`: 8 GB, 4 vCPUs - `large`: 16 GB, 8 vCPUs - `xlarge`: 64 GB, 32 vCPUs - `2xlarge`: 96 GB, 48 vCPUs To increase the memory used by your Node.js process in the build environment, you'll want to set the `NODE_OPTIONS` environment variable to `--max-old-space-size=xyz`.
Where `xyz` is the memory size in MB. By default, this is set to 1.5 GB. Read more about the [CodeBuild build environments](https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-compute-types.html). ### engine **Type** `"codebuild"` The service used to run the build. Currently, only AWS CodeBuild is supported. ### timeout? **Type** `"${number} minute" | "${number} minutes" | "${number} hour" | "${number} hours"` **Default** `1 hour` The timeout for the build. It can be from `5 minutes` to `36 hours`. ### vpc? **Type** `Object` - [`id`](#vpc-id) - [`securityGroups`](#vpc-securitygroups) - [`subnets`](#vpc-subnets) The VPC to run the build in. If provided, the build environment will have access to resources in the VPC. This is useful for building Next.js apps that might make queries to your database as a part of the build process. You can get these from the outputs of the `Vpc` component you are using or from the [Console](/docs/console/#resources). ```ts { vpc: { id: "vpc-0be8fa4de860618bb", subnets: ["subnet-0be8fa4de860618bb"], securityGroups: ["sg-0be8fa4de860618bb"] } } ``` id **Type** `string` The ID of the VPC. securityGroups **Type** `string[]` The security groups to run the build in. subnets **Type** `string[]` The subnets to run the build in. ## RunnerInput ### stage **Type** `string` The stage the deployment will be run in. ## TagEvent A git event for when a tag is created or deleted. For example: ```js { type: "tag", action: "pushed", repo: { id: 1296269, owner: "octocat", repo: "Hello-World" }, tag: "v1.5.2", commit: { id: "b7e7c4c559e0e5b4bc6f8d98e0e5e5e5e5e5e5e5", message: "Update the README with new information" }, sender: { id: 1, username: "octocat" } } ``` ### action **Type** `"pushed" | "removed"` The type of the git action. - `pushed` is when you create a tag - `removed` is when a tag is removed ### commit **Type** `Object` - [`id`](#commit-id-2) - [`message`](#commit-message-2) Info about the commit in the event.
This might look like:

```js
{
  id: "b7e7c4c559e0e5b4bc6f8d98e0e5e5e5e5e5e5e5",
  message: "Update the README with new information"
}
```

id

**Type** `string`

The ID of the commit.

message

**Type** `string`

The commit message.

### repo

**Type** `Object`
- [`id`](#repo-id-2)
- [`owner`](#repo-owner-2)
- [`repo`](#repo-repo-2)

The Git repository the event is coming from. This might look like:

```js
{
  id: 1296269,
  owner: "octocat",
  repo: "Hello-World"
}
```

id

**Type** `number`

The ID of the repo. This is usually a number.

owner

**Type** `string`

The name of the owner or org the repo belongs to.

repo

**Type** `string`

The name of the repo.

### sender

**Type** `Object`
- [`id`](#sender-id-2)
- [`username`](#sender-username-2)

The user that generated the event. For example:

```js
{
  id: 1,
  username: "octocat"
}
```

id

**Type** `number`

The ID of the user.

username

**Type** `string`

The username of the user.

### tag

**Type** `string`

The name of the tag. For example, `v1.5.2`.

### type

**Type** `"tag"`

The git event type, for the `TagEvent` it's `tag`.

## Target

### stage

**Type** `string | string[]`

The stage or a list of stages the app will be deployed to.

## UserEvent

A user event for when the user manually triggers a deploy. For example:

```js
{
  type: "user",
  action: "deploy",
  repo: {
    id: 1296269,
    owner: "octocat",
    repo: "Hello-World"
  },
  ref: "main",
  commit: {
    id: "b7e7c4c559e0e5b4bc6f8d98e0e5e5e5e5e5e5e5",
    message: "Update the README with new information"
  }
}
```

### action

**Type** `"remove" | "deploy"`

The type of the user action.

- `deploy` is when you manually trigger a deploy
- `remove` is when you manually remove a stage

### commit

**Type** `Object`
- [`id`](#commit-id-3)
- [`message`](#commit-message-3)

Info about the commit in the event. This might look like:

```js
{
  id: "b7e7c4c559e0e5b4bc6f8d98e0e5e5e5e5e5e5e5",
  message: "Update the README with new information"
}
```

id

**Type** `string`

The ID of the commit.

message

**Type** `string`

The commit message.
### ref

**Type** `string`

The reference to the Git commit. This can be the branch, tag, or commit hash.

### repo

**Type** `Object`
- [`id`](#repo-id-3)
- [`owner`](#repo-owner-3)
- [`repo`](#repo-repo-3)

The Git repository the event is coming from. This might look like:

```js
{
  id: 1296269,
  owner: "octocat",
  repo: "Hello-World"
}
```

id

**Type** `number`

The ID of the repo. This is usually a number.

owner

**Type** `string`

The name of the owner or org the repo belongs to.

repo

**Type** `string`

The name of the repo.

### type

**Type** `"user"`

The user event type.

## WorkflowInput

### $

**Type** [`Bun Shell`](https://bun.sh/docs/runtime/shell)

The [Bun shell](https://bun.sh/docs/runtime/shell). It's a cross-platform _bash-like_ shell for scripting with JavaScript and TypeScript.

### event

**Type** [`BranchEvent`](#branchevent)` | `[`TagEvent`](#tagevent)` | `[`PullRequestEvent`](#pullrequestevent)` | `[`UserEvent`](#userevent)

The event that triggered the workflow. This includes git branch, pull request, or tag events. And it also includes a user event for manual deploys that are triggered through the Console.

---

## Global

Reference doc for the Global `$` library.

https://sst.dev/docs/reference/global

The Global library is a collection of `$` functions and variables that are available in the `run` function of your [`sst.config.ts`](/docs/reference/config/).

You don't need to import the Global library. It's available in the `run` function of your `sst.config.ts`.

:::note
The Global library is only available in the `run` function of your `sst.config.ts`.
:::

For example, you can get the name of your app in your app config using `$app.name`.

```ts title="sst.config.ts" {4}
export default $config({
  // ...
  async run() {
    console.log($app.name);
  }
});
```

The **variables** contain the context of the app that's being run. While the **functions** help you work with the [Outputs of components](/docs/components#inputs--outputs).
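To see how the two fit together, here's a quick sketch of a `run` function that reads a **variable** (`$app.stage`) and applies a **function** (`$interpolate`) to a component's Output; the bucket and the returned output name are just examples:

```ts title="sst.config.ts"
async run() {
  // Variable: the stage the CLI is currently running against
  const stage = $app.stage;

  // A component; `bucket.name` is an Output<string>, not a plain string
  const bucket = new sst.aws.Bucket("MyBucket");

  // Function: interpolate the Output into a string without calling .apply()
  return {
    description: $interpolate`Bucket for ${stage}: ${bucket.name}`,
  };
}
```

Both are covered in detail in the sections that follow.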
---

## Variables

### $app

**Type** `Object`
- [`name`](#app-name)
- [`protect`](#app-protect)
- [`providers`](#app-providers)
- [`removal`](#app-removal)
- [`stage`](#app-stage)

Context about the app being run.

name

**Type** `string`

The name of the current app.

protect

**Type** `boolean`

If true, prevents `sst remove` from being executed on this stage.

providers

**Type** `undefined | Record`

The providers currently being used in the app.

removal

**Type** `"remove" | "retain" | "retain-all"`

The removal policy for the current stage. If `removal` was not set in the `sst.config.ts`, this will return its default value, `retain`.

stage

**Type** `string`

The stage currently being run. You can use this to conditionally deploy resources based on the stage. For example, to deploy a bucket only in the `dev` stage:

```ts title="sst.config.ts"
if ($app.stage === "dev") {
  new sst.aws.Bucket("MyBucket");
}
```

### $dev

**Type** `boolean`

Returns `true` if the app is running in `sst dev`.

### $util

**Type** [`@pulumi/pulumi`](https://www.pulumi.com/docs/reference/pkg/nodejs/pulumi/pulumi/)

A convenience reference to the [`util`](https://www.pulumi.com/docs/reference/pkg/nodejs/pulumi/pulumi/) module from Pulumi. This is useful for working with components. You can use these without importing or installing the Pulumi SDK. For example, to create a new asset, you can:

```ts title="sst.config.ts"
const myFiles = new $util.asset.FileArchive("./path/to/files");
```

This is equivalent to doing:

```ts title="sst.config.ts"
const myFiles = new pulumi.asset.FileArchive("./path/to/files");
```

## Functions

### $asset

```ts
$asset(assetPath)
```

#### Parameters
- `assetPath` `string`

**Returns** [`FileArchive`](https://www.pulumi.com/docs/iac/concepts/assets-archives/#archives)` | `[`FileAsset`](https://www.pulumi.com/docs/iac/concepts/assets-archives/#assets)

Packages a file or directory into a Pulumi asset. This can be used for Pulumi resources that take an asset as input.
When the given path is a file, it returns a [`FileAsset`](https://www.pulumi.com/docs/iac/concepts/assets-archives/#assets). If the path is a directory, it returns a [`FileArchive`](https://www.pulumi.com/docs/iac/concepts/assets-archives/#archives) with the zipped contents of the directory.

:::tip
This automatically resolves paths relative to the root of the app.
:::

Relative paths are resolved relative to the root of the app, while absolute paths are used as is.

If you have a file inside the `images` directory at the root of your app, you can upload it to S3 on deploy.

```ts title="sst.config.ts" {7}
const bucket = new aws.s3.Bucket("MyBucket");

new aws.s3.BucketObjectv2("MyImage", {
  bucket: bucket.name,
  key: "public/spongebob.svg",
  contentType: "image/svg+xml",
  source: $asset("images/spongebob.svg"),
});
```

You can also use this to zip up the files in the `files/` directory and upload it to S3.

```ts title="sst.config.ts" {5}
new aws.s3.BucketObjectv2("MyZip", {
  bucket: bucket.name,
  key: "public/spongebob.zip",
  contentType: "application/zip",
  source: $asset("files"),
});
```

### $concat

```ts
$concat(params)
```

#### Parameters
- `params` `any[]`

**Returns** `Output`

Takes a sequence of Output values or plain JavaScript values, stringifies each, and concatenates them into one final string. This takes care of resolving the Output values for you. Say you had a bucket:

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");
```

Instead of having to resolve the bucket name first:

```ts title="sst.config.ts"
const description = bucket.name.apply(name =>
  "This is a bucket named ".concat(name)
);
```

You can directly do this:

```ts title="sst.config.ts"
const description = $concat("This is a bucket named ", bucket.name);
```

### $interpolate

```ts
$interpolate(literals, placeholders)
```

#### Parameters
- `literals` `TemplateStringsArray`
- `placeholders` `any[]`

**Returns** `Output`

Use string interpolation on Output values.
This takes care of resolving the Output values for you. Say you had a bucket:

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");
```

Instead of resolving the bucket name first:

```ts title="sst.config.ts"
const description = bucket.name.apply(name => `This is a bucket named ${name}`);
```

You can directly do this:

```ts title="sst.config.ts"
const description = $interpolate`This is a bucket named ${bucket.name}`;
```

### $jsonParse

```ts
$jsonParse(text, reviver?)
```

#### Parameters
- `text` `Input<string>`
- `reviver?` [`JSON.parse reviver`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#reviver)

**Returns** `Output`

Takes an Output value or plain JavaScript value, uses `JSON.parse` on the resolved JSON string to turn it into a JSON object.

So for example, instead of resolving the value first:

```ts title="sst.config.ts"
const policy = policyStr.apply((policy) =>
  JSON.parse(policy)
);
```

You can directly do this:

```ts title="sst.config.ts"
const policy = $jsonParse(policyStr);
```

### $jsonStringify

```ts
$jsonStringify(obj, replacer?, space?)
```

#### Parameters
- `obj` `any`
- `replacer?` [`JSON.stringify replacer`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify#replacer)
- `space?` `string | number`

**Returns** `Output`

Takes an Output value or plain JSON object, uses `JSON.stringify` on the resolved JSON object to turn it into a JSON string.

So for example, instead of resolving the value first:

```ts title="sst.config.ts"
const policy = policyObj.apply((policy) =>
  JSON.stringify(policy)
);
```

You can directly do this:

```ts title="sst.config.ts"
const policy = $jsonStringify(policyObj);
```

### $resolve

```ts
$resolve(val)
```

#### Parameters
- `val` `Record<string, Input<T>>`

**Returns** `Output<Record<string, T>>`

Wait for a list of Output values to be resolved, and then apply a function to their resolved values.
Say you had a couple of S3 Buckets:

```ts title="sst.config.ts"
const bucket1 = new sst.aws.Bucket("MyBucket1");
const bucket2 = new sst.aws.Bucket("MyBucket2");
```

You can run a function after both of them are resolved:

```ts title="sst.config.ts"
$resolve([bucket1.name, bucket2.name]).apply(([value1, value2]) =>
  console.log({ value1, value2 })
);
```

### $transform

```ts
$transform(resource, cb)
```

#### Parameters
- `resource` `Component Class`
- `cb` `(args, opts, name) => void`

**Returns** `void`

Register a function that'll be called when a component of the given type is about to be created. This is useful for setting global defaults for your components.

:::note
This function is only called for components that are created **after** the function is registered.
:::

The function takes the arguments and options that are being passed to the component, and can modify them. For example, to set a default runtime for all function components.

```ts title="sst.config.ts"
$transform(sst.aws.Function, (args, opts, name) => {
  // Set the default if it's not set by the component
  args.runtime ??= "nodejs24.x";
});
```

Here, `args`, `opts` and `name` are what you'd pass to the `Function` component. Recall the signature of the `Function` component:

```ts title="sst.config.ts"
new sst.aws.Function(name: string, args: FunctionArgs, opts?: pulumi.ComponentResourceOptions)
```

## AWS

### iamEdit

```ts
iamEdit(policy, cb)
```

#### Parameters
- `policy` `Input<string>`
- `cb` `(doc: Object) => void`

**Returns** `Output`

A helper to modify the AWS IAM policy. The IAM policy document is normally in the form of a JSON string. This helper decodes the string into a JSON object and passes it to the callback, allowing you to modify the policy document in a type-safe way. For example, this comes in handy when you are transforming the policy of a component.
```ts title="sst.config.ts"
new sst.aws.Bucket("MyBucket", {
  transform: {
    policy: (args) => {
      args.policy = sst.aws.iamEdit(args.policy, (policy) => {
        policy.Statement.push({
          Effect: "Allow",
          Action: "s3:PutObject",
          Principal: { Service: "ses.amazonaws.com" },
          Resource: $interpolate`arn:aws:s3:::${args.bucket}/*`,
        });
      });
    },
  },
});
```

---

## SDK

Interact with your infrastructure in your runtime code.

https://sst.dev/docs/reference/sdk

The SST SDK allows your runtime code to interact with your infrastructure in a typesafe way. You can use the SDK in your **functions**, **frontends**, and **container applications**. You can access links from components. And some components come with SDK clients that you can use.

:::tip
Check out the _SDK_ section in a component's API reference doc.
:::

Currently, the SDK is only available for JS/TS, Python, Golang, and Rust. Support for other languages is on the roadmap.

---

## Node.js

The JS SDK is an [npm package](https://www.npmjs.com/package/sst) that you can install in your functions, frontends, or container applications.

```bash
npm install sst
```

---

### Links

Import `Resource` to access the linked resources.

```js title="src/lambda.ts"
import { Resource } from "sst";

console.log(Resource.MyBucket.name);
```

:::tip
The `Resource` object is typesafe and will autocomplete the available resources in your editor.
:::

Here we are assuming that a bucket has been linked to the function. Here's what that could look like.

```js title="sst.config.ts" {5}
const bucket = new sst.aws.Bucket("MyBucket");

new sst.aws.Function("MyFunction", {
  handler: "src/lambda.handler",
  link: [bucket]
});
```

---

#### Defaults

By default, the `Resource` object contains `Resource.App`. This gives you some info about the current app including:

- `App.name`: The name of your SST app.
- `App.stage`: The current stage of your SST app.
```ts title="src/lambda.ts"
console.log(Resource.App.name, Resource.App.stage);
```

---

### Clients

Components like the [`Realtime`](/docs/component/aws/realtime/) component come with a client that you can use.

```ts title="src/lambda.ts"
import { realtime } from "sst/aws/realtime";

export const handler = realtime.authorizer(async (token) => {
  // Validate the token
});
```

For example, `realtime.authorizer` lets you create the handler for the authorizer function that `Realtime` needs.

---

### How it works

In the above example, `Resource.MyBucket.name` works because it's been injected into the function package on `sst dev` and `sst deploy`. For functions, this is injected into the [`globalThis`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/globalThis) using [esbuild](https://esbuild.github.io/) and for frontends, it's injected into the `process.env` object.

The JS SDK first checks the `process.env` and then the `globalThis` for the linked resources. You can [read more about how the links are injected](/docs/linking/#injecting-links).

---

## Python

SST uses [uv](https://docs.astral.sh/uv/) to package your Python functions. Your functions need to be in a [uv workspace](https://docs.astral.sh/uv/concepts/projects/workspaces/#workspace-sources).

To use the SDK, add it to your `pyproject.toml`.

```toml title="functions/pyproject.toml"
[tool.uv.sources]
sst = { git = "https://github.com/sst/sst.git", subdirectory = "sdk/python", branch = "dev" }
```

And in your function, import `Resource` and access the linked resource.

```py title="functions/src/functions/api.py" {1}
from sst import Resource

def handler(event, context):
    print(Resource.MyBucket.name)
```

Here `MyBucket` is the name of a bucket that's linked to the function.

```ts title="sst.config.ts" {6}
const bucket = new sst.aws.Bucket("MyBucket");

new sst.aws.Function("MyFunction", {
  handler: "functions/src/functions/api.handler",
  runtime: "python3.11",
  link: [bucket]
});
```

Client functions are currently **not supported** in the Python SDK.
---

## Golang

Use the SST Go SDK package in your Golang functions or container applications.

```go title="src/main.go"
import (
  "github.com/sst/sst/v3/sdk/golang/resource"
)
```

In your runtime code, use the `resource.Get` function to access the linked resources.

```go title="src/main.go"
resource.Get("MyBucket", "name")
```

Where `MyBucket` is the name of a bucket that's linked to the function.

```js title="sst.config.ts" {5}
const bucket = new sst.aws.Bucket("MyBucket");

new sst.aws.Function("MyFunction", {
  handler: "./src",
  link: [bucket]
});
```

You can also access the current app's info with:

```go title="src/main.go"
resource.Get("App", "name")
resource.Get("App", "stage")
```

Client functions are currently **not supported** in the Go SDK.

---

## Rust

Use the SST Rust SDK package in your Rust functions or container applications.

```bash
cargo add sst_sdk
```

In your runtime, use the `Resource::get()` function to access linked resources as a typesafe struct, or a `serde_json::Value`.

```rust title="main.rs"
use sst_sdk::Resource;

#[derive(serde::Deserialize, Debug)]
struct Bucket {
    name: String,
}

fn main() {
    let resource = Resource::init().unwrap();
    // access your linked resources as a typesafe struct that implements Deserialize
    let Bucket { name } = resource.get("MyBucket").unwrap();
    // or as a weakly typed json value (that also implements Deserialize)
    let openai_key: serde_json::Value = resource.get("OpenaiSecret").unwrap();
}
```

Where `MyBucket` and `OpenaiSecret` are linked to the function.

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");
const openai = new sst.Secret("OpenaiSecret");

new sst.aws.Function("MyFunction", {
  handler: "./",
  link: [bucket, openai],
  runtime: "rust"
});
```

Client functions are currently **not supported** in the Rust SDK.

---

## Set up a Monorepo

A TypeScript monorepo setup for your app.
https://sst.dev/docs/set-up-a-monorepo

While [drop-in mode](/docs/#drop-in-mode) is great for simple projects, we recommend using a monorepo for projects that are going to have multiple packages.

:::tip
We created a [monorepo template](https://github.com/sst/monorepo-template/tree/main) for your SST projects.
:::

However, setting up a monorepo with everything you need can be surprisingly tricky. To fix this, we created a template for a TypeScript monorepo that uses npm workspaces.

---

## How to use

To use this template:

1. Head over to [**github.com/sst/monorepo-template**](https://github.com/sst/monorepo-template)
2. Click on **Use this template** and create a new repo.
3. Clone the repo.
4. From the project root, run the following to rename it to your app.

   ```bash
   npx replace-in-file "/monorepo-template/g" "MY_APP" "./**/*" --verbose
   ```

5. Install the dependencies.

   ```bash
   npm install
   ```

Now just run `npx sst dev` from the project root.

---

## Project structure

The app is split into separate `packages/` and `infra/` directories.

```txt {2}
my-sst-app
├─ sst.config.ts
├─ package.json
├─ packages
│  ├─ functions
│  ├─ scripts
│  └─ core
└─ infra
```

The `packages/` directory has your workspaces and this is in the root `package.json`.

```json title="package.json"
"workspaces": [
  "packages/*"
]
```

Let's look at it in detail.

---

### Packages

The `packages/` directory includes the following:

- `core/`

  This directory includes shared code that can be used by other packages. These are defined as modules. For example, we have an `Example` module.
```ts title="packages/core/src/example/index.ts"
export namespace Example {
  export function hello() {
    return "Hello, world!";
  }
}
```

We export this using the following in the `package.json`:

```json title="packages/core/package.json"
"exports": {
  "./*": [
    "./src/*\/index.ts",
    "./src/*.ts"
  ]
}
```

This will allow us to import the `Example` module by doing:

```ts
import { Example } from "@monorepo-template/core/example";

Example.hello();
```

We recommend creating new modules for the various _domains_ in your project. This roughly follows Domain Driven Design.

We also have [Vitest](https://vitest.dev/) configured for testing this package with the `sst shell` CLI.

```bash
npm test
```

- `functions/`

  This directory includes our Lambda functions. It imports from the `core/` package by using it as a local dependency.

- `scripts/`

  This directory includes scripts that you can run on your SST app using the `sst shell` CLI and [`tsx`](https://www.npmjs.com/package/tsx). For example, to run the example script `scripts/src/example.ts`, run the following from `packages/scripts/`.

  ```bash
  npm run shell src/example.ts
  ```

You can add additional packages to the `packages/` directory. For example, you might add a `frontend/` and a `backend/` package.

---

### Infrastructure

The `infra/` directory allows you to logically split the infrastructure of your app into separate files. This can be helpful as your app grows.

In the template, we have an `api.ts`, and `storage.ts`. These export resources that can be used in the other infrastructure files.

```ts title="infra/storage.ts"
export const bucket = new sst.aws.Bucket("MyBucket");
```

We then dynamically import them in the `sst.config.ts`.

```ts title="sst.config.ts"
async run() {
  const storage = await import("./infra/storage");
  await import("./infra/api");

  return {
    MyBucket: storage.bucket.name
  };
}
```

Finally, some of the outputs of our components are set as outputs for our app.

---

## Share Across Stages

Share resources across stages in your app.
https://sst.dev/docs/share-across-stages

You define all the resources in your app in your `sst.config.ts`. These resources then get created for each stage that you deploy to. However, there might be some cases where you don't want to recreate certain resources for every stage.

---

## Why share

You typically want to share in cases where:

- The resources are expensive and their pricing is not truly pay-per-use, like your Postgres cluster.
- Or, they contain data that new stages need to reuse. For example, your PR stages might just be for testing against your staging data and don't need to recreate some resources.

While it might be tempting to share more resources across stages, we only recommend doing it for the above cases.

---

## How to share

To help with this, some SST components come with a `static get` method. These components are typically ones that people want to be able to share. Here are some components that have this:

- [`Vpc`](/docs/component/aws/vpc/)
- [`Email`](/docs/component/aws/email/)
- [`Bucket`](/docs/component/aws/bucket/)
- [`Postgres`](/docs/component/aws/postgres/)
- [`CognitoUserPool`](/docs/component/aws/cognito-user-pool/)
- [`CognitoIdentityPool`](/docs/component/aws/cognito-identity-pool/)
- [`D1`](/docs/component/cloudflare/d1/)

If you'd like us to add to this list, feel free to open a GitHub issue.

It's worth noting that complex components like our frontends, `Nextjs`, or `StaticSite`, are not likely to be supported; both because they are made up of a large number of resources, and because they really aren't worth sharing across stages.

Let's look at an example.

---

### Example

The [`static get`](/docs/component/aws/bucket/#static-get) in the `Bucket` component has the following signature. It takes the name of the component and the name of the existing bucket.

```ts
get(name: string, bucketName: string)
```

Imagine you create a bucket in the `dev` stage.
And in your personal stage `frank`, instead of creating a new bucket, you want to share the bucket from `dev`. ```ts title="sst.config.ts" const bucket = $app.stage === "frank" ? sst.aws.Bucket.get("MyBucket", "app-dev-mybucket-12345678") : new sst.aws.Bucket("MyBucket"); ``` We are using [`$app.stage`](/docs/reference/global/#app-stage), a global to get the current stage the CLI is running on. It allows us to conditionally create the bucket. Here `app-dev-mybucket-12345678` is the auto-generated bucket name for the bucket created in the `dev` stage. You can find this by outputting the bucket name in the `dev` stage. ```ts title="sst.config.ts" return { bucket: bucket.name }; ``` And it'll print it out on `sst deploy`. ```bash frame="none" bucket: app-dev-mybucket-12345678 ``` You can read more about outputs in the [`run`](/docs/reference/config/#run) function. --- ## Analog on AWS with SST Create and deploy an Analog app to AWS with SST. https://sst.dev/docs/start/aws/analog We are going to create an [Analog app](https://analogjs.org/), add an S3 Bucket for file uploads, and deploy it to AWS using SST. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-analog) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ## 1. Create a project Let's start by creating our project. ```bash npm create analog@latest cd aws-analog ``` We are picking the **Full-stack Application** option and **not adding Tailwind**. --- #### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. It'll also ask you to update your `vite.config.ts` with something like this. 
```diff lang="ts" title="vite.config.ts"
plugins: [analog({
+  nitro: {
+    preset: "aws-lambda",
+  }
})],
```

---

#### Start dev mode

Run the following to start dev mode. This'll start SST and your Analog app.

```bash
npx sst dev
```

Once complete, click on **MyWeb** in the sidebar and open your Analog app in your browser.

---

## 2. Add an S3 Bucket

Let's allow public `access` to our S3 Bucket for file uploads. Update your `sst.config.ts`.

```js title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket", {
  access: "public"
});
```

Add this above the `Analog` component.

#### Link the bucket

Now, link the bucket to our Analog app.

```js title="sst.config.ts" {2}
new sst.aws.Analog("MyWeb", {
  link: [bucket],
});
```

---

## 3. Generate a pre-signed URL

When our app loads, we'll generate a pre-signed URL for the file upload on the server. Create a new `src/pages/index.server.ts` with the following.

```ts title="src/pages/index.server.ts"
import { Resource } from "sst";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

export const load = async () => {
  const command = new PutObjectCommand({
    Key: crypto.randomUUID(),
    // @ts-ignore: Generated on deploy
    Bucket: Resource.MyBucket.name,
  });
  const url = await getSignedUrl(new S3Client({}), command);

  return { url };
};
```

:::tip
We are directly accessing our S3 bucket with `Resource.MyBucket.name`.
:::

And install the npm packages.

```bash
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
```

---

## 4. Create an upload form

Add the upload form client in `src/pages/index.page.ts`. Replace it with the following.

```ts title="src/pages/index.page.ts" {6,20}
@Component({
  selector: 'app-home',
  standalone: true,
  imports: [FormsModule],
  template: `
`,
})
export default class HomeComponent {
  data = toSignal(injectLoad(), { requireSync: true });

  async onSubmit(event: Event): Promise<void> {
    const file = (event.target as HTMLFormElement)['file'].files?.[0]!;

    const image = await fetch(this.data().url, {
      body: file,
      method: 'PUT',
      headers: {
        'Content-Type': file.type,
        'Content-Disposition': `attachment; filename="${file.name}"`,
      },
    });

    window.location.href = image.url.split('?')[0];
  }
}
```

Here we are injecting the pre-signed URL from the server into the component.

Head over to the local Analog app site in your browser, `http://localhost:5173` and try **uploading an image**. You should see it upload and then download the image.

---

## 5. Deploy your app

Now let's deploy your app to AWS.

```bash
npx sst deploy --stage production
```

You can use any stage name here but it's good to create a new stage for production.

---

## Connect the console

As a next step, you can set up the [SST Console](/docs/console/) to _**git push to deploy**_ your app and monitor it for any issues.

![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png)

You can [create a free account](https://console.sst.dev) and connect it to your AWS account.

---

## Angular on AWS with SST

Create and deploy an Angular app to AWS with SST.

https://sst.dev/docs/start/aws/angular

We are going to create an Angular 18 SPA, add an S3 Bucket for file uploads, and deploy it to AWS using SST.

:::tip[View source]
You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-angular) of this example in our repo.
:::

Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials).

---

## 1. Create a project

Let's start by creating our project.

```bash
npm install -g @angular/cli
ng new aws-angular
cd aws-angular
```

We are picking **CSS** for styles, and **not using SSR**.

---

#### Init SST

Now let's initialize SST in our app.

```bash
npx sst@latest init
```

This'll create a `sst.config.ts` file in your project root.

---

## 2.
Add an S3 Bucket

Let's allow public `access` to our S3 Bucket for file uploads. Update your `sst.config.ts`.

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket", {
  access: "public"
});
```

Add this above the `StaticSite` component.

We are going to upload a file to this bucket using a pre-signed URL. This'll let us upload it directly to our bucket.

---

## 3. Add an API

Let's create a simple API to generate that URL. Add this below the `Bucket` component.

```ts title="sst.config.ts" {3}
const pre = new sst.aws.Function("MyFunction", {
  url: true,
  link: [bucket],
  handler: "functions/presigned.handler",
});
```

We are linking our bucket to this function.

---

#### Pass the API URL

Now, pass the API URL to our Angular app. Add this below the `build` prop in our `StaticSite` component.

```ts title="sst.config.ts" {2}
environment: {
  NG_APP_PRESIGNED_API: pre.url
}
```

To load this in our Angular app, we'll use the [`@ngx-env/builder`](https://www.npmjs.com/package/@ngx-env/builder) package.

```bash
ng add @ngx-env/builder
```

---

#### Start dev mode

Run the following to start dev mode. This'll start SST and your Angular app.

```bash
npx sst dev
```

Once complete, click on **MyWeb** in the sidebar and go to your Angular app in your browser. Typically on `http://localhost:4200`.

---

## 4. Create an upload form

Let's create a component to do the file upload. Add the following to `src/app/file-upload.component.ts`.

```ts title="src/app/file-upload.component.ts" {19}
@Component({
  selector: 'app-file-upload',
  standalone: true,
  imports: [FormsModule],
  template: `
`,
})
export class FileUploadComponent {
  private http = inject(HttpClient);

  presignedApi = import.meta.env['NG_APP_PRESIGNED_API'];

  async onSubmit(event: Event): Promise<void> {
    const file = (event.target as HTMLFormElement)['file'].files?.[0]!;

    this.http.get(this.presignedApi, { responseType: 'text' }).subscribe({
      next: async (url: string) => {
        const image = await fetch(url, {
          body: file,
          method: "PUT",
          headers: {
            "Content-Type": file.type,
            "Content-Disposition": `attachment; filename="${file.name}"`,
          },
        });

        window.location.href = image.url.split("?")[0];
      },
    });
  }
}
```

This gets the pre-signed API URL from the environment, makes a request to it to get the pre-signed URL, and then uploads our file to it.

Let's add some `styles` below the `template` prop.

```ts title="src/app/file-upload.component.ts"
styles: [`
  form {
    color: white;
    padding: 2rem;
    display: flex;
    align-items: center;
    justify-content: space-between;
    background-color: #23262d;
    background-image: none;
    background-size: 400%;
    border-radius: 0.6rem;
    background-position: 100%;
    box-shadow: 0 4px 6px -1px rgba(0, 0, 0, 0.1), 0 2px 4px -2px rgba(0, 0, 0, 0.1);
  }
  button {
    appearance: none;
    border: 0;
    font-weight: 500;
    border-radius: 5px;
    font-size: 0.875rem;
    padding: 0.5rem 0.75rem;
    background-color: white;
    color: black;
  }
  button:active:enabled {
    background-color: #EEE;
  }
`]
```

To make HTTP fetch requests we need to add the provider to our Angular app config. Add the following to the `providers` list in `src/app/app.config.ts`.

```ts title="src/app/app.config.ts"
provideHttpClient(withFetch())
```

And import it at the top.

```ts title="src/app/app.config.ts"
import { provideHttpClient, withFetch } from "@angular/common/http";
```

Let's add this to our app. Replace the `src/app/app.component.ts` file with.

```ts title="src/app/app.component.ts"
@Component({
  selector: 'app-root',
  standalone: true,
  imports: [RouterOutlet, FileUploadComponent],
  template: `
`,
  styles: [`
    main {
      margin: auto;
      padding: 1.5rem;
      max-width: 60ch;
    }
  `],
})
export class AppComponent {}
```

---

## 5. Generate a pre-signed URL

Let's implement the API that generates the pre-signed URL. Create a `functions/presigned.ts` file with the following.

```ts title="functions/presigned.ts" {8}
import { Resource } from "sst";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

export async function handler() {
  const command = new PutObjectCommand({
    Key: crypto.randomUUID(),
    Bucket: Resource.MyBucket.name,
  });

  return {
    statusCode: 200,
    body: await getSignedUrl(new S3Client({}), command),
  };
}
```

:::tip
We are directly accessing our S3 bucket with `Resource.MyBucket.name`.
:::

And install the npm packages.

```bash
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
```

Head over to the local Angular app in your browser, `http://localhost:4200` and try **uploading an image**. You should see it upload and then download the image.

---

## 6. Deploy your app

Now let's deploy your app to AWS.

```bash
npx sst deploy --stage production
```

You can use any stage name here but it's good to create a new stage for production.

---

## Connect the console

As a next step, you can set up the [SST Console](/docs/console/) to _**git push to deploy**_ your app and monitor it for any issues.

![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png)

You can [create a free account](https://console.sst.dev) and connect it to your AWS account.

---

## Astro on AWS with SST

Create and deploy an Astro site to AWS with SST.

https://sst.dev/docs/start/aws/astro

There are two ways to deploy an Astro site to AWS with SST.

1. [Serverless](#serverless)
2. [Containers](#containers)

We'll use both to build a couple of simple apps below.

---

#### Examples

We also have a few other Astro examples that you can refer to.
- [Enabling streaming in your Astro app](/docs/examples/#aws-astro-streaming) - [Hit counter with Redis and Astro in a container](/docs/examples/#aws-astro-container-with-redis) --- ## Serverless We are going to create an Astro site, add an S3 Bucket for file uploads, and deploy it using the `Astro` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-astro) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our project. ```bash npm create astro@latest aws-astro cd aws-astro ``` We are picking all the default options. --- ##### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. It'll also ask you to update your `astro.config.mjs` with something like this. ```diff lang="js" title="astro.config.mjs" + import aws from "astro-sst"; + output: "server", + adapter: aws() }); ``` --- ##### Start dev mode Run the following to start dev mode. This'll start SST and your Astro site. ```bash npx sst dev ``` Once complete, click on **MyWeb** in the sidebar and open your Astro site in your browser. --- ### 2. Add an S3 Bucket Let's allow public `access` to our S3 Bucket for file uploads. Update your `sst.config.ts`. ```js title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket", { access: "public" }); ``` Add this above the `Astro` component. ##### Link the bucket Now, link the bucket to our Astro site. ```js title="sst.config.ts" {2} new sst.aws.Astro("MyWeb", { link: [bucket], }); ``` --- ### 3. Create an upload form Add the upload form client in `src/pages/index.astro`. Replace the `` component with: ```astro title="src/pages/index.astro"
```

To add some styles, add this to your `src/pages/index.astro`.

```astro title="src/pages/index.astro"
```

--- ### 4. Generate a pre-signed URL When our app loads, we'll generate a pre-signed URL for the file upload and use it in the form. Add this to the header on your `src/pages/index.astro`.

```astro title="src/pages/index.astro" {8}
---
import { Resource } from "sst";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const command = new PutObjectCommand({
  Key: crypto.randomUUID(),
  Bucket: Resource.MyBucket.name,
});
const url = await getSignedUrl(new S3Client({}), command);
---
```

:::tip We are directly accessing our S3 bucket with `Resource.MyBucket.name`. ::: And install the npm packages.

```bash
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
```

Head over to the local Astro site in your browser, `http://localhost:4321`, and try **uploading an image**. You should see it upload and then download the image. ![SST Astro app local](../../../../../assets/docs/start/start-astro-local.png) --- ### 5. Deploy your app Now let's deploy your app to AWS.

```bash
npx sst deploy --stage production
```

You can use any stage name here but it's good to create a new stage for production. --- ## Containers We are going to create an Astro site, add an S3 Bucket for file uploads, and deploy it in a container with the `Cluster` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-astro-container) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our project.

```bash
npm create astro@latest aws-astro-container
cd aws-astro-container
```

We are picking all the default options. --- ##### Init SST Now let's initialize SST in our app.

```bash
npx sst@latest init
npm install
```

Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. It'll also ask you to update your `astro.config.mjs`.
But **we'll instead use** the [Node.js adapter](https://docs.astro.build/en/guides/integrations-guide/node/) since we're deploying it through a container.

```bash
npx astro add node
```

--- ### 2. Add a Service To deploy our Astro site in a container, we'll use [AWS Fargate](https://aws.amazon.com/fargate/) with [Amazon ECS](https://aws.amazon.com/ecs/). Replace the `run` function in your `sst.config.ts`.

```ts title="sst.config.ts" {10-12}
async run() {
  const vpc = new sst.aws.Vpc("MyVpc");
  const cluster = new sst.aws.Cluster("MyCluster", { vpc });

  new sst.aws.Service("MyService", {
    cluster,
    loadBalancer: {
      ports: [{ listen: "80/http", forward: "4321/http" }],
    },
    dev: {
      command: "npm run dev",
    },
  });
}
```

This creates a VPC, and an ECS Cluster with a Fargate service in it. :::note By default, your service is not deployed when running in _dev_. ::: The `dev.command` tells SST to instead run our Astro site locally in dev mode. --- #### Start dev mode Run the following to start dev mode. This'll start SST and your Astro site.

```bash
npx sst dev
```

Once complete, click on **MyService** in the sidebar and open your Astro site in your browser. --- ### 3. Add an S3 Bucket Let's allow public `access` to our S3 Bucket for file uploads. Update your `sst.config.ts`.

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket", {
  access: "public"
});
```

Add this below the `Vpc` component. --- ##### Link the bucket Now, link the bucket to the container.

```ts title="sst.config.ts" {3}
new sst.aws.Service("MyService", {
  // ...
  link: [bucket],
});
```

This will allow us to reference the bucket in our Astro site. --- ### 4. Create an upload form Add the upload form client in `src/pages/index.astro`. Replace the `` component with:

```astro title="src/pages/index.astro"
```

To add some styles, add this to your `src/pages/index.astro`.

```astro title="src/pages/index.astro"
```

--- ### 5. Generate a pre-signed URL When our app loads, we'll generate a pre-signed URL for the file upload and use it in the form. Add this to the header on your `src/pages/index.astro`.

```astro title="src/pages/index.astro" {8}
---
import { Resource } from "sst";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const command = new PutObjectCommand({
  Key: crypto.randomUUID(),
  Bucket: Resource.MyBucket.name,
});
const url = await getSignedUrl(new S3Client({}), command);
---
```

:::tip We are directly accessing our S3 bucket with `Resource.MyBucket.name`. ::: And install the npm packages.

```bash
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
```

Head over to the local Astro site in your browser, `http://localhost:4321`, and try **uploading an image**. You should see it upload and then download the image. ![SST Astro app local](../../../../../assets/docs/start/start-astro-local-container.png) --- ### 6. Deploy your app To deploy our app we'll add a `Dockerfile`.

```dockerfile title="Dockerfile"
FROM node:lts AS base
WORKDIR /app
COPY package.json package-lock.json ./

FROM base AS prod-deps
RUN npm install --omit=dev

FROM base AS build-deps
RUN npm install

FROM build-deps AS build
COPY . .
RUN npm run build

FROM base AS runtime
COPY --from=prod-deps /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
ENV HOST=0.0.0.0
ENV PORT=4321
EXPOSE 4321
CMD node ./dist/server/entry.mjs
```

:::tip You need to be running [Docker Desktop](https://www.docker.com/products/docker-desktop/) to deploy your app. ::: Let's also add a `.dockerignore` file in the root.

```bash title=".dockerignore"
.DS_Store
node_modules
dist
```

Now to build our Docker image and deploy we run:

```bash
npx sst deploy --stage production
```

You can use any stage name here but it's good to create a new stage for production. Congrats! Your app should now be live!
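These upload flows end by linking the user to the uploaded file, which works by stripping the signed query parameters off the pre-signed URL (the Angular tutorial earlier does this with `image.url.split("?")[0]`). As a standalone sketch, with an illustrative helper name:

```typescript
// A pre-signed URL is the object's plain URL plus auth query parameters,
// so dropping everything after "?" leaves the bare object URL. This is an
// illustrative helper, not part of the tutorial code.
function objectUrlFromPresigned(presigned: string): string {
  return presigned.split("?")[0];
}
```

This only yields a usable link because the bucket in these examples has public `access` enabled.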
--- ## Connect the console As a next step, you can set up the [SST Console](/docs/console/) to _**git push to deploy**_ your app and view logs from it. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## OpenAuth with SST and Next.js Add OpenAuth to your Next.js app and deploy it with SST. https://sst.dev/docs/start/aws/auth We are going to create a new Next.js app, add authentication to it with [OpenAuth](https://openauth.js.org), and deploy it with [OpenNext](https://opennext.js.org) and SST. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-auth-nextjs) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- #### Examples We also have another OpenAuth example that you can refer to. - [Full-stack React SPA with an API](/docs/examples/#aws-openauth-react-spa) --- ## 1. Create a project Let's start by creating our Next.js app and starting it in dev mode.

```bash
npx create-next-app@latest aws-auth-nextjs
cd aws-auth-nextjs
```

We are picking **TypeScript** and not selecting **ESLint**. --- ##### Init SST Now let's initialize SST in our app.

```bash
npx sst@latest init
```

Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. --- ## 2. Add OpenAuth server Next, let's add a directory for our OpenAuth server.

```bash
mkdir auth
```

Add our OpenAuth server to an `auth/index.ts` file.
```ts title="auth/index.ts"
import { handle } from "hono/aws-lambda";
import { issuer } from "@openauthjs/openauth";
import { CodeUI } from "@openauthjs/openauth/ui/code";
import { CodeProvider } from "@openauthjs/openauth/provider/code";
import { subjects } from "./subjects";

async function getUser(email: string) {
  // Get user from database and return user ID
  return "123";
}

const app = issuer({
  subjects,
  // Remove after setting custom domain
  allow: async () => true,
  providers: {
    code: CodeProvider(
      CodeUI({
        sendCode: async (email, code) => {
          console.log(email, code);
        },
      }),
    ),
  },
  success: async (ctx, value) => {
    if (value.provider === "code") {
      return ctx.subject("user", {
        id: await getUser(value.claims.email),
      });
    }
    throw new Error("Invalid provider");
  },
});

export const handler = handle(app);
```

--- ##### Define subjects We are also going to define our subjects. Add the following to an `auth/subjects.ts` file.

```ts title="auth/subjects.ts"
import { object, string } from "valibot";
import { createSubjects } from "@openauthjs/openauth/subject";

export const subjects = createSubjects({
  user: object({
    id: string(),
  }),
});
```

Let's install our dependencies.

```bash
npm install @openauthjs/openauth valibot hono
```

--- ##### Add Auth component Now let's add this to our SST app. Replace the `run` function in `sst.config.ts` with the following.

```ts title="sst.config.ts" {6}
const auth = new sst.aws.Auth("MyAuth", {
  issuer: "auth/index.handler",
});

new sst.aws.Nextjs("MyWeb", {
  link: [auth],
});
```

This is defining our OpenAuth component and linking it to our Next.js app. --- ##### Start dev mode Run the following to start dev mode. This'll start SST, your Next.js app, and your OpenAuth server.

```bash
npx sst dev
```

Once complete, it should give you the URL of your OpenAuth server.

```bash
✓  Complete
   MyAuth: https://fv62a3niazbkrazxheevotace40affnk.lambda-url.us-east-1.on.aws
```

Also click on **MyWeb** in the sidebar and open your Next.js app by going to `http://localhost:3000`. --- ## 3. Add OpenAuth client Next, let's add our OpenAuth client to our Next.js app. Add the following to `app/auth.ts`.
```ts title="app/auth.ts" {7}
import { Resource } from "sst";
import { cookies as getCookies } from "next/headers";
import { createClient } from "@openauthjs/openauth/client";

export const client = createClient({
  clientID: "nextjs",
  issuer: Resource.MyAuth.url,
});

export async function setTokens(access: string, refresh: string) {
  const cookies = await getCookies();

  cookies.set({
    name: "access_token",
    value: access,
    httpOnly: true,
    sameSite: "lax",
    path: "/",
    maxAge: 34560000,
  });
  cookies.set({
    name: "refresh_token",
    value: refresh,
    httpOnly: true,
    sameSite: "lax",
    path: "/",
    maxAge: 34560000,
  });
}
```

Here we are _linking_ to our auth server. And once the user is authenticated, we'll be saving their access and refresh tokens in _http only_ cookies. --- ##### Add auth actions Let's add the server actions that our Next.js app will need to authenticate users. Add the following to `app/actions.ts`.

```ts title="app/actions.ts"
"use server";

import { redirect } from "next/navigation";
import { headers as getHeaders, cookies as getCookies } from "next/headers";
import { subjects } from "../auth/subjects";
import { client, setTokens } from "./auth";

export async function auth() {
  const cookies = await getCookies();
  const accessToken = cookies.get("access_token");
  const refreshToken = cookies.get("refresh_token");

  if (!accessToken) {
    return false;
  }

  const verified = await client.verify(subjects, accessToken.value, {
    refresh: refreshToken?.value,
  });

  if (verified.err) {
    return false;
  }
  if (verified.tokens) {
    await setTokens(verified.tokens.access, verified.tokens.refresh);
  }

  return verified.subject;
}

export async function login() {
  const cookies = await getCookies();
  const accessToken = cookies.get("access_token");
  const refreshToken = cookies.get("refresh_token");

  if (accessToken) {
    const verified = await client.verify(subjects, accessToken.value, {
      refresh: refreshToken?.value,
    });
    if (!verified.err && verified.tokens) {
      await setTokens(verified.tokens.access, verified.tokens.refresh);
      redirect("/");
    }
  }

  const headers = await getHeaders();
  const host = headers.get("host");
  const protocol = host?.includes("localhost") ? "http" : "https";
  const { url } = await client.authorize(
    `${protocol}://${host}/api/callback`,
    "code",
  );

  redirect(url);
}

export async function logout() {
  const cookies = await getCookies();
  cookies.delete("access_token");
  cookies.delete("refresh_token");

  redirect("/");
}
```

This is adding an `auth` action that checks if a user is authenticated, `login` that starts the OAuth flow, and `logout` that clears the session. --- ##### Add callback route When the OpenAuth flow is complete, users will be redirected back to our Next.js app. Let's add a callback route to handle this in `app/api/callback/route.ts`.

```ts title="app/api/callback/route.ts"
import { type NextRequest, NextResponse } from "next/server";
import { client, setTokens } from "../../auth";

export async function GET(req: NextRequest) {
  const url = new URL(req.url);
  const code = url.searchParams.get("code");
  const exchanged = await client.exchange(code!, `${url.origin}/api/callback`);

  if (exchanged.err) return NextResponse.json(exchanged.err, { status: 400 });

  await setTokens(exchanged.tokens.access, exchanged.tokens.refresh);
  return NextResponse.redirect(`${url.origin}/`);
}
```

Once the user is authenticated, we redirect them to the root of our app. --- ## 4. Add auth to app Now we are ready to add authentication to our app. Replace the `` component in `app/page.tsx` with the following.

```tsx title="app/page.tsx"
import styles from "./page.module.css";
import { auth, login, logout } from "./actions";

export default async function Home() {
  const subject = await auth();

  return (
    <div className={styles.page}>
      <main className={styles.main}>
        <ol>
          {subject ? (
            <>
              <li>Logged in as <code>{subject.properties.id}</code>.</li>
              <li>And then check out <code>app/page.tsx</code>.</li>
            </>
          ) : (
            <>
              <li>Login with your email and password.</li>
              <li>And then check out <code>app/page.tsx</code>.</li>
            </>
          )}
        </ol>
        <div className={styles.ctas}>
          {subject ? (
            <form action={logout}>
              <button className={styles.secondary}>Logout</button>
            </form>
          ) : (
            <form action={login}>
              <button className={styles.primary}>Login</button>
            </form>
          )}
        </div>
      </main>
    </div>
); } ``` Let's also add these styles to `app/page.module.css`. ```css title="app/page.module.css" .ctas button { appearance: none; background: transparent; border-radius: 128px; height: 48px; padding: 0 20px; border: none; border: 1px solid transparent; transition: background 0.2s, color 0.2s, border-color 0.2s; cursor: pointer; display: flex; align-items: center; justify-content: center; font-size: 16px; line-height: 20px; font-weight: 500; } button.primary { background: var(--foreground); color: var(--background); gap: 8px; } button.secondary { border-color: var(--gray-alpha-200); min-width: 180px; } ``` --- #### Test your app Head to `http://localhost:3000` and click the login button, you should be redirected to the OpenAuth server asking you to put in your email. If you check the **Functions** tab in your `sst dev` session, you'll see the code being console logged. You can use this code to login. This should log you in and print your user ID. --- ## 5. Deploy your app Now let's deploy your app to AWS. ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. ```bash ✓ Complete MyAuth: https://vp3honbl3od4gmo7mei37mchky0waxew.lambda-url.us-east-1.on.aws MyWeb: https://d2fjg1rqbqi95t.cloudfront.net ``` Congrats! Your app and your OpenAuth server should now be live! --- ## Connect the console As a next step, you can setup the [SST Console](/docs/console/) to _**git push to deploy**_ your app and view logs from it. You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## Bun on AWS with SST Create and deploy a Bun app to AWS with SST. https://sst.dev/docs/start/aws/bun We are going to build an app with Bun, add an S3 Bucket for file uploads, and deploy it to AWS in a container with SST. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-bun) of this example in our repo. 
::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- #### Examples We also have a few other Bun examples that you can refer to. - [Deploy Bun with Elysia in a container](/docs/examples/#aws-bun-elysia-container) - [Build a hit counter with Bun and Redis](/docs/examples/#aws-bun-redis) --- ## 1. Create a project Let's start by creating our Bun app.

```bash
mkdir aws-bun && cd aws-bun
bun init -y
```

--- #### Init Bun Serve Replace your `index.ts` with the following.

```js title="index.ts"
const server = Bun.serve({
  async fetch(req) {
    const url = new URL(req.url);

    if (url.pathname === "/" && req.method === "GET") {
      return new Response("Hello World!");
    }

    return new Response("404!");
  },
});

console.log(`Listening on ${server.url}`);
```

This starts up an HTTP server by default on port `3000`. --- #### Add scripts Add the following to your `package.json`.

```json title="package.json"
"scripts": {
  "dev": "bun run --watch index.ts"
},
```

This adds a `dev` script with a watcher. --- #### Init SST Now let's initialize SST in our app.

```bash
bunx sst init
bun install
```

This'll create an `sst.config.ts` file in your project root and install SST. --- ## 2. Add a Service To deploy our Bun app, let's add an [AWS Fargate](https://aws.amazon.com/fargate/) container with [Amazon ECS](https://aws.amazon.com/ecs/). Update your `sst.config.ts`.

```js title="sst.config.ts" {10-12}
async run() {
  const vpc = new sst.aws.Vpc("MyVpc");
  const cluster = new sst.aws.Cluster("MyCluster", { vpc });

  new sst.aws.Service("MyService", {
    cluster,
    loadBalancer: {
      ports: [{ listen: "80/http", forward: "3000/http" }],
    },
    dev: {
      command: "bun dev",
    },
  });
}
```

This creates a VPC with an ECS Cluster, and adds a Fargate service to it. :::note By default, your service is not deployed when running in _dev_. ::: The `dev.command` tells SST to instead run our Bun app locally in dev mode.
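Note that the `fetch` handler from step 1 is just a function of the request, so its route matching can be pulled out and checked without starting a server. A sketch of such a refactor (the helper name is ours, not from the tutorial):

```typescript
// Extract the route matching from Bun.serve's fetch handler so it can be
// exercised directly. Mirrors the Hello World route in index.ts above.
function route(method: string, pathname: string): string {
  if (pathname === "/" && method === "GET") {
    return "Hello World!";
  }
  return "404!";
}
```

The handler would then just wrap the result in a `Response`, which keeps the routing logic trivially testable.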
--- #### Start dev mode Run the following to start dev mode. This'll start SST and your Bun app.

```bash
bun sst dev
```

Once complete, click on **MyService** in the sidebar and open your Bun app in your browser. --- ## 3. Add an S3 Bucket Let's add an S3 Bucket for file uploads. Add this to your `sst.config.ts` below the `Vpc` component.

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");
```

--- #### Link the bucket Now, link the bucket to the container.

```ts title="sst.config.ts" {3}
new sst.aws.Service("MyService", {
  // ...
  link: [bucket],
});
```

This will allow us to reference the bucket in our Bun app. --- ## 4. Upload a file We want a `POST` request made to the `/` route to upload a file to our S3 bucket. Let's add this below our _Hello World_ route in our `index.ts`.

```ts title="index.ts" {5}
if (url.pathname === "/" && req.method === "POST") {
  const formData = await req.formData();
  const file = formData.get("file")! as File;
  const params = {
    Bucket: Resource.MyBucket.name,
    ContentType: file.type,
    Key: file.name,
    Body: file,
  };
  const upload = new Upload({
    params,
    client: s3,
  });
  await upload.done();
  return new Response("File uploaded successfully.");
}
```

:::tip We are directly accessing our S3 bucket with `Resource.MyBucket.name`. ::: Add the imports. We'll use the extra ones below.

```ts title="index.ts"
import { Resource } from "sst";
import { Upload } from "@aws-sdk/lib-storage";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import {
  S3Client,
  GetObjectCommand,
  ListObjectsV2Command,
} from "@aws-sdk/client-s3";

const s3 = new S3Client();
```

And install the npm packages.

```bash
bun install @aws-sdk/client-s3 @aws-sdk/lib-storage @aws-sdk/s3-request-presigner
```

--- ## 5. Download the file We'll add a `/latest` route that'll download the latest file in our S3 bucket. Let's add this below our upload route in `index.ts`.
```ts title="index.ts" if (url.pathname === "/latest" && req.method === "GET") { const objects = await s3.send( new ListObjectsV2Command({ Bucket: Resource.MyBucket.name, }), ); const latestFile = objects.Contents!.sort( (a, b) => (b.LastModified?.getTime() ?? 0) - (a.LastModified?.getTime() ?? 0), )[0]; const command = new GetObjectCommand({ Key: latestFile.Key, Bucket: Resource.MyBucket.name, }); return Response.redirect(await getSignedUrl(s3, command)); } ``` --- #### Test your app To upload a file run the following from your project root. ```bash curl -F file=@package.json http://localhost:3000/ ``` This should upload the `package.json`. Now head over to `http://localhost:3000/latest` in your browser and it'll show you what you just uploaded. ![SST Bun app file upload](../../../../../assets/docs/start/start-bun-app-file-upload.png) --- ## 6. Deploy your app To deploy our app we'll first add a `Dockerfile`. ```dockerfile title="Dockerfile" FROM oven/bun COPY bun.lock . COPY package.json . RUN bun install --frozen-lockfile COPY . . EXPOSE 3000 CMD ["bun", "index.ts"] ``` This is a pretty basic setup. You can refer to the [Bun docs](https://bun.sh/guides/ecosystem/docker) for a more optimized Dockerfile. :::tip You need to be running [Docker Desktop](https://www.docker.com/products/docker-desktop/) to deploy your app. ::: Let's also add a `.dockerignore` file in the root. ```bash title=".dockerignore" node_modules .git .gitignore README.md Dockerfile* ``` Now to build our Docker image and deploy we run: ```bash bun sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. This'll give the URL of your Bun app deployed as a Fargate service. ```bash ✓ Complete MyService: http://prod-MyServiceLoadBalanc-491430065.us-east-1.elb.amazonaws.com ``` Congrats! Your app should now be live! 
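The newest-first ordering used by the `/latest` route above is plain array logic, so it can be factored out and verified on its own. A sketch (the helper name and minimal type are ours):

```typescript
// Pick the most recently modified object from a ListObjectsV2-style
// listing, as the /latest route does. Objects without a LastModified
// timestamp sort last.
interface ObjectSummary {
  Key?: string;
  LastModified?: Date;
}

function latestObject(objects: ObjectSummary[]): ObjectSummary | undefined {
  return [...objects].sort(
    (a, b) =>
      (b.LastModified?.getTime() ?? 0) - (a.LastModified?.getTime() ?? 0),
  )[0];
}
```

Spreading into a copy before sorting avoids mutating the SDK's response array in place.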
--- ## Connect the console As a next step, you can setup the [SST Console](/docs/console/) to _**git push to deploy**_ your app and view logs from it. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## Deno on AWS with SST Create and deploy a Deno app to AWS with SST. https://sst.dev/docs/start/aws/deno We are going to build an app with Deno, add an S3 Bucket for file uploads, and deploy it to AWS in a container with SST. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-deno) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- #### Examples We also have a few other Deno examples that you can refer to. - [Build a hit counter with Deno and Redis](/docs/examples/#aws-deno-redis) --- ## 1. Create a project Let's start by creating our Deno app. ```bash deno init aws-deno ``` --- #### Init Deno Serve Replace your `main.ts` with the following. ```ts title="main.ts" Deno.serve(async (req) => { const url = new URL(req.url); if (url.pathname === "/" && req.method === "GET") { return new Response("Hello World!"); } return new Response("404!"); }); ``` This starts up an HTTP server by default on port `8000`. --- #### Init SST Make sure you have [SST installed globally](/docs/reference/cli). ```bash sst init ``` This'll create an `sst.config.ts` file in your project root. --- ## 2. Add a Service To deploy our Deno app, let's add an [AWS Fargate](https://aws.amazon.com/fargate/) container with [Amazon ECS](https://aws.amazon.com/ecs/). Update your `sst.config.ts`. 
```js title="sst.config.ts" {10-12}
async run() {
  const vpc = new sst.aws.Vpc("MyVpc");
  const cluster = new sst.aws.Cluster("MyCluster", { vpc });

  new sst.aws.Service("MyService", {
    cluster,
    loadBalancer: {
      ports: [{ listen: "80/http", forward: "8000/http" }],
    },
    dev: {
      command: "deno task dev",
    },
  });
}
```

This creates a VPC with an ECS Cluster, and adds a Fargate service to it. :::note By default, your service is not deployed when running in _dev_. ::: The `dev.command` tells SST to instead run our Deno app locally in dev mode. --- #### Start dev mode Run the following to start dev mode. This'll start SST and your Deno app.

```bash
sst dev
```

Once complete, click on **MyService** in the sidebar and open your Deno app in your browser. --- ## 3. Add an S3 Bucket Let's add an S3 Bucket for file uploads. Add this to your `sst.config.ts` below the `Vpc` component.

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");
```

--- #### Link the bucket Now, link the bucket to the container.

```ts title="sst.config.ts" {3}
new sst.aws.Service("MyService", {
  // ...
  link: [bucket],
});
```

This will allow us to reference the bucket in our Deno app. --- ## 4. Upload a file We want a `POST` request made to the `/` route to upload a file to our S3 bucket. Let's add this below our _Hello World_ route in our `main.ts`.

```ts title="main.ts" {6}
if (url.pathname === "/" && req.method === "POST") {
  const formData: FormData = await req.formData();
  const file: File | null = formData?.get("file") as File;

  const params = {
    Bucket: Resource.MyBucket.name,
    ContentType: file.type,
    Key: file.name,
    Body: file,
  };
  const upload = new Upload({
    params,
    client: s3,
  });
  await upload.done();

  return new Response("File uploaded successfully.");
}
```

:::tip We are directly accessing our S3 bucket with `Resource.MyBucket.name`. ::: Add the imports. We'll use the extra ones below.
```ts title="main.ts"
import { Resource } from "sst";
import { Upload } from "@aws-sdk/lib-storage";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import {
  S3Client,
  GetObjectCommand,
  ListObjectsV2Command,
} from "@aws-sdk/client-s3";

const s3 = new S3Client();
```

And install the npm packages.

```bash
deno install npm:sst npm:@aws-sdk/client-s3 npm:@aws-sdk/lib-storage npm:@aws-sdk/s3-request-presigner
```

--- ## 5. Download the file We'll add a `/latest` route that'll download the latest file in our S3 bucket. Let's add this below our upload route in `main.ts`.

```ts title="main.ts"
if (url.pathname === "/latest" && req.method === "GET") {
  const objects = await s3.send(
    new ListObjectsV2Command({
      Bucket: Resource.MyBucket.name,
    }),
  );
  const latestFile = objects.Contents!.sort(
    (a, b) =>
      (b.LastModified?.getTime() ?? 0) - (a.LastModified?.getTime() ?? 0),
  )[0];
  const command = new GetObjectCommand({
    Key: latestFile.Key,
    Bucket: Resource.MyBucket.name,
  });
  return Response.redirect(await getSignedUrl(s3, command));
}
```

--- #### Test your app To upload a file run the following from your project root. You might have to go to the `MyService` tab in the sidebar and accept the Deno permission prompts.

```bash
curl -F file=@deno.json http://localhost:8000/
```

This should upload the `deno.json`. Now head over to `http://localhost:8000/latest` in your browser and it'll show you what you just uploaded. --- ## 6. Deploy your app To deploy our app we'll first add a `Dockerfile`.

```dockerfile title="Dockerfile"
FROM denoland/deno

EXPOSE 8000
USER deno

WORKDIR /app
ADD . /app
RUN deno install --entrypoint main.ts

CMD ["run", "--allow-all", "main.ts"]
```

:::tip You need to be running [Docker Desktop](https://www.docker.com/products/docker-desktop/) to deploy your app. ::: Now to build our Docker image and deploy we run:

```bash
sst deploy --stage production
```

You can use any stage name here but it's good to create a new stage for production. This'll give the URL of your Deno app deployed as a Fargate service.
```bash ✓ Complete MyService: http://prod-MyServiceLoadBalanc-491430065.us-east-1.elb.amazonaws.com ``` --- ## Connect the console As a next step, you can setup the [SST Console](/docs/console/) to _**git push to deploy**_ your app and view logs from it. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## Drizzle with Amazon RDS and SST Use Drizzle and SST to manage and deploy your Amazon Postgres RDS database. https://sst.dev/docs/start/aws/drizzle You can use SST to deploy an Amazon Postgres RDS database and set up [Drizzle ORM](https://orm.drizzle.team) and [Drizzle Kit](https://orm.drizzle.team/docs/kit-overview) to manage it. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-drizzle) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- #### Examples We also have a few other Drizzle and Postgres examples that you can refer to. - [Run migrations in your CI/CD pipeline](/docs/examples/#drizzle-migrations-in-cicd) - [Run Postgres in a local Docker container for dev](/docs/examples/#aws-postgres-local) - [Use Next.js, Postgres, and Drizzle with the T3 Stack](/docs/examples/#t3-stack-in-aws) --- ## 1. Create a project Let's start by creating a Node.js app. ```bash mkdir aws-drizzle && cd aws-drizzle npm init -y ``` --- #### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. --- #### Init Drizzle Add Drizzle to your project. We're also adding the `pg` client that Drizzle will use. ```bash npm install pg @types/pg drizzle-orm drizzle-kit ``` Drizzle ORM is what will be used to query our database, while Drizzle Kit will allow us to run migrations. 
It also comes with Drizzle Studio, a query browser. Let's add the following to the `scripts` in the `package.json`. ```json title="package.json" "sst shell" "scripts": { "db": "sst shell drizzle-kit" }, ``` The `sst shell` CLI will pass the credentials to Drizzle Kit and allow it to connect to your database. Let's also update our `tsconfig.json`. ```json title="tsconfig.json" { "compilerOptions": { "strict": true } } ``` --- ## 2. Add a Postgres db Let's add a Postgres database using [Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html). This needs a VPC. Update your `sst.config.ts`. ```ts title="sst.config.ts" async run() { const vpc = new sst.aws.Vpc("MyVpc", { bastion: true, nat: "ec2" }); const rds = new sst.aws.Postgres("MyPostgres", { vpc, proxy: true }); }, ``` The `proxy` option configures an RDS Proxy behind the scenes making it ideal for serverless applications. :::tip The RDS Proxy allows serverless environments to reliably connect to RDS. ::: While the `bastion` option will let us connect to the VPC from our local machine. We also need the NAT gateway for this example since we'll be using a Lambda function, and this allows a Lambda function that's in a VPC to access the internet. --- #### Start Drizzle Studio When you run SST in dev it can start other dev processes for you. In this case we want to start Drizzle Studio. Add this below the `Postgres` component. ```ts title="sst.config.ts" new sst.x.DevCommand("Studio", { link: [rds], dev: { command: "npx drizzle-kit studio", }, }); ``` This will run the given command in dev. --- #### Add an API We'll use a Lambda function as an API to query our database. Add the following to your `sst.config.ts` below the database config. ```ts title="sst.config.ts" {4} new sst.aws.Function("MyApi", { vpc, url: true, link: [rds], handler: "src/api.handler", }); ``` We are linking our database to the API. 
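The link hands the function discrete connection parameters (`Resource.MyPostgres.host`, `.port`, `.username`, `.password`, `.database`). If you ever need a single connection string instead, say for a tool that only accepts a URL, the parts combine in the usual way. A sketch (the helper is ours, not part of SST):

```typescript
// Build a postgres:// connection URL from discrete credentials, e.g. the
// values that Resource.MyPostgres.* provides at runtime. The user and
// password are percent-encoded so special characters survive.
function postgresUrl(opts: {
  host: string;
  port: number;
  user: string;
  password: string;
  database: string;
}): string {
  const user = encodeURIComponent(opts.user);
  const pass = encodeURIComponent(opts.password);
  return `postgres://${user}:${pass}@${opts.host}:${opts.port}/${opts.database}`;
}
```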
--- #### Install a tunnel Since our database cluster is in a VPC, we'll need a tunnel to connect to it from our local machine.

```bash "sudo"
sudo npx sst tunnel install
```

This needs _sudo_ to create a network interface on your machine. You'll only need to do this once on your machine. --- #### Start dev mode Start your app in dev mode. This runs your functions [_Live_](/docs/live/).

```bash
npx sst dev
```

It'll take a few minutes to create your database. Once complete, you'll see this.

```bash frame="none"
✓  Complete
   MyApi: https://ouu5vovpxllyn5b6ot2nn6vdsa0hvcuj.lambda-url.us-east-1.on.aws
```

You'll see Drizzle Studio started in a tab called **Studio**. And a tunnel in the **Tunnel** tab. --- ## 3. Create a schema Let's define our Drizzle config. Add a `drizzle.config.ts` in your project root with this.

```ts title="drizzle.config.ts" {6-8}
import { Resource } from "sst";
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "postgresql",
  // Pick up all our schema files
  schema: ["./src/**/*.sql.ts"],
  out: "./migrations",
  dbCredentials: {
    host: Resource.MyPostgres.host,
    port: Resource.MyPostgres.port,
    user: Resource.MyPostgres.username,
    password: Resource.MyPostgres.password,
    database: Resource.MyPostgres.database,
  },
});
```

Here we are telling Drizzle that we'll be specifying our database schema in `.sql.ts` files in our `src/` directory. :::tip SST allows us to automatically access our database with `Resource.MyPostgres.*`. ::: We are going to create a simple database to store some todos. Create a new file in `src/todo.sql.ts` with the following.

```ts title="src/todo.sql.ts"
import { text, serial, pgTable } from "drizzle-orm/pg-core";

export const todo = pgTable("todo", {
  id: serial("id").primaryKey(),
  title: text("title").notNull(),
  description: text("description"),
});
```

--- ## 4. Generate a migration We can use this to generate a migration.

```bash
npm run db generate
```

This in turn runs `sst shell drizzle-kit generate` and creates a new migration in the `migrations/` directory. --- #### Apply the migration Now we can apply our migration using:

```bash
npm run db migrate
```

This should create our new schema.
:::tip
You need a tunnel to connect to your database.
:::

Applying the migration needs the tunnel to connect to the database, so you should have `sst dev` running in a separate terminal.

```bash
npx sst tunnel
```

Alternatively, you can just run the tunnel using the above command.

---

#### Drizzle Studio

To see our schema in action we can open the Drizzle Studio. Head over to the **Studio** tab in your `sst dev` session and go to the link. Or head over to `https://local.drizzle.studio` in your browser.

![Initial Drizzle Studio with SST](../../../../../assets/docs/start/initial-drizzle-studio-with-sst.png)

---

## 5. Query the database

To use Drizzle ORM to query our database, create a new `src/drizzle.ts` config file with the following.

```ts title="src/drizzle.ts"
import { Resource } from "sst";
import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";

const pool = new Pool({
  host: Resource.MyPostgres.host,
  port: Resource.MyPostgres.port,
  user: Resource.MyPostgres.username,
  password: Resource.MyPostgres.password,
  database: Resource.MyPostgres.database,
});

export const db = drizzle(pool);
```

Now we can use that in the API. Create our API handler in `src/api.ts`.

```ts title="src/api.ts"
import { db } from "./drizzle";
import { todo } from "./todo.sql";
import type { APIGatewayProxyEventV2 } from "aws-lambda";

export const handler = async (evt: APIGatewayProxyEventV2) => {
  if (evt.requestContext.http.method === "GET") {
    const result = await db.select().from(todo).execute();
    return {
      statusCode: 200,
      body: JSON.stringify(result, null, 2),
    };
  }

  if (evt.requestContext.http.method === "POST") {
    const result = await db
      .insert(todo)
      .values({ title: "Todo", description: crypto.randomUUID() })
      .returning()
      .execute();
    return {
      statusCode: 200,
      body: JSON.stringify(result),
    };
  }
};
```

For _POST_ requests we create a new todo and for _GET_ requests we simply print out all our todos.

---

#### Test your app

To test our app, make a _POST_ request to our API.

```bash
curl -X POST https://ouu5vovpxllyn5b6ot2nn6vdsa0hvcuj.lambda-url.us-east-1.on.aws
```

Now if you head over to `https://ouu5vovpxllyn5b6ot2nn6vdsa0hvcuj.lambda-url.us-east-1.on.aws` in your browser, you'll see that a todo has been added.
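The handler above branches on `evt.requestContext.http.method`. If you add more routes, the same dispatch can be written as a lookup table. A sketch with a hypothetical `route` helper (not part of the SST API; the real handlers would be async and run the Drizzle queries):

```typescript
interface ApiResponse {
  statusCode: number;
  body: string;
}

// Hypothetical dispatch helper mirroring the GET/POST branches in src/api.ts:
// look the method up in a table, return a 405 for anything else.
function route(
  method: string,
  handlers: Record<string, () => ApiResponse>
): ApiResponse {
  const handler = handlers[method];
  if (!handler) return { statusCode: 405, body: "Method Not Allowed" };
  return handler();
}

// Stubbed usage: stand-ins for the select and insert queries above.
const response = route("GET", {
  GET: () => ({ statusCode: 200, body: JSON.stringify([]) }),
  POST: () => ({ statusCode: 200, body: JSON.stringify([{ title: "Todo" }]) }),
});
console.log(response.statusCode);
```

The table form keeps each method's logic isolated and makes an unhandled method an explicit `405` instead of an implicit `undefined` return.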
![Todo created with Drizzle in SST](../../../../../assets/docs/start/todo-created-with-drizzle-in-sst.png) You should see this in the Drizzle Studio as well. --- ## 6. Deploy your app Finally, let's deploy your app. ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. --- ## Connect the console As a next step, you can setup the [SST Console](/docs/console/) to _**git push to deploy**_ your app and monitor it for any issues. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## Send emails in AWS with SST Send emails from your API in AWS with SST. https://sst.dev/docs/start/aws/email We are going to build a simple SST app in AWS with a serverless API, and send emails from it. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-email) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ## 1. Create a project Let's start by creating our app. ```bash mkdir my-email-app && cd my-email-app npm init -y ``` #### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. --- ## 2. Add email Let's add Email to our app, it uses [Amazon SES](https://aws.amazon.com/ses/) behind the scenes. Update your `sst.config.ts`. ```js title="sst.config.ts" {3} async run() { const email = new sst.aws.Email("MyEmail", { sender: "email@example.com", }); } ``` SES can send emails from a verified email address or domain. To keep things simple we'll be sending from an email. Make sure to use your email address here as we'll be verifying it in the next step. --- ## 3. 
Add an API

Next let's create a simple API that'll send out an email when invoked. Add this to your `sst.config.ts`.

```js title="sst.config.ts" {3}
const api = new sst.aws.Function("MyApi", {
  handler: "sender.handler",
  link: [email],
  url: true,
});

return {
  api: api.url,
};
```

We are linking our email component to our API.

---

#### Start dev mode

Start your app in dev mode. This runs your functions [_Live_](/docs/live/).

```bash
npx sst dev
```

This will give you your API URL.

```bash frame="none"
+  Complete
   api: https://wwwrwteda6kbpquppdz5i3lg4a0nvmbf.lambda-url.us-east-1.on.aws/
```

You should also get an email asking you to verify the sender email address.

![Verify your email with SST](../../../../../assets/docs/start/verify-your-email-with-sst.png)

Click the link to verify your email address.

---

## 4. Send an email

We'll use the SES client to send an email when the API is invoked. Create a new `sender.ts` file and add the following to it.

```ts title="sender.ts" {4}
export const handler = async () => {
  await client.send(
    new SendEmailCommand({
      FromEmailAddress: Resource.MyEmail.sender,
      Destination: {
        ToAddresses: [Resource.MyEmail.sender],
      },
      Content: {
        Simple: {
          Subject: {
            Data: "Hello World!",
          },
          Body: {
            Text: {
              Data: "Sent from my SST app.",
            },
          },
        },
      },
    })
  );

  return {
    statusCode: 200,
    body: "Sent!",
  };
};
```

We are sending an email to the same verified email that we are sending from because your SES account might be in _sandbox_ mode and can only send to verified emails. We'll look at how to go to production below.

:::tip
We are accessing our email service with `Resource.MyEmail.sender`.
:::

Add the imports.

```ts title="sender.ts"
import { Resource } from "sst";
import { SESv2Client, SendEmailCommand } from "@aws-sdk/client-sesv2";

const client = new SESv2Client();
```

And install the npm packages.

```bash
npm install @aws-sdk/client-sesv2
```

---

#### Test your app

To test our app, hit the API.

```bash
curl https://wwwrwteda6kbpquppdz5i3lg4a0nvmbf.lambda-url.us-east-1.on.aws
```

This should print out `Sent!` and you should get an email.
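The `SendEmailCommand` input nests its fields fairly deeply (`Content.Simple.Subject.Data`, and so on). If your app sends more than one kind of email, a small builder keeps that shape in one place. A sketch with a hypothetical `toSendEmailInput` helper; only the plain object shape comes from the SDK call above:

```typescript
interface SimpleEmail {
  from: string;
  to: string[];
  subject: string;
  text: string;
}

// Hypothetical helper that produces the nested SendEmailCommand input
// shape used in sender.ts from a flat description.
function toSendEmailInput(email: SimpleEmail) {
  return {
    FromEmailAddress: email.from,
    Destination: { ToAddresses: email.to },
    Content: {
      Simple: {
        Subject: { Data: email.subject },
        Body: { Text: { Data: email.text } },
      },
    },
  };
}

const input = toSendEmailInput({
  from: "email@example.com",
  to: ["email@example.com"],
  subject: "Hello World!",
  text: "Sent from my SST app.",
});
console.log(input.Content.Simple.Subject.Data);
```

You'd pass the result straight to `new SendEmailCommand(...)` as before.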
You might have to check your spam folder since the sender and receiver email address is the same in this case. ![Email sent from SST](../../../../../assets/docs/start/email-sent-from-sst.png) --- ## 5. Deploy your app Now let's deploy your app. ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. Next, for production you can: 1. [Request production access](https://docs.aws.amazon.com/ses/latest/dg/request-production-access.html) for SES 2. And [use your domain](/docs/component/aws/email/) to send emails This'll let you send emails from your domain to any email address. --- ## Connect the console As a next step, you can setup the [SST Console](/docs/console/) to _**git push to deploy**_ your app and monitor it for any issues. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## Express on AWS with SST Create and deploy an Express app to AWS with SST. https://sst.dev/docs/start/aws/express We are going to build an app with Express, add an S3 Bucket for file uploads, and deploy it to AWS in a container with SST. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-express) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- #### Examples We also have a few other Express examples that you can refer to. - [Build a hit counter with Express and Redis](/docs/examples/#aws-express-redis) - [Use service discovery to connect to your Express app](/docs/examples/#aws-cluster-service-discovery) --- ## 1. Create a project Let's start by creating our Express app. ```bash mkdir aws-express && cd aws-express npm init -y npm install express ``` --- #### Init Express Create your app by adding an `index.mjs` to the root. 
```js title="index.mjs"
import express from "express";

const PORT = 80;

const app = express();

app.get("/", (req, res) => {
  res.send("Hello World!")
});

app.listen(PORT, () => {
  console.log(`Server is running on http://localhost:${PORT}`);
});
```

---

#### Init SST

Now let's initialize SST in our app.

```bash
npx sst@latest init
npm install
```

This'll create a `sst.config.ts` file in your project root.

---

## 2. Add a Service

To deploy our Express app, let's add an [AWS Fargate](https://aws.amazon.com/fargate/) container with [Amazon ECS](https://aws.amazon.com/ecs/). Update your `sst.config.ts`.

```js title="sst.config.ts" {10-12}
async run() {
  const vpc = new sst.aws.Vpc("MyVpc");
  const cluster = new sst.aws.Cluster("MyCluster", { vpc });

  new sst.aws.Service("MyService", {
    cluster,
    loadBalancer: {
      ports: [{ listen: "80/http" }],
    },
    dev: {
      command: "node --watch index.mjs",
    },
  });
}
```

This creates a VPC with an ECS Cluster, and adds a Fargate service to it.

:::note
By default, your service is not deployed when running in _dev_.
:::

The `dev.command` tells SST to instead run our Express app locally in dev mode.

---

#### Start dev mode

Run the following to start dev mode. This'll start SST and your Express app.

```bash
npx sst dev
```

Once complete, click on **MyService** in the sidebar and open your Express app in your browser.

---

## 3. Add an S3 Bucket

Let's add an S3 Bucket for file uploads. Add this to your `sst.config.ts` below the `Vpc` component.

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");
```

---

#### Link the bucket

Now, link the bucket to the container.

```ts title="sst.config.ts" {3}
new sst.aws.Service("MyService", {
  // ...
  link: [bucket],
});
```

This will allow us to reference the bucket in our Express app.

---

## 4. Upload a file

We want a `POST` request made to the `/` route to upload a file to our S3 bucket. Let's add this below our _Hello World_ route in our `index.mjs`.
```js title="index.mjs" {4}
app.post("/", upload.single("file"), async (req, res) => {
  const file = req.file;
  const params = {
    Bucket: Resource.MyBucket.name,
    ContentType: file.mimetype,
    Key: file.originalname,
    Body: file.buffer,
  };

  const upload = new Upload({
    params,
    client: s3,
  });

  await upload.done();

  res.status(200).send("File uploaded successfully.");
});
```

:::tip
We are directly accessing our S3 bucket with `Resource.MyBucket.name`.
:::

Add the imports. We'll use the extra ones below.

```js title="index.mjs"
import multer from "multer";
import { Resource } from "sst";
import { Upload } from "@aws-sdk/lib-storage";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import {
  S3Client,
  GetObjectCommand,
  ListObjectsV2Command,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({});
const upload = multer({ storage: multer.memoryStorage() });
```

And install the npm packages.

```bash
npm install multer @aws-sdk/client-s3 @aws-sdk/lib-storage @aws-sdk/s3-request-presigner
```

---

## 5. Download the file

We'll add a `/latest` route that'll download the latest file in our S3 bucket. Let's add this below our upload route in `index.mjs`.

```js title="index.mjs"
app.get("/latest", async (req, res) => {
  const objects = await s3.send(
    new ListObjectsV2Command({
      Bucket: Resource.MyBucket.name,
    }),
  );

  const latestFile = objects.Contents.sort(
    (a, b) => b.LastModified - a.LastModified,
  )[0];

  const command = new GetObjectCommand({
    Key: latestFile.Key,
    Bucket: Resource.MyBucket.name,
  });
  const url = await getSignedUrl(s3, command);

  res.redirect(url);
});
```

---

#### Test your app

To upload a file run the following from your project root.

```bash
curl -F file=@package.json http://localhost:80/
```

This should upload the `package.json`. Now head over to `http://localhost:80/latest` in your browser and it'll show you what you just uploaded.

---

## 6. Deploy your app

To deploy our app we'll first add a `Dockerfile`.

```dockerfile title="Dockerfile"
FROM node:lts-alpine

WORKDIR /app/

COPY package.json /app
RUN npm install

COPY index.mjs /app

ENTRYPOINT ["node", "index.mjs"]
```

This just builds our Express app in a Docker image.
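Before moving on to deployment, one note on the `/latest` route above: picking the newest object is plain date math over the listing, so it's easy to pull out and unit-test. A sketch with a hypothetical `latestByModified` helper (not part of the AWS SDK):

```typescript
interface S3ObjectSummary {
  Key: string;
  LastModified?: Date;
}

// Hypothetical helper: return the most recently modified object, or
// undefined for an empty listing. Objects without LastModified are
// treated as oldest.
function latestByModified(
  objects: S3ObjectSummary[]
): S3ObjectSummary | undefined {
  return objects.reduce<S3ObjectSummary | undefined>((latest, obj) => {
    if (!latest) return obj;
    const a = latest.LastModified?.getTime() ?? 0;
    const b = obj.LastModified?.getTime() ?? 0;
    return b > a ? obj : latest;
  }, undefined);
}

const latest = latestByModified([
  { Key: "a.json", LastModified: new Date("2024-01-01") },
  { Key: "b.json", LastModified: new Date("2024-06-01") },
]);
console.log(latest?.Key);
```

Unlike the in-place `sort` in the route, `reduce` doesn't mutate the listing.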
:::tip You need to be running [Docker Desktop](https://www.docker.com/products/docker-desktop/) to deploy your app. ::: Let's also add a `.dockerignore` file in the root. ```bash title=".dockerignore" node_modules ``` Now to build our Docker image and deploy we run: ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. This'll give the URL of your Express app deployed as a Fargate service. ```bash ✓ Complete MyService: http://jayair-MyServiceLoadBala-592628062.us-east-1.elb.amazonaws.com ``` --- ## Connect the console As a next step, you can setup the [SST Console](/docs/console/) to _**git push to deploy**_ your app and monitor it for any issues. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## Hono on AWS with SST Create and deploy a Hono API in AWS with SST. https://sst.dev/docs/start/aws/hono There are two ways to deploy a [Hono](https://hono.dev) app to AWS with SST. 1. [Serverless](#serverless) 2. [Containers](#containers) We'll use both to build a couple of simple apps below. --- #### Examples We also have a few other Hono examples that you can refer to. - [Enabling streaming in your Hono app](/docs/examples/#aws-hono-streaming) - [Hit counter with Redis and Hono in a container](/docs/examples/#aws-hono-container-with-redis) --- ## Serverless We are going to build a serverless Hono API, add an S3 Bucket for file uploads, and deploy it using a Lambda function. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-hono) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our app. 
```bash
npm create hono@latest aws-hono
cd aws-hono
```

We are picking the **aws-lambda** template.

##### Init SST

Now let's initialize SST in our app.

```bash
npx sst@latest init
npm install
```

Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root.

---

### 2. Add an API

Let's add a Hono API using an AWS Lambda. Update your `sst.config.ts`.

```js title="sst.config.ts"
async run() {
  new sst.aws.Function("Hono", {
    url: true,
    handler: "src/index.handler",
  });
}
```

We are enabling the function URL for this.

---

##### Start dev mode

Start your app in dev mode. This runs your functions [_Live_](/docs/live/).

```bash
npx sst dev
```

This will give you the URL of your API.

```bash frame="none"
✓  Complete
   Hono: https://gyrork2ll35rsuml2yr4lifuqu0tsjft.lambda-url.us-east-1.on.aws
```

---

### 3. Add an S3 Bucket

Let's add an S3 Bucket for file uploads. Update your `sst.config.ts`.

```js title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");
```

##### Link the bucket

Now, link the bucket to the API.

```ts title="sst.config.ts" {3}
new sst.aws.Function("Hono", {
  url: true,
  link: [bucket],
  handler: "src/index.handler",
});
```

---

### 4. Upload a file

We want the `/` route of our API to generate a pre-signed URL to upload a file to our S3 Bucket. Replace the _Hello Hono_ route in `src/index.ts`.

```ts title="src/index.ts" {4}
app.get('/', async (c) => {
  const command = new PutObjectCommand({
    Key: crypto.randomUUID(),
    Bucket: Resource.MyBucket.name,
  });

  return c.text(await getSignedUrl(s3, command));
});
```

:::tip
We are directly accessing our S3 bucket with `Resource.MyBucket.name`.
:::

Install the npm packages.

```bash
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
```

Then add the relevant imports. We'll use the extra ones below.

```ts title="src/index.ts"
import { Resource } from 'sst'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'
import {
  S3Client,
  GetObjectCommand,
  PutObjectCommand,
  ListObjectsV2Command,
} from '@aws-sdk/client-s3'

const s3 = new S3Client();
```

---

### 5.
Download a file We want the `/latest` route of our API to generate a pre-signed URL to download the last uploaded file in our S3 Bucket. Add this to your routes in `src/index.ts`. ```ts title="src/index.ts" app.get('/latest', async (c) => { const objects = await s3.send( new ListObjectsV2Command({ Bucket: Resource.MyBucket.name, }), ); const latestFile = objects.Contents!.sort( (a, b) => (b.LastModified?.getTime() ?? 0) - (a.LastModified?.getTime() ?? 0), )[0]; const command = new GetObjectCommand({ Key: latestFile.Key, Bucket: Resource.MyBucket.name, }); return c.redirect(await getSignedUrl(s3, command)); }); ``` --- ##### Test your app Let's try uploading a file from your project root. Make sure to use your API URL. ```bash curl --upload-file package.json "$(curl https://gyrork2ll35rsuml2yr4lifuqu0tsjft.lambda-url.us-east-1.on.aws)" ``` Now head over to `https://gyrork2ll35rsuml2yr4lifuqu0tsjft.lambda-url.us-east-1.on.aws/latest` in your browser and it'll download the file you just uploaded. --- ### 6. Deploy your app Now let's deploy your app. ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. --- ## Containers We are going to create a Hono API, add an S3 Bucket for file uploads, and deploy it in a container with the `Cluster` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-hono-container) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our app. ```bash npm create hono@latest aws-hono-container cd aws-hono-container ``` We are picking the **nodejs** template. ##### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. --- ### 2. 
Add a Service

To deploy our Hono app in a container, we'll use [AWS Fargate](https://aws.amazon.com/fargate/) with [Amazon ECS](https://aws.amazon.com/ecs/). Replace the `run` function in your `sst.config.ts`.

```js title="sst.config.ts" {10-12}
async run() {
  const vpc = new sst.aws.Vpc("MyVpc");
  const cluster = new sst.aws.Cluster("MyCluster", { vpc });

  new sst.aws.Service("MyService", {
    cluster,
    loadBalancer: {
      ports: [{ listen: "80/http", forward: "3000/http" }],
    },
    dev: {
      command: "npm run dev",
    },
  });
}
```

This creates a VPC, and an ECS Cluster with a Fargate service in it.

:::note
By default, your service is not deployed when running in _dev_.
:::

The `dev.command` tells SST to instead run our Hono app locally in dev mode.

---

#### Start dev mode

Run the following to start dev mode. This'll start SST and your Hono app.

```bash
npx sst dev
```

Once complete, click on **MyService** in the sidebar and open your Hono app in your browser.

---

### 3. Add an S3 Bucket

Let's add an S3 Bucket for file uploads. Add this to your `sst.config.ts` below the `Vpc` component.

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");
```

---

##### Link the bucket

Now, link the bucket to the container.

```ts title="sst.config.ts" {3}
new sst.aws.Service("MyService", {
  // ...
  link: [bucket],
});
```

This will allow us to reference the bucket in our Hono app.

---

### 4. Upload a file

We want a `POST` request made to the `/` route to upload a file to our S3 bucket. Let's add this below our _Hello Hono_ route in our `src/index.ts`.

```ts title="src/index.ts" {6}
app.post('/', async (c) => {
  const body = await c.req.parseBody();
  const file = body['file'] as File;

  const params = {
    Bucket: Resource.MyBucket.name,
    ContentType: file.type,
    Key: file.name,
    Body: file,
  };

  const upload = new Upload({
    params,
    client: s3,
  });

  await upload.done();

  return c.text('File uploaded successfully.');
});
```

Add the imports. We'll use the extra ones below.
```ts title="src/index.ts"
import { Resource } from 'sst'
import { Upload } from '@aws-sdk/lib-storage'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'
import {
  S3Client,
  GetObjectCommand,
  ListObjectsV2Command,
} from '@aws-sdk/client-s3'

const s3 = new S3Client();
```

And install the npm packages.

```bash
npm install @aws-sdk/client-s3 @aws-sdk/lib-storage @aws-sdk/s3-request-presigner
```

---

### 5. Download the file

We'll add a `/latest` route that'll download the latest file in our S3 bucket. Let's add this below our upload route in `src/index.ts`.

```ts title="src/index.ts"
app.get('/latest', async (c) => {
  const objects = await s3.send(
    new ListObjectsV2Command({
      Bucket: Resource.MyBucket.name,
    }),
  );

  const latestFile = objects.Contents!.sort(
    (a, b) =>
      (b.LastModified?.getTime() ?? 0) - (a.LastModified?.getTime() ?? 0),
  )[0];

  const command = new GetObjectCommand({
    Key: latestFile.Key,
    Bucket: Resource.MyBucket.name,
  });

  return c.redirect(await getSignedUrl(s3, command));
});
```

---

#### Test your app

To upload a file run the following from your project root.

```bash
curl -F file=@package.json http://localhost:3000/
```

This should upload the `package.json`. Now head over to `http://localhost:3000/latest` in your browser and it'll show you what you just uploaded.

---

### 6. Deploy your app

To deploy our app we'll first add a `Dockerfile`. This builds our app by running the `build` script we'll add below.
```diff lang="dockerfile" title="Dockerfile"
FROM node:lts-alpine AS base

FROM base AS builder

RUN apk add --no-cache gcompat
WORKDIR /app

COPY package*json tsconfig.json src ./

+ # Copy over generated types
+ COPY sst-env.d.ts* ./

RUN npm ci && \
    npm run build && \
    npm prune --production

FROM base AS runner
WORKDIR /app

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 hono

COPY --from=builder --chown=hono:nodejs /app/node_modules /app/node_modules
COPY --from=builder --chown=hono:nodejs /app/dist /app/dist
COPY --from=builder --chown=hono:nodejs /app/package.json /app/package.json

USER hono
EXPOSE 3000

CMD ["node", "/app/dist/index.js"]
```

This builds our Hono app in a Docker image.

:::tip
You need to be running [Docker Desktop](https://www.docker.com/products/docker-desktop/) to deploy your app.
:::

Let's also add a `.dockerignore` file in the root.

```bash title=".dockerignore"
node_modules
.git
```

To compile our TypeScript file, we'll need to add the following to the `tsconfig.json`.

```diff lang="json" title="tsconfig.json" {4,6}
{
  "compilerOptions": {
    // ...
+   "outDir": "./dist"
  },
+ "exclude": ["node_modules"]
}
```

Install TypeScript.

```bash
npm install typescript --save-dev
```

And add a `build` script to our `package.json`.

```diff lang="json" title="package.json"
"scripts": {
  // ...
+ "build": "tsc"
}
```

Now to build our Docker image and deploy we run:

```bash
npx sst deploy --stage production
```

You can use any stage name here but it's good to create a new stage for production. This'll give the URL of your Hono app deployed as a Fargate service.

```bash
✓  Complete
   MyService: http://prod-MyServiceLoadBalanc-491430065.us-east-1.elb.amazonaws.com
```

---

## Connect the console

As a next step, you can setup the [SST Console](/docs/console/) to _**git push to deploy**_ your app and view logs from it.
![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png)

You can [create a free account](https://console.sst.dev) and connect it to your AWS account.

---

## NestJS on AWS with SST

Create and deploy a NestJS app to AWS with SST.

https://sst.dev/docs/start/aws/nestjs

We are going to build an app with NestJS, add an S3 Bucket for file uploads, and deploy it to AWS in a container with SST.

:::tip[View source]
You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-nestjs-container) of this example in our repo.
:::

Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials).

:::note
You need Node 22.12 or higher for this example to work.
:::

Make sure you have Node 22.12 or higher, or set the `--experimental-require-module` flag. This'll allow NestJS to import the SST SDK.

---

#### Examples

We also have a few other NestJS examples that you can refer to.

- [Build a hit counter with NestJS and Redis](/docs/examples/#aws-nestjs-with-redis)

---

## 1. Create a project

Let's start by creating our Nest app.

```bash
nest new aws-nestjs-container
cd aws-nestjs-container
```

We are picking npm as the package manager.

---

#### Init SST

Now let's initialize SST in our app.

```bash
npx sst@latest init
npm install
```

This'll create a `sst.config.ts` file in your project root. To make sure the types in the `sst.config.ts` are picked up, add the following to the `tsconfig.json`.

```diff lang="json" title="tsconfig.json"
{
+ "include": ["src/**/*", "test/**/*", "sst-env.d.ts"]
}
```

---

## 2. Add a Service

To deploy our Nest app, let's add an [AWS Fargate](https://aws.amazon.com/fargate/) container with [Amazon ECS](https://aws.amazon.com/ecs/). Update your `sst.config.ts`.
```js title="sst.config.ts" {10-12}
async run() {
  const vpc = new sst.aws.Vpc("MyVpc");
  const cluster = new sst.aws.Cluster("MyCluster", { vpc });

  new sst.aws.Service("MyService", {
    cluster,
    loadBalancer: {
      ports: [{ listen: "80/http", forward: "3000/http" }],
    },
    dev: {
      command: "npm run start:dev",
    },
  });
}
```

This creates a VPC with an ECS Cluster, and adds a Fargate service to it.

:::note
By default, your service is not deployed when running in _dev_.
:::

The `dev.command` tells SST to instead run our Nest app locally in dev mode.

---

#### Start dev mode

Run the following to start dev mode. This'll start SST and your Nest app.

```bash
npx sst dev
```

Once complete, click on **MyService** in the sidebar and open your Nest app in your browser.

---

## 3. Add an S3 Bucket

Let's add an S3 Bucket for file uploads. Add this to your `sst.config.ts` below the `Vpc` component.

```ts title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket");
```

---

#### Link the bucket

Now, link the bucket to the container.

```ts title="sst.config.ts" {3}
new sst.aws.Service("MyService", {
  // ...
  link: [bucket],
});
```

This will allow us to reference the bucket in our Nest app.

---

## 4. Upload a file

We want a `POST` request made to the `/` route to upload a file to our S3 bucket. Let's add this below our `getHello` method in our `src/app.controller.ts`.

```ts title="src/app.controller.ts" {5}
@Post()
@UseInterceptors(FileInterceptor('file'))
async uploadFile(@UploadedFile() file: Express.Multer.File): Promise<string> {
  const params = {
    Bucket: Resource.MyBucket.name,
    ContentType: file.mimetype,
    Key: file.originalname,
    Body: file.buffer,
  };

  const upload = new Upload({
    params,
    client: s3,
  });

  await upload.done();

  return 'File uploaded successfully.';
}
```

:::tip
We are directly accessing our S3 bucket with `Resource.MyBucket.name`.
:::

Add the imports. We'll use the extra ones below.
```ts title="src/app.controller.ts"
import { Resource } from 'sst';
import { Upload } from '@aws-sdk/lib-storage';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import {
  S3Client,
  GetObjectCommand,
  ListObjectsV2Command,
} from '@aws-sdk/client-s3';

const s3 = new S3Client({});
```

And install the npm packages.

```bash
npm install -D @types/multer
npm install @aws-sdk/client-s3 @aws-sdk/lib-storage @aws-sdk/s3-request-presigner
```

---

## 5. Download the file

We'll add a `/latest` route that'll download the latest file in our S3 bucket. Let's add this below our `uploadFile` method in `src/app.controller.ts`.

```ts title="src/app.controller.ts"
@Get('latest')
@Redirect('/', 302)
async getLatestFile() {
  const objects = await s3.send(
    new ListObjectsV2Command({
      Bucket: Resource.MyBucket.name,
    }),
  );

  const latestFile = objects.Contents.sort(
    (a, b) => b.LastModified.getTime() - a.LastModified.getTime(),
  )[0];

  const command = new GetObjectCommand({
    Key: latestFile.Key,
    Bucket: Resource.MyBucket.name,
  });
  const url = await getSignedUrl(s3, command);

  return { url };
}
```

---

#### Test your app

To upload a file run the following from your project root.

```bash
curl -F file=@package.json http://localhost:3000/
```

This should upload the `package.json`. Now head over to `http://localhost:3000/latest` in your browser and it'll download what you just uploaded.

---

## 6. Deploy your app

To deploy our app we'll first add a `Dockerfile`.

```dockerfile title="Dockerfile"
FROM node:22

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install

COPY . .

RUN npm run build

EXPOSE 3000
CMD ["node", "dist/main"]
```

This just builds our Nest app in a Docker image.

:::tip
You need to be running [Docker Desktop](https://www.docker.com/products/docker-desktop/) to deploy your app.
:::

Let's also add a `.dockerignore` file in the root.

```bash title=".dockerignore"
dist
node_modules
```

Now to build our Docker image and deploy we run:

```bash
npx sst deploy --stage production
```

You can use any stage name here but it's good to create a new stage for production.
This'll give the URL of your Nest app deployed as a Fargate service. ```bash ✓ Complete MyService: http://jayair-MyServiceLoadBala-592628062.us-east-1.elb.amazonaws.com ``` --- ## Connect the console As a next step, you can setup the [SST Console](/docs/console/) to _**git push to deploy**_ your app and monitor it for any issues. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## Next.js on AWS with SST Create and deploy a Next.js app to AWS with SST. https://sst.dev/docs/start/aws/nextjs There are two ways to deploy a Next.js app to AWS with SST. 1. [Serverless with OpenNext](#serverless) 2. [Containers with Docker](#containers) We'll use both to build a couple of simple apps below. --- #### Examples We also have a few other Next.js examples that you can refer to. - [Adding basic auth to your Next.js app](/docs/examples/#aws-nextjs-basic-auth) - [Enabling streaming in your Next.js app](/docs/examples/#aws-nextjs-streaming) - [Add additional routes to the Next.js CDN](/docs/examples/#aws-nextjs-add-behavior) - [Hit counter with Redis and Next.js in a container](/docs/examples/#aws-nextjs-container-with-redis) --- ## Serverless We are going to create a Next.js app, add an S3 Bucket for file uploads, and deploy it using [OpenNext](https://opennext.js.org) and the `Nextjs` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-nextjs) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our app. ```bash npx create-next-app@latest aws-nextjs cd aws-nextjs ``` We are picking **TypeScript** and not selecting **ESLint**. --- ##### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init ``` Select the defaults and pick **AWS**. 
This'll create a `sst.config.ts` file in your project root.

---

##### Start dev mode

Run the following to start dev mode. This'll start SST and your Next.js app.

```bash
npx sst dev
```

Once complete, click on **MyWeb** in the sidebar and open your Next.js app in your browser.

---

### 2. Add an S3 Bucket

Let's allow public `access` to our S3 Bucket for file uploads. Update your `sst.config.ts`.

```js title="sst.config.ts"
const bucket = new sst.aws.Bucket("MyBucket", {
  access: "public"
});
```

Add this above the `Nextjs` component.

##### Link the bucket

Now, link the bucket to our Next.js app.

```js title="sst.config.ts" {2}
new sst.aws.Nextjs("MyWeb", {
  link: [bucket]
});
```

---

### 3. Create an upload form

Add a form client component in `components/form.tsx`.

```tsx title="components/form.tsx"
"use client";

import styles from "./form.module.css";

export default function Form({ url }: { url: string }) {
  return (
    <form
      className={styles.form}
      onSubmit={async (e) => {
        e.preventDefault();

        const file = (e.target as HTMLFormElement).file.files?.[0] ?? null;

        const image = await fetch(url, {
          body: file,
          method: "PUT",
          headers: {
            "Content-Type": file.type,
            "Content-Disposition": `attachment; filename="${file.name}"`,
          },
        });

        window.location.href = image.url.split("?")[0];
      }}
    >
      <input name="file" type="file" accept="image/png, image/jpeg" />
      <button type="submit">Upload</button>
    </form>
  );
}
```

Add some styles.

```css title="components/form.module.css"
.form {
  padding: 2rem;
  border-radius: 0.5rem;
  background-color: var(--gray-alpha-100);
}

.form input {
  margin-right: 1rem;
}

.form button {
  appearance: none;
  padding: 0.5rem 0.75rem;
  font-weight: 500;
  font-size: 0.875rem;
  border-radius: 0.375rem;
  background-color: transparent;
  font-family: var(--font-geist-sans);
  border: 1px solid var(--gray-alpha-200);
}

.form button:active:enabled {
  background-color: var(--gray-alpha-200);
}
```

---

### 4. Generate a pre-signed URL

When our app loads, we'll generate a pre-signed URL for the file upload and render the form with it. Replace your `Home` component in `app/page.tsx`.

```ts title="app/page.tsx" {6}
export default async function Home() {
  const command = new PutObjectCommand({
    Key: crypto.randomUUID(),
    Bucket: Resource.MyBucket.name,
  });
  const url = await getSignedUrl(new S3Client({}), command);

  return (
    <Form url={url} />
); } ``` We need the `force-dynamic` because we don't want Next.js to cache the pre-signed URL. :::tip We are directly accessing our S3 bucket with `Resource.MyBucket.name`. ::: Add the relevant imports. ```ts title="app/page.tsx" ``` And install the npm packages. ```bash npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner ``` --- #### Test your app Head over to the local Next.js app in your browser, `http://localhost:3000` and try **uploading an image**. You should see it upload and then download the image. --- ### 5. Deploy your app Now let's deploy your app to AWS. ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. Congrats! Your app should now be live! --- ## Containers We are going to create a Next.js app, add an S3 Bucket for file uploads, and deploy it in a container with the `Cluster` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-nextjs-container) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our app. ```bash npx create-next-app@latest aws-nextjs-container cd aws-nextjs-container ``` We are picking **TypeScript** and not selecting **ESLint**. --- ##### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. --- ### 2. Add a Service To deploy our Next.js app in a container, we'll use [AWS Fargate](https://aws.amazon.com/fargate/) with [Amazon ECS](https://aws.amazon.com/ecs/). Replace the `run` function in your `sst.config.ts`. 
```js title="sst.config.ts" {10-12}
async run() {
  const vpc = new sst.aws.Vpc("MyVpc");
  const cluster = new sst.aws.Cluster("MyCluster", { vpc });

  new sst.aws.Service("MyService", {
    cluster,
    loadBalancer: {
      ports: [{ listen: "80/http", forward: "3000/http" }],
    },
    dev: {
      command: "npm run dev",
    },
  });
}
```

This creates a VPC and an ECS Cluster with a Fargate service in it. :::note By default, your service is not deployed when running in _dev_. ::: The `dev.command` tells SST to instead run our Next.js app locally in dev mode. --- #### Start dev mode Run the following to start dev mode. This'll start SST and your Next.js app. ```bash npx sst dev ``` Once complete, click on **MyService** in the sidebar and open your Next.js app in your browser. --- ### 3. Add an S3 Bucket Let's allow public `access` to our S3 Bucket for file uploads. Update your `sst.config.ts`. ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket", { access: "public" }); ``` Add this below the `Vpc` component. --- ##### Link the bucket Now, link the bucket to the container.

```ts title="sst.config.ts" {3}
new sst.aws.Service("MyService", {
  // ...
  link: [bucket],
});
```

This will allow us to reference the bucket in our Next.js app. --- ### 4. Create an upload form Add a form client component in `components/form.tsx`.

```tsx title="components/form.tsx"
"use client";

import styles from "./form.module.css";

export default function Form({ url }: { url: string }) {
  return (
    <form
      className={styles.form}
      onSubmit={async (e) => {
        e.preventDefault();
        const file = (e.target as HTMLFormElement).file.files?.[0] ?? null;
        const image = await fetch(url, {
          body: file,
          method: "PUT",
          headers: {
            "Content-Type": file.type,
            "Content-Disposition": `attachment; filename="${file.name}"`,
          },
        });
        window.location.href = image.url.split("?")[0];
      }}
    >
      <input name="file" type="file" accept="image/png, image/jpeg" />
      <button type="submit">Upload</button>
    </form>
  );
}
```

Add some styles.
```css title="components/form.module.css" .form { padding: 2rem; border-radius: 0.5rem; background-color: var(--gray-alpha-100); } .form input { margin-right: 1rem; } .form button { appearance: none; padding: 0.5rem 0.75rem; font-weight: 500; font-size: 0.875rem; border-radius: 0.375rem; background-color: transparent; font-family: var(--font-geist-sans); border: 1px solid var(--gray-alpha-200); } .form button:active:enabled { background-color: var(--gray-alpha-200); } ``` --- ### 5. Generate a pre-signed URL When our app loads, we'll generate a pre-signed URL for the file upload and render the form with it. Replace your `Home` component in `app/page.tsx`.

```ts title="app/page.tsx" {6}
export const dynamic = "force-dynamic";

export default async function Home() {
  const command = new PutObjectCommand({
    Key: crypto.randomUUID(),
    Bucket: Resource.MyBucket.name,
  });
  const url = await getSignedUrl(new S3Client({}), command);

  return (
    <main>
      <Form url={url} />
    </main>
  );
}
```

We need the `force-dynamic` because we don't want Next.js to cache the pre-signed URL. :::tip We are directly accessing our S3 bucket with `Resource.MyBucket.name`. ::: Add the relevant imports. ```ts title="app/page.tsx" import { Resource } from "sst"; import Form from "@/components/form"; import { getSignedUrl } from "@aws-sdk/s3-request-presigner"; import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3"; ``` And install the npm packages. ```bash npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner ``` --- #### Test your app Head over to the local Next.js app in your browser, `http://localhost:3000`, and try **uploading an image**. You should see it upload and then download the image. --- ### 6. Deploy your app To build our app for production, we'll enable Next.js's [standalone output](https://nextjs.org/docs/pages/api-reference/next-config-js/output#automatically-copying-traced-files). Let's update our `next.config.ts`. ```ts title="next.config.ts" {3} const nextConfig: NextConfig = { /* config options here */ output: "standalone" }; ``` Now to deploy our app we'll add a `Dockerfile`.

```dockerfile title="Dockerfile"
FROM node:lts-alpine AS base

# Stage 1: Install dependencies
FROM base AS deps
WORKDIR /app
COPY package.json package-lock.json* ./
COPY sst-env.d.ts* ./
RUN npm ci

# Stage 2: Build the application
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# If static pages do not need linked resources
RUN npm run build
# If static pages need linked resources
# RUN --mount=type=secret,id=SST_RESOURCE_MyResource,env=SST_RESOURCE_MyResource \
#     npm run build

# Stage 3: Production server
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static

EXPOSE 3000
CMD ["node", "server.js"]
```

This builds our Next.js app in a Docker image. :::tip You need to be running [Docker Desktop](https://www.docker.com/products/docker-desktop/) to deploy your app. ::: If your Next.js app is building static pages that need linked resources, you need to declare them in your `Dockerfile`.
For example, if we need the linked `MyBucket` component from above: ```dockerfile RUN --mount=type=secret,id=SST_RESOURCE_MyBucket,env=SST_RESOURCE_MyBucket npm run build ``` You'll need to do this for each linked resource. Let's also add a `.dockerignore` file in the root. ```bash title=".dockerignore" .git .next node_modules ``` Now to build our Docker image and deploy we run: ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. Congrats! Your app should now be live! --- ## Connect the console As a next step, you can set up the [SST Console](/docs/console/) to _**git push to deploy**_ your app and view logs from it. You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## Nuxt on AWS with SST Create and deploy a Nuxt app to AWS with SST. https://sst.dev/docs/start/aws/nuxt There are two ways to deploy a Nuxt app to AWS with SST. 1. [Serverless](#serverless) 2. [Containers](#containers) We'll use both to build a couple of simple apps below. --- ## Serverless We are going to create a Nuxt app, add an S3 Bucket for file uploads, and deploy it using the `Nuxt` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-nuxt) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our project. ```bash npx nuxi@latest init aws-nuxt cd aws-nuxt ``` We are picking **npm** as the package manager. --- ##### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. It'll also ask you to update your `nuxt.config.ts` with something like this.
```diff lang="ts" title="nuxt.config.ts"
export default defineNuxtConfig({
  compatibilityDate: '2024-04-03',
+ nitro: {
+   preset: 'aws-lambda'
+ },
  devtools: { enabled: true }
})
```

--- ##### Start dev mode Run the following to start dev mode. This'll start SST and your Nuxt app. ```bash npx sst dev ``` Once complete, click on **MyWeb** in the sidebar and open your Nuxt app in your browser. --- ### 2. Add an S3 Bucket Let's allow public `access` to our S3 Bucket for file uploads. Update your `sst.config.ts`. ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket", { access: "public" }); ``` Add this above the `Nuxt` component. ##### Link the bucket Now, link the bucket to our Nuxt app.

```ts title="sst.config.ts" {2}
new sst.aws.Nuxt("MyWeb", {
  link: [bucket],
});
```

--- ### 3. Generate a pre-signed URL When our app loads, we'll call an API that'll generate a pre-signed URL for the file upload. Create a new `server/api/presigned.ts` with the following.

```ts title="server/api/presigned.ts" {4}
export default defineEventHandler(async () => {
  const command = new PutObjectCommand({
    Key: crypto.randomUUID(),
    Bucket: Resource.MyBucket.name,
  });

  return await getSignedUrl(new S3Client({}), command);
})
```

:::tip We are directly accessing our S3 bucket with `Resource.MyBucket.name`. ::: Add the relevant imports. ```ts title="server/api/presigned.ts" import { Resource } from "sst"; import { getSignedUrl } from "@aws-sdk/s3-request-presigner"; import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3"; ``` And install the npm packages. ```bash npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner ``` --- ### 4. Create an upload form Add a form to upload files to the presigned URL. Replace our `app.vue` with: ```vue title="app.vue" ``` Head over to the local app in your browser, `http://localhost:3000`, and try **uploading an image**. You should see it upload and then download the image. --- ### 5. Deploy your app Now let's deploy your app to AWS. ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. Congrats! Your site should now be live!
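The upload forms in these examples redirect to `image.url.split("?")[0]` after the PUT completes. This works because a pre-signed URL is just the object URL with temporary authentication parameters appended as a query string; dropping the query string leaves the plain object URL, which is readable here since the bucket was created with `access: "public"`. A quick sketch with a made-up URL:

```typescript
// Hypothetical pre-signed URL, shaped like what getSignedUrl returns
// (bucket name, key, and parameters below are illustrative).
const presigned =
  "https://my-bucket.s3.us-east-1.amazonaws.com/3f9a2c1e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=900";

// Dropping the query string leaves the plain object URL.
const publicUrl = presigned.split("?")[0];

console.log(publicUrl); // → https://my-bucket.s3.us-east-1.amazonaws.com/3f9a2c1e
```

Note that the pre-signed URL itself expires (15 minutes by default), but the stripped object URL keeps working for as long as the bucket allows public reads.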
![SST Nuxt app](../../../../../assets/docs/start/start-nuxt.png) --- ## Containers We are going to build a hit counter Nuxt app with Redis. We'll deploy it to AWS in a container using the `Cluster` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-nuxt-container) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our project. ```bash npx nuxi@latest init aws-nuxt-container cd aws-nuxt-container ``` We are picking **npm** as the package manager. --- ##### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. It'll also ask you to update your `nuxt.config.ts`. But instead we'll use the **default Node preset**.

```ts title="nuxt.config.ts"
export default defineNuxtConfig({
  compatibilityDate: '2024-11-01',
  devtools: { enabled: true }
})
```

--- ### 2. Add a Cluster To deploy our Nuxt app in a container, we'll use [AWS Fargate](https://aws.amazon.com/fargate/) with [Amazon ECS](https://aws.amazon.com/ecs/). Replace the `run` function in your `sst.config.ts`.

```js title="sst.config.ts" {10-12}
async run() {
  const vpc = new sst.aws.Vpc("MyVpc", { bastion: true });
  const cluster = new sst.aws.Cluster("MyCluster", { vpc });

  new sst.aws.Service("MyService", {
    cluster,
    loadBalancer: {
      ports: [{ listen: "80/http", forward: "3000/http" }],
    },
    dev: {
      command: "npm run dev",
    },
  });
}
```

This creates a VPC with a bastion host, an ECS Cluster, and adds a Fargate service to it. :::note By default, your service is not deployed when running in _dev_. ::: The `dev.command` tells SST to instead run our Nuxt app locally in dev mode. --- ### 3. Add Redis Let's add an [Amazon ElastiCache](https://aws.amazon.com/elasticache/) Redis cluster.
Add this below the `Vpc` component in your `sst.config.ts`. ```js title="sst.config.ts" const redis = new sst.aws.Redis("MyRedis", { vpc }); ``` This shares the same VPC as our ECS cluster. --- #### Link Redis Now, link the Redis cluster to the container.

```ts title="sst.config.ts" {3}
new sst.aws.Service("MyService", {
  // ...
  link: [redis],
});
```

This will allow us to reference the Redis cluster in our Nuxt app. --- #### Install a tunnel Since our Redis cluster is in a VPC, we'll need a tunnel to connect to it from our local machine. ```bash "sudo" sudo npx sst tunnel install ``` This needs _sudo_ to create a network interface on your machine. You'll only need to do this once on your machine. --- #### Start dev mode Start your app in dev mode. ```bash npx sst dev ``` This will deploy your app, start a tunnel in the **Tunnel** tab, and run your Nuxt app locally in the **MyServiceDev** tab. --- ### 4. Connect to Redis We want the `/` route to increment a counter in our Redis cluster. Let's start by installing the npm package we'll use. ```bash npm install ioredis ``` We'll call an API that'll increment the counter when the app loads. Create a new `server/api/counter.ts` with the following.

```ts title="server/api/counter.ts" {5}
import { Resource } from "sst";
import { Cluster } from "ioredis";

export default defineEventHandler(async () => {
  const redis = new Cluster(
    [{ host: Resource.MyRedis.host, port: Resource.MyRedis.port }],
    {
      dnsLookup: (address, callback) => callback(null, address),
      redisOptions: {
        tls: {},
        username: Resource.MyRedis.username,
        password: Resource.MyRedis.password,
      },
    }
  );

  return await redis.incr("counter");
})
```

:::tip We are directly accessing our Redis cluster with `Resource.MyRedis.*`. ::: Let's update our component to show the counter. Replace our `app.vue` with: ```vue title="app.vue" ``` --- #### Test your app Let's head over to `http://localhost:3000` in your browser and it'll show the current hit counter. You should see it increment every time you **refresh the page**. --- ### 5. Deploy your app To deploy our app we'll add a `Dockerfile`.
```dockerfile title="Dockerfile"
FROM node:lts AS base

WORKDIR /src

# Build
FROM base AS build

COPY --link package.json package-lock.json ./
RUN npm install

COPY --link . .

RUN npm run build

# Run
FROM base

ENV PORT=3000
ENV NODE_ENV=production

COPY --from=build /src/.output /src/.output

CMD [ "node", ".output/server/index.mjs" ]
```
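One detail worth noting about the counter route above: Redis `INCR` is atomic and returns the value *after* incrementing, so concurrent requests each see a distinct count. A minimal in-memory stand-in (illustrative only, not ioredis) shows that contract:

```typescript
// In-memory stand-in that mimics the incr() contract used in counter.ts
// (illustrative only; the real app talks to ElastiCache via ioredis).
class CounterStore {
  private store = new Map<string, number>();

  async incr(key: string): Promise<number> {
    const next = (this.store.get(key) ?? 0) + 1;
    this.store.set(key, next);
    return next; // like Redis INCR, returns the value after incrementing
  }
}

const counter = new CounterStore();
counter.incr("counter").then((n) => console.log(n)); // first call logs 1
```

In the real deployment the atomicity comes from Redis executing each `INCR` as a single server-side operation, which is what makes the counter safe across multiple Fargate tasks.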
:::tip You need to be running [Docker Desktop](https://www.docker.com/products/docker-desktop/) to deploy your app. ::: Let's also add a `.dockerignore` file in the root. ```bash title=".dockerignore" node_modules ``` Now to build our Docker image and deploy we run: ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. Congrats! Your app should now be live! ![SST Nuxt container app](../../../../../assets/docs/start/start-nuxt-container.png) --- ## Connect the console As a next step, you can set up the [SST Console](/docs/console/) to _**git push to deploy**_ your app and monitor it for any issues. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## Prisma with Amazon RDS and SST Use Prisma and SST to manage and deploy your Amazon Postgres RDS database. https://sst.dev/docs/start/aws/prisma We are going to use Prisma and SST to deploy an Amazon Postgres RDS database and connect to it from an Express app in a container. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-prisma) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- #### Examples We also have a few other Prisma and Postgres examples that you can refer to. - [Use Prisma in a Lambda function](/docs/examples/#prisma-in-lambda) - [Run Postgres in a local Docker container for dev](/docs/examples/#aws-postgres-local) --- ## 1. Create a project Let's start by creating a Node.js app. ```bash mkdir aws-prisma && cd aws-prisma npm init -y ``` We'll install Prisma, TypeScript, and Express. ```bash npm install prisma typescript ts-node @types/node --save-dev npm install express ``` Let's initialize TypeScript and Prisma.
```bash npx tsc --init npx prisma init ``` This will create a `prisma` directory with a `schema.prisma`. --- #### Init Express Create your Express app by adding an `index.mjs` to the root.

```js title="index.mjs"
import express from "express";

const PORT = 80;

const app = express();

app.get("/", (req, res) => {
  res.send("Hello World!")
});

app.listen(PORT, () => {
  console.log(`Server is running on http://localhost:${PORT}`);
});
```

--- #### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. --- ## 2. Add a Postgres db Let's add a Postgres database using [Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html). This needs a VPC.

```ts title="sst.config.ts" {5}
async run() {
  const vpc = new sst.aws.Vpc("MyVpc", { bastion: true });
  const rds = new sst.aws.Postgres("MyPostgres", { vpc });

  const DATABASE_URL = $interpolate`postgresql://${rds.username}:${rds.password}@${rds.host}:${rds.port}/${rds.database}`;
},
```

The `bastion` option will let us connect to the VPC from our local machine. We are also building the `DATABASE_URL` variable using the outputs from our RDS database. We'll use this later. --- #### Start Prisma Studio When you run SST in dev it can start other dev processes for you. In this case we want to start Prisma Studio. Add this below the `DATABASE_URL` variable.

```ts title="sst.config.ts" {2}
new sst.x.DevCommand("Prisma", {
  environment: { DATABASE_URL },
  dev: {
    autostart: false,
    command: "npx prisma studio",
  },
});
```

This will run the given command in dev. --- ## 3. Add a Cluster To deploy our Express app, let's add an [AWS Fargate](https://aws.amazon.com/fargate/) container with [Amazon ECS](https://aws.amazon.com/ecs/). Add this at the end of your `sst.config.ts`.
```ts title="sst.config.ts" {6}
const cluster = new sst.aws.Cluster("MyCluster", { vpc });

new sst.aws.Service("MyService", {
  cluster,
  link: [rds],
  environment: { DATABASE_URL },
  loadBalancer: {
    ports: [{ listen: "80/http" }],
  },
  dev: {
    command: "node --watch index.mjs",
  },
});
```

This uses the same VPC and adds an ECS Cluster with a Fargate service in it. :::note By default, your service is not deployed when running in _dev_. ::: The `dev.command` tells SST to instead run our Express app locally in dev mode. --- #### Install a tunnel Since our database cluster is in a VPC, we'll need a tunnel to connect to it from our local machine. ```bash "sudo" sudo npx sst tunnel install ``` This needs _sudo_ to create a network interface on your machine. You'll only need to do this once on your machine. --- #### Start dev mode Start your app in dev mode. This will take a few minutes. ```bash npx sst dev ``` It'll deploy your app, start a tunnel in the **Tunnel** tab, run your Express app locally in the **MyServiceDev** tab, and have your Prisma Studio in the **Studio** tab. We are setting Prisma Studio to not auto-start since it pops up a browser window. You can start it by clicking on it and hitting _Enter_. --- ## 4. Create a schema Let's create a simple schema. Add this to your `schema.prisma`.

```prisma title="schema.prisma"
model User {
  id    Int     @id @default(autoincrement())
  name  String?
  email String  @unique
}
```

--- #### Generate a migration We'll now generate a migration for this schema and apply it. In a separate terminal run: ```bash npx sst shell --target Prisma -- npx prisma migrate dev --name init ``` We are wrapping the `prisma migrate dev --name init` command in `sst shell --target Prisma` because we want this command to have access to the `DATABASE_URL` defined in our `sst.config.ts`. The `Prisma` target is coming from the `new sst.x.DevCommand("Prisma")` component defined above. :::tip You need a tunnel to connect to your database.
::: This needs the tunnel to connect to the database, so you should have `sst dev` running in a separate terminal. ```bash npx sst tunnel ``` Alternatively, you can run just the tunnel using the above command. --- #### Prisma Studio To see our schema in action we can open the Prisma Studio. Head over to the **Studio** tab in your `sst dev` session and hit _Enter_ to start it. ![Initial Prisma Studio with SST](../../../../../assets/docs/start/initial-prisma-studio-with-sst.png) --- ## 5. Query the database Running the `migrate dev` command also installs the Prisma Client in our project. So let's use that to query our database. Replace the `/` route in your `index.mjs` with:

```js title="index.mjs"
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

app.get("/", async (_req, res) => {
  const user = await prisma.user.create({
    data: {
      name: "Alice",
      email: `alice-${crypto.randomUUID()}@example.com`
    },
  });
  res.send(JSON.stringify(user));
});
```

--- #### Test your app Let's head over to `http://localhost:80` in your browser and it'll show you the new user that was created. ![User created with Prisma in SST](../../../../../assets/docs/start/user-created-with-prisma-in-sst.png) You should see this in the Prisma Studio as well. --- ## 6. Deploy your app To deploy our app we'll first add a `Dockerfile`. ```dockerfile title="Dockerfile" FROM node:18-bullseye-slim WORKDIR /app/ COPY package.json index.mjs prisma /app/ RUN npm install RUN npx prisma generate ENTRYPOINT ["node", "index.mjs"] ``` This just builds our Express app in a Docker image and runs the `prisma generate` command. :::tip You need to be running [Docker Desktop](https://www.docker.com/products/docker-desktop/) to deploy your app. ::: Let's also add a `.dockerignore` file in the root. ```bash title=".dockerignore" node_modules ``` Now to build our Docker image and deploy we run: ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production.
This'll give the URL of your Express app deployed as a Fargate service. ```bash ✓ Complete MyService: http://jayair-MyServiceLoadBala-592628062.us-east-1.elb.amazonaws.com ``` --- ## Connect the console As a next step, you can set up the [SST Console](/docs/console/) to _**git push to deploy**_ your app and monitor it for any issues. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## React Router on AWS with SST Create and deploy a React Router v7 app to AWS with SST. https://sst.dev/docs/start/aws/react We are going to create a React Router v7 app in _Framework mode_, add an S3 Bucket for file uploads, and deploy it using the `React` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-react-router) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our project. ```bash npx create-react-router@latest aws-react-router cd aws-react-router ``` We are picking all the default options. --- ##### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. ```ts title="sst.config.ts" async run() { new sst.aws.React("MyWeb"); } ``` --- ##### Start dev mode Run the following to start dev mode. This'll start SST and your React Router app. ```bash npx sst dev ``` Once complete, click on **MyWeb** in the sidebar and open your React Router app in your browser. --- ### 2. Add an S3 Bucket Let's allow public `access` to our S3 Bucket for file uploads. Update your `sst.config.ts`. ```js title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket", { access: "public" }); ``` Add this above the `React` component.
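A note on names: `MyBucket` is the component's logical name; SST generates the physical S3 bucket name to keep it globally unique, which is why app code reads it through the linked `Resource` object instead of hardcoding it. A rough sketch of the idea (the shapes and values below are illustrative, not SST's actual internals):

```typescript
// Illustrative model: linking exposes each component's outputs to app
// code under a typed `Resource` object (the value here is made up).
interface LinkedResources {
  MyBucket: { name: string };
}

const Resource: LinkedResources = {
  // In a real app SST fills this in from the deployed bucket's outputs.
  MyBucket: { name: "myapp-production-mybucket-ab12cd" },
};

// App code reads the generated name instead of hardcoding it.
console.log(Resource.MyBucket.name);
```

Because the physical name is generated per app and stage, the same code works unchanged across your dev, staging, and production deployments.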
##### Link the bucket Now, link the bucket to our React Router app.

```js title="sst.config.ts" {2}
new sst.aws.React("MyWeb", {
  link: [bucket],
});
```

--- ### 3. Create an upload form Add the upload form client in `app/routes/home.tsx`. Replace the `Home` component with:

```tsx title="app/routes/home.tsx"
export default function Home({
  loaderData,
}: Route.ComponentProps) {
  const { url } = loaderData;
  return (
    <main>
      <h1>Welcome to React Router!</h1>
      <form
        onSubmit={async (e) => {
          e.preventDefault();
          const file = (e.target as HTMLFormElement).file.files?.[0]!;
          const image = await fetch(url, {
            body: file,
            method: "PUT",
            headers: {
              "Content-Type": file.type,
              "Content-Disposition": `attachment; filename="${file.name}"`,
            },
          });
          window.location.href = image.url.split("?")[0];
        }}
      >
        <input name="file" type="file" accept="image/png, image/jpeg" />
        <button type="submit">Upload</button>
      </form>
    </main>
  );
}
```

--- ### 4. Generate a pre-signed URL When our app loads, we'll generate a pre-signed URL for the file upload and use it in the form. Add this above the `Home` component in `app/routes/home.tsx`.

```tsx title="app/routes/home.tsx" {4}
export async function loader() {
  const command = new PutObjectCommand({
    Key: crypto.randomUUID(),
    Bucket: Resource.MyBucket.name,
  });
  const url = await getSignedUrl(new S3Client({}), command);
  return { url };
}
```

:::tip We are directly accessing our S3 bucket with `Resource.MyBucket.name`. ::: Add the relevant imports. ```tsx title="app/routes/home.tsx" import { Resource } from "sst"; import { getSignedUrl } from "@aws-sdk/s3-request-presigner"; import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3"; ``` And install the npm packages. ```bash npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner ``` Head over to the local React Router app in your browser, `http://localhost:5173`, and try **uploading an image**. You should see it upload and then download the image. ![SST React Router app local](../../../../../assets/docs/start/start-react-router-start-local.png) --- ### 5. Deploy your app Now let's deploy your app to AWS. ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. Congrats! Your site should now be live! --- ## Connect the console As a next step, you can set up the [SST Console](/docs/console/) to _**git push to deploy**_ your app and view logs from it. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## Realtime apps in AWS with SST Use SST to build and deploy a realtime chat app to AWS. https://sst.dev/docs/start/aws/realtime We are going to use SST to build and deploy a realtime chat app on AWS. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-realtime-nextjs) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ## 1.
Create a project Let's start by creating a Node.js app. ```bash npx create-next-app@latest my-realtime-app cd my-realtime-app ``` --- #### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. --- #### Start dev mode Run the following to start dev mode. This'll start SST and your Next.js app. ```bash npx sst dev ``` Once complete, click on **MyWeb** in the sidebar and open your Next.js app in your browser. --- ## 2. Add Realtime Let's add the `Realtime` component and link it to the Next.js app. Update your `sst.config.ts`.

```js title="sst.config.ts" {7}
async run() {
  const realtime = new sst.aws.Realtime("MyRealtime", {
    authorizer: "authorizer.handler",
  });

  new sst.aws.Nextjs("MyWeb", {
    link: [realtime],
  });
},
```

This component allows us to set up _topics_ that can be subscribed to. The `authorizer` function can be used to control who has access to these. --- #### Add an authorizer Add the following to a new `authorizer.ts` file in your project root.

```ts title="authorizer.ts"
import { Resource } from "sst";
import { realtime } from "sst/aws/realtime";

export const handler = realtime.authorizer(async (token) => {
  const prefix = `${Resource.App.name}/${Resource.App.stage}`;
  const isValid = token === "PLACEHOLDER_TOKEN";

  return isValid
    ? {
        publish: [`${prefix}/*`],
        subscribe: [`${prefix}/*`],
      }
    : {
        publish: [],
        subscribe: [],
      };
});
```

Here we are saying that a user with a valid token has access to publish and subscribe to any topic namespaced under the app and stage name. :::tip Namespace your topics with the app and stage name to keep them unique. ::: In production, we would validate the given token against our database or auth provider. --- ## 3. Create the chat UI Now let's create a chat interface in our app. Create a new component in `components/chat.tsx` with the following.
```tsx title="components/chat.tsx" {28-32}
"use client";

import mqtt from "mqtt";
import { useState } from "react";
import styles from "./chat.module.css";

export default function Chat(
  { topic, endpoint, authorizer }: { topic: string, endpoint: string, authorizer: string }
) {
  const [messages, setMessages] = useState<string[]>([]);
  const [connection, setConnection] = useState<mqtt.MqttClient | null>(null);

  return (
    <div className={styles.chat}>
      {connection && messages.length > 0 &&
        <div className={styles.messages}>
          {messages.map((msg, i) => (
            <div key={i}>{JSON.parse(msg).message}</div>
          ))}
        </div>
      }
      <form
        className={styles.form}
        onSubmit={(e) => {
          e.preventDefault();
          const input = (e.target as HTMLFormElement).message;
          connection!.publish(
            topic,
            JSON.stringify({ message: input.value }),
            { qos: 1 }
          );
          input.value = "";
        }}
      >
        <input required type="text" name="message" placeholder="Send a message" />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```

Here we are going to publish a message that's submitted to the given topic. We'll create the realtime connection below. Add some styles. ```css title="components/chat.module.css" .chat { gap: 1rem; width: 30rem; display: flex; padding: 1rem; flex-direction: column; border-radius: var(--border-radius); background-color: rgba(var(--callout-rgb), 0.5); border: 1px solid rgba(var(--callout-border-rgb), 0.3); } .messages { padding-bottom: 0.125rem; border-bottom: 1px solid rgba(var(--callout-border-rgb), 0.3); } .messages > div { line-height: 1.1; padding-bottom: 0.625rem; } .form { display: flex; gap: 0.625rem; } .form input { flex: 1; font-size: 0.875rem; padding: 0.5rem 0.75rem; border-radius: calc(1rem - var(--border-radius)); border: 1px solid rgba(var(--callout-border-rgb), 1); } .form button { font-weight: 500; font-size: 0.875rem; padding: 0.5rem 0.75rem; border-radius: calc(1rem - var(--border-radius)); background: linear-gradient( to bottom right, rgba(var(--tile-start-rgb), 1), rgba(var(--tile-end-rgb), 1) ); border: 1px solid rgba(var(--callout-border-rgb), 1); } .form button:active:enabled { background: linear-gradient( to top left, rgba(var(--tile-start-rgb), 1), rgba(var(--tile-end-rgb), 1) ); } ``` Install the npm package. ```bash npm install mqtt ``` --- #### Add to the page Let's use the component in our page. Replace the `Home` component in `app/page.tsx`.

```tsx title="app/page.tsx" {9-11}
import { Resource } from "sst";
import Chat from "@/components/chat";

export default function Home() {
  const topic = "sst-chat";

  return (
    <Chat
      topic={`${Resource.App.name}/${Resource.App.stage}/${topic}`}
      endpoint={Resource.MyRealtime.endpoint}
      authorizer={Resource.MyRealtime.authorizer}
    />
); } ``` :::tip We are directly accessing our Realtime component with `Resource.MyRealtime.*`. ::: Here we are going to publish and subscribe to a _topic_ called `sst-chat`, namespaced under the name of the app and the stage our app is deployed to. --- ## 4. Create a connection When our chat component loads, it'll create a new connection to our realtime service. Add the following below the imports in `components/chat.tsx`.

```ts title="components/chat.tsx"
function createConnection(endpoint: string, authorizer: string) {
  return mqtt.connect(`wss://${endpoint}/mqtt?x-amz-customauthorizer-name=${authorizer}`, {
    protocolVersion: 5,
    manualConnect: true,
    username: "", // Must be empty for the authorizer
    password: "PLACEHOLDER_TOKEN", // Passed as the token to the authorizer
    clientId: `client_${window.crypto.randomUUID()}`,
  });
}
```

We are using a _placeholder_ token here. In production this might be a user's session token. Now let's subscribe to it and save the messages we receive. Add this to the `Chat` component.

```ts title="components/chat.tsx"
useEffect(() => {
  const connection = createConnection(endpoint, authorizer);

  connection.on("connect", async () => {
    try {
      await connection.subscribeAsync(topic, { qos: 1 });
      setConnection(connection);
    } catch (e) { }
  });
  connection.on("message", (_fullTopic, payload) => {
    const message = new TextDecoder("utf8").decode(new Uint8Array(payload));
    setMessages((prev) => [...prev, message]);
  });
  connection.on("error", console.error);
  connection.connect();

  return () => {
    connection.end();
    setConnection(null);
  };
}, [topic, endpoint, authorizer]);
```

:::note If you have a new AWS account, you might have to wait a few minutes before your realtime service is ready to go. ::: Head over to the local Next.js app in your browser, `http://localhost:3000`, and try **sending a message**. You should see it appear right away. You can also open a new browser window and see them appear in both places. --- ## 5.
Deploy your app Now let's deploy your app to AWS. ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. Congrats! Your app should now be live! ![SST Realtime Next.js app](../../../../../assets/docs/start/sst-realtime-nextjs-app.png) Next you can: - Let users create chat rooms - Save them to a database - Only show messages from the right chat rooms This'll help you build realtime apps for production. --- ## Connect the console As a next step, you can set up the [SST Console](/docs/console/) to _**git push to deploy**_ your app and monitor it for any issues. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## Remix on AWS with SST Create and deploy a Remix app to AWS with SST. https://sst.dev/docs/start/aws/remix There are two ways to deploy a Remix app to AWS with SST. 1. [Serverless](#serverless) 2. [Containers](#containers) We'll use both to build a couple of simple apps below. --- #### Examples We also have a few other Remix examples that you can refer to. - [Enabling streaming in your Remix app](/docs/examples/#aws-remix-streaming) - [Hit counter with Redis and Remix in a container](/docs/examples/#aws-remix-container-with-redis) --- ## Serverless We are going to create a Remix app, add an S3 Bucket for file uploads, and deploy it using the `Remix` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-remix) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our project. ```bash npx create-remix@latest aws-remix cd aws-remix ``` We are picking all the default options. --- ##### Init SST Now let's initialize SST in our app.
```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. --- ##### Start dev mode Run the following to start dev mode. This'll start SST and your Remix app. ```bash npx sst dev ``` Once complete, click on **MyWeb** in the sidebar and open your Remix app in your browser. --- ### 2. Add an S3 Bucket Let's allow public `access` to our S3 Bucket for file uploads. Update your `sst.config.ts`. ```js title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket", { access: "public" }); ``` Add this above the `Remix` component. ##### Link the bucket Now, link the bucket to our Remix app. ```js title="sst.config.ts" {2} new sst.aws.Remix("MyWeb", { link: [bucket], }); ``` --- ### 3. Create an upload form Add the upload form client in `app/routes/_index.tsx`. Replace the `Index` component with: ```tsx title="app/routes/_index.tsx" const data = useLoaderData(); return (
<div>
  <h1>Welcome to Remix</h1>
  <form
    onSubmit={async (e) => {
      e.preventDefault();
      const file = (e.target as HTMLFormElement).file.files?.[0]!;
      const image = await fetch(data.url, {
        body: file,
        method: "PUT",
        headers: {
          "Content-Type": file.type,
          "Content-Disposition": `attachment; filename="${file.name}"`,
        },
      });
      window.location.href = image.url.split("?")[0];
    }}
  >
    <input name="file" type="file" accept="image/png, image/jpeg" />
    <button type="submit">Upload</button>
  </form>
</div>
); } ``` --- ### 4. Generate a pre-signed URL When our app loads, we'll generate a pre-signed URL for the file upload and use it in the form. Add this above the `Index` component in `app/routes/_index.tsx`. ```tsx title="app/routes/_index.tsx" {4} export async function loader() { const command = new PutObjectCommand({ Key: crypto.randomUUID(), Bucket: Resource.MyBucket.name, }); const url = await getSignedUrl(new S3Client({}), command); return { url }; } ``` :::tip We are directly accessing our S3 bucket with `Resource.MyBucket.name`. ::: Add the relevant imports. ```tsx title="app/routes/_index.tsx"
import { Resource } from "sst";
import { useLoaderData } from "@remix-run/react";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
``` And install the npm packages. ```bash npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner ``` Head over to the local Remix app in your browser, `http://localhost:5173` and try **uploading an image**. You should see it upload and then download the image. ![SST Remix app local](../../../../../assets/docs/start/start-remix-local.png) --- ### 5. Deploy your app Now let's deploy your app to AWS. ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. Congrats! Your site should now be live! --- ## Containers We are going to create a Remix app, add an S3 Bucket for file uploads, and deploy it in a container with the `Cluster` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-remix-container) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our app. ```bash npx create-remix@latest aws-remix-container cd aws-remix-container ``` We are picking all the default options. --- ##### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. --- ### 2.
Add a Service To deploy our Remix app in a container, we'll use [AWS Fargate](https://aws.amazon.com/fargate/) with [Amazon ECS](https://aws.amazon.com/ecs/). Replace the `run` function in your `sst.config.ts`. ```js title="sst.config.ts" {10-12} async run() { const vpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); new sst.aws.Service("MyService", { cluster, loadBalancer: { ports: [{ listen: "80/http", forward: "3000/http" }], }, dev: { command: "npm run dev", }, }); } ``` This creates a VPC and an ECS Cluster with a Fargate service in it. :::note By default, your service is not deployed when running in _dev_. ::: The `dev.command` tells SST to instead run our Remix app locally in dev mode. --- #### Start dev mode Run the following to start dev mode. This'll start SST and your Remix app. ```bash npx sst dev ``` Once complete, click on **MyService** in the sidebar and open your Remix app in your browser. --- ### 3. Add an S3 Bucket Let's allow public `access` to our S3 Bucket for file uploads. Update your `sst.config.ts`. ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket", { access: "public" }); ``` Add this below the `Vpc` component. --- ##### Link the bucket Now, link the bucket to the container. ```ts title="sst.config.ts" {3} new sst.aws.Service("MyService", { // ... link: [bucket], }); ``` This will allow us to reference the bucket in our Remix app. --- ### 4. Create an upload form Add the upload form client in `app/routes/_index.tsx`. Replace the `Index` component with: ```tsx title="app/routes/_index.tsx" const data = useLoaderData(); return (
<div>
  <h1>Welcome to Remix</h1>
  <form
    onSubmit={async (e) => {
      e.preventDefault();
      const file = (e.target as HTMLFormElement).file.files?.[0]!;
      const image = await fetch(data.url, {
        body: file,
        method: "PUT",
        headers: {
          "Content-Type": file.type,
          "Content-Disposition": `attachment; filename="${file.name}"`,
        },
      });
      window.location.href = image.url.split("?")[0];
    }}
  >
    <input name="file" type="file" accept="image/png, image/jpeg" />
    <button type="submit">Upload</button>
  </form>
</div>
); } ``` --- ### 5. Generate a pre-signed URL When our app loads, we'll generate a pre-signed URL for the file upload and use it in the form. Add this above the `Index` component in `app/routes/_index.tsx`. ```tsx title="app/routes/_index.tsx" {4} export async function loader() { const command = new PutObjectCommand({ Key: crypto.randomUUID(), Bucket: Resource.MyBucket.name, }); const url = await getSignedUrl(new S3Client({}), command); return { url }; } ``` :::tip We are directly accessing our S3 bucket with `Resource.MyBucket.name`. ::: Add the relevant imports. ```tsx title="app/routes/_index.tsx"
import { Resource } from "sst";
import { useLoaderData } from "@remix-run/react";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
``` And install the npm packages. ```bash npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner ``` Head over to the local Remix app in your browser, `http://localhost:5173` and try **uploading an image**. You should see it upload and then download the image. ![SST Remix app local](../../../../../assets/docs/start/start-remix-local-container.png) --- ### 6. Deploy your app To deploy our app we'll add a `Dockerfile`. ```dockerfile title="Dockerfile"
FROM node:lts-alpine AS base
ENV NODE_ENV production

# Stage 1: Install all node_modules, including dev dependencies
FROM base AS deps
WORKDIR /myapp
ADD package.json ./
RUN npm install --include=dev

# Stage 2: Setup production node_modules
FROM base AS production-deps
WORKDIR /myapp
COPY --from=deps /myapp/node_modules /myapp/node_modules
ADD package.json ./
RUN npm prune --omit=dev

# Stage 3: Build the app
FROM base AS build
WORKDIR /myapp
COPY --from=deps /myapp/node_modules /myapp/node_modules
ADD . .
RUN npm run build

# Stage 4: Build the production image
FROM base
WORKDIR /myapp
COPY --from=production-deps /myapp/node_modules /myapp/node_modules
COPY --from=build /myapp/build /myapp/build
COPY --from=build /myapp/public /myapp/public
ADD . .
CMD ["npm", "start"]
``` This builds our Remix app in a Docker image. :::tip You need to be running [Docker Desktop](https://www.docker.com/products/docker-desktop/) to deploy your app.
::: Let's also add a `.dockerignore` file in the root. ```bash title=".dockerignore" node_modules .cache build public/build ``` Now to build our Docker image and deploy we run: ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. Congrats! Your app should now be live! --- ## Connect the console As a next step, you can setup the [SST Console](/docs/console/) to _**git push to deploy**_ your app and view logs from it. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## SolidStart on AWS with SST Create and deploy a SolidStart app to AWS with SST. https://sst.dev/docs/start/aws/solid There are two ways to deploy SolidStart apps to AWS with SST. 1. [Serverless](#serverless) 2. [Containers](#containers) We'll use both to build a couple of simple apps below. --- #### Examples We also have a few other SolidStart examples that you can refer to. - [Adding a WebSocket endpoint to SolidStart](/docs/examples/#aws-solidstart-websocket-endpoint) --- ## Serverless We are going to create a SolidStart app, add an S3 Bucket for file uploads, and deploy it using the `SolidStart` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-solid) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our project. ```bash npm init solid@latest aws-solid-start cd aws-solid-start ``` We are picking the **SolidStart**, **_basic_**, and **_TypeScript_** options. --- ##### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. 
It'll also ask you to update your `app.config.ts` with something like this. ```diff lang="ts" title="app.config.ts" + server: { + preset: "aws-lambda", + awsLambda: { + streaming: true, + }, + }, }); ``` --- ##### Start dev mode Run the following to start dev mode. This'll start SST and your SolidStart app. ```bash npx sst dev ``` Once complete, click on **MyWeb** in the sidebar and open your SolidStart app in your browser. --- ### 2. Add an S3 Bucket Let's allow public `access` to our S3 Bucket for file uploads. Update your `sst.config.ts`. ```js title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket", { access: "public" }); ``` Add this above the `SolidStart` component. ##### Link the bucket Now, link the bucket to our SolidStart app. ```js title="sst.config.ts" {2} new sst.aws.SolidStart("MyWeb", { link: [bucket], }); ``` --- ### 3. Generate a pre-signed URL When our app loads, we'll generate a pre-signed URL for the file upload and use it in our form. Add this below the imports in `src/routes/index.tsx`. ```ts title="src/routes/index.tsx" {5} async function presignedUrl() { "use server"; const command = new PutObjectCommand({ Key: crypto.randomUUID(), Bucket: Resource.MyBucket.name, }); return await getSignedUrl(new S3Client({}), command); } load: () => presignedUrl(), }; ``` :::tip We are directly accessing our S3 bucket with `Resource.MyBucket.name`. ::: Add the relevant imports. ```tsx title="src/app.tsx" ``` And install the npm packages. ```bash npm install @solidjs/router @aws-sdk/client-s3 @aws-sdk/s3-request-presigner ``` --- ### 4. Create an upload form Add a form to upload files to the presigned URL. Replace the `Home` component in `src/routes/index.tsx` with: ```tsx title="src/routes/index.tsx" const url = createAsync(() => presignedUrl()); return (
<main>
  <h1>Hello world!</h1>
  <form
    onSubmit={async (e) => {
      e.preventDefault();
      const file = (e.target as HTMLFormElement).file.files?.[0]!;
      const image = await fetch(url() as string, {
        body: file,
        method: "PUT",
        headers: {
          "Content-Type": file.type,
          "Content-Disposition": `attachment; filename="${file.name}"`,
        },
      });
      window.location.href = image.url.split("?")[0];
    }}
  >
    <input name="file" type="file" accept="image/png, image/jpeg" />
    <button type="submit">Upload</button>
  </form>
</main>
); } ``` Head over to the local app in your browser, `http://localhost:3000` and try **uploading an image**. You should see it upload and then download the image. --- ### 5. Deploy your app Now let's deploy your app to AWS. ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. Congrats! Your site should now be live! ![SST SolidStart app](../../../../../assets/docs/start/start-solidstart.png) --- ## Containers We are going to build a hit counter SolidStart app with Redis. We'll deploy it to AWS in a container using the `Cluster` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-solid-container) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our project. ```bash npm init solid@latest aws-solid-container cd aws-solid-container ``` We are picking the **SolidStart**, **_basic_**, and **_TypeScript_** options. --- ##### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. It'll also ask you to update your `app.config.ts`. Instead we'll use the **default Node preset**. ```ts title="app.config.ts" ``` --- ### 2. Add a Cluster To deploy our SolidStart app in a container, we'll use [AWS Fargate](https://aws.amazon.com/fargate/) with [Amazon ECS](https://aws.amazon.com/ecs/). Replace the `run` function in your `sst.config.ts`. 
```js title="sst.config.ts" {10-12} async run() { const vpc = new sst.aws.Vpc("MyVpc", { bastion: true }); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); new sst.aws.Service("MyService", { cluster, loadBalancer: { ports: [{ listen: "80/http", forward: "3000/http" }], }, dev: { command: "npm run dev", }, }); } ``` This creates a VPC with a bastion host, an ECS Cluster, and adds a Fargate service to it. :::note By default, your service is not deployed when running in _dev_. ::: The `dev.command` tells SST to instead run our SolidStart app locally in dev mode. --- ### 3. Add Redis Let's add an [Amazon ElastiCache](https://aws.amazon.com/elasticache/) Redis cluster. Add this below the `Vpc` component in your `sst.config.ts`. ```js title="sst.config.ts" const redis = new sst.aws.Redis("MyRedis", { vpc }); ``` This shares the same VPC as our ECS cluster. --- #### Link Redis Now, link the Redis cluster to the container. ```ts title="sst.config.ts" {3} new sst.aws.Service("MyService", { // ... link: [redis], }); ``` This will allow us to reference the Redis cluster in our SolidStart app. --- #### Install a tunnel Since our Redis cluster is in a VPC, we'll need a tunnel to connect to it from our local machine. ```bash "sudo" sudo npx sst tunnel install ``` This needs _sudo_ to create a network interface on your machine. You'll only need to do this once on your machine. --- #### Start dev mode Start your app in dev mode. ```bash npx sst dev ``` This will deploy your app, start a tunnel in the **Tunnel** tab, and run your SolidStart app locally in the **MyServiceDev** tab. --- ### 4. Connect to Redis We want the `/` route to increment a counter in our Redis cluster. Let's start by installing the packages we'll use. ```bash npm install ioredis @solidjs/router ``` We'll increment the counter when the route loads.
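For context, Redis `INCR` atomically increments an integer key and returns the new value, so every page load yields the next number. Here's a tiny in-memory stand-in, not ioredis, just illustrating the contract our loader relies on:

```typescript
// In-memory stand-in for Redis INCR: bumps a counter key and returns
// the new value, mirroring what `redis.incr("counter")` resolves to.
const store = new Map<string, number>();

function incr(key: string): number {
  const next = (store.get(key) ?? 0) + 1;
  store.set(key, next);
  return next;
}

console.log(incr("counter")); // 1
console.log(incr("counter")); // 2
```

Because the increment happens server-side in one atomic operation, concurrent page loads can't read and write a stale value.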
Replace your `src/routes/index.tsx` with: ```ts title="src/routes/index.tsx" {8} const getCounter = cache(async () => { "use server"; const redis = new Cluster( [{ host: Resource.MyRedis.host, port: Resource.MyRedis.port }], { dnsLookup: (address, callback) => callback(null, address), redisOptions: { tls: {}, username: Resource.MyRedis.username, password: Resource.MyRedis.password, }, } ); return await redis.incr("counter"); }, "counter"); export const route = { load: () => getCounter(), }; ``` :::tip We are directly accessing our Redis cluster with `Resource.MyRedis.*`. ::: Let's update our component to show the counter. Add this to your `src/routes/index.tsx`. ```tsx title="src/routes/index.tsx" const counter = createAsync(() => getCounter()); return
<h1>Hit counter: {counter()}</h1>
; } ``` --- #### Test your app Let's head over to `http://localhost:3000` in your browser and it'll show the current hit counter. You should see it increment every time you **refresh the page**. --- ### 5. Deploy your app To deploy our app we'll add a `Dockerfile`.
```dockerfile title="Dockerfile"
FROM node:lts AS base
WORKDIR /src

# Build
FROM base AS build
COPY --link package.json package-lock.json ./
RUN npm install
COPY --link . .
RUN npm run build

# Run
FROM base
ENV PORT=3000
ENV NODE_ENV=production
COPY --from=build /src/.output /src/.output
CMD [ "node", ".output/server/index.mjs" ]
```
:::tip You need to be running [Docker Desktop](https://www.docker.com/products/docker-desktop/) to deploy your app. ::: Let's also add a `.dockerignore` file in the root. ```bash title=".dockerignore" node_modules ``` Now to build our Docker image and deploy we run: ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. Congrats! Your app should now be live! ![SST SolidStart container app](../../../../../assets/docs/start/start-solidstart-container.png) --- ## Connect the console As a next step, you can setup the [SST Console](/docs/console/) to _**git push to deploy**_ your app and monitor it for any issues. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## SvelteKit on AWS with SST Create and deploy a SvelteKit app to AWS with SST. https://sst.dev/docs/start/aws/svelte There are two ways to deploy a SvelteKit app to AWS with SST. 1. [Serverless](#serverless) 2. [Containers](#containers) We'll use both to build a couple of simple apps below. --- #### Examples We also have a few other SvelteKit examples that you can refer to. - [Hit counter with Redis and SvelteKit in a container](/docs/examples/#aws-sveltekit-container-with-redis) --- ## Serverless We are going to create a SvelteKit app, add an S3 Bucket for file uploads, and deploy it using the `SvelteKit` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-svelte-kit) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our app. ```bash npx sv create aws-svelte-kit cd aws-svelte-kit ``` We are picking the **_SvelteKit minimal_** and **_Yes, using TypeScript syntax_** options. 
--- ##### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. It'll also ask you to update your `svelte.config.mjs` with something like this. ```diff lang="js" title="svelte.config.mjs" - import adapter from '@sveltejs/adapter-auto'; + import adapter from "svelte-kit-sst"; ``` --- ##### Start dev mode Run the following to start dev mode. This'll start SST and your SvelteKit app. ```bash npx sst dev ``` Once complete, click on **MyWeb** in the sidebar and open your SvelteKit app in your browser. --- ### 2. Add an S3 Bucket Let's allow public `access` to our S3 Bucket for file uploads. Update your `sst.config.ts`. ```js title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket", { access: "public" }); ``` Add this above the `SvelteKit` component. ##### Link the bucket Now, link the bucket to our SvelteKit app. ```js title="sst.config.ts" {2} new sst.aws.SvelteKit("MyWeb", { link: [bucket] }); ``` --- ### 3. Create an upload form Let's add a file upload form. Replace your `src/routes/+page.svelte`. This will upload a file to a given pre-signed upload URL. ```svelte title="src/routes/+page.svelte"
``` Add some styles. ```css title="src/routes/+page.svelte" ``` --- ### 4. Generate a pre-signed URL When our route loads, we'll generate a pre-signed URL for S3 and our form will upload to it. Create a new `src/routes/+page.server.ts` and add the following. ```ts title="src/routes/+page.server.ts" {5} /** @type {import('./$types').PageServerLoad} */ const command = new PutObjectCommand({ Key: crypto.randomUUID(), Bucket: Resource.MyBucket.name, }); const url = await getSignedUrl(new S3Client({}), command); return { url }; } ``` :::tip We are directly accessing our S3 bucket with `Resource.MyBucket.name`. ::: Add the relevant imports. ```ts title="src/routes/+page.server.ts" ``` And install the npm packages. ```bash npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner ``` Head over to the local SvelteKit app in your browser, `http://localhost:5173` and try **uploading an image**. You should see it upload and then download the image. ![SST SvelteKit app local](../../../../../assets/docs/start/start-svelte-kit-local.png) --- ### 5. Deploy your app Now let's deploy your app to AWS. ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. Congrats! Your app should now be live! --- ## Containers We are going to create a SvelteKit app, add an S3 Bucket for file uploads, and deploy it in a container with the `Cluster` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-svelte-container) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our project. ```bash npx sv create aws-svelte-container cd aws-svelte-container ``` We are picking the **_SvelteKit minimal_** and **_Yes, using TypeScript syntax_** options. --- ##### Init SST Now let's initialize SST in our app. 
```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. It'll also ask you to update your `svelte.config.js`. But **we'll instead use** the [Node.js adapter](https://kit.svelte.dev/docs/adapter-node) since we're deploying it through a container. ```bash npm i -D @sveltejs/adapter-node ``` And update your `svelte.config.js`. ```diff lang="js" title="svelte.config.js" - import adapter from '@sveltejs/adapter-auto'; + import adapter from '@sveltejs/adapter-node'; ``` --- ### 2. Add a Service To deploy our SvelteKit app in a container, we'll use [AWS Fargate](https://aws.amazon.com/fargate/) with [Amazon ECS](https://aws.amazon.com/ecs/). Replace the `run` function in your `sst.config.ts`. ```js title="sst.config.ts" {10-12} async run() { const vpc = new sst.aws.Vpc("MyVpc"); const cluster = new sst.aws.Cluster("MyCluster", { vpc }); new sst.aws.Service("MyService", { cluster, loadBalancer: { ports: [{ listen: "80/http", forward: "3000/http" }], }, dev: { command: "npm run dev", }, }); } ``` This creates a VPC and an ECS Cluster with a Fargate service in it. :::note By default, your service is not deployed when running in _dev_. ::: The `dev.command` tells SST to instead run our SvelteKit app locally in dev mode. --- #### Start dev mode Run the following to start dev mode. This'll start SST and your SvelteKit app. ```bash npx sst dev ``` Once complete, click on **MyService** in the sidebar and open your SvelteKit app in your browser. --- ### 3. Add an S3 Bucket Let's allow public `access` to our S3 Bucket for file uploads. Update your `sst.config.ts`. ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket", { access: "public" }); ``` Add this below the `Vpc` component. --- ##### Link the bucket Now, link the bucket to the container. ```ts title="sst.config.ts" {3} new sst.aws.Service("MyService", { // ... 
link: [bucket], }); ``` This will allow us to reference the bucket in our SvelteKit app. --- ### 4. Create an upload form Let's add a file upload form. Replace your `src/routes/+page.svelte`. This will upload a file to a given pre-signed upload URL. ```svelte title="src/routes/+page.svelte"
``` Add some styles. ```css title="src/routes/+page.svelte" ``` --- ### 5. Generate a pre-signed URL When our route loads, we'll generate a pre-signed URL for S3 and our form will upload to it. Create a new `src/routes/+page.server.ts` and add the following. ```ts title="src/routes/+page.server.ts" {5} /** @type {import('./$types').PageServerLoad} */ const command = new PutObjectCommand({ Key: crypto.randomUUID(), Bucket: Resource.MyBucket.name, }); const url = await getSignedUrl(new S3Client({}), command); return { url }; } ``` :::tip We are directly accessing our S3 bucket with `Resource.MyBucket.name`. ::: Add the relevant imports. ```ts title="src/routes/+page.server.ts" ``` And install the npm packages. ```bash npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner ``` Head over to the local SvelteKit app in your browser, `http://localhost:5173` and try **uploading an image**. You should see it upload and then download the image. ![SST SvelteKit app local](../../../../../assets/docs/start/start-svelte-kit-local-container.png) --- ### 6. Deploy your app To deploy our app we'll add a `Dockerfile`. ```diff lang="dockerfile" title="Dockerfile" FROM node:18.18.0-alpine AS builder WORKDIR /app COPY package*.json . RUN npm install COPY . . RUN npm run build RUN npm prune --prod FROM builder AS deployer WORKDIR /app COPY --from=builder /app/build build/ COPY --from=builder /app/package.json . EXPOSE 3000 ENV NODE_ENV=production CMD [ "node", "build" ] ``` This builds our SvelteKit app in a Docker image. :::tip You need to be running [Docker Desktop](https://www.docker.com/products/docker-desktop/) to deploy your app. ::: Let's also add a `.dockerignore` file in the root. ```bash title=".dockerignore" .DS_Store node_modules ``` Now to build our Docker image and deploy we run: ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. Congrats! Your app should now be live! 
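A note on the redirect at the end of the upload handler: a pre-signed URL is just the object's URL with the auth details in its query string, so everything before the `?` is the object's public URL once the upload finishes. That's why these examples redirect to `image.url.split("?")[0]`. A small sketch, where the bucket and key are made up:

```typescript
// Hypothetical pre-signed URL of the kind getSignedUrl() returns
const signed =
  "https://my-bucket.s3.us-east-1.amazonaws.com/0b1c2d3e.png" +
  "?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Signature=abc123";

// Dropping the query string leaves the public object URL
const publicUrl = signed.split("?")[0];

console.log(publicUrl); // https://my-bucket.s3.us-east-1.amazonaws.com/0b1c2d3e.png
```

This only works because our bucket was created with `access: "public"`; on a private bucket the bare object URL would return an access denied error.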
--- ## Connect the console As a next step, you can set up the [SST Console](/docs/console/) to _**git push to deploy**_ your app and view logs from it. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## TanStack Start on AWS with SST Create and deploy a TanStack Start app to AWS with SST. https://sst.dev/docs/start/aws/tanstack We are going to create a TanStack Start app, add an S3 Bucket for file uploads, and deploy it using the `TanStackStart` component. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-tanstack-start) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ### 1. Create a project Let's start by creating our app. ```bash npm create @tanstack/start@latest ``` Now use the init wizard to set up your TanStack Start project: - Name: aws-start-basic - Tailwind: `` - Toolchain: `ESLint` - Deployment Adapter: `Nitro (agnostic)` - Add-ons: `` - Examples: none This will create a project with the basic options, and Nitro pre-installed for deployments. --- ##### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. ##### Configure Nitro We use Nitro to serve our TanStack Start app on AWS. You will need to add the Nitro settings to the `vite.config.ts` file. ```ts title="vite.config.ts"
const config = defineConfig({
  plugins: [
    devtools(),
    nitro(),
    // this is the plugin that enables path aliases
    viteTsConfigPaths({
      projects: ['./tsconfig.json'],
    }),
    tailwindcss(),
    tanstackStart(),
    viteReact(),
  ],
  nitro: {
    preset: 'aws-lambda',
    awsLambda: { streaming: true }
  }
})
``` --- ##### Start dev mode Run the following to start dev mode.
This'll start SST and your TanStack Start app. ```bash npx sst dev ``` Once complete, click on **MyWeb** in the sidebar and open your TanStack Start app in your browser. --- ### 2. Add an S3 Bucket Let's allow public `access` to our S3 Bucket for file uploads. Update your `sst.config.ts`. ```ts title="sst.config.ts" const bucket = new sst.aws.Bucket("MyBucket", { access: "public" }); ``` Add this above the `TanStackStart` component. ##### Link the bucket Now, link the bucket to our TanStack Start app. ```js title="sst.config.ts" {2} new sst.aws.TanStackStart("MyWeb", { link: [bucket] }); ``` --- ### 3. Create an upload form Add a form component in `src/components/Form.tsx`. ```tsx title="src/components/Form.tsx" return (
<form
  className="form"
  onSubmit={async (e) => {
    e.preventDefault()
    const file = (e.target as HTMLFormElement).file.files?.[0]
    if (!file) return
    const image = await fetch(url, {
      body: file,
      method: 'PUT',
      headers: {
        'Content-Type': file.type,
        'Content-Disposition': `attachment; filename="${file.name}"`,
      },
    })
    window.location.href = image.url.split('?')[0]
  }}
>
  <input name="file" type="file" accept="image/png, image/jpeg" />
  <button type="submit">Upload</button>
</form>
) } ``` Add some styles. ```css title="src/components/Form.css" .form { padding: 2rem; } .form input { margin-right: 1rem; } .form button { appearance: none; padding: 0.5rem 0.75rem; font-weight: 500; font-size: 0.875rem; border-radius: 0.375rem; border: 1px solid rgba(68, 107, 158, 0); background-color: rgba(68, 107, 158, 0.1); } .form button:active:enabled { background-color: rgba(68, 107, 158, 0.2); } ``` --- ### 4. Generate a pre-signed URL When our route loads, we'll generate a pre-signed URL for S3 and our form will upload to it. Add a server function and a route loader in `src/routes/index.tsx`. ```ts title="src/routes/index.tsx" {4} const command = new PutObjectCommand({ Key: crypto.randomUUID(), Bucket: Resource.MyBucket.name }) return await getSignedUrl(new S3Client({}), command) }) component: RouteComponent, loader: async () => { return { url: await getPresignedUrl() } } }) function RouteComponent() { const { url } = Route.useLoaderData() return (
) } ``` :::tip We are directly accessing our S3 bucket with `Resource.MyBucket.name`. ::: Add the relevant imports. ```ts title="src/routes/index.tsx" ``` And install the npm packages. ```bash npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner ``` Head over to the local TanStack Start app in your browser, `http://localhost:3000` and try **uploading an image**. You should see it upload and then download the image. ![SST TanStack Start app local](../../../../../assets/docs/start/start-tanstack-start-local.png) --- ### 5. Deploy your app Now let's deploy your app to AWS. ```bash npx sst deploy --stage production ``` You can use any stage name here but it's good to create a new stage for production. Congrats! Your app should now be live! --- ## Connect the console As a next step, you can set up the [SST Console](/docs/console/) to _**git push to deploy**_ your app and view logs from it. ![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png) You can [create a free account](https://console.sst.dev) and connect it to your AWS account. --- ## tRPC on AWS with SST Create and deploy a tRPC API in AWS with SST. https://sst.dev/docs/start/aws/trpc We are going to build a serverless [tRPC](https://trpc.io) API, a simple client, and deploy it to AWS using SST. :::tip[View source] You can [view the source](https://github.com/sst/sst/tree/dev/examples/aws-trpc) of this example in our repo. ::: Before you get started, make sure to [configure your AWS credentials](/docs/iam-credentials#credentials). --- ## 1. Create a project Let's start by creating our app. ```bash mkdir my-trpc-app && cd my-trpc-app npm init -y ``` #### Init SST Now let's initialize SST in our app. ```bash npx sst@latest init npm install ``` Select the defaults and pick **AWS**. This'll create a `sst.config.ts` file in your project root. --- ## 2. Add the API Let's add two Lambda functions; one for our tRPC server and one that'll be our client.
Update your `sst.config.ts`.

```js title="sst.config.ts" {9}
async run() {
  const trpc = new sst.aws.Function("Trpc", {
    url: true,
    handler: "index.handler",
  });

  const client = new sst.aws.Function("Client", {
    url: true,
    link: [trpc],
    handler: "client.handler",
  });

  return {
    api: trpc.url,
    client: client.url,
  };
}
```

We are linking the server to our client. This will allow us to access the URL of the server in our client.

---

#### Start dev mode

Start your app in dev mode. This runs your functions [_Live_](/docs/live/).

```bash
npx sst dev
```

This will give you two URLs.

```bash frame="none"
+  Complete
   api: https://gyrork2ll35rsuml2yr4lifuqu0tsjft.lambda-url.us-east-1.on.aws
   client: https://3x4y4kg5zv77jeroxsrnjzde3q0tgxib.lambda-url.us-east-1.on.aws
```

---

## 3. Create the server

Let's create our tRPC server. Add the following to `index.ts`.

```ts title="index.ts"
const t = initTRPC
  .context<CreateAWSLambdaContextOptions<APIGatewayProxyEventV2>>()
  .create();

const router = t.router({
  greet: t.procedure
    .input(z.object({ name: z.string() }))
    .query(({ input }) => {
      return `Hello ${input.name}!`;
    }),
});

export type Router = typeof router;

export const handler = awsLambdaRequestHandler({
  router: router,
  createContext: (opts) => opts,
});
```

We are creating a simple method called `greet` that takes a _string_ as an input.

Add the imports.

```ts title="index.ts"
import { z } from "zod";
import { initTRPC } from "@trpc/server";
import type { APIGatewayProxyEventV2 } from "aws-lambda";
import {
  awsLambdaRequestHandler,
  CreateAWSLambdaContextOptions
} from "@trpc/server/adapters/aws-lambda";
```

And install the npm packages.

```bash
npm install zod @trpc/server@next
```

---

## 4. Add the client

Now we'll connect to our server in our client. Add the following to `client.ts`.

```ts title="client.ts" {4}
const client = createTRPCClient<Router>({
  links: [
    httpBatchLink({
      url: Resource.Trpc.url,
    }),
  ],
});

export async function handler() {
  return {
    statusCode: 200,
    body: await client.greet.query({ name: "Patrick Star" }),
  };
}
```

:::tip
We are accessing our server with `Resource.Trpc.url`.
:::

Add the relevant imports. Notice we are importing the _types_ for our API.

```ts title="client.ts" {2}
import { Resource } from "sst";
import type { Router } from "./index";
import { createTRPCClient, httpBatchLink } from "@trpc/client";
```

Install the client npm package.
```bash
npm install @trpc/client@next
```

---

#### Test your app

To test our app, hit the client URL.

```bash
curl https://3x4y4kg5zv77jeroxsrnjzde3q0tgxib.lambda-url.us-east-1.on.aws
```

This will print out `Hello Patrick Star!`.

---

## 5. Deploy your app

Now let's deploy your app.

```bash
npx sst deploy --stage production
```

You can use any stage name here but it's good to create a new stage for production.

---

## Connect the console

As a next step, you can set up the [SST Console](/docs/console/) to _**git push to deploy**_ your app and monitor it for any issues.

![SST Console Autodeploy](../../../../../assets/docs/start/sst-console-autodeploy.png)

You can [create a free account](https://console.sst.dev) and connect it to your AWS account.

---

## Hono on Cloudflare with SST

Create and deploy a Hono API on Cloudflare with SST.

https://sst.dev/docs/start/cloudflare/hono

We are going to build an API with Hono, add an R2 bucket for file uploads, and deploy it to Cloudflare using SST.

:::tip[View source]
You can [view the source](https://github.com/sst/sst/tree/dev/examples/cloudflare-hono) of this example in our repo.
:::

Before you get started, [create your Cloudflare API token](/docs/cloudflare/#credentials).

---

## 1. Create a project

Let's start by creating our app.

```bash
mkdir my-hono-api && cd my-hono-api
npm init -y
```

---

#### Init SST

Now let's initialize SST in our app.

```bash
npx sst@latest init
npm install
```

Select the defaults and pick **Cloudflare**. This'll create a `sst.config.ts` file in your project root.

---

#### Set the Cloudflare credentials

Set the token and account ID in your shell or `.env` file.

```bash
export CLOUDFLARE_API_TOKEN=your-api-token
export CLOUDFLARE_DEFAULT_ACCOUNT_ID=your-account-id
```

---

## 2. Add a Worker

Let's add a Worker. Update your `sst.config.ts`.

```js title="sst.config.ts"
async run() {
  const hono = new sst.cloudflare.Worker("Hono", {
    url: true,
    handler: "index.ts",
  });

  return {
    api: hono.url,
  };
}
```

We are enabling the Worker URL, so we can use it as our API.

---

## 3.
Add an R2 Bucket

Let's add an R2 bucket for file uploads. Update your `sst.config.ts`.

```ts title="sst.config.ts"
const bucket = new sst.cloudflare.Bucket("MyBucket");
```

Add this above the `Worker` component.

#### Link the bucket

Now, link the bucket to our Worker.

```ts title="sst.config.ts" {3}
const hono = new sst.cloudflare.Worker("Hono", {
  url: true,
  link: [bucket],
  handler: "index.ts",
});
```

---

## 4. Upload a file

We want the `/` route of our API to upload a file to the R2 bucket. Create an `index.ts` file and add the following.

```ts title="index.ts" {4-8}
const app = new Hono()
  .put("/*", async (c) => {
    const key = crypto.randomUUID();
    await Resource.MyBucket.put(key, c.req.raw.body, {
      httpMetadata: {
        contentType: c.req.header("content-type"),
      },
    });
    return c.text(`Object created with key: ${key}`);
  });
```

:::tip
We are uploading to our R2 bucket with the SDK — `Resource.MyBucket.put()`
:::

Add the imports.

```ts title="index.ts"
import { Hono } from "hono";
import { Resource } from "sst";
```

And install the npm packages.

```bash
npm install hono
```

---

## 5. Download a file

We want to download the last uploaded file if you make a `GET` request to the API. Add this to your routes in `index.ts`.

```ts title="index.ts"
const app = new Hono()
  // ...
  .get("/", async (c) => {
    const first = await Resource.MyBucket.list().then(
      (res) =>
        res.objects.sort(
          (a, b) => b.uploaded.getTime() - a.uploaded.getTime(),
        )[0],
    );
    const result = await Resource.MyBucket.get(first.key);
    c.header("content-type", result.httpMetadata.contentType);
    return c.body(result.body);
  });
```

We are getting a list of the files in the bucket with `Resource.MyBucket.list()` and we are getting the file for a given key with `Resource.MyBucket.get()`.

---

#### Start dev mode

Start your app in dev mode.

```bash
npx sst dev
```

This will give you the URL of your API.

```bash frame="none"
✓  Complete
   Hono: https://my-hono-api-jayair-honoscript.sst-15d.workers.dev
```

---

#### Test your app

Let's try uploading a file from your project root.
Make sure to use your API URL.

```bash
curl -X PUT --upload-file package.json https://my-hono-api-jayair-honoscript.sst-15d.workers.dev
```

Now head over to `https://my-hono-api-jayair-honoscript.sst-15d.workers.dev` in your browser and it'll load the file you just uploaded.

---

## 6. Deploy your app

Finally, let's deploy your app!

```bash
npx sst deploy --stage production
```

You can use any stage name here but it's good to create a new stage for production.

---

## tRPC on Cloudflare with SST

Create and deploy a tRPC API on Cloudflare with SST.

https://sst.dev/docs/start/cloudflare/trpc

We are going to build a [tRPC](https://trpc.io) API, a simple client, and deploy it to Cloudflare using SST.

:::tip[View source]
You can [view the source](https://github.com/sst/sst/tree/dev/examples/cloudflare-trpc) of this example in our repo.
:::

Before you get started, [create your Cloudflare API token](/docs/cloudflare/#credentials).

---

## 1. Create a project

Let's start by creating our app.

```bash
mkdir my-trpc-app && cd my-trpc-app
npm init -y
```

---

#### Init SST

Now let's initialize SST in our app.

```bash
npx sst@latest init
npm install
```

Select the defaults and pick **Cloudflare**. This'll create a `sst.config.ts` file in your project root.

---

#### Set the Cloudflare credentials

You can save your Cloudflare API token in a `.env` file or just set it directly.

```bash
export CLOUDFLARE_API_TOKEN=your-api-token
```

---

## 2. Add the API

Let's add two Workers; one for our tRPC server and one that'll be our client. Update your `sst.config.ts`.

```js title="sst.config.ts" {9}
async run() {
  const trpc = new sst.cloudflare.Worker("Trpc", {
    url: true,
    handler: "index.ts",
  });

  const client = new sst.cloudflare.Worker("Client", {
    url: true,
    link: [trpc],
    handler: "client.ts",
  });

  return {
    api: trpc.url,
    client: client.url,
  };
}
```

We are linking the server to our client. This will allow us to access the server in our client.

---

## 3. Create the server

Let's create our tRPC server. Add the following to `index.ts`.
```ts title="index.ts"
const t = initTRPC.context().create();

const router = t.router({
  greet: t.procedure
    .input(z.object({ name: z.string() }))
    .query(({ input }) => {
      return `Hello ${input.name}!`;
    }),
});

export type Router = typeof router;

export default {
  async fetch(request: Request): Promise<Response> {
    return fetchRequestHandler({
      router,
      req: request,
      endpoint: "/",
      createContext: (opts) => opts,
    });
  },
};
```

We are creating a simple method called `greet` that takes a _string_ as an input.

Add the relevant imports.

```ts title="index.ts"
import { z } from "zod";
import { initTRPC } from "@trpc/server";
import { fetchRequestHandler } from "@trpc/server/adapters/fetch";
```

And install the npm packages.

```bash
npm install zod @trpc/server@next
```

---

## 4. Add the client

Now we'll connect to our server in our client. Add the following to `client.ts`.

```ts title="client.ts" {8}
export default {
  async fetch() {
    const client = createTRPCClient<Router>({
      links: [
        httpBatchLink({
          url: "http://localhost/",
          fetch(req) {
            return Resource.Trpc.fetch(req);
          },
        }),
      ],
    });

    return new Response(
      await client.greet.query({
        name: "Patrick Star",
      }),
    );
  },
};
```

:::tip
We are accessing our server with `Resource.Trpc.fetch()`.
:::

Add the imports. Notice we are importing the _types_ for our API.

```ts title="client.ts" {2}
import { Resource } from "sst";
import type { Router } from "./index";
import { createTRPCClient, httpBatchLink } from "@trpc/client";
```

Install the client npm package.

```bash
npm install @trpc/client@next
```

---

#### Start dev mode

Start your app in dev mode.

```bash
npx sst dev
```

This will give you two URLs.

```bash frame="none"
+  Complete
   api: https://my-trpc-app-jayair-trpcscript.sst-15d.workers.dev
   client: https://my-trpc-app-jayair-clientscript.sst-15d.workers.dev
```

---

#### Test your app

To test our app, hit the client URL.

```bash
curl https://my-trpc-app-jayair-clientscript.sst-15d.workers.dev
```

This will print out `Hello Patrick Star!`.

---

## 5. Deploy your app

Now let's deploy your app.

```bash
npx sst deploy --stage production
```

You can use any stage name here but it's good to create a new stage for production.

---

## Cloudflare Workers with SST

Create and deploy a Cloudflare Worker as an API with SST.
https://sst.dev/docs/start/cloudflare/worker

We are going to build an API with a Cloudflare Worker, add an R2 bucket for file uploads, and deploy it using SST.

:::tip[View source]
You can [view the source](https://github.com/sst/sst/tree/dev/examples/cloudflare-worker) of this example in our repo.
:::

Before you get started, [create your Cloudflare API token](/docs/cloudflare/#credentials).

---

## 1. Create a project

Let's start by creating our app.

```bash
mkdir my-worker && cd my-worker
npm init -y
```

---

#### Init SST

Now let's initialize SST in our app.

```bash
npx sst@latest init
npm install
```

Select the defaults and pick **Cloudflare**. This'll create a `sst.config.ts` file in your project root.

---

#### Set the Cloudflare credentials

Set the token and account ID in your shell or `.env` file.

```bash
export CLOUDFLARE_API_TOKEN=your-api-token
export CLOUDFLARE_DEFAULT_ACCOUNT_ID=your-account-id
```

---

## 2. Add a Worker

Let's add a Worker. Update your `sst.config.ts`.

```js title="sst.config.ts"
async run() {
  const worker = new sst.cloudflare.Worker("MyWorker", {
    handler: "./index.ts",
    url: true,
  });

  return {
    api: worker.url,
  };
}
```

We are enabling the Worker URL, so we can use it as our API.

---

## 3. Add an R2 Bucket

Let's add an R2 bucket for file uploads. Update your `sst.config.ts`.

```js title="sst.config.ts"
const bucket = new sst.cloudflare.Bucket("MyBucket");
```

Add this above the `Worker` component.

#### Link the bucket

Now, link the bucket to our Worker.

```ts title="sst.config.ts" {3}
const worker = new sst.cloudflare.Worker("MyWorker", {
  handler: "./index.ts",
  link: [bucket],
  url: true,
});
```

---

## 4. Upload a file

We want our API to upload a file to the R2 bucket if you make a `PUT` request to it. Create an `index.ts` file and add the following.
```ts title="index.ts" {5-9}
export default {
  async fetch(req: Request) {
    if (req.method == "PUT") {
      const key = crypto.randomUUID();
      await Resource.MyBucket.put(key, req.body, {
        httpMetadata: {
          contentType: req.headers.get("content-type"),
        },
      });
      return new Response(`Object created with key: ${key}`);
    }
  },
};
```

:::tip
We are uploading to our R2 bucket with the SDK — `Resource.MyBucket.put()`
:::

Import the SDK.

```ts title="index.ts"
import { Resource } from "sst";
```

---

## 5. Download a file

We want to download the last uploaded file if you make a `GET` request to the API. Add this to the `fetch` function in your `index.ts` file.

```ts title="index.ts"
if (req.method == "GET") {
  const first = await Resource.MyBucket.list().then(
    (res) =>
      res.objects.toSorted(
        (a, b) => b.uploaded.getTime() - a.uploaded.getTime(),
      )[0],
  );
  const result = await Resource.MyBucket.get(first.key);
  return new Response(result.body, {
    headers: {
      "content-type": result.httpMetadata.contentType,
    },
  });
}
```

We are getting a list of the files in the bucket with `Resource.MyBucket.list()` and we are getting the file for a given key with `Resource.MyBucket.get()`.

---

#### Start dev mode

Start your app in dev mode.

```bash
npx sst dev
```

This will give you the URL of your API.

```bash frame="none"
+  Complete
   api: https://start-cloudflare-jayair-myworkerscript.sst-15d.workers.dev
```

---

#### Test your app

Let's try uploading a file from your project root. Make sure to use your API URL.

```bash
curl --upload-file package.json https://start-cloudflare-jayair-myworkerscript.sst-15d.workers.dev
```

Now head over to `https://start-cloudflare-jayair-myworkerscript.sst-15d.workers.dev` in your browser and it'll load the file you just uploaded.

---

## 6. Deploy your app

Finally, let's deploy your app!

```bash
npx sst deploy --stage production
```

You can use any stage name here but it's good to create a new stage for production.

---

## State

Tracking the infrastructure created by your app.
https://sst.dev/docs/state When you deploy your app, SST creates a state file locally to keep track of the state of the infrastructure in your app. So when you make a change, it'll allow SST to do a diff with the state and only deploy what's changed. --- The state of your app includes: 1. A state file for your resources. This is a JSON file. 2. A passphrase that is used to encrypt the secrets in your state. Aside from these, SST also creates some other resources when your app is first deployed. We'll look at this below. --- The state is generated locally but is backed up to your provider using: 1. A **bucket** to store the state, typically named `sst-state-`. This is created in the region of your `home`. More on this below. 2. An **SSM parameter** to store the passphrase used to encrypt your secrets, under `/sst/passphrase//`. Also created in the region of your `home`. :::danger Do not delete the SSM parameter that stores the passphrase for your app. ::: The passphrase is used to encrypt any secrets and sensitive information. Without it SST won't be able to read the state file and deploy your app. --- ## Home Your `sst.config.ts` specifies which provider to use for storing your state. We call this the `home` of your app. ```ts title="sst.config.ts" { home: "aws" } ``` You can specify which provider you'd like to use for this. Currently `aws` and `cloudflare` are supported. :::tip Your state file is uploaded to your `home`. ::: When you specify your home provider, SST assumes you'd like to use that provider in your app as well and adds it to your providers internally. So the above is equivalent to doing this. ```ts title="sst.config.ts" { home: "aws", providers: { aws: true } } ``` This also means that if you change the region of the `aws` provider above, you are changing the region for your `home` as well. You can read more about the `home` provider in [Config](/docs/reference/config/). 
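To make the above concrete, here's a minimal config sketch. The app name and region are hypothetical, purely for illustration. Since `home: "aws"` reuses the `aws` provider, setting the provider's region also determines where the state bucket and passphrase SSM parameter live.

```ts
// sst.config.ts — a minimal sketch, with a hypothetical app name and region.
export default $config({
  app(input) {
    return {
      name: "my-app",
      // state and passphrase are stored in AWS
      home: "aws",
      providers: {
        // changing this region also changes where the state lives
        aws: { region: "eu-west-1" },
      },
    };
  },
  async run() {},
});
```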
--- ## Bootstrap As SST starts deploying the resources in your app, it creates some additional _bootstrap_ resources. If your app has a Lambda function or a Docker container, then SST will create the following in the same region as the given resource: 1. An assets bucket to store the function packages, typically named `sst-asset-`. 2. An ECR repository to store container images, called `sst-asset`. 3. An SSM parameter to store the assets bucket name and the ECR repository, stored under `/sst/bootstrap`. 4. An AppSync Events API endpoint to power [Live](/docs/live). The SSM parameter is used to look up the location of these resources. :::tip Some additional bootstrap resources are created based on what your app is creating. ::: When you remove an SST app, it does not remove the _state_ or _bootstrap_ resources. This is because it does not know if there are other apps that might be using them. So if you want to completely remove any SST created resources, you'll need to manually remove these in the regions you've deployed to. --- ### Reset If you accidentally remove the bootstrap resources, the SST CLI will not be able to start up. To fix this you'll need to reset your bootstrap resources. 1. Remove the resources that are listed in the parameter. For example, the `asset` or `state` bucket. Or the ECR repository. 2. Remove the SSM parameter. Now when you run the SST CLI, it'll bootstrap your account again. --- ## How it works The state file is a JSON representation of all the low level resources created by your app. It is a cached version of the state of resources in the cloud provider. So when you do a deploy the following happens. 1. The components in the `sst.config.ts` get converted into low level resource definitions. These get compared to the state file. 2. The differences between the two are turned into API calls that are made to your cloud provider. - If there's a new resource, it gets created. - If a resource has been removed, it gets removed. 
- If there's a change in config of the resource, it gets applied. 3. The state file is updated to reflect the new state of these resources. Now the `sst.config.ts`, the state file, and the resources in the cloud provider are all in sync. --- ### Out of sync This process works fine until you manually go change these resources through the cloud provider's console. This will cause the **state and the resources to not be in sync** anymore. This can cause an issue in some cases. :::caution If you manually change the resources in your cloud provider, they will go out of sync with your app's state. ::: Let's look at a couple of scenarios. Say we've deployed a `Function` with it set to `{ timeout: "10 seconds" }`. At this point, the config, state, and resource are in sync. --- #### Change the resource - We now go change the timeout to 20 seconds in the AWS Console. - The config and state are out of sync with the resource since they are still set to 10 seconds. - Now if we deploy our app, the config will be compared to the state. - It'll find no differences and so it won't update the resource. The config and state will stay out of sync with the resource. --- #### Change the config - If we change our config to `{ timeout: "30 seconds" }` and do a deploy. - The config and state will have some differences. - SST will make a call to AWS to set the timeout of the resource to 30 seconds. - Once updated, it'll update the state file to match the current state of the resource. The config, state, and resource are back in sync. --- #### Remove the resource - Next we go to the AWS Console and remove the function. - The config and state still have the function with the timeout set to 30 seconds. - If we change our config to `{ timeout: "60 seconds" }` and do a deploy. - The config and state will be different. - SST will make a call to update the timeout of the resource to 60 seconds. - But this call to AWS will fail because the function doesn't exist. 
Your deploys will fail moving forward because your state shows that a resource exists but it doesn't anymore. To fix this, you'll need to _refresh_ your state file. --- ### Refresh To fix scenarios where the resources in the cloud are out of sync with the state of your app, you'll need to run. ```bash sst refresh ``` This command does a couple of simple things: 1. It goes through every single resource in your state. 2. Makes a call to the cloud provider to check the resource. - If the configs are different, it'll **update the state** to reflect the change. - If the resource doesn't exist anymore, it'll **remove it from the state**. :::note The `sst refresh` command does not make changes to the resources in the cloud provider. ::: Now the state and the resources are in sync. So if we take the scenario from above where we removed the function from the AWS Console but not from our config or state. To fix it, we'll need to: - Run `sst refresh` - This will remove the function from the state as well. - Now if we change our config to `{ timeout: "60 seconds" }` and do a deploy. - The config and state will be compared and it'll find that a function with that config doesn't exist. - So SST will make a call to AWS to create a new function with the given config. In general we do not recommend manually changing resources in a cloud provider since it puts your state out of sync. But if you find yourself in a situation where this happens, you can use the `sst refresh` command to put them back in sync. --- ## Upgrade AWS Databases How to upgrade your database and minimize downtime. https://sst.dev/docs/upgrade-aws-databases Sometimes database components like [`Postgres`](/docs/component/aws/postgres/) and [`Mysql`](/docs/component/aws/mysql/) need to be upgraded as your application scales. When you change fields like `version` or `instance`, SST applies the update on the next `sst deploy`. By default, AWS performs the upgrade in place. 
This restarts the database and causes temporary downtime until your application can reconnect. This guide covers the different strategies to minimize that downtime. --- ## Multi-AZ [`Postgres`](/docs/component/aws/postgres/) and [`Mysql`](/docs/component/aws/mysql/) support Multi-AZ deployments. AWS maintains a standby replica in a different availability zone. During the upgrade, changes are applied to the standby first, then it fails over. This reduces downtime to around 60-120 seconds instead of the full upgrade duration. ```ts title="sst.config.ts" {4} const database = new sst.aws.Postgres("MyDatabase", { vpc, version: "17", multiAz: true, }); ``` Multi-AZ roughly doubles the cost since a standby instance is always running. You can enable it temporarily for the upgrade and disable it after. :::caution Major version upgrades restart both instances, making Multi-AZ ineffective for reducing downtime. ::: Enabling [RDS Proxy](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html) through the `proxy` property can help further reduce failover downtime by detecting the new primary directly instead of waiting for DNS propagation. This can bring downtime down to just a few seconds. --- ## Blue/Green [`Postgres`](/docs/component/aws/postgres/) and [`Mysql`](/docs/component/aws/mysql/) support [AWS RDS Blue/Green deployments](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments.html) through the `blueGreen` property. This creates a staging copy of your database, applies the changes there, and switches over with near-zero downtime. ```ts title="sst.config.ts" {4} const database = new sst.aws.Postgres("MyDatabase", { vpc, version: "17", blueGreen: true, }); ``` Your database endpoint stays the same throughout, so no application changes are needed. The overall deploy takes significantly longer but actual downtime is near zero. 
:::caution Blue/Green deployments are not compatible with databases that have read replicas or `proxy` enabled. ::: Learn more in [this Pulumi article](https://www.pulumi.com/blog/aws-rds-blue-green-deployment-updates). --- ## Upgrading steps Here's the recommended workflow for upgrading a production database. ### 1. Check compatibility Not all version jumps are allowed. You might need to go through intermediate versions. - [PostgreSQL upgrade paths](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html) - [MySQL upgrade paths](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.MySQL.html) - [Aurora PostgreSQL upgrade paths](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_UpgradeDBInstance.PostgreSQL.html) - [Aurora MySQL upgrade paths](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Updates.MajorVersionUpgrade.html) ### 2. Pick a strategy Use in-place if some downtime is fine, Multi-AZ to reduce it, or Blue/Green for the shortest switchover when supported. | Component | In-place | Multi-AZ | Blue/Green | | --- | --- | --- | --- | | [`Postgres`](/docs/component/aws/postgres/) | ✓ | ✓ | ✓ | | [`Mysql`](/docs/component/aws/mysql/) | ✓ | ✓ | ✓ | | [`Aurora`](/docs/component/aws/aurora/) | ✓ | ✕ | ✕ | | [`Redis`](/docs/component/aws/redis/) | ✓ | ✕ | ✕ | | [`OpenSearch`](/docs/component/aws/open-search/) | ✓ | ✕ | ✕ | ### 3. Verify in a test stage Apply the change in a test stage first to ensure the upgrade works as expected. ```diff lang="ts" title="sst.config.ts" const database = new sst.aws.Postgres("MyDatabase", { vpc, - version: "16", + version: "17", + blueGreen: true, }); ``` ### 4. Deploy to production Finally, deploy the upgrade to your production stage. ```ts title="sst.config.ts" const database = new sst.aws.Postgres("MyDatabase", { vpc, version: "17", blueGreen: true, }); ```
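As a small aid for step 1 above, you can sanity-check how many major versions a change crosses before consulting the upgrade path docs. This helper is our own sketch, not part of SST; a jump of more than one major version is a hint to check for required intermediate versions.

```typescript
// Hypothetical helper, not part of SST: extracts the major version
// from a version string like "16" or "16.4".
function majorVersion(v: string): number {
  return parseInt(v.split(".")[0], 10);
}

// Returns how many major versions an upgrade crosses.
function majorJump(from: string, to: string): number {
  return majorVersion(to) - majorVersion(from);
}

console.log(majorJump("16", "17")); // 1: a single-step major upgrade
console.log(majorJump("14", "17")); // 3: check the upgrade paths for intermediate versions
```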