Workflow
The Workflow component lets you add serverless workflows to your app using
AWS Lambda Durable Functions.
It’s a thin wrapper around the Function component
with durable execution enabled.
It includes an SDK that wraps the AWS SDK with a simpler interface, adds helper methods, and makes it easier to integrate with other SST components.
Minimal example
```ts
new sst.aws.Workflow("MyWorkflow", {
  handler: "src/workflow.handler",
});
```

```ts
import { workflow } from "sst/aws/workflow";

export const handler = workflow.handler(async (event, ctx) => {
  const user = await ctx.step("load-user", async () => {
    return { id: "user_123", email: "alice@example.com" };
  });

  await ctx.wait("pause-before-email", "1 minute");

  return ctx.step("send-email", async () => {
    return { sent: true, userId: user.id };
  });
});
```

Configure timeout and retention
```ts
new sst.aws.Workflow("MyWorkflow", {
  handler: "src/workflow.handler",
  retention: "30 days",
  timeout: {
    execution: "2 hours",
    invocation: "30 seconds",
  },
});
```

Link resources
```ts
const bucket = new sst.aws.Bucket("MyBucket");

new sst.aws.Workflow("MyWorkflow", {
  handler: "src/workflow.handler",
  link: [bucket],
});
```

```ts
import { Resource } from "sst";
import { workflow } from "sst/aws/workflow";

export const handler = workflow.handler(async (event, ctx) => {
  return ctx.step("get-bucket-name", async () => {
    return Resource.MyBucket.name;
  });
});
```

Trigger with a cron job
```ts
const workflow = new sst.aws.Workflow("MyWorkflow", {
  handler: "src/workflow.handler",
});

new sst.aws.CronV2("MyCron", {
  schedule: "rate(1 minute)",
  function: workflow,
});
```

```ts
import { workflow } from "sst/aws/workflow";

export const handler = workflow.handler(async (event, ctx) => {
  await ctx.step("start", async ({ logger }) => {
    logger.info({ message: "Workflow invoked by cron" });
  });
});
```

Subscribe to a bus
```ts
const workflow = new sst.aws.Workflow("MyWorkflow", {
  handler: "src/workflow.handler",
});

const bus = new sst.aws.Bus("MyBus");

bus.subscribe("MySubscriber", workflow, {
  pattern: {
    detailType: ["app.workflow.requested"],
  },
});
```

```ts
import { workflow } from "sst/aws/workflow";

interface Event {
  "detail-type": string;
  detail: {
    properties: {
      message: string;
      requestId: string;
    };
  };
}

export const handler = workflow.handler<Event>(async (event, ctx) => {
  await ctx.step("start", async ({ logger }) => {
    logger.info({
      message: "Workflow invoked by bus",
      requestId: event.detail.properties.requestId,
    });
  });
});
```

Limitations
Durable workflows replay from the top on resume and retry. Keep the control flow
deterministic, and move side effects like API calls, database writes, timestamps, and random
ID generation inside durable operations like ctx.step().
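The replay behavior can be sketched outside of AWS with a hypothetical step cache; `runWorkflow`, `demo`, and the cache itself are illustrative, not part of the SST SDK or the AWS runtime:

```typescript
// A minimal sketch (not the AWS SDK): a step cache simulates how durable
// replay memoizes step results, so side effects inside a step run once
// while code outside steps re-runs on every replay.
type StepCache = Map<string, unknown>;

async function runWorkflow(cache: StepCache, log: string[]) {
  const step = async <T>(name: string, fn: () => Promise<T>): Promise<T> => {
    if (cache.has(name)) return cache.get(name) as T; // replay: reuse result
    const result = await fn();
    cache.set(name, result);
    return result;
  };

  log.push("handler-start"); // runs again on every replay
  return step("make-id", async () => {
    log.push("side-effect"); // runs exactly once across replays
    return "order_42";
  });
}

async function demo() {
  const cache: StepCache = new Map();
  const log: string[] = [];
  const first = await runWorkflow(cache, log); // initial run
  const second = await runWorkflow(cache, log); // simulated replay
  return { first, second, log };
}
```

Because the side effect is wrapped in a step, the simulated replay returns the same result without re-running it; anything outside a step runs twice.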
Before using workflows in production, review the AWS best practices for durable functions.
Cost
A workflow has no idle monthly cost. You pay the standard Lambda request and compute charges for each invocation.
Lambda durable functions usage is billed separately.
- Durable operations like starting an execution, completing a step, and creating a wait are billed at $8.00 per 1 million operations.
- Data written by durable operations is billed at $0.25 per GB.
- Retained execution state is billed at $0.15 per GB-month.
For example, a workflow with two step() calls and one wait() uses four durable operations:
one start, two steps, and one wait. That’s about $0.000032 per execution for durable
operations, before Lambda compute, requests, written data, and retention.
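The arithmetic above can be sketched with a hypothetical helper; `durableOpsCost` is illustrative, not part of any SDK:

```typescript
// Hypothetical helper: estimate the durable-operation charge for one
// execution at the $8.00 per 1M operations rate quoted above.
function durableOpsCost(operations: number, ratePerMillion = 8.0): number {
  return (operations / 1_000_000) * ratePerMillion;
}

// 1 start + 2 steps + 1 wait = 4 durable operations
const perExecution = durableOpsCost(4); // ≈ $0.000032
```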
These are rough us-east-1 estimates. Check out the AWS Lambda pricing for more details.
Constructor
new Workflow(name, args, opts?)

Parameters
- name string
- args WorkflowArgs
- opts? ComponentResourceOptions
WorkflowArgs
architecture?
Type Input<“x86_64” | “arm64”>
Default “x86_64”
The architecture of the Lambda function.
```ts
{
  architecture: "arm64"
}
```

bundle?
Type Input<string>
Path to the source code directory for the function. By default, the handler is
bundled with esbuild. Use bundle to skip bundling.
If the bundle option is specified, the handler needs to be in the root of the bundle.
Here, the entire packages/functions/src directory is zipped. And the handler is
in the src directory.
```ts
{
  bundle: "packages/functions/src",
  handler: "index.handler"
}
```

copyFiles?
Type Input<Object[]>
Add additional files to copy into the function package. Takes a list of objects
with from and to paths. These will be copied over before the function package
is zipped up.
Copying over a single file from the src directory to the src/ directory of the
function package.
```ts
{
  copyFiles: [{ from: "src/index.js" }]
}
```

Copying over a single file from the src directory to the core/src directory in the function package.

```ts
{
  copyFiles: [{ from: "src/index.js", to: "core/src/index.js" }]
}
```

Copying over a couple of files.

```ts
{
  copyFiles: [
    { from: "src/this.js", to: "core/src/this.js" },
    { from: "src/that.js", to: "core/src/that.js" }
  ]
}
```

copyFiles[].from
Type Input<string>
Source path relative to the sst.config.ts.
copyFiles[].to?
Type Input<string>
Default The from path in the function package
Destination path relative to function root in the package. By default, it
creates the same directory structure as the from path and copies the file.
description?
Type Input<string>
A description for the function. This is displayed in the AWS Console.
```ts
{
  description: "Handler function for my nightly cron job."
}
```

dev?
Type Input<false>
Default true
Disable running this function Live in sst dev.
By default, the functions in your app are run locally in sst dev. To do this, a stub
version of your function is deployed, instead of the real function.
This shows under the Functions tab in the multiplexer sidebar where your invocations
are logged. You can turn this off by setting dev to false.
Read more about Live and sst dev.
```ts
{
  dev: false
}
```

environment?
Type Input<Record<string, Input<string>>>
Key-value pairs of values that are set as Lambda environment variables. The keys need to:
- Start with a letter
- Be at least 2 characters long
- Contain only letters, numbers, or underscores
They can be accessed in your function using process.env.<key>.
```ts
{
  environment: {
    DEBUG: "true"
  }
}
```

handler
Type Input<string>
Path to the handler for the function.
- For Node.js this is in the format {path}/{file}.{method}.
- For Python this is also {path}/{file}.{method}.
- For Golang this is the {path} to the Go module.
- For Rust this is the {path} to the Rust crate.
Node.js
For example with Node.js you might have.
```ts
{
  handler: "packages/functions/src/main.handler"
}
```

Where packages/functions/src is the path. And main is the file, where you might have
a main.ts or main.js. And handler is the method exported in that file.
If bundle is specified, the handler needs to be in the root of the bundle directory.
```ts
{
  bundle: "packages/functions/src",
  handler: "index.handler"
}
```

Python
For Python, uv is used to package the function. You need to have it installed.
The functions need to be in a uv workspace.
```ts
{
  handler: "functions/src/functions/api.handler"
}
```

The project structure might look something like this. Where there is a
pyproject.toml file in the root and the functions/ directory is a uv
workspace with its own pyproject.toml.
```
├── sst.config.ts
├── pyproject.toml
└── functions
    ├── pyproject.toml
    └── src
        └── functions
            ├── __init__.py
            └── api.py
```

To make sure that the right runtime is used in sst dev, make sure to set the
version of Python in your pyproject.toml to match the runtime you are using.
```toml
requires-python = "==3.11.*"
```

You can refer to this example of deploying a Python function.
Golang
For Golang the handler looks like.
```ts
{
  handler: "packages/functions/go/some_module"
}
```

Where packages/functions/go/some_module is the path to the Go module. This
includes the name of the module in your go.mod. So in this case your go.mod
might be in packages/functions/go and some_module is the name of the
module.
You can refer to this example of deploying a Go function.
Rust
For Rust, the handler looks like.
```ts
{
  handler: "crates/api"
}
```

Where crates/api is the path to the Rust crate. This means there is a
Cargo.toml file in crates/api, and the main() function handles the lambda.
hook?
Type Object
Hook into the Lambda function build process.
hook.postbuild
postbuild(dir)

Parameters
- dir string
The directory where the function code is generated.
Returns Promise<void>
Specify a callback that’ll be run after the Lambda function is built.
Useful for modifying the generated Lambda function code before it’s deployed to AWS. It can also be used for uploading the generated sourcemaps to a service like Sentry.
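For example, a sketch of a postbuild hook; the callback body here is illustrative, only the `postbuild(dir)` signature comes from the docs above:

```ts
{
  hook: {
    postbuild: async (dir) => {
      // "dir" is the directory where the generated function code lives
      console.log(`Function built in ${dir}`);
    }
  }
}
```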
layers?
Type Input<Input<string>[]>
A list of Lambda layer ARNs to add to the function.
These are only added when the function is deployed. In sst dev, your functions are run
locally, so the layers are not used. Instead you should use a local version of what’s
in the layer.
```ts
{
  layers: ["arn:aws:lambda:us-east-1:123456789012:layer:my-layer:1"]
}
```

link?
Type Input<any[]>
Link resources to your function. This will:
- Grant the permissions needed to access the resources.
- Allow you to access it in your function using the SDK.
Takes a list of components to link to the function.
```ts
{
  link: [bucket, stripeKey]
}
```

logging?
Type false | Object
Default {retention: “1 month”, format: “json”}
Configure the workflow logs in CloudWatch. Or pass in false to disable writing logs.
The only supported log format is json.
logging.format?
Type Input<“json”>
Default “json”
The log format for the workflow.
AWS Lambda durable functions require structured JSON logs, so "json" is the only
supported value.
logging.logGroup?
Type Input<string>
Default Creates a log group
Assigns the given CloudWatch log group name to the workflow. This allows you to pass in a previously created log group.
By default, the workflow creates a new log group when it’s created.
```ts
{
  logging: {
    logGroup: "/existing/log-group"
  }
}
```

logging.retention?
Type Input<“1 day” | “3 days” | “5 days” | “1 week” | “2 weeks” | “1 month” | “2 months” | “3 months” | “4 months” | “5 months” | “6 months” | “1 year” | “13 months” | “18 months” | “2 years” | “3 years” | “5 years” | “6 years” | “7 years” | “8 years” | “9 years” | “10 years” | “forever”>
Default 1 month
The duration the workflow logs are kept in CloudWatch.
Not applicable when an existing log group is provided.
```ts
{
  logging: {
    retention: "forever"
  }
}
```

memory?
Type Input<“${number} MB” | “${number} GB”>
Default “1024 MB”
The amount of memory allocated for the function. Takes values between 128 MB and 10240 MB in 1 MB increments. The amount of memory affects the amount of virtual CPU available to the function.
```ts
{
  memory: "10240 MB"
}
```

name?
Type Input<string>
The name for the function.
By default, the name is generated from the app name, stage name, and component name. This is displayed in the AWS Console for this function.
If you are going to set the name, you need to make sure:
- It’s unique across your app.
- Uses the app and stage name, so it doesn’t thrash when you deploy to different stages.
Also, changing the name after you've deployed it once will create a new function and delete the old one.

```ts
{
  name: `${$app.name}-${$app.stage}-my-function`
}
```

nodejs?
Type Input<Object>
Configure how your function is bundled.
By default, SST will bundle your function code using esbuild. This tree shakes your code to only include what's used, reducing the size of your function package and improving cold starts.
nodejs.banner?
Type Input<string>
Use this to insert a string at the beginning of the generated JS file.
```ts
{
  nodejs: {
    banner: "console.log('Function starting')"
  }
}
```

nodejs.esbuild?
Type Input<EsbuildOptions>
This allows you to customize esbuild config that is used.
nodejs.format?
Type Input<“cjs” | “esm”>
Default “esm”
Configure the format of the generated JS code; ESM or CommonJS.
```ts
{
  nodejs: {
    format: "cjs"
  }
}
```

nodejs.install?
Type Input<string[] | Record<string, string>>
Dependencies that need to be excluded from the function package.
Certain npm packages cannot be bundled using esbuild. This allows you to exclude them
from the bundle. Instead they’ll be moved into a node_modules/ directory in the
function package.
This will allow your functions to be able to use these dependencies when deployed. They just won’t be tree shaken.
Esbuild will ignore them while traversing the imports in your code. So these are the package names as seen in the imports. It also works on packages that are not directly imported by your code.
```ts
{
  nodejs: {
    install: {
      pg: "8.13.1"
    }
  }
}
```

Passing ["packageName"] is the same as passing { packageName: "*" }.
nodejs.loader?
Type Input<Record<string, Loader>>
Configure additional esbuild loaders for other file extensions. This is useful
when your code is importing non-JS files like .png, .css, etc.
```ts
{
  nodejs: {
    loader: {
      ".png": "file"
    }
  }
}
```

nodejs.minify?
Type Input<boolean>
Default true
Configure whether the function code is minified when it's bundled. Disable to skip minification.
```ts
{
  nodejs: {
    minify: false
  }
}
```

nodejs.sourcemap?
Type Input<boolean>
Default false
Configure if source maps are added to the function bundle when deployed. Since they
increase payload size and potentially cold starts, they are not added by default.
However, they are always generated during sst dev.
```ts
{
  nodejs: {
    sourcemap: true
  }
}
```

nodejs.splitting?
Type Input<boolean>
Default false
If enabled, modules that are dynamically imported will be bundled in their own files with common dependencies placed in shared chunks. This can help reduce cold starts as your function grows in size.
```ts
{
  nodejs: {
    splitting: true
  }
}
```

permissions?
Type Input<Object[]>
Permissions and the resources that the function needs to access. These permissions are used to create the function’s IAM role.
Allow the function to read and write to an S3 bucket called my-bucket.
```ts
{
  permissions: [
    {
      actions: ["s3:GetObject", "s3:PutObject"],
      resources: ["arn:aws:s3:::my-bucket/*"]
    }
  ]
}
```

Allow the function to perform all actions on an S3 bucket called my-bucket.

```ts
{
  permissions: [
    {
      actions: ["s3:*"],
      resources: ["arn:aws:s3:::my-bucket/*"]
    }
  ]
}
```

Granting the function permissions to access all resources.

```ts
{
  permissions: [
    {
      actions: ["*"],
      resources: ["*"]
    }
  ]
}
```

permissions[].actions
Type Input<Input<string>[]>
The IAM actions that can be performed.
permissions[].conditions?
Type Input<Input<Object>[]>
Configure specific conditions for when the policy is in effect.
```ts
{
  conditions: [
    {
      test: "StringEquals",
      variable: "s3:x-amz-server-side-encryption",
      values: ["AES256"]
    },
    {
      test: "IpAddress",
      variable: "aws:SourceIp",
      values: ["10.0.0.0/16"]
    }
  ]
}
```

permissions[].conditions[].test
Type Input<string>
Name of the IAM condition operator to evaluate.
permissions[].conditions[].values
Type Input<Input<string>[]>
The values to evaluate the condition against. If multiple values are provided, the condition matches if at least one of them applies. That is, AWS evaluates multiple values as though using an “OR” boolean operation.
permissions[].conditions[].variable
Type Input<string>
Name of a Context Variable to apply the condition to. Context variables may either be standard AWS variables starting with aws: or service-specific variables prefixed with the service name.
permissions[].effect?
Type “allow” | “deny”
Default “allow”
Configures whether the permission is allowed or denied.
```ts
{
  effect: "deny"
}
```

permissions[].resources
Type Input<Input<string>[]>
The resources specified using the IAM ARN format.
```ts
{
  resources: ["arn:aws:s3:::my-bucket/*"]
}
```

policies?
Type Input<string[]>
Policies to attach to the function. These policies will be added to the function’s IAM role.
Attaching policies lets you grant a set of predefined permissions to the
function without having to specify the permissions in the permissions prop.
For example, allow the function to have read-only access to all resources.
```ts
{
  policies: ["arn:aws:iam::aws:policy/ReadOnlyAccess"]
}
```

python?
Type Input<Object>
- container? Input<boolean | Object>
Configure how your Python function is packaged.
python.container?
Type Input<boolean | Object>
Default false
Set this to true if you want to deploy this function as a container image.
There are a couple of reasons why you might want to do this.
- The Lambda package size has an unzipped limit of 250MB. Whereas the container image size has a limit of 10GB.
- Even if you are below the 250MB limit, larger Lambda function packages have longer cold starts compared to container images.
- You might want to use a custom Dockerfile to handle complex builds.
```ts
{
  python: {
    container: true
  }
}
```

When you run sst deploy, it uses a built-in Dockerfile. It also needs
the Docker daemon to be running.
To use a custom Dockerfile, add one to the root of the uv workspace of the function.

```
├── sst.config.ts
├── pyproject.toml
└── function
    ├── pyproject.toml
    ├── Dockerfile
    └── src
        └── function
            └── api.py
```

You can refer to this example of using a container image.
python.container.cache?
Type Input<boolean>
Default true
Controls whether Docker build cache is enabled.
Disable Docker build caching, useful for environments like Localstack where ECR cache export is not supported.
```ts
{
  python: {
    container: {
      cache: false
    }
  }
}
```

retention?
Type Input<“${number} day” | “${number} days”>
Default “30 days”
Number of days to retain the workflow execution state.
role?
Type Input<string>
Default Creates a new role
Assigns the given IAM role ARN to the function. This allows you to pass in a previously created role.
By default, the function creates a new IAM role when it’s created. It’ll update this role if you add permissions or link resources.
However, if you pass in a role, you’ll need to update it manually if you add permissions or link resources.
```ts
{
  role: "arn:aws:iam::123456789012:role/my-role"
}
```

runtime?
Type Input<“nodejs22.x” | “nodejs24.x” | “python3.13”>
Default “nodejs24.x”
The language runtime for the workflow.
AWS Lambda durable functions currently support "nodejs22.x", "nodejs24.x", and
"python3.13".
```ts
{
  runtime: "python3.13"
}
```

storage?
Type Input<“${number} MB” | “${number} GB”>
Default “512 MB”
The amount of ephemeral storage allocated for the function. This sets the ephemeral storage of the lambda function (/tmp). Must be between “512 MB” and “10240 MB” (“10 GB”) in 1 MB increments.
```ts
{
  storage: "5 GB"
}
```

tags?
Type Input<Record<string, Input<string>>>
A list of tags to add to the function.
```ts
{
  tags: {
    "my-tag": "my-value"
  }
}
```

timeout?
Type Input<Object>
Configure timeout limits for the workflow execution and each underlying Lambda invocation.
timeout.execution?
Type Input<${number} minute | ${number} minutes | ${number} hour | ${number} hours | ${number} second | ${number} seconds | ${number} day | ${number} days> | undefined
Default “14 days”
Maximum execution time for the entire workflow execution, from when it starts until it completes.
This includes time spent across retries, replays, waits, and all durable invocations.
timeout.invocation?
Type Input<${number} minute | ${number} minutes | ${number} second | ${number} seconds> | undefined
Default “5 minutes”
Maximum execution time for each underlying Lambda invocation.
This is not a per-step timeout. A single invocation can run multiple steps before the workflow yields, waits, or replays.
transform?
Type Object
Transform how this component creates its underlying resources.
transform.function?
Type Object
Transform the underlying SST Function component resources.
transform.function.eventInvokeConfig?
Type FunctionEventInvokeConfigArgs | (args: FunctionEventInvokeConfigArgs, opts: ComponentResourceOptions, name: string) => void
Transform the Function Event Invoke Config resource. This is only created
when the retries property is set.
transform.function.function?
Type FunctionArgs | (args: FunctionArgs, opts: ComponentResourceOptions, name: string) => void
Transform the Lambda Function resource.
transform.function.logGroup?
Type LogGroupArgs | (args: LogGroupArgs, opts: ComponentResourceOptions, name: string) => void
Transform the CloudWatch LogGroup resource.
transform.function.role?
Type RoleArgs | (args: RoleArgs, opts: ComponentResourceOptions, name: string) => void
Transform the IAM Role resource.
volume?
Type Input<Object>
Mount an EFS file system to the function.
Create an EFS file system.
```ts
const vpc = new sst.aws.Vpc("MyVpc");
const fileSystem = new sst.aws.Efs("MyFileSystem", { vpc });
```

And pass it in.

```ts
{
  volume: {
    efs: fileSystem
  }
}
```

By default, the file system will be mounted to /mnt/efs. You can change this by passing in the path property.

```ts
{
  volume: {
    efs: fileSystem,
    path: "/mnt/my-files"
  }
}
```

To use an existing EFS, you can pass in an EFS access point ARN.

```ts
{
  volume: {
    efs: "arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-12345678",
  }
}
```

volume.efs
Type Input<string | Efs>
The EFS file system to mount. Or an EFS access point ARN.
volume.path?
Type Input<string>
Default “/mnt/efs”
The path to mount the volume.
vpc?
Type Vpc | Input<Object>
Configure the function to connect to private subnets in a virtual private cloud or VPC. This allows your function to access private resources.
Create a Vpc component.
```ts
const myVpc = new sst.aws.Vpc("MyVpc");
```

Or reference an existing VPC.

```ts
const myVpc = sst.aws.Vpc.get("MyVpc", "vpc-12345678901234567");
```

And pass it in.

```ts
{
  vpc: myVpc
}
```

vpc.privateSubnets
Type Input<Input<string>[]>
A list of VPC subnet IDs.
vpc.securityGroups
Type Input<Input<string>[]>
A list of VPC security group IDs.
Properties
arn
Type Output<string>
The ARN of the Lambda function backing the workflow.
name
Type Output<string>
The name of the Lambda function backing the workflow.
nodes
nodes.function
Type Function
The SST Function component backing the workflow.
qualifier
Type Output<undefined | string>
The published version qualifier backing the workflow.
SDK
Use the SDK in your runtime to interact with your infrastructure.
Links
This is accessible through the Resource object in the SDK.
- name string
The name of the Lambda function backing the workflow.
- qualifier undefined | string
The published version qualifier backing the workflow.
The workflow SDK is a thin wrapper around the
@aws/durable-execution-sdk-js
package and the AWS Lambda durable execution APIs.
SST also adds a few helpers on top, including ctx.stepWithRollback(),
ctx.rollbackAll(), and ctx.waitUntil().
```ts
import { workflow } from "sst/aws/workflow";
```

Use stepWithRollback() and rollbackAll() to register compensating actions.
```ts
import { workflow } from "sst/aws/workflow";

export const handler = workflow.handler(async (_event, ctx) => {
  try {
    const order = await ctx.stepWithRollback("create-order", {
      run: async () => ({ orderId: "order_123" }),
      undo: async (error, result) => {
        await fetch(`https://example.com/orders/${result.orderId}`, {
          method: "DELETE",
        });
      },
    });

    await ctx.step("charge-card", async () => {
      throw new Error("Card declined");
    });

    return order;
  } catch (error) {
    await ctx.rollbackAll(error);
    throw error;
  }
});
```

Use waitUntil() when you already know the exact time the workflow should resume.
```ts
import { workflow } from "sst/aws/workflow";

export const handler = workflow.handler(async (_event, ctx) => {
  const resumeAt = new Date();
  resumeAt.setMinutes(resumeAt.getMinutes() + 10);

  await ctx.waitUntil("wait-for-follow-up", resumeAt);

  return ctx.step("send-follow-up", async () => {
    return { delivered: true };
  });
});
```

describe
workflow.describe(arn, options?)

Parameters
- arn string
- options? Options
Returns Promise<DescribeResponse>
Get the details for a single workflow execution.
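For example, a sketch that checks on an execution; `getExecutionStatus` and `executionArn` are illustrative, only the `describe(arn)` signature comes from the docs above:

```typescript
import { workflow } from "sst/aws/workflow";

// "executionArn" is a placeholder; pass the ARN returned by workflow.start().
export async function getExecutionStatus(executionArn: string) {
  const execution = await workflow.describe(executionArn);
  return execution.status;
}
```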
fail
workflow.fail(token, input, options?)

Parameters
- token string
- input FailInput
- options? Options
Returns Promise<void>
Send a failure result for a pending workflow callback.
This is equivalent to calling
SendDurableExecutionCallbackFailure.
handler
workflow.handler(input, config?)

Parameters
- input Handler<TEvent, TResult, TLogger>
- config? DurableExecutionConfig
Returns DurableLambdaHandler
Create a durable workflow handler.
```ts
import { workflow } from "sst/aws/workflow";

export const handler = workflow.handler(async (_event, ctx) => {
  const user = await ctx.step("load-user", async () => {
    return { id: "user_123", email: "alice@example.com" };
  });

  await ctx.wait("pause-before-email", "1 minute");

  return ctx.step("send-email", async () => {
    return { sent: true, userId: user.id };
  });
});
```

heartbeat
workflow.heartbeat(token, options?)

Parameters
- token string
- options? Options
Returns Promise<void>
Send a heartbeat for a pending workflow callback.
This is useful when the external system handling the callback is still doing work and needs to prevent the callback from timing out.
This is equivalent to calling
SendDurableExecutionCallbackHeartbeat.
list
workflow.list(resource, query, options?)

Parameters
Returns Promise<ListResponse>
List workflow executions.
The SDK returns only the first page of results.
start
workflow.start(resource, input, options?)

Parameters
Returns Promise<StartResponse>
Start a new workflow execution.
This is equivalent to calling
Invoke
for a durable Lambda function, using the durable invocation flow described in
Invoking durable Lambda functions.
stop
workflow.stop(arn, input?, options?)

Parameters
- arn string
- input? StopInput
- options? Options
Returns Promise<StopResponse>
Stop a running workflow execution.
succeed
workflow.succeed(token, input?, options?)

Parameters
- token string
- input? SucceedInput<TPayload>
- options? Options
Returns Promise<void>
Send a successful result for a pending workflow callback.
This is equivalent to calling
SendDurableExecutionCallbackSuccess.
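A sketch of completing a pending callback from an external system; `completeCallback` and `token` are illustrative, only the `heartbeat(token)` and `succeed(token)` signatures come from the docs above:

```typescript
import { workflow } from "sst/aws/workflow";

// "token" is a placeholder for a pending workflow callback token.
export async function completeCallback(token: string) {
  // Keep the callback from timing out while external work finishes.
  await workflow.heartbeat(token);

  // Then report success. The success payload is optional.
  await workflow.succeed(token);
}
```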
Context
Type Object
Only showing custom SDK methods here. For the full API, see the AWS Durable Execution SDK docs.
Context.rollbackAll
rollbackAll(error)

Parameters
- error unknown
Returns Promise<void>
Execute all registered rollback steps in reverse order.
Context.stepWithRollback
stepWithRollback(name, handler, config?)

Parameters
- name string
- handler StepWithRollbackHandler<TOutput, TLogger>
- config? StepConfig<TOutput>
Returns DurablePromise<TOutput>
Execute a durable step and register a compensating rollback step if it succeeds.
If run throws, nothing is added to the rollback stack for that step.
Context.waitUntil
waitUntil(name, until)

Parameters
- name string
- until Date
Returns DurablePromise<void>
Wait until the provided time. Delays are rounded up to the nearest second.
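The documented rounding can be sketched as a plain helper; `roundUpToSecond` is illustrative, not an SDK function:

```typescript
// Delays round up to the nearest whole second, per the note above.
function roundUpToSecond(delayMs: number): number {
  return Math.ceil(delayMs / 1000) * 1000;
}
```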
DescribeResponse
DescribeResponse.arn
Type string
The ARN of the durable execution.
DescribeResponse.createdAt
Type Date
When the execution started.
DescribeResponse.endedAt?
Type Date
When the execution ended, if it has finished.
DescribeResponse.functionArn
Type string
The ARN of the workflow function.
DescribeResponse.name
Type string
The durable execution name.
DescribeResponse.status
Type ExecutionStatus
The current execution status.
DescribeResponse.version?
Type string
The version that started the execution.
Execution
Execution.arn
Type string
The ARN of the durable execution.
Execution.createdAt
Type Date
When the execution started.
Execution.endedAt?
Type Date
When the execution ended, if it has finished.
Execution.functionArn
Type string
The ARN of the workflow function.
Execution.name
Type string
The durable execution name.
Execution.status
Type ExecutionStatus
The current execution status.
ListResponse
Type Object
ListResponse.executions
Type Execution[]
The matching executions.
Options
Type Object
Options.aws?
Resource
Resource.name
Type string
The name of the workflow function.
Resource.qualifier
Type string
The version or alias qualifier to invoke.
Linked sst.aws.Workflow resources include this automatically.
StartResponse
Type Object
StartResponse.arn?
Type string
The ARN of the durable execution.
StartResponse.statusCode
Type number
The HTTP status code from Lambda.
StartResponse.version?
Type string
The function version that was executed.
StopResponse
Type Object
StopResponse.arn
Type string
The ARN of the durable execution.
StopResponse.status
Type “STOPPED”
The execution status after the stop call.
StopResponse.stoppedAt?
Type Date
When the execution was stopped.