Deploy with Terraform
This guide covers deploying Strands agents using Terraform infrastructure as code. Terraform enables consistent, repeatable deployments across AWS, Google Cloud, Azure, and other cloud providers.
Terraform supports multiple deployment targets. This guide illustrates four deployment options across different cloud service providers:
- AWS App Runner - Simple containerized deployment with automatic scaling
- AWS Lambda - Serverless functions for event-driven workloads
- Google Cloud Run - Fully managed serverless containers
- Azure Container Instances - Simple container deployment
Prerequisites
- Docker deployment guide completed: you must have a working containerized agent before proceeding
- Terraform installed
- Cloud provider CLI configured:
  - AWS: AWS CLI credentials
  - GCP: gcloud CLI
  - Azure: Azure CLI
Step 1: Container Registry Deployment
Cloud deployment requires your containerized agent to be available in a container registry. The following assumes you have completed the Docker deployment guide and pushed your image to the appropriate registry:
Docker Tutorial Project Structure:
Project Structure (Python):

```
my-python-app/
├── agent.py          # FastAPI application (from Docker tutorial)
├── Dockerfile        # Container configuration (from Docker tutorial)
├── pyproject.toml    # Created by uv init
├── uv.lock           # Created automatically by uv
```

Project Structure (TypeScript):

```
my-typescript-app/
├── index.ts            # Express application (from Docker tutorial)
├── Dockerfile          # Container configuration (from Docker tutorial)
├── package.json        # Created by npm init
├── tsconfig.json       # TypeScript configuration
├── package-lock.json   # Created automatically by npm
```

Deploy-specific Docker configurations
AWS App Runner
Image Requirements:
- Standard Docker images supported
Container Registry Requirements:
- Amazon Elastic Container Registry (See documentation to push Docker image to ECR)
Docker Deployment Guide Modifications:
- No special base image required (standard Docker images work)
- Ensure your app listens on port 8080 (or configure port in terraform)
- Build with:

```bash
docker build --platform linux/amd64 -t my-agent .
```
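Before pushing the image, it can help to verify the health-check contract App Runner will probe. The sketch below is an illustrative stand-in for the tutorial's FastAPI app using only the standard library; the `/ping` route mirrors the test in Step 5, while the response body and handler names are assumptions:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class PingHandler(BaseHTTPRequestHandler):
    """Illustrative stand-in for the tutorial's app: answers /ping."""

    def do_GET(self):
        if self.path == "/ping":
            body = json.dumps({"status": "healthy"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep demo output quiet

# Port 0 picks a free port for this local demo; in the container you would
# bind 0.0.0.0:8080 to match App Runner's configured port.
server = ThreadingHTTPServer(("127.0.0.1", 0), PingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/ping") as resp:
    status, payload = resp.status, json.loads(resp.read())
server.shutdown()
print(status, payload)  # → 200 {'status': 'healthy'}
```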
AWS Lambda
Image Requirements:
- Must use Lambda-compatible base images:
  - Python: `public.ecr.aws/lambda/python:3.11`
  - TypeScript/Node.js: `public.ecr.aws/lambda/nodejs:20`
Container Registry Requirements:
- Amazon Elastic Container Registry (See documentation to push Docker image to ECR)
Docker Deployment Guide Modifications:
- Update Dockerfile base image to Lambda-compatible version
- Change CMD to the Lambda handler format: `CMD ["index.handler"]` or `CMD ["app.lambda_handler"]`
- Build with Lambda flags:

```bash
docker build --platform linux/amd64 --provenance=false --sbom=false -t my-agent .
```

- Add a Lambda handler to your code:
  - Python FastAPI (Recommended): Use Mangum: `lambda_handler = Mangum(app)`
  - Manual handlers: Accept `(event, context)` parameters and return Lambda-compatible responses
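As a local sanity check, a manual handler can be invoked directly with a stub event. The event shape below is a simplified approximation of a Lambda Function URL POST, not the full payload, and the handler itself is only a hypothetical echo:

```python
import json

def lambda_handler(event, context):
    # Hypothetical minimal handler: echo the prompt from the request body
    body = json.loads(event.get("body") or "{}")
    prompt = body.get("input", {}).get("prompt", "")
    return {"statusCode": 200, "body": json.dumps({"echo": prompt})}

# Stub Function-URL-style event (simplified shape, for illustration only)
event = {"body": json.dumps({"input": {"prompt": "hello"}})}
response = lambda_handler(event, None)
print(response["statusCode"], json.loads(response["body"]))  # → 200 {'echo': 'hello'}
```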
Lambda Handler Examples:
Python with Mangum:
```python
from mangum import Mangum
from your_app import app  # Your existing FastAPI app

lambda_handler = Mangum(app)
```

TypeScript:

```typescript
export const handler = async (event: any, context: any) => {
  // Your existing agent logic here
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Agent response" })
  };
};
```

Python:

```python
import json

def lambda_handler(event, context):
    # Your existing agent logic here
    return {
        'statusCode': 200,
        'body': json.dumps({'message': 'Agent response'})
    }
```

Google Cloud Run
Image Requirements:
- Standard Docker images supported
Container Registry Requirements:
- Google Artifact Registry (See documentation to push Docker image to GAR)
Docker Deployment Guide Modifications:
- No special base image required (standard Docker images work)
- Ensure your app listens on the port specified by the `PORT` environment variable
- Build with:

```bash
docker build --platform linux/amd64 -t my-agent .
```
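In Python, honoring Cloud Run's injected port is a one-liner; the 8080 fallback for local runs is a convention, not a Cloud Run requirement:

```python
import os

# Cloud Run injects the listening port via the PORT environment variable;
# fall back to 8080 for local development.
port = int(os.environ.get("PORT", "8080"))
print(port)
# e.g. uvicorn.run(app, host="0.0.0.0", port=port) for the tutorial's FastAPI app
```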
Azure Container Instances
Image Requirements:
- Standard Docker images supported
Container Registry Requirements:
- Azure Container Registry (See documentation to push Docker image to ACR)
Docker Deployment Guide Modifications:
- No special base image required (standard Docker images work)
- Ensure your app exposes the correct port (typically 8080)
- Build with:

```bash
docker build --platform linux/amd64 -t my-agent .
```
Step 2: Cloud Deployment Setup
Optional: AWS App Runner Setup All-in-One Bash Command
Copy and paste this bash script to create all necessary terraform files and skip remaining “Cloud Deployment Setup” steps below:
```bash
generate_aws_apprunner_terraform() {
  mkdir -p terraform

  # Generate main.tf
  cat > terraform/main.tf << 'EOF'
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

resource "aws_iam_role" "apprunner_ecr_access_role" {
  name = "apprunner-ecr-access-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "build.apprunner.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "apprunner_ecr_access_policy" {
  role       = aws_iam_role.apprunner_ecr_access_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSAppRunnerServicePolicyForECRAccess"
}

resource "aws_apprunner_service" "agent" {
  service_name = "strands-agent-v4"

  source_configuration {
    image_repository {
      image_identifier = var.agent_image
      image_configuration {
        port = "8080"
        runtime_environment_variables = {
          OPENAI_API_KEY = var.openai_api_key
        }
      }
      image_repository_type = "ECR"
    }
    auto_deployments_enabled = false
    authentication_configuration {
      access_role_arn = aws_iam_role.apprunner_ecr_access_role.arn
    }
  }

  instance_configuration {
    cpu    = "0.25 vCPU"
    memory = "0.5 GB"
  }
}
EOF

  # Generate variables.tf
  cat > terraform/variables.tf << 'EOF'
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "agent_image" {
  description = "Container image for Strands agent"
  type        = string
}

variable "openai_api_key" {
  description = "OpenAI API key"
  type        = string
  sensitive   = true
}
EOF

  # Generate outputs.tf
  cat > terraform/outputs.tf << 'EOF'
output "agent_url" {
  description = "AWS App Runner service URL"
  value       = aws_apprunner_service.agent.service_url
}
EOF

  # Generate terraform.tfvars template
  cat > terraform/terraform.tfvars << 'EOF'
agent_image    = "your-account.dkr.ecr.us-east-1.amazonaws.com/my-image:latest"
openai_api_key = "<your-openai-api-key>"
EOF

  echo "✅ AWS App Runner Terraform files generated in terraform/ directory"
}

generate_aws_apprunner_terraform
```

Step by Step Guide
Create terraform directory
```bash
mkdir terraform
cd terraform
```

Create main.tf
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

resource "aws_iam_role" "apprunner_ecr_access_role" {
  name = "apprunner-ecr-access-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "build.apprunner.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "apprunner_ecr_access_policy" {
  role       = aws_iam_role.apprunner_ecr_access_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSAppRunnerServicePolicyForECRAccess"
}

resource "aws_apprunner_service" "agent" {
  service_name = "strands-agent-v4"

  source_configuration {
    image_repository {
      image_identifier = var.agent_image
      image_configuration {
        port = "8080"
        runtime_environment_variables = {
          OPENAI_API_KEY = var.openai_api_key
        }
      }
      image_repository_type = "ECR"
    }
    auto_deployments_enabled = false
    authentication_configuration {
      access_role_arn = aws_iam_role.apprunner_ecr_access_role.arn
    }
  }

  instance_configuration {
    cpu    = "0.25 vCPU"
    memory = "0.5 GB"
  }
}
```

Create variables.tf
```hcl
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "agent_image" {
  description = "Container image for Strands agent"
  type        = string
}

variable "openai_api_key" {
  description = "OpenAI API key"
  type        = string
  sensitive   = true
}
```

Create outputs.tf

```hcl
output "agent_url" {
  description = "AWS App Runner service URL"
  value       = aws_apprunner_service.agent.service_url
}
```

Optional: AWS Lambda Setup All-in-One Bash Command
Copy and paste this bash script to create all necessary terraform files and skip remaining “Cloud Deployment Setup” steps below:
```bash
generate_aws_lambda_terraform() {
  mkdir -p terraform

  # Generate main.tf
  cat > terraform/main.tf << 'EOF'
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

resource "aws_lambda_function" "agent" {
  function_name = "strands-agent"
  role          = aws_iam_role.lambda.arn
  image_uri     = var.agent_image
  package_type  = "Image"
  architectures = ["x86_64"]
  timeout       = 30
  memory_size   = 512

  environment {
    variables = {
      OPENAI_API_KEY = var.openai_api_key
    }
  }
}

resource "aws_lambda_function_url" "agent" {
  function_name      = aws_lambda_function.agent.function_name
  authorization_type = "NONE"
}

resource "aws_iam_role" "lambda" {
  name = "strands-agent-lambda-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "lambda" {
  role       = aws_iam_role.lambda.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
EOF

  # Generate variables.tf
  cat > terraform/variables.tf << 'EOF'
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "agent_image" {
  description = "Container image for Strands agent"
  type        = string
}

variable "openai_api_key" {
  description = "OpenAI API key"
  type        = string
  sensitive   = true
}
EOF

  # Generate outputs.tf
  cat > terraform/outputs.tf << 'EOF'
output "agent_url" {
  description = "AWS Lambda function URL"
  value       = aws_lambda_function_url.agent.function_url
}
EOF

  # Generate terraform.tfvars template
  cat > terraform/terraform.tfvars << 'EOF'
agent_image    = "your-account.dkr.ecr.us-east-1.amazonaws.com/my-image:latest"
openai_api_key = "<your-openai-api-key>"
EOF

  echo "✅ AWS Lambda Terraform files generated in terraform/ directory"
}

generate_aws_lambda_terraform
```

Step by Step Guide
Create terraform directory
```bash
mkdir terraform
cd terraform
```

Create main.tf
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

resource "aws_lambda_function" "agent" {
  function_name = "strands-agent"
  role          = aws_iam_role.lambda.arn
  image_uri     = var.agent_image
  package_type  = "Image"
  architectures = ["x86_64"]
  timeout       = 30
  memory_size   = 512

  environment {
    variables = {
      OPENAI_API_KEY = var.openai_api_key
    }
  }
}

resource "aws_lambda_function_url" "agent" {
  function_name      = aws_lambda_function.agent.function_name
  authorization_type = "NONE"
}

resource "aws_iam_role" "lambda" {
  name = "strands-agent-lambda-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "lambda" {
  role       = aws_iam_role.lambda.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
```

Create variables.tf
```hcl
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "agent_image" {
  description = "Container image for Strands agent"
  type        = string
}

variable "openai_api_key" {
  description = "OpenAI API key"
  type        = string
  sensitive   = true
}
```

Create outputs.tf

```hcl
output "agent_url" {
  description = "AWS Lambda function URL"
  value       = aws_lambda_function_url.agent.function_url
}
```

Optional: Google Cloud Run Setup All-in-One Bash Command
Copy and paste this bash script to create all necessary terraform files and skip remaining “Cloud Deployment Setup” steps below:
```bash
generate_google_cloud_run_terraform() {
  mkdir -p terraform

  # Generate main.tf
  cat > terraform/main.tf << 'EOF'
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.0"
    }
  }
}

provider "google" {
  project = var.gcp_project
  region  = var.gcp_region
}

resource "google_cloud_run_service" "agent" {
  name     = "strands-agent"
  location = var.gcp_region

  template {
    spec {
      containers {
        image = var.agent_image
        env {
          name  = "OPENAI_API_KEY"
          value = var.openai_api_key
        }
      }
    }
  }
}

resource "google_cloud_run_service_iam_member" "public" {
  service  = google_cloud_run_service.agent.name
  location = google_cloud_run_service.agent.location
  role     = "roles/run.invoker"
  member   = "allUsers"
}
EOF

  # Generate variables.tf
  cat > terraform/variables.tf << 'EOF'
variable "gcp_project" {
  description = "GCP project ID"
  type        = string
}

variable "gcp_region" {
  description = "GCP region"
  type        = string
  default     = "us-central1"
}

variable "agent_image" {
  description = "Container image for Strands agent"
  type        = string
}

variable "openai_api_key" {
  description = "OpenAI API key"
  type        = string
  sensitive   = true
}
EOF

  # Generate outputs.tf
  cat > terraform/outputs.tf << 'EOF'
output "agent_url" {
  description = "Google Cloud Run service URL"
  value       = google_cloud_run_service.agent.status[0].url
}
EOF

  # Generate terraform.tfvars template
  cat > terraform/terraform.tfvars << 'EOF'
gcp_project    = "<your-project-id>"
agent_image    = "gcr.io/your-project/my-image:latest"
openai_api_key = "<your-openai-api-key>"
EOF

  echo "✅ Google Cloud Run Terraform files generated in terraform/ directory"
}

generate_google_cloud_run_terraform
```

Step by Step Guide
Create terraform directory
```bash
mkdir terraform
cd terraform
```

Create main.tf
```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.0"
    }
  }
}

provider "google" {
  project = var.gcp_project
  region  = var.gcp_region
}

resource "google_cloud_run_service" "agent" {
  name     = "strands-agent"
  location = var.gcp_region

  template {
    spec {
      containers {
        image = var.agent_image
        env {
          name  = "OPENAI_API_KEY"
          value = var.openai_api_key
        }
        env {
          name  = "GOOGLE_GENAI_USE_VERTEXAI"
          value = "false"
        }
        env {
          name  = "GOOGLE_API_KEY"
          value = var.google_api_key
        }
      }
    }
  }
}

resource "google_cloud_run_service_iam_member" "public" {
  service  = google_cloud_run_service.agent.name
  location = google_cloud_run_service.agent.location
  role     = "roles/run.invoker"
  member   = "allUsers"
}
```

Create variables.tf
```hcl
variable "gcp_project" {
  description = "GCP project ID"
  type        = string
}

variable "gcp_region" {
  description = "GCP region"
  type        = string
  default     = "us-central1"
}

variable "agent_image" {
  description = "Container image for Strands agent"
  type        = string
}

variable "openai_api_key" {
  description = "OpenAI API key"
  type        = string
  sensitive   = true
}

variable "google_api_key" {
  description = "Google API key"
  type        = string
  sensitive   = true
}
```

Create outputs.tf

```hcl
output "agent_url" {
  description = "Google Cloud Run service URL"
  value       = google_cloud_run_service.agent.status[0].url
}
```

Optional: Azure Container Instances Setup All-in-One Bash Command
Copy and paste this bash script to create all necessary terraform files and skip remaining “Cloud Deployment Setup” steps below:
```bash
generate_azure_container_instance_terraform() {
  mkdir -p terraform

  # Generate main.tf
  cat > terraform/main.tf << 'EOF'
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

data "azurerm_container_registry" "acr" {
  name                = var.acr_name
  resource_group_name = var.acr_resource_group
}

resource "azurerm_resource_group" "main" {
  name     = "strands-agent"
  location = var.azure_location
}

resource "azurerm_container_group" "agent" {
  name                = "strands-agent"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  ip_address_type     = "Public"
  os_type             = "Linux"

  image_registry_credential {
    server   = "${var.acr_name}.azurecr.io"
    username = var.acr_name
    password = data.azurerm_container_registry.acr.admin_password
  }

  container {
    name   = "agent"
    image  = var.agent_image
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port = 8080
    }

    environment_variables = {
      OPENAI_API_KEY = var.openai_api_key
    }
  }
}
EOF

  # Generate variables.tf
  cat > terraform/variables.tf << 'EOF'
variable "azure_location" {
  description = "Azure location"
  type        = string
  default     = "East US"
}

variable "agent_image" {
  description = "Container image for Strands agent"
  type        = string
}

variable "openai_api_key" {
  description = "OpenAI API key"
  type        = string
  sensitive   = true
}

variable "acr_name" {
  description = "Azure Container Registry name"
  type        = string
}

variable "acr_resource_group" {
  description = "Azure Container Registry resource group"
  type        = string
}
EOF

  # Generate outputs.tf
  cat > terraform/outputs.tf << 'EOF'
output "agent_url" {
  description = "Azure Container Instance URL"
  value       = "http://${azurerm_container_group.agent.ip_address}:8080"
}
EOF

  # Generate terraform.tfvars template
  cat > terraform/terraform.tfvars << 'EOF'
agent_image        = "your-registry.azurecr.io/my-image:latest"
openai_api_key     = "<your-openai-api-key>"
acr_name           = "<your-acr-name>"
acr_resource_group = "<your-resource-group>"
EOF

  echo "✅ Azure Container Instance Terraform files generated in terraform/ directory"
}

generate_azure_container_instance_terraform
```

Step by Step Guide
Create terraform directory
```bash
mkdir terraform
cd terraform
```

Create main.tf
```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

data "azurerm_container_registry" "acr" {
  name                = var.acr_name
  resource_group_name = var.acr_resource_group
}

resource "azurerm_resource_group" "main" {
  name     = "strands-agent"
  location = var.azure_location
}

resource "azurerm_container_group" "agent" {
  name                = "strands-agent"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  ip_address_type     = "Public"
  os_type             = "Linux"

  image_registry_credential {
    server   = "${var.acr_name}.azurecr.io"
    username = var.acr_name
    password = data.azurerm_container_registry.acr.admin_password
  }

  container {
    name   = "agent"
    image  = var.agent_image
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port = 8080
    }

    environment_variables = {
      OPENAI_API_KEY = var.openai_api_key
    }
  }
}
```

Create variables.tf
```hcl
variable "azure_location" {
  description = "Azure location"
  type        = string
  default     = "East US"
}

variable "agent_image" {
  description = "Container image for Strands agent"
  type        = string
}

variable "openai_api_key" {
  description = "OpenAI API key"
  type        = string
  sensitive   = true
}

variable "acr_name" {
  description = "Azure Container Registry name"
  type        = string
}

variable "acr_resource_group" {
  description = "Azure Container Registry resource group"
  type        = string
}
```

Create outputs.tf

```hcl
output "agent_url" {
  description = "Azure Container Instance URL"
  value       = "http://${azurerm_container_group.agent.ip_address}:8080"
}
```

Step 3: Configure Variables
Update terraform/terraform.tfvars based on your chosen provider:
AWS App Runner:

```hcl
agent_image    = "your-account.dkr.ecr.us-east-1.amazonaws.com/my-image:latest"
openai_api_key = "<your-openai-api-key>"
```

This example uses OpenAI, but any supported model provider can be configured. See the Strands documentation for all supported model providers.

Note: Bedrock model provider credentials are automatically passed using App Runner’s IAM role and do not need to be specified in Terraform.

AWS Lambda:

```hcl
agent_image    = "your-account.dkr.ecr.us-east-1.amazonaws.com/my-image:latest"
openai_api_key = "<your-openai-api-key>"
```

This example uses OpenAI, but any supported model provider can be configured. See the Strands documentation for all supported model providers.

Note: Bedrock model provider credentials are automatically passed using Lambda’s IAM role and do not need to be specified in Terraform.

Google Cloud Run:

```hcl
gcp_project    = "your-project-id"
agent_image    = "gcr.io/your-project/my-image:latest"
openai_api_key = "<your-openai-api-key>"
```

This example uses OpenAI, but any supported model provider can be configured. See the Strands documentation for all supported model providers. For instance, to use Bedrock model provider credentials:

```hcl
aws_access_key_id     = "<your-aws-access-key-id>"
aws_secret_access_key = "<your-aws-secret-key>"
```

Azure Container Instances:

```hcl
agent_image        = "your-registry.azurecr.io/my-image:latest"
openai_api_key     = "<your-openai-api-key>"
acr_name           = "<your-registry>"
acr_resource_group = "<your-resource-group>"
```

This example uses OpenAI, but any supported model provider can be configured. See the Strands documentation for all supported model providers. For instance, to use Bedrock model provider credentials:

```hcl
aws_access_key_id     = "<your-aws-access-key-id>"
aws_secret_access_key = "<your-aws-secret-key>"
```

Step 4: Deploy Infrastructure
```bash
# Initialize Terraform
terraform init

# Review the deployment plan
terraform plan

# Deploy the infrastructure
terraform apply

# Get the endpoints
terraform output
```

Step 5: Test Your Deployment
Test the endpoints using the output URLs:

```bash
# Health check
curl http://<your-service-url>/ping

# Test agent invocation
curl -X POST http://<your-service-url>/invocations \
  -H "Content-Type: application/json" \
  -d '{"input": {"prompt": "What is artificial intelligence?"}}'
```

Step 6: Making Changes
When you modify your code, redeploy with:

```bash
# Rebuild and push image
docker build -t <your-registry>/my-image:latest .
docker push <your-registry>/my-image:latest

# Update infrastructure
terraform apply
```

Cleanup
Remove the infrastructure when done:

```bash
terraform destroy
```