diff --git a/s3-lambda-bedrock-tf/Diagram.png b/s3-lambda-bedrock-tf/Diagram.png new file mode 100644 index 000000000..6083e2a9e Binary files /dev/null and b/s3-lambda-bedrock-tf/Diagram.png differ diff --git a/s3-lambda-bedrock-tf/README.md b/s3-lambda-bedrock-tf/README.md new file mode 100644 index 000000000..7810c2791 --- /dev/null +++ b/s3-lambda-bedrock-tf/README.md @@ -0,0 +1,237 @@ +# Serverless Image Analysis and Auto-Tagging with Amazon Bedrock Nova Lite + +![architecture](architecture/architecture.png) + +This pattern implements a serverless image analysis and auto-tagging service using Amazon S3, AWS Lambda and Amazon Bedrock's Nova Lite model. When images are uploaded to an S3 bucket, it automatically triggers a Lambda function that analyzes the image content using Amazon Nova Lite and applies descriptive tags as S3 metadata. + +The Lambda function processes the uploaded image by sending it to Amazon Bedrock's Nova Lite model for analysis. Nova Lite generates 10 descriptive words based on the image content, and these words are automatically applied as S3 object tags, enabling powerful search and organization capabilities for your image collection. + +Learn more about this pattern at Serverless Land Patterns: https://serverlessland.com/patterns/s3-lambda-bedrock-nova + +Important: this application uses various AWS services and there are costs associated with these services after the Free Tier usage - please see the [AWS Pricing page](https://aws.amazon.com/pricing/) for details. You are responsible for any AWS costs incurred. No warranty is implied in this example. + +## Requirements + +* [Create an AWS account](https://portal.aws.amazon.com/gp/aws/developer/registration/index.html) if you do not already have one and log in. The IAM user that you use must have sufficient permissions to make necessary AWS service calls and manage AWS resources. 
+* [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) installed and configured
+* [Git Installed](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
+* [Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started) installed
+* [Amazon Bedrock Nova Lite Foundation Model Access](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html#add-model-access)
+
+Note: The Lambda function uses the Python 3.13 runtime.
+
+## Deployment Instructions
+
+For this pattern, you need access to the Amazon Nova Lite foundation model (Model ID: amazon.nova-lite-v1:0). The default deployment region is us-east-1, but you can customize this using the aws_region variable.
+
+You must request access to the Nova Lite model before you can use it. If you try to use the model before you have requested access, you will receive an error message.
+
+1. Create a new directory, navigate to that directory in a terminal and clone the GitHub repository:
+    ```
+    git clone https://github.com/aws-samples/serverless-patterns
+    ```
+1. Change directory to the pattern directory:
+    ```
+    cd serverless-patterns/s3-lambda-bedrock-tf
+    ```
+1. From the command line, initialize Terraform to download and install the providers defined in the configuration:
+    ```
+    terraform init
+    ```
+1. From the command line, review the planned changes before applying:
+    ```
+    terraform plan
+    ```
+
+    Optionally, you can specify a custom AWS region (default is us-east-1):
+    ```
+    terraform plan -var="aws_region=us-west-2"
+    ```
+
+1. From the command line, apply the configuration in the main.tf file:
+    ```
+    terraform apply
+    ```
+
+1. Confirm the deployment by typing `yes` when prompted.
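After a successful apply, the generated bucket name (which includes a random suffix) can be read back from the Terraform outputs instead of being copied by hand. A minimal sketch, assuming you are still in the pattern directory and have a local `sample-photo.jpg`:

```shell
# Read the generated bucket name from the Terraform outputs
BUCKET=$(terraform output -raw s3_image_bucket)

# Upload a test image into the images/ prefix that the Lambda function watches
aws s3 cp sample-photo.jpg "s3://${BUCKET}/images/"
```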
+
+## Configuration Variables
+
+The pattern supports the following Terraform variables:
+
+| Variable | Description | Type | Default |
+|----------|-------------|------|---------|
+| `aws_region` | AWS region for deployment | string | "us-east-1" |
+| `prefix` | Prefix to associate with the resources | string | "s3-lambda-bedrock-tf" |
+
+### Region Support
+
+**Important:** Check the documentation for the latest Bedrock model availability by Region (Amazon Bedrock → User Guide → Model support by AWS Region): https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html
+
+Make sure Nova Lite is available in your chosen region before deployment.
+
+## Testing
+
+1. Upload an image to the S3 bucket using the AWS CLI. Replace `S3_BUCKET_NAME` with the generated `s3_image_bucket` value from the Terraform outputs:
+
+    ```
+    aws s3 cp your-image.jpg s3://S3_BUCKET_NAME/images/
+    ```
+
+    For example:
+    ```
+    aws s3 cp sample-photo.jpg s3://s3-lambda-bedrock-tf-image-analysis-bucket-abc12345/images/
+    ```
+
+2. Monitor the Lambda function execution in CloudWatch Logs:
+
+    ```
+    aws logs tail "/aws/lambda/s3-lambda-bedrock-tf-image-analysis" --follow --format json
+    ```
+
+    You should see concise log entries like:
+    ```
+    SUCCESS: images/sample-photo.jpg | Model: amazon.nova-lite-v1:0 | Nova Response: landscape, mountains, sunset, clouds, nature, scenic, outdoor, beautiful, peaceful, horizon | Tags Applied: 10
+    ```
+
+3. Check the applied tags on your uploaded image:
+
+    ```
+    aws s3api get-object-tagging --bucket S3_BUCKET_NAME --key images/your-image.jpg
+    ```
+
+    Example response:
+    ```json
+    {
+        "TagSet": [
+            {
+                "Key": "ai-tag-1",
+                "Value": "landscape"
+            },
+            {
+                "Key": "ai-tag-2",
+                "Value": "mountains"
+            },
+            {
+                "Key": "ai-tag-3",
+                "Value": "sunset"
+            }
+        ]
+    }
+    ```
+
+4. 
You can also view the tags in the Amazon S3 console by navigating to your bucket and selecting the uploaded image.
+
+### Supported Image Formats
+- JPEG (.jpg, .jpeg)
+- PNG (.png)
+
+## Architecture Details
+
+The solution automatically creates:
+- S3 bucket with `images/` folder for organized storage
+- Lambda function configured with Nova Lite model access
+- CloudWatch log group with 14-day retention
+- IAM roles and policies with least-privilege access
+- S3 event notifications for automatic triggering
+
+## Cleanup
+
+1. Change directory to the pattern directory:
+    ```
+    cd serverless-patterns/s3-lambda-bedrock-tf
+    ```
+
+2. Delete all objects from the S3 bucket first:
+    ```
+    aws s3 rm s3://YOUR_BUCKET_NAME --recursive
+    ```
+
+3. Delete all created resources:
+    ```
+    terraform destroy
+    ```
+
+    Or with a custom region:
+    ```
+    terraform destroy -var="aws_region=us-west-2"
+    ```
+
+4. Confirm the destruction by typing `yes` when prompted.
+
+5. Confirm that all created resources have been deleted:
+    ```
+    terraform show
+    ```
+
+## Troubleshooting
+
+### No CloudWatch Logs Appearing
+1. Check if the Lambda function is being triggered:
+    ```bash
+    aws logs describe-log-groups --log-group-name-prefix "/aws/lambda/s3-lambda-bedrock-tf"
+    ```
+
+2. Verify the Lambda execution role exists (the role name includes a random suffix, so list roles by prefix rather than exact name):
+    ```bash
+    aws iam list-roles --query 'Roles[?starts_with(RoleName, `s3-lambda-bedrock-tf-image-analysis-role`)]'
+    ```
+
+### Nova Lite Access Issues
+1. Verify Nova Lite model access in your region:
+    ```bash
+    aws bedrock list-foundation-models --region YOUR_REGION --query 'modelSummaries[?contains(modelId, `nova-lite`)]'
+    ```
+
+2. Check if the Nova Lite model is accessible:
+    ```bash
+    aws bedrock get-foundation-model --model-identifier amazon.nova-lite-v1:0 --region YOUR_REGION
+    ```
+
+3. 
Request model access in AWS Bedrock console if needed: + - Go to AWS Bedrock Console → Model Access + - Request access to `amazon.nova-lite-v1:0` + - Wait for approval (usually immediate) + +### Lambda Function Debug Steps + +If you see "ValidationException: unsupported input modality" errors: +1. Confirm you're using Nova Lite (not Nova Micro) in the Lambda code +2. Verify model access has been granted in Bedrock console +3. Check that you're deploying in a supported region (us-east-1, us-west-2) + +### No Tags Being Added to S3 Objects +1. Check Lambda execution logs for errors: + ```bash + aws logs filter-log-events --log-group-name "/aws/lambda/s3-lambda-bedrock-tf-image-analysis" --start-time $(date -d '1 hour ago' +%s)000 + ``` + +2. Manually test S3 tagging permissions: + ```bash + aws s3api put-object-tagging --bucket YOUR_BUCKET --key images/test.jpg --tagging 'TagSet=[{Key=test,Value=manual}]' + ``` + +## Log Output Examples + +**Successful Processing:** +``` +SUCCESS: images/nature-photo.jpg | Model: amazon.nova-lite-v1:0 | Nova Response: forest, trees, green, natural, peaceful, woodland, leaves, branches, sunlight, serene | Tags Applied: 10 +``` + +**Failed Processing:** +``` +FAILED: images/corrupted-file.jpg | Model: amazon.nova-lite-v1:0 | Error: An error occurred (ValidationException) when calling the InvokeModel operation +``` + +**Skipped Files:** +``` +SKIPPED: documents/readme.txt - Unsupported file type: txt +``` + +---- +Copyright 2025 Amazon.com, Inc. or its affiliates. All Rights Reserved. 
+ +SPDX-License-Identifier: MIT-0 \ No newline at end of file diff --git a/s3-lambda-bedrock-tf/example-pattern.json b/s3-lambda-bedrock-tf/example-pattern.json new file mode 100644 index 000000000..922e178d4 --- /dev/null +++ b/s3-lambda-bedrock-tf/example-pattern.json @@ -0,0 +1,56 @@ +{ + "title": "Amazon S3 images autotagging with AWS Lambda and Amazon Bedrock", + "description": "Create a Lambda function that is triggered for every image file uploaded to S3 and tags the file using Bedrock.", + "language": "Python", + "level": "200", + "framework": "Terraform", + "introBox": { + "headline": "Lambda function triggered for every image file stored to S3", + "text": [ + "The terraform manifest deploys a Lambda function, an S3 bucket and the IAM resources required to run the application.", + "The Lambda function is triggered directly by S3 events when files with .jpg, .jpeg or .png extensions are uploaded to the images/ folder.", + "The Lambda function uses Amazon Bedrock Nova Lite model to analyze images and automatically apply descriptive tags." + ] + }, + "gitHub": { + "template": { + "repoURL": "https://github.com/aws-samples/serverless-patterns/tree/main/s3-lambda-bedrock-tf", + "templateURL": "serverless-patterns/s3-lambda-bedrock-tf", + "projectFolder": "s3-lambda-bedrock-tf", + "templateFile": "main.tf" + } + }, + "resources": { + "bullets": [ + { + "text": "Using Amazon S3 Event Notifications to invoke Lambda functions", + "link": "https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-event-types-and-destinations.html" + } + ] + }, + "deploy": { + "text": [ + "terraform init", + "terraform apply" + ] + }, + "testing": { + "text": [ + "See the GitHub repo for detailed testing instructions." + ] + }, + "cleanup": { + "text": [ + "terraform destroy" + ] + }, + "authors": [ + { + "name": "Oriol Matavacas", + "image": "https://serverlessland.com/assets/images/resources/contributors/oriol-matavacas.jpg", + "bio": "Oriol Matavacas is a Sr. 
Solutions Architect at AWS based in Barcelona. Oriol primarily supports customers on their journey to the cloud. He enjoys building new solutions that are scalable, highly available, and easy to maintain, using serverless technologies.",
+      "linkedin": "oriol-matavacas-rodriguez-b165868a",
+      "twitter": ""
+    }
+  ]
+}
diff --git a/s3-lambda-bedrock-tf/main.tf b/s3-lambda-bedrock-tf/main.tf
new file mode 100644
index 000000000..3fe302322
--- /dev/null
+++ b/s3-lambda-bedrock-tf/main.tf
@@ -0,0 +1,248 @@
+variable "aws_region" {
+  description = "AWS region for deployment"
+  type        = string
+  default     = "us-east-1"
+}
+
+variable "prefix" {
+  description = "Prefix to associate with the resources"
+  type        = string
+  default     = "s3-lambda-bedrock-tf"
+}
+
+provider "aws" {
+  region = var.aws_region
+}
+
+resource "random_string" "suffix" {
+  length  = 8
+  special = false
+  upper   = false
+}
+
+# Data source for current region
+data "aws_region" "current" {}
+
+# Data source for current caller identity (account ID)
+data "aws_caller_identity" "current" {}
+
+# Create ZIP archive of Lambda function
+data "archive_file" "lambda_zip" {
+  type        = "zip"
+  source_file = "${path.module}/src/app.py"
+  output_path = "${path.module}/lambda_function.zip"
+}
+
+# S3 bucket for storing images
+resource "aws_s3_bucket" "image_bucket" {
+  bucket        = "${lower(var.prefix)}-image-analysis-bucket-${random_string.suffix.result}"
+  force_destroy = true
+}
+
+# Create images folder in S3 bucket
+resource "aws_s3_object" "images_folder" {
+  bucket = aws_s3_bucket.image_bucket.id
+  key    = "images/"
+  source = "/dev/null"
+}
+
+# S3 bucket versioning
+resource "aws_s3_bucket_versioning" "image_bucket_versioning" {
+  bucket = aws_s3_bucket.image_bucket.id
+  versioning_configuration {
+    status = "Enabled"
+  }
+}
+
+# S3 bucket server side encryption
+resource "aws_s3_bucket_server_side_encryption_configuration" "image_bucket_encryption" {
+  bucket = aws_s3_bucket.image_bucket.id
+
+  rule {
apply_server_side_encryption_by_default {
+      sse_algorithm = "AES256"
+    }
+  }
+}
+
+# Create CloudWatch Log Group for Lambda
+resource "aws_cloudwatch_log_group" "lambda_log_group" {
+  name              = "/aws/lambda/${lower(var.prefix)}-image-analysis"
+  retention_in_days = 14
+
+  lifecycle {
+    prevent_destroy = false
+  }
+}
+
+# IAM Policy for Lambda function
+resource "aws_iam_policy" "lambda_policy" {
+  name        = "${lower(var.prefix)}-ImageAnalysisPolicy-${random_string.suffix.result}"
+  path        = "/"
+  description = "IAM policy for Lambda function to read S3 objects, invoke Bedrock Nova Lite, and tag S3 objects"
+
+  policy = jsonencode({
+    Version = "2012-10-17"
+    Statement = [
+      {
+        Effect = "Allow"
+        Action = [
+          "s3:GetObject",
+          "s3:GetObjectTagging",
+          "s3:PutObjectTagging",
+          "s3:ListBucket"
+        ]
+        Resource = [
+          aws_s3_bucket.image_bucket.arn,
+          "${aws_s3_bucket.image_bucket.arn}/*"
+        ]
+      },
+      {
+        Effect = "Allow"
+        Action = [
+          "bedrock:InvokeModel",
+          "bedrock:InvokeModelWithResponseStream"
+        ]
+        Resource = [
+          "arn:aws:bedrock:${var.aws_region}::foundation-model/amazon.nova-lite-v1:0"
+        ]
+      },
+      {
+        Effect = "Allow"
+        Action = [
+          "logs:CreateLogGroup",
+          "logs:CreateLogStream",
+          "logs:PutLogEvents",
+          "logs:DescribeLogGroups",
+          "logs:DescribeLogStreams"
+        ]
+        Resource = [
+          "arn:aws:logs:${var.aws_region}:${data.aws_caller_identity.current.account_id}:log-group:/aws/lambda/${lower(var.prefix)}-image-analysis",
+          "arn:aws:logs:${var.aws_region}:${data.aws_caller_identity.current.account_id}:log-group:/aws/lambda/${lower(var.prefix)}-image-analysis:*"
+        ]
+      }
+    ]
+  })
+}
+
+# IAM role for Lambda function
+resource "aws_iam_role" "lambda_role" {
+  name = "${lower(var.prefix)}-image-analysis-role-${random_string.suffix.result}"
+
+  assume_role_policy = jsonencode({
+    Version = "2012-10-17"
+    Statement = [
+      {
+        Action = "sts:AssumeRole"
+        Effect = "Allow"
+        Principal = {
+          Service = 
"lambda.amazonaws.com" + } + } + ] + }) +} + +# Attach custom policy to Lambda role +resource "aws_iam_role_policy_attachment" "lambda_policy_attachment" { + policy_arn = aws_iam_policy.lambda_policy.arn + role = aws_iam_role.lambda_role.name +} + +# Attach basic execution policy for CloudWatch logs +resource "aws_iam_role_policy_attachment" "lambda_basic_execution" { + policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" + role = aws_iam_role.lambda_role.name +} + +# Lambda function +resource "aws_lambda_function" "image_analysis_function" { + filename = data.archive_file.lambda_zip.output_path + function_name = "${lower(var.prefix)}-image-analysis" + role = aws_iam_role.lambda_role.arn + handler = "app.lambda_handler" + source_code_hash = data.archive_file.lambda_zip.output_base64sha256 + runtime = "python3.13" + timeout = 120 + memory_size = 1024 + description = "Lambda to analyze images using Amazon Nova Lite and add descriptive tags" + + environment { + variables = { + LOG_LEVEL = "INFO" + AWS_REGION_CUSTOM = var.aws_region + NOVA_MODEL_ID = "amazon.nova-lite-v1:0" + } + } + + depends_on = [ + aws_iam_role_policy_attachment.lambda_policy_attachment, + aws_iam_role_policy_attachment.lambda_basic_execution, + aws_cloudwatch_log_group.lambda_log_group + ] +} + +# Lambda permission for S3 to invoke the function +resource "aws_lambda_permission" "s3_invoke" { + statement_id = "AllowExecutionFromS3Bucket" + action = "lambda:InvokeFunction" + function_name = aws_lambda_function.image_analysis_function.function_name + principal = "s3.amazonaws.com" + source_arn = aws_s3_bucket.image_bucket.arn +} + +# S3 bucket notification for images +resource "aws_s3_bucket_notification" "bucket_notification" { + bucket = aws_s3_bucket.image_bucket.id + + # Lambda function trigger for JPG images + lambda_function { + lambda_function_arn = aws_lambda_function.image_analysis_function.arn + events = ["s3:ObjectCreated:Put"] + filter_prefix = "images/" + 
filter_suffix = ".jpg" + } + + # Lambda function trigger for JPEG images + lambda_function { + lambda_function_arn = aws_lambda_function.image_analysis_function.arn + events = ["s3:ObjectCreated:Put"] + filter_prefix = "images/" + filter_suffix = ".jpeg" + } + + # Lambda function trigger for PNG images + lambda_function { + lambda_function_arn = aws_lambda_function.image_analysis_function.arn + events = ["s3:ObjectCreated:Put"] + filter_prefix = "images/" + filter_suffix = ".png" + } + + depends_on = [aws_lambda_permission.s3_invoke] +} + +# Outputs +output "lambda_function_arn" { + description = "The ARN of the Lambda function" + value = aws_lambda_function.image_analysis_function.arn +} + +output "s3_image_bucket" { + description = "The S3 bucket for image uploads" + value = aws_s3_bucket.image_bucket.id +} + +output "cloudwatch_log_group" { + description = "The CloudWatch Log Group for Lambda function" + value = aws_cloudwatch_log_group.lambda_log_group.name +} + +output "aws_region" { + description = "The AWS region used for deployment" + value = var.aws_region +} \ No newline at end of file diff --git a/s3-lambda-bedrock-tf/src/app.py b/s3-lambda-bedrock-tf/src/app.py new file mode 100644 index 000000000..7bf208a08 --- /dev/null +++ b/s3-lambda-bedrock-tf/src/app.py @@ -0,0 +1,180 @@ +import json +import boto3 +import base64 +from urllib.parse import unquote_plus +import logging +import os + +# Configure logging to ensure CloudWatch visibility +logging.basicConfig( + level=logging.INFO, + format='%(levelname)s - %(message)s', + force=True +) +logger = logging.getLogger(__name__) +logger.setLevel(logging.INFO) + +# Constants +SUPPORTED_EXTENSIONS = ['jpg', 'jpeg', 'png'] +NOVA_MODEL_ID = os.environ.get('NOVA_MODEL_ID', 'amazon.nova-lite-v1:0') +AWS_REGION = os.environ.get('AWS_REGION_CUSTOM', 'us-east-1') + +def lambda_handler(event, context): + """ + Lambda handler for processing image uploads using Amazon Nova Lite + """ + try: + # Extract S3 information + 
record = event["Records"][0] + bucket_name = str(record["s3"]["bucket"]["name"]) + file_key = unquote_plus(str(record["s3"]["object"]["key"])) + + # Check file extension + file_extension = file_key.lower().split('.')[-1] + if file_extension not in SUPPORTED_EXTENSIONS: + print(f"SKIPPED: {file_key} - Unsupported file type: {file_extension}") + return {"statusCode": 200, "body": "File type not supported"} + + # Process the image + image_bytes = read_s3_image(bucket_name, file_key) + tags = analyze_image_with_nova(image_bytes, file_key) + applied_tags = apply_tags_to_s3_object(bucket_name, file_key, tags) + + # Success log + print(f"SUCCESS: {file_key} | Model: {NOVA_MODEL_ID} | Nova Response: {', '.join(tags)} | Tags Applied: {len(applied_tags)}") + + return { + "statusCode": 200, + "body": json.dumps({ + "processed": True, + "file": file_key, + "model": NOVA_MODEL_ID, + "tags_count": len(applied_tags) + }) + } + + except Exception as e: + file_key = "unknown" + try: + file_key = unquote_plus(str(event["Records"][0]["s3"]["object"]["key"])) + except: + pass + + print(f"FAILED: {file_key} | Model: {NOVA_MODEL_ID} | Error: {str(e)}") + + return { + "statusCode": 500, + "body": json.dumps({ + "processed": False, + "file": file_key, + "error": str(e) + }) + } + + +def determine_image_format(filename): + """Determine image format based on file extension""" + extension = filename.lower().split('.')[-1] + format_mapping = {'jpg': 'jpeg', 'jpeg': 'jpeg', 'png': 'png'} + return format_mapping.get(extension, 'jpeg') + + +def analyze_image_with_nova(image_bytes, filename): + """Use Amazon Nova Lite to analyze image and return descriptive tags""" + try: + bedrock_runtime = boto3.client("bedrock-runtime", region_name=AWS_REGION) + + # Prepare request + image_base64 = base64.b64encode(image_bytes).decode('utf-8') + image_format = determine_image_format(filename) + + nova_request = { + "messages": [ + { + "role": "user", + "content": [ + { + "image": { + "format": image_format, + 
"source": {"bytes": image_base64} + } + }, + { + "text": "Analyze this image and provide exactly 10 descriptive single words that best describe what you see. Return only the words separated by commas, no explanations." + } + ] + } + ], + "inferenceConfig": { + "maxTokens": 100, + "temperature": 0.1 + } + } + + # Call Nova Lite + response = bedrock_runtime.invoke_model( + modelId=NOVA_MODEL_ID, + contentType="application/json", + accept="application/json", + body=json.dumps(nova_request) + ) + + # Parse response + response_data = json.loads(response.get("body").read()) + + if ("output" in response_data and + "message" in response_data["output"] and + "content" in response_data["output"]["message"] and + response_data["output"]["message"]["content"]): + + content = response_data["output"]["message"]["content"] + if isinstance(content, list) and len(content) > 0: + analysis_text = content[0].get("text", "") + else: + analysis_text = str(content) + + # Process into clean tags + tags = [word.strip().lower() for word in analysis_text.split(",")] + tags = [tag for tag in tags if tag and len(tag) > 0 and tag.replace('-', '').replace('_', '').isalnum()] + return tags[:10] + + return get_fallback_tags() + + except Exception: + return get_fallback_tags() + + +def get_fallback_tags(): + """Return fallback tags when Nova fails""" + return ["image", "photo", "content", "visual", "media", "file", "upload", "data", "picture", "object"] + + +def read_s3_image(bucket, key): + """Read image file from S3 bucket""" + s3_client = boto3.client("s3") + s3_object = s3_client.get_object(Bucket=bucket, Key=key) + return s3_object["Body"].read() + + +def apply_tags_to_s3_object(bucket, key, tag_words): + """Apply AI-generated tags to S3 object""" + s3_client = boto3.client("s3") + + # Create S3 tag set + tag_set = [] + for i, word in enumerate(tag_words[:10]): + clean_word = ''.join(c for c in word if c.isalnum() or c in ['-', '_']).strip() + if clean_word: + tag_set.append({ + 'Key': 
f'ai-tag-{i+1}', + 'Value': clean_word[:256] + }) + + if tag_set: + s3_client.put_object_tagging( + Bucket=bucket, + Key=key, + Tagging={'TagSet': tag_set} + ) + + return tag_set \ No newline at end of file
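For local experimentation, the event-parsing and extension-filtering logic at the top of `lambda_handler` can be exercised with a synthetic S3 notification event, with no AWS calls involved. The sketch below mirrors those first steps in a standalone helper; the bucket and object key values are made up for illustration:

```python
from urllib.parse import unquote_plus

SUPPORTED_EXTENSIONS = ['jpg', 'jpeg', 'png']

def extract_upload(event):
    """Mirror the handler's event parsing: return (bucket, key, supported?)."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # S3 URL-encodes keys in notifications; '+' decodes back to a space
    key = unquote_plus(record["s3"]["object"]["key"])
    supported = key.lower().split('.')[-1] in SUPPORTED_EXTENSIONS
    return bucket, key, supported

# Synthetic event in the shape S3 delivers to the Lambda function
event = {
    "Records": [{
        "s3": {
            "bucket": {"name": "example-bucket"},
            "object": {"key": "images/sample+photo.jpg"},
        }
    }]
}

print(extract_upload(event))  # → ('example-bucket', 'images/sample photo.jpg', True)
```

A `.txt` key would return `supported == False`, which is the path that produces the `SKIPPED` log line shown above.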