AWS Fargate is a serverless compute engine that supports a number of common container use cases, such as running microservices architecture applications, batch processing, machine learning applications, and migrating on-premises applications to the cloud without having to manage servers or clusters of Amazon EC2 instances. AWS customers have a choice of fully managed container services, including Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). Both services support a broad array of compute options, have deep integration with other AWS services, and offer the global scale and reliability you've come to expect from AWS. For more details on choosing between ECS and EKS, please refer to this blog.
In this blog, we'll walk you through a use case of running an Amazon ECS Task on AWS Fargate that can be initiated using AWS Step Functions. We will use Terraform to model the AWS infrastructure. The example solution leverages Amazon ECS, a scalable, high-performance container management service that supports Docker containers, which are provisioned by Fargate to automatically scale, load balance, and manage the scheduling of your containers for availability. For defining the infrastructure, you can use AWS CloudFormation, AWS CDK, or Terraform by HashiCorp. In the solution presented in this post, we use Terraform by HashiCorp, an AWS Partner Network (APN) Advanced Technology Partner and member of the AWS DevOps Competency.
Terraform is an infrastructure as code tool, similar to AWS CloudFormation, that allows you to create, update, and version your Amazon Web Services (AWS) infrastructure. Terraform provides a friendly syntax (similar to AWS CloudFormation) along with other features like planning (visibility to see the changes before they actually happen), graphing, and the ability to create templates to break configurations into smaller chunks for better organization, maintenance, and reusability. We will leverage the capabilities and features of Terraform to build an API-based ingestion process into AWS. Let's get started!
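For orientation, the typical Terraform workflow for a project like this one is a short command sequence, shown here as an illustrative sketch (the graph step is optional and assumes Graphviz is installed locally):

$ terraform init    # one-time setup: downloads the provider plugins
$ terraform plan    # preview the changes before they actually happen
$ terraform apply   # create or update the infrastructure
$ terraform graph | dot -Tsvg > graph.svg   # optional: render the resource dependency graph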
We will provide the Terraform infrastructure definition and the source code for a Java-based container application that will read and process the files in the input AWS S3 bucket. The files will be processed and pushed to an Amazon Kinesis stream. The Kinesis stream is subscribed to by an Amazon Kinesis Data Firehose delivery stream, which has an output AWS S3 bucket as its target. The Java application is containerized using a Dockerfile, and the ECS tasks are orchestrated using the ECS task definition, which is also built with Terraform code.
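Once everything is deployed, each hop of this flow can be observed directly. For example, the records the container pushes to the stream can be read back with the AWS CLI (an illustrative sketch, assuming the default stream name created later in this post and its first shard):

$ SHARD_ITERATOR=$(aws kinesis get-shard-iterator --stream-name my-stepfunction-ecs-app-stream --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --query 'ShardIterator' --output text)
$ aws kinesis get-records --shard-iterator $SHARD_ITERATOR   # Data fields in the response are Base64-encoded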
At a high level, we'll go through the following steps:
- Create a simple Java application that reads the contents of an Amazon S3 bucket folder and pushes them to an Amazon Kinesis stream. The application code is built using Maven.
- Use Terraform to define the AWS infrastructure resources required for the application.
- Use Terraform commands to plan, apply, and destroy (or clean up) the infrastructure.
- The infrastructure builds a new Amazon VPC where the required AWS resources are launched in a logically isolated virtual network that you define. The infrastructure spins up Amazon SNS, a NAT Gateway, an S3 Gateway Endpoint, an Elastic Network Interface, Amazon ECR, and so on, as shown in the solution architecture diagram below.
- A provided script inserts sample S3 content files into the input bucket that are needed for the application processing.
- Navigate to AWS Step Functions in the AWS Console and initiate the process. Validate the result in the logs and the output in the S3 bucket.
- Run the cleanup script, which will clean up the Amazon ECR repository and the Amazon S3 input files, and destroy the AWS resources created by Terraform.
By creating the infrastructure shown in the diagram below, you will incur charges beyond the free tier. Please see the pricing section below for each individual service's specific details. Remember to clean up the infrastructure built for this tutorial to avoid any recurring cost.
Overview of some of the AWS services used in this solution
- Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container management service that supports Docker containers.
- AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
- Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information.
- Amazon Virtual Private Cloud (Amazon VPC) is a service that lets you launch AWS resources in a logically isolated virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.
Prerequisites
We will use Docker containers to deploy the Java application. The following are required to set up your development environment (a quick verification sketch follows this list):
- An AWS account.
- Make sure you have Java installed and running on your machine. For instructions, see the Java Development Kit.
- Apache Maven – the Java application code is built using Maven and deployed into AWS using Terraform.
- Set up Terraform. For steps, see Terraform downloads.
- AWS CLI – make sure to configure your AWS CLI.
- Docker
- Install Docker based on your OS.
- Make sure the Docker daemon/service is running. We will build, tag, and push the application code using the provided Dockerfile to Amazon ECR.
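Before moving on, you can sanity-check the toolchain with the following commands (exact version output will vary by machine):

$ java -version       # JDK is installed
$ mvn -v              # Apache Maven is installed
$ terraform version   # Terraform CLI is installed
$ aws --version       # AWS CLI is installed and on the PATH
$ docker info         # succeeds only if the Docker daemon/service is running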
Solution
Here are the steps you'll follow to get this solution up and running.
- Download the code and perform a Maven package for the Java application code.
- Run the Terraform commands to spin up the infrastructure.
- Review the code you downloaded and see how Terraform provides an implementation for spinning up the infrastructure similar to that of AWS CloudFormation. You may use Visual Studio Code or your favorite choice of IDE to open the folder (aws-stepfunctions-ecs-fargate-process).
- The Git folder will have these folders:
- "templates" – Terraform templates to build the infrastructure
- "src" – Java application source code
- Dockerfile
- "exec.sh" – a shell script that will build the infrastructure and the Java code, and push the image to Amazon ECR. Make sure you have Docker running on your machine at this point. Also, modify the account number for where the application has to be deployed and tested.
- This step is required if you are running the steps manually and not using the provided "exec.sh" script: put sample files in the input S3 bucket location. It would be something like "my-stepfunction-ecs-app-dev-source-bucket-<your-account-number>".
- In the AWS Console, navigate to AWS Step Functions. Click on "my-stepfunction-ecs-app-ECSTaskStateMachine", then click the "Start Execution" button.
- Once the Step Function has completed, the output of the processed files can be found in "my-stepfunction-ecs-app-dev-target-bucket-<your-account-number>".
Detailed Walkthrough
1. Deploying the Terraform template to spin up the infrastructure
Download the code from the GitHub location.
$ git clone https://github.com/aws-samples/aws-stepfunctions-ecs-fargate-process
Take a moment to review the code structure as mentioned above in the walkthrough of the solution. In the "exec.sh" script/bash file provided as part of the code base folder, make sure to replace <YOUR_ACCOUNT_NUMBER> with your AWS account number (where you are trying to deploy/run this application) and <REGION> with your AWS account region. The script will create the infrastructure and push the Java application into ECR. The last section of the script also creates sample/dummy input files for the source S3 bucket.
$ cd aws-stepfunctions-ecs-fargate-process
$ chmod +x exec.sh
$ ./exec.sh
2. Manual Deployment (only do this if you didn't do the above step)
Do this only if you are not executing the above script and want to perform these steps manually.
Step 1: Build the Java application
$ cd aws-stepfunctions-ecs-fargate-process
$ mvn clean package
Step 2: Deploy the infrastructure
$ cd templates
$ terraform init
$ terraform plan
$ terraform apply --auto-approve
Step 3: Build and push the Java application into ECR (the my-stepfunction-ecs-app-repo ECR repository is created as part of the above infrastructure)
$ docker build -t example/ecsfargateservice .
$ docker tag example/ecsfargateservice $ACCOUNT_NUMBER.dkr.ecr.$REGION.amazonaws.com/my-stepfunction-ecs-app-repo:latest
$ aws ecr get-login-password --region $REGION | docker login --username AWS --password-stdin $ACCOUNT_NUMBER.dkr.ecr.$REGION.amazonaws.com
$ docker push $ACCOUNT_NUMBER.dkr.ecr.$REGION.amazonaws.com/my-stepfunction-ecs-app-repo:latest
Update your region and account number in the commands above.
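If you prefer, the $ACCOUNT_NUMBER and $REGION variables used above can be populated from your current AWS CLI configuration (a convenience sketch, assuming the CLI is already configured):

$ export ACCOUNT_NUMBER=$(aws sts get-caller-identity --query Account --output text)
$ export REGION=$(aws configure get region)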
Step 4: Generate sample S3 files in the input bucket
$ echo '{"productId":"1", "productName": "some name", "productVersion": "v1"}' >> "product_1.txt"
$ aws s3 --region $REGION cp "product_1.txt" s3://my-stepfunction-ecs-app-dev-source-bucket-<YOUR_ACCOUNT_NUMBER>
Note: the exec.sh script has logic to create multiple files for validation. The commands provided above will create one sample file.
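To mimic the multi-file behavior of exec.sh, a simple loop like the following can be used (an illustrative sketch; the count of three files is arbitrary):

$ for i in 1 2 3; do echo '{"productId":"'$i'", "productName": "some name", "productVersion": "v1"}' > "product_$i.txt"; aws s3 --region $REGION cp "product_$i.txt" s3://my-stepfunction-ecs-app-dev-source-bucket-<YOUR_ACCOUNT_NUMBER>; done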
3. Stack Verification
Once the preceding Terraform commands complete successfully, take a moment to identify the major components that are deployed in AWS (a few CLI spot-checks follow this list).
- Amazon VPC
- VPC – my-stepfunction-ecs-app-VPC
- Subnets
- Public subnet – my-stepfunction-ecs-app-public-subnet1
- Private subnet – my-stepfunction-ecs-app-private-subnet1
- Internet gateway – my-stepfunction-ecs-app-VPC
- NAT Gateway – my-stepfunction-ecs-app-NATGateway
- Elastic IP – my-stepfunction-ecs-app-elastic-ip
- VPC Endpoint
- AWS Step Functions
- my-stepfunction-ecs-app-ECSTaskStateMachine
- Amazon ECS
- Cluster – my-stepfunction-ecs-app-ECSCluster
- Task Definition – my-stepfunction-ecs-app-ECSTaskDefinition
- Amazon Kinesis
- Data Stream – my-stepfunction-ecs-app-stream
- Delivery stream – my-stepfunction-ecs-app-firehose-delivery-stream – notice the source (the Kinesis stream) and the target output S3 bucket
- S3
- my-stepfunction-ecs-app-dev-source-bucket-<your-account-number>
- my-stepfunction-ecs-app-dev-target-bucket-<your-account-number>
- Amazon ECR
- my-stepfunction-ecs-app-repo – make sure to check that the repository has the code/image
- Amazon SNS
- my-stepfunction-ecs-app-SNSTopic – note that this is not subscribed to any endpoint. You may do so by subscribing with your email ID, text message, etc., using the AWS Console, API, or CLI.
- CloudWatch – Log Groups
- my-stepfunction-ecs-app-cloudwatch-log-group
- /aws/ecs/fargate-app/
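You can also spot-check a few of these components from the command line (illustrative commands, assuming the default resource names used by the provided templates):

$ aws stepfunctions list-state-machines   # look for my-stepfunction-ecs-app-ECSTaskStateMachine
$ aws ecs describe-clusters --clusters my-stepfunction-ecs-app-ECSCluster   # status should be ACTIVE
$ aws s3 ls | grep my-stepfunction-ecs-app   # source and target buckets
$ aws kinesis describe-stream-summary --stream-name my-stepfunction-ecs-app-stream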
Let's test our stack from AWS Console > Step Functions (CLI equivalents are shown after these steps):
- Click on "my-stepfunction-ecs-app-ECSTaskStateMachine".
- Click on "Start Execution". The state machine will trigger the ECS Fargate task and will complete as below.
- To see the process:
- ECS:
- Navigate to AWS Console > ECS > select your cluster.
- Click on the "Tasks" sub-tab and select the task to see its status. While the task runs, you may notice the status passing through the PROVISIONING, PENDING, RUNNING, and STOPPED states.
- S3:
- Navigate to the output S3 bucket – my-stepfunction-ecs-app-dev-target-bucket-<your-account-number> – to see the output.
- Note that there may be a delay before the files are processed through Amazon Kinesis and Kinesis Data Firehose to S3.
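The same test can be driven from the CLI (a sketch; replace the region and account number placeholders, and note that the execution runs asynchronously):

$ aws stepfunctions start-execution --state-machine-arn arn:aws:states:<REGION>:<YOUR_ACCOUNT_NUMBER>:stateMachine:my-stepfunction-ecs-app-ECSTaskStateMachine
$ aws s3 ls s3://my-stepfunction-ecs-app-dev-target-bucket-<YOUR_ACCOUNT_NUMBER>/ --recursive   # processed output appears here after a short delay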
Troubleshooting
- Java errors: make sure you have the JDK and Maven installed for the compilation of the application code.
- Check that local Docker is running.
- VPC – check your VPC quota/limits. The default limit is 5 VPCs per region.
- ECR deployment – AWS CLI v2 is used at this point. Refer to AWS CLI v1 vs. CLI v2 for any issues.
- Issues with running the install/shell script:
- Windows users – shell scripts by default open in a new window and close once done. To see the execution, you can paste the script contents into a Windows command prompt and they will execute sequentially.
- If you are deploying via the provided install/cleanup scripts, make sure to run "chmod +x exec.sh" (or "chmod 777 exec.sh") to elevate the execution permission of the scripts.
- Linux users – permission issues may arise if you are not running as the root user. You may have to use "sudo su".
- If you are running the steps manually, refer to the "exec.sh" script for any differences in the command execution.
Pricing
This solution uses AWS Step Functions, Amazon ECS with AWS Fargate, Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, Amazon S3, Amazon ECR, Amazon SNS, Amazon CloudWatch, and Amazon VPC (including a NAT Gateway). Refer to each service's pricing page for its specific details.
4. Code Cleanup
The terraform destroy command will delete all the infrastructure that was planned and applied. Since the S3 buckets will contain both the sample input and the processed files that were generated, make sure to delete the files before initiating the destroy command.
This can be achieved either in the AWS Console or using the AWS CLI (commands provided). See both options below.
Using the provided cleanup script
- cleanup.sh
$ chmod +x cleanup.sh
$ ./cleanup.sh
Manual Cleanup – only do this if you didn't do the above step
- Clean up resources from the AWS Console:
- Open the AWS Console and select S3.
- Navigate to the buckets created as part of the stack.
- Delete the S3 buckets manually.
- Similarly, navigate to "ECR" and select the created repository – my-stepfunction-ecs-app-repo. You may have more than one image pushed to the repository depending on any changes made to your Java code.
- Select all the images and delete the images pushed.
- Clean up resources using the AWS CLI:
# CLI commands to delete the S3 buckets and the ECR images
$ aws s3 rb s3://my-stepfunction-ecs-app-dev-source-bucket-<your-account-number> --force
$ aws s3 rb s3://my-stepfunction-ecs-app-dev-target-bucket-<your-account-number> --force
$ aws ecr batch-delete-image --repository-name my-stepfunction-ecs-app-repo --image-ids imageTag=latest
$ aws ecr batch-delete-image --repository-name my-stepfunction-ecs-app-repo --image-ids imageTag=untagged
$ cd templates
$ terraform destroy --auto-approve
Conclusion
This blog post covered how to launch an application process using Amazon ECS and AWS Fargate, integrated with various other AWS services, and how to deploy application code packaged with Java using Maven. You may use any combination of applicable programming languages to build your application logic. The sample provided has Java code that is packaged using a Dockerfile and pushed to Amazon ECR.
We encourage you to try this example and see for yourself how this overall application design works within AWS. Then, it will just be a matter of replacing your current applications, packaging them as Docker containers, and letting Amazon ECS manage the applications efficiently.
If you have any questions or feedback about this blog, please provide your comments below!
About the Author
Sivasubramanian Ramani (Siva Ramani) is a Sr. Cloud Application Architect at AWS. His expertise is in application optimization, serverless solutions, and using Microsoft application workloads with AWS.