
Deploying Rails with Docker and AWS Fargate


In this tutorial, you will learn how to deploy a dockerized Ruby on Rails 7 app to Amazon's Elastic Container Service (ECS) using Fargate. We will look at what Fargate is and how it makes deploying containerized applications relatively seamless.

After completing this tutorial, you will know how to do the following:

  • Dockerize a simple Rails application.
  • Push your image to Amazon's Elastic Container Registry (ECR).
  • Set up a PostgreSQL database using Amazon's RDS service.
  • Configure an Elastic Container Service (ECS) cluster.
  • Deploy your Rails app to production using Fargate.

Prerequisites

  • An AWS account. If you don't have one, sign up here.
  • AWS CLI, Docker, and Docker Compose installed on your development machine (a quick sanity check follows below).
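To confirm the tooling is in place and the CLI can talk to your account, a minimal check looks something like this (aws configure prompts interactively for the access key and secret we create in the IAM section later):

aws --version
docker --version
docker-compose --version
aws configure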

Dockerizing a Simple Rails App

We'll start by creating a simple Rails 7 app, which we'll use throughout this tutorial. You can grab the example app here or use your own to follow along.

Open the app in your favorite editor and create a Dockerfile in the root directory. Then, edit the file with the following:

FROM ruby:3.1.2-slim-bullseye AS app

WORKDIR /app

RUN apt-get update \
  && apt-get install -y --no-install-recommends build-essential curl libpq-dev \
  && rm -rf /var/lib/apt/lists/* /usr/share/doc /usr/share/man \
  && apt-get clean \
  && useradd --create-home ruby \
  && chown ruby:ruby -R /app

USER ruby

COPY --chown=ruby:ruby bin/ ./bin
RUN chmod 0755 bin/*

ARG RAILS_ENV="production"
ENV RAILS_ENV="${RAILS_ENV}" \
    PATH="${PATH}:/home/ruby/.local/bin" \
    USER="ruby"

COPY --chown=ruby:ruby --from=assets /usr/local/bundle /usr/local/bundle
COPY --chown=ruby:ruby --from=assets /app/public /public
COPY --chown=ruby:ruby . .

ENTRYPOINT ["/app/bin/docker-entrypoint-web"]

EXPOSE 8000

CMD ["rails", "s"]


In a nutshell, the instructions we specify in the Dockerfile define the environment in which our app will run. Note that the COPY --from=assets lines reference an assets build stage (where gems are installed and front-end assets are compiled) that isn't reproduced here; it is part of the example app's complete multi-stage Dockerfile.

Next, we'll use Docker Compose to build everything into an image that can be deployed.

For the purposes of this article, we won't go into the details of what each line is doing, as that would require an entirely separate tutorial. Instead, you can take a look at this one, which should give you a quick heads-up.

With the Dockerfile complete, you can move on to the next step of orchestrating the image using Docker Compose.

Using Docker Compose, you can specify the structure of your app's container (or multiple containers) and how they communicate with one another, as well as supporting services such as databases and background job processors.

Again, we won't go too deep into the details of Docker Compose. For this tutorial, create a new file in the root of the app called docker-compose.yml with its contents set to look like this:

x-app: &default-app
  build:
    context: "."
    target: "app"
    args:
      - "RAILS_ENV=${RAILS_ENV:-production}"
      - "NODE_ENV=${NODE_ENV:-production}"
  depends_on:
    - "postgres"
    - "redis"
  env_file:
    - ".env"
  restart: "${DOCKER_RESTART_POLICY:-unless-stopped}"
  stop_grace_period: "3s"
  tty: true
  volumes:
    - "${DOCKER_WEB_VOLUME:-./public:/app/public}"

services:
  postgres:
    deploy:
      resources:
        limits:
          cpus: "${DOCKER_POSTGRES_CPUS:-0}"
          memory: "${DOCKER_POSTGRES_MEMORY:-0}"
    env_file:
      - ".env"
    image: "postgres:14.4-bullseye"
    restart: "${DOCKER_RESTART_POLICY:-unless-stopped}"
    stop_grace_period: "3s"
    volumes:
      - "postgres:/var/lib/postgresql/data"

  redis:
    deploy:
      resources:
        limits:
          cpus: "${DOCKER_REDIS_CPUS:-0}"
          memory: "${DOCKER_REDIS_MEMORY:-0}"
    env_file:
      - ".env"
    image: "redis:7.0.2-bullseye"
    restart: "${DOCKER_RESTART_POLICY:-unless-stopped}"
    stop_grace_period: "3s"
    volumes:
      - "redis:/data"

  web:
    <<: *default-app
    deploy:
      resources:
        limits:
          cpus: "${DOCKER_WEB_CPUS:-0}"
          memory: "${DOCKER_WEB_MEMORY:-0}"
    healthcheck:
      test: "${DOCKER_WEB_HEALTHCHECK_TEST:-curl localhost:8000/up}"
      interval: "60s"
      timeout: "3s"
      start_period: "5s"
      retries: 3
    ports:
      - "${DOCKER_WEB_PORT_FORWARD:-127.0.0.1:8000}:8000"

  worker:
    <<: *default-app
    command: "bundle exec sidekiq -C config/sidekiq.yml"
    entrypoint: []
    deploy:
      resources:
        limits:
          cpus: "${DOCKER_WORKER_CPUS:-0}"
          memory: "${DOCKER_WORKER_MEMORY:-0}"

  cable:
    <<: *default-app
    command: "puma -p 28080 cable/config.ru"
    entrypoint: []
    deploy:
      resources:
        limits:
          cpus: "${DOCKER_CABLE_CPUS:-0}"
          memory: "${DOCKER_CABLE_MEMORY:-0}"
    ports:
      - "${DOCKER_CABLE_PORT_FORWARD:-127.0.0.1:28080}:28080"

volumes:
  postgres: {}
  redis: {}

With the docker-compose.yml file, we've defined an environment for our app, which includes the PostgreSQL database and Redis.

It's worth noting that the contents of your Docker Compose file will change depending on the services your particular app requires.
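The compose file also reads its settings from a .env file in the project root, which isn't shown here. A minimal sketch, assuming the Postgres/Redis setup above (every value below is an illustrative placeholder, not something taken from the example repo):

RAILS_ENV=production
NODE_ENV=production
SECRET_KEY_BASE=<generate with bin/rails secret>
POSTGRES_USER=app
POSTGRES_PASSWORD=changeme
DATABASE_URL=postgres://app:changeme@postgres:5432/app_production
REDIS_URL=redis://redis:6379/0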

Now, we're ready to spin up a database for our containerized app. Use docker-compose run web rails db:setup to create the database and run migrations.

Then, run docker-compose up to spin up the containers and serve the app on localhost:8000.

At this point, we have successfully created a dockerized Rails app on our development machine.

Next, we'll set things up on AWS, starting with pushing our app image to Amazon's Docker image registry, Elastic Container Registry (ECR).

First, however, we'll need an IAM user with the proper access rights.

Setting Up an IAM User with ECS Access

Log into your AWS console home (as a root user) and create a new IAM user with the following permissions:

  • AmazonEC2ContainerRegistryFullAccess
  • AmazonECS_FullAccess

In the security credentials tab, choose the credential type "Access key", as this will grant your newly created user a key/secret pair, which we'll use with the AWS CLI tool.
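If you'd rather script this step than click through the console, roughly equivalent AWS CLI calls look like the following (the user name is an arbitrary placeholder):

aws iam create-user --user-name rails-fargate-deployer
aws iam attach-user-policy --user-name rails-fargate-deployer \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
aws iam attach-user-policy --user-name rails-fargate-deployer \
    --policy-arn arn:aws:iam::aws:policy/AmazonECS_FullAccess
aws iam create-access-key --user-name rails-fargate-deployer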

Create an ECR Repo

Using our newly created CLI user, enter the command below to create a new container repo on AWS ECR:

aws ecr create-repository --repository-name <username>/<repo-name>

This should return a response similar to the one shown below. Take particular note of the repo URL, as we'll use it in the upcoming steps.

ECR output
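If you lose track of that URL, you can look it up again at any time (substitute the repository name you chose above):

aws ecr describe-repositories --repository-names <username>/<repo-name> \
    --query "repositories[0].repositoryUri" --output text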

Pushing a Docker Image to ECR

At this point, you have a new image repo on ECR. The next step is to get our locally built image onto ECR, which involves the following:

  • Building our image.
  • Tagging our image.
  • Authenticating to ECR.
  • Pushing the image to ECR.

Building an Image

We want to make sure that we're using the latest version of our built app image. Run the command below to generate one:

docker build -t production .

Tagging an Image

Tagging our image ensures that we always push to the correct repo URL.
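A typical tag command, assuming the image was built locally as production (see above) and reusing the redacted repo URL from the create-repository step, looks like this:

docker tag production XXXXXXX.dkr.ecr.us-east-1.amazonaws.com/<REPO NAME>:latest

Additionally, since pushing to ECR requires you to be authenticated, run the command below to log in: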

docker login -u AWS -p $(aws ecr get-login-password --region <YOUR AWS REGION>) XXXXXXX.dkr.ecr.us-east-1.amazonaws.com

This should result in a Login succeeded message:

Login success

Docker Push to ECR

With that done, push your Docker image to the ECR repo:

docker push XXXXX.dkr.ecr.us-east-1.amazonaws.com/<REPO NAME>

If successful, the command outputs something like the following:

Docker push

When you log into your AWS console, under ECR repositories, you should see your newly pushed image listed:

ECR image listing

We have now successfully pushed a Docker image to the ECR service. What's next?

Setting Up a PostgreSQL Database on AWS RDS

Since our app will likely use a database, this step involves setting up a PostgreSQL database on AWS's RDS service.

First, log into your AWS console and head over to the RDS dashboard. From there, click on the DB instances link:

AWS RDS

Then, create a PostgreSQL database, making note of the following settings:

RDS settings

We make the database publicly accessible so that we can run migrations from our development machine. All the other highlighted settings are defaults, but you can edit them as needed.
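If you prefer provisioning from the command line, a deliberately minimal CLI equivalent might look like this (the identifier, credentials, storage size, and instance class are all placeholders you should adjust):

aws rds create-db-instance \
    --db-instance-identifier rails-fargate-db \
    --engine postgres \
    --db-instance-class db.t3.micro \
    --allocated-storage 20 \
    --master-username postgres \
    --master-user-password changeme \
    --publicly-accessible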

After creating your database, you should see it listed in your RDS database list:

RDS DB list
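Once the instance reports as available, you can run your migrations against it from your development machine. One rough way to do that is through the containerized app, overriding the database URL (the endpoint, credentials, and database name below are placeholders):

docker-compose run -e DATABASE_URL=postgres://postgres:changeme@<rds-endpoint>:5432/app_production web rails db:migrate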

We're getting close to deploying our image, but first, here's a brief introduction to ECS and Fargate.

Introducing Amazon’s Elastic Container Service (ECS) and Fargate

Amazon's Elastic Container Service (ECS) is a fully managed service that lets you deploy, manage, and scale containerized applications on Amazon's cloud infrastructure.

Fargate is Amazon's serverless compute engine that lets you run application containers (like the one we built in the first part of this tutorial) without worrying too much about the underlying server infrastructure.

By combining ECS and Fargate, you get the benefit of deploying and scaling your app containers on Amazon's proven infrastructure without the headache of manually provisioning or managing servers.

Deploying a Container with ECS Fargate

Before deploying our container, let's get familiar with the major components that make up the deployment process:

  • ECS clusters: Simply put, these are grouped resources, usually services and tasks. Once a cluster is configured, you can deploy containers to it using task definitions.
  • Task definitions: Task definitions are where you specify the resources required for one or more Docker containers. These resources include how much memory a container will use, networking and security groups, and so on.
  • Tasks: A task is a running instance of a task definition within a cluster.

Additionally, here's a high-level overview of what we'll need to set up to get our containerized app running:

  • Create a task definition.
  • Create a cluster where our task definitions will run.
  • Run a task on the cluster we set up.

Next, we'll get started by creating a task definition.

Creating a Task Definition

Go to the ECS dashboard and click on the Task Definitions link in the left-side menu:

ECS dashboard

In the window that opens, click on Create new Task Definition. Then, select the Fargate deployment option and hit "Next".

ECS Fargate option

This should bring you to the task definition page, where you can define your task:

New task definition

Here, to keep our tutorial relatively simple, we'll stick to defaults as much as possible. To begin with, set the "Task Definition Name" to something relevant. In our case, we'll use "Web", as this task definition defines the services that will run our "web worker".

Next, we'll set the "Task Memory" and "Task CPU" to "0.5GB" and "0.25vCPU", respectively.

Task memory and CPU

With that done, it's time to add a container. Clicking on the Add Container button opens a modal with a number of fields. Let's go through the ones that matter for the purposes of our tutorial:

Adding task container

Here, we're giving our container a name and setting the image URI, which points to the Docker image we pushed to ECR earlier. Make sure you add the tag you chose when you pushed it; in our case, note the ":latest" tag appended to the end of the URI.

In the "Advanced container configuration" section, under "Environment", double-check that the "Essential" option is checked.

Essential checked

Next, we'll cover the environment variables. The screenshot below shows a typical setup (yours may vary depending on the type of app you're deploying):

Environment variables
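The exact variables depend on your app, but a containerized Rails app typically needs at least something along these lines; every value below is an illustrative placeholder, and DATABASE_URL should point at the RDS endpoint from the previous section:

RAILS_ENV=production
RAILS_LOG_TO_STDOUT=true
RAILS_SERVE_STATIC_FILES=true
SECRET_KEY_BASE=<generate with bin/rails secret>
DATABASE_URL=postgres://postgres:changeme@<rds-endpoint>:5432/app_production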

With your environment variables added, click "Add" to add your container to the task definition, and then "Create" at the bottom of the task definition page to create it. Afterward, you'll see a screen confirming that the task definition was created successfully.

Task Definition created

With our task definition created, let's work on creating our first cluster.

Creating a Cluster

On the ECS dashboard, select Clusters under "Amazon ECS" (not to be confused with the "Clusters" under "Amazon EKS", which are for running Kubernetes workloads).

In the window that opens, select the "Networking only" option, since that is the one that works with the Fargate launch type, and click on "Next".

Enter a name for your cluster and leave the "Create VPC" option unchecked, then click on "Create". You should then see a screen similar to the one shown below (when you click on "View Cluster"):

View cluster
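If you prefer to script it, the console flow above is roughly equivalent to a single CLI call (the cluster name is a placeholder):

aws ecs create-cluster --cluster-name rails-fargate-cluster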

Running a Task and Viewing the Rails App

On the ECS dashboard, click on Task Definitions, which will take you to a page listing all of your task definitions.

Select your newly created task definition, and under the "Actions" drop-down button, select "Run Task":

Running a task

Another window opens, where you'll need to choose the cluster on which to run your task (according to the task definition you specified).

The general options we enter for our containerized Rails app are as follows (see the screenshots below):

  • Launch type: "Fargate"
  • Operating system family: "Linux"
  • Number of tasks (to run): 1

Task dashboard

Then, for "VPC and Security groups", select the available default VPC, and for the subnet, select one that's available in your default region.

Task dashboard - vpc and security settings
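For reference, the same launch can also be expressed as a CLI call; the cluster name, subnet ID, and security group ID below are placeholders for the values you select in the console:

aws ecs run-task \
    --cluster rails-fargate-cluster \
    --task-definition Web \
    --launch-type FARGATE \
    --count 1 \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"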

Now, hit "Run Task" to run the task. If successful, you should see a screen similar to the one below:

Task running

Viewing the App

With our task running, click on the Task ID in the task view:

Task ID

In the window that opens, scroll down to the Networking section and click on the "ENI Id" link:

ENI ID

This will bring you to a page similar to the one below:

ENI interface

Then, in the "IPv4 Public IP" column, you should see the IP address where your app is now available.
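If you'd rather not dig through the console, the same information can be pulled with two CLI calls (the cluster name, task ID, and ENI ID are placeholders):

aws ecs describe-tasks --cluster rails-fargate-cluster --tasks <task-id> \
    --query "tasks[0].attachments[0].details[?name=='networkInterfaceId'].value" --output text
aws ec2 describe-network-interfaces --network-interface-ids <eni-id> \
    --query "NetworkInterfaces[0].Association.PublicIp" --output text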

Conclusion

Using AWS ECS with Fargate is just one of the deployment methods available to Rails developers. When done correctly, this method lets you scale your app deployment to hundreds, even thousands, of containers without worrying too much about the server infrastructure. The downside is that there is a lot of configuration to do.
