
Rust, Lambda, and DynamoDB | by Michael Hentges | Nov 2022


Implementation of an AWS Lambda function in Rust that writes to DynamoDB. All deployed via Terraform, with a Rust HTTP client as well!

Rust in Action

As part of my journey to learn more about Rust development, I developed a Lambda service hosted on AWS that writes to a DynamoDB database, along with an associated Rust HTTP client. Together with Rust, I used Terraform to manage the deployment of the AWS resources. This article is the 4th I've written on my Wireless Thermostat application that runs on a Raspberry Pi. You can find the others here: Raspberry Pi Wireless Thermostat in Rust, Rust Cross Compiling Made Easy, and Implementing Multi-Threaded Shared Memory in Rust. All source code is available at my GitHub repository.

We're going to cover the following in this article:

  1. Define a JSON API as a separate crate shared across two related projects.
  2. Write an AWS Lambda function in Rust, using the AWS Rust SDK, that accepts an HTTP POST with a JSON payload of data and writes the data to a DynamoDB database.
  3. Use Terraform to define and build the database, the lambda function, and the permissions glue required to make all the pieces fit together.
  4. Use the AWS CLI to deploy Lambda application executable updates.
  5. Write a Rust HTTP client that sends the data to our Lambda function.

I assume you have the AWS CLI, Terraform, and Rust installed on your system, and that your AWS account is set up and connected to the CLI. It's a bit of work, but straightforward to follow using each tool's documentation.
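Before going further, a quick sanity check that the toolchain is in place never hurts. These are standard version and identity commands, nothing specific to this project:

aws --version
aws sts get-caller-identity    # confirms the CLI is connected to your account
terraform -version
rustc --version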

The use case for my application is to keep track of my Raspberry Pi thermostat application's status and record its history. A Rust application running on a Raspberry Pi will push information to a cloud database. With activity data in a cloud database, monitoring the application's health can be done by inspecting the data, which avoids having to open firewalls to let in outside observers. I also get a data source for history, which I can graph on a UI later.

I picked DynamoDB on AWS as the database platform. My data needs fit easily within DynamoDB's free tier, and DynamoDB is a good place to push IoT time-series data. Instead of directly connecting the Pi application to the Dynamo database, I chose an HTTP-based service layer for the interface between the Raspberry Pi and AWS. I've found HTTP services to be more resilient than direct DB connections; HTTP's stateless nature makes it self-correcting during network outages. Pushing data through to a DB is a great job for a Lambda function, and with AWS recently publishing a Rust SDK, I took the opportunity to build out the Lambda function as a Rust application. Here's a picture of how the pieces we're going to examine fit together:

Our Architecture

There are three main components to the application. First, the main application is thermostat_pi, the client that creates the data we move to the database. Under this project is the Lambda function project, named push_temp. Finally, the temp_data project holds the definition of the data transport API. All three projects are on GitHub under the thermostat_pi application, laid out roughly as sketched below.
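Here's an approximate sketch of the repository layout (my reading of the project structure, not an exact listing):

thermostat_pi/        # the Pi client application and project root
├── temp-data/        # shared crate that defines the transport API struct
└── push-temp/        # the Lambda function crate, plus its Terraform files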

In temp_data, I started with a Rust struct that holds the data items for the thermostat application and enabled serde for JSON representation:

//temp-data/src/lib.rs
use serde::Deserialize;
use serde::Serialize;

#[derive(Debug, Serialize, Deserialize)]
pub struct TempData {
    pub record_date: String,
    pub thermostat_on: bool,
    pub temperature: f32,
    pub thermostat_value: u16,
}

I created this in a separate Rust crate so that it could be shared with both the Pi application and lambda function projects, guaranteeing the two were always in sync. The temp-data Cargo.toml looks like this:

[package]
name = "temp-data"
version = "0.1.0"
edition = "2021"
license = "MIT"

[dependencies]
serde = { version = "1", features = ["derive"] }

I then defined a corresponding DynamoDB database to hold this information. I decided on a Partition Key of "day" for the time-series data, which allows for retrieving a day's worth of data without scanning the entire table. I also created a sort key for the date/time. This key structure will allow efficient read access to the data when I want to set up an alarm or graph historical data. I don't have much experience with DynamoDB, so there could be a more efficient way to solve this problem, but what I have works for me. Here's what the DynamoDB table will look like when we are finished:

Sample Data in Dynamo

The Record_Day and Record_Date keys are strings to DynamoDB. The Record_Date format is RFC3339, which the standard Rust time package supports. It creates a string that sorts the time values correctly under alphabetical sorting.
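As a quick illustration of the sorting claim, and of what this key design buys us on the read side, here's a minimal sketch (my own, not from the thermostat code) that queries one day's worth of records with the same Rust SDK the Lambda uses; it assumes the Shop_Thermostat table we create later in this article, and omits real error handling:

//sketch: reading a day's records back from DynamoDB
use aws_sdk_dynamodb::model::AttributeValue;
use aws_sdk_dynamodb::Client;

async fn read_day(client: &Client, day: &str) -> Result<(), aws_sdk_dynamodb::Error> {
    // RFC3339 strings sort chronologically as plain strings:
    assert!("2022-02-03T13:22:22-06:00" < "2022-02-03T14:05:00-06:00");

    // One Partition Key lookup returns the whole day, no table scan needed
    let resp = client
        .query()
        .table_name("Shop_Thermostat")
        .key_condition_expression("Record_Day = :day")
        .expression_attribute_values(":day", AttributeValue::S(day.to_string()))
        .send()
        .await?;
    println!("found {} records for {}", resp.count(), day);
    Ok(())
}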

Next, we build the lambda function to take our incoming request and store it in the DynamoDB table. The push-temp directory of my main project (GitHub link) is where this lives. The push-temp Cargo.toml contains these entries:

[package]
name = "push_temp"
version = "0.1.0"
edition = "2021"
license = "MIT OR Apache-2.0"

[dependencies]
aws-config = "0.51.0"
aws-sdk-dynamodb = "0.21.0"
log = "0.4.14"
serde = { version = "1", features = ["derive"] }
tokio = "1.16.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
lambda_http = "0.7"
serde_json = "1.0.78"

# Our package that defines the struct of the incoming request
temp-data = { path = "../temp-data" }

We're using the AWS SDK for Rust. I put all the Rust code in the main.rs file of our lambda function. First, there's some boilerplate to import our message struct, define our response types, and get the Lambda environment set up:

//push-temp/src/main.rs
use aws_sdk_dynamodb::model::AttributeValue;
use aws_sdk_dynamodb::Client;
use lambda_http::{lambda_runtime::Error, service_fn, IntoResponse, Request};

extern crate temp_data;
use temp_data::TempData;

use log::{debug, error};
use serde::Serialize;

#[derive(Debug, Serialize)]
struct SuccessResponse {
    pub body: String,
}

#[derive(Debug, Serialize)]
struct FailureResponse {
    pub body: String,
}

// Implement Display for the failure response so that we can then implement Error.
impl std::fmt::Display for FailureResponse {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}", self.body)
    }
}

impl std::error::Error for FailureResponse {}

The main() function registers an event handler for incoming events; our handler function is called my_handler:

//push-temp/src/main.rs (continued)
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt::init();
    debug!("logger has been set up");
    lambda_http::run(service_fn(my_handler)).await?;
    Ok(())
}

Our my_handler() function runs when an incoming request arrives, and it needs to do a couple of things. First, it grabs the incoming JSON from the request and parses it into our struct, request_struct. Notice that if the JSON parsing fails, an error value is returned at this point.

//push-temp/src/main.rs (continued)
async fn my_handler(request: Request) -> Result<impl IntoResponse, Error> {
    debug!("handling a request, Request is: {:?}", request);
    let request_json = match request.body() {
        lambda_http::Body::Text(json_string) => json_string,
        _ => "",
    };
    debug!("Request JSON is: {:?}", request_json);
    let request_struct: TempData = serde_json::from_str(request_json)?;

Next, we need to push this struct into our DynamoDB table. I'm choosing to split each data element into its own DynamoDB attribute instead of storing the JSON directly. We do some minor data formatting to pull out the day as a separate attribute to use as our Partition Key. The rest of the struct values convert into AttributeValues for the DynamoDB API. Our error handling hides DynamoDB-specific error messages from the end user as an implementation detail.

//push-temp/src/main.rs (continued)

    // set up a DynamoDB client
    let config = aws_config::load_from_env().await;
    let client = Client::new(&config);

    // build the values that are stored in the DB
    let record_date_av = AttributeValue::S(request_struct.record_date.clone());
    let thermostat_on_av = AttributeValue::S(request_struct.thermostat_on.to_string());
    let temperature_av = AttributeValue::N(request_struct.temperature.to_string());
    let thermostat_value_av = AttributeValue::N(request_struct.thermostat_value.to_string());
    let record_day_av: AttributeValue = AttributeValue::S(request_struct.record_date[..10].to_string());

    // store our data in the DB
    let _resp = client
        .put_item()
        .table_name("Shop_Thermostat")
        .item("Record_Day", record_day_av)
        .item("Record_Date", record_date_av)
        .item("Thermostat_On", thermostat_on_av)
        .item("Temperature", temperature_av)
        .item("Thermostat_Value", thermostat_value_av)
        .send()
        .await
        .map_err(|err| {
            error!("failed to put item in Shop_Thermostat, error: {}", err);
            FailureResponse {
                body: "The lambda encountered an error and your message was not saved".to_owned(),
            }
        })?;
    debug!("Successfully stored item {:?}", &request_struct);
    Ok("the lambda was successful".to_string())
}

To deploy our custom Lambda function to AWS, we need to create an executable called "bootstrap." We need Rust to build our executable by cross-compiling to the x86_64-unknown-linux-musl target, which is what the Lambda runtime requires. I like using just as a command runner and created a simple justfile for the build, which runs the two commands we need to produce the executable called "bootstrap" in our local directory. I use the cross tool (cargo install cross), which pulls down a Docker container for the cross-compile environment. The AWS SDK documents alternatives to cross if you don't want to use a local Docker container. Finally, we copy the produced executable to the magic file name of "bootstrap" and store it in our project root.

#push-temp/justfile
build:
    cross build --release --target x86_64-unknown-linux-musl
    cp target/x86_64-unknown-linux-musl/release/push_temp bootstrap

We could manually deploy our Lambda function by zipping up the bootstrap file and uploading it through the AWS web interface. But other AWS pieces need to go around the Lambda function for everything to work. We need permissions for the Lambda function to insert data into our DynamoDB table, and permissions for executing the Lambda function itself.

Recently, AWS published a way to create a Lambda Function URL: an HTTPS endpoint directly connected to a Lambda function. For simple use cases like ours, a Lambda Function URL allows for a simpler setup and avoids having to create an API Gateway endpoint. If an API Gateway endpoint is important to you, I'd suggest reading this article, which includes the additional steps needed. My approach is a simplified version of the one described there.

We could use the AWS console to create our Lambda Function, Function URL, and DynamoDB table, but it's not very repeatable. Instead, let's use Terraform to define the pieces we need, giving us a repeatable process. It also gives us a clean way to delete everything when we want to do that. I split the Terraform configuration into a set of files, one for each piece of our deployment, all located in the root of the push_temp crate. First, a variables.tf file will define a couple of shared values we'll need:

#push-temp/variables.tf

# Input variable definitions, modify for your needs
variable "aws_region" {
  description = "AWS region for all resources."
  type        = string
  default     = "us-east-2"
}

variable "push_temp_bin_path" {
  description = "The binary path for the lambda."
  type        = string
  default     = "./bootstrap"
}

Then, a main.tf file sets up our environment:

#push-temp/main.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.2.0"
    }
  }

  required_version = "~> 1.0"
}

provider "aws" {
  region = var.aws_region
}

data "aws_caller_identity" "current" {}

Now we can set up each of the resources we need to deploy. First, we create the DynamoDB table. Note that we define the two columns we use as keys, and the rest get dynamically created as we insert data. Our keys are strings, so we use type = "S" to define them. We initialize the table at the lowest possible resource utilization, as we have a single little Raspberry Pi sending us data.

#push-temp/dynamo.tf

# aws_dynamodb_table.shop-thermostat-table:
resource "aws_dynamodb_table" "shop-thermostat-table" {
  hash_key       = "Record_Day"
  name           = "Shop_Thermostat"
  range_key      = "Record_Date"
  billing_mode   = "PAY_PER_REQUEST"
  read_capacity  = 0
  write_capacity = 0

  attribute {
    name = "Record_Day"
    type = "S"
  }

  attribute {
    name = "Record_Date"
    type = "S"
  }
}

Next, we can define our lambda function. We need to provide the .zip file of our executable to Terraform for the initial deployment that sets up the Lambda function. I don't want to use Terraform to deploy our executable on every application change; Terraform is not a CI/CD tool. But we need something to create the function. So after the resources are all created successfully, we'll use a different method to deploy application updates.

We also set up a Lambda Function URL as the publicly reachable endpoint.

#push-temp/lambdas.tf

# Here we grab the compiled executable and use the archive_file package
# to convert it into the .zip file we need.
data "archive_file" "push_temp_lambda_archive" {
  type        = "zip"
  source_file = var.push_temp_bin_path
  output_path = "bootstrap.zip"
}

# Here we set up an IAM role for our Lambda function
resource "aws_iam_role" "push_temp_lambda_execution_role" {
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

# Here we attach a permission to execute a lambda function to our role
resource "aws_iam_role_policy_attachment" "push_temp_lambda_execution_policy" {
  role       = aws_iam_role.push_temp_lambda_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

# Here is the definition of our lambda function
resource "aws_lambda_function" "push_temp_lambda" {
  function_name    = "PushTemp"
  source_code_hash = data.archive_file.push_temp_lambda_archive.output_base64sha256
  filename         = data.archive_file.push_temp_lambda_archive.output_path
  handler          = "func"
  runtime          = "provided"

  # here we enable debug logging for our Rust runtime environment. We would change
  # this to something less verbose for production.
  environment {
    variables = {
      "RUST_LOG" = "debug"
    }
  }

  # This attaches the role defined above to this lambda function
  role = aws_iam_role.push_temp_lambda_execution_role.arn
}

// Add lambda -> DynamoDB policies to the lambda execution role
resource "aws_iam_role_policy" "write_db_policy" {
  name   = "lambda_write_db_policy"
  role   = aws_iam_role.push_temp_lambda_execution_role.name
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Action": [
        "dynamodb:PutItem"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:dynamodb:${var.aws_region}:${data.aws_caller_identity.current.account_id}:table/Shop_Thermostat"
    }
  ]
}
EOF
}

// The Lambda Function URL that allows direct access to our function
resource "aws_lambda_function_url" "push_temp_function" {
  function_name      = aws_lambda_function.push_temp_lambda.function_name
  authorization_type = "NONE"
}

Finally, we create an output file so we can get the API endpoint for calling our function:

#push-temp/output.tf

# Output value definitions
output "invoke_url" {
  value = aws_lambda_function_url.push_temp_function.function_url
}

Whew, that's all done! A `terraform init && terraform apply` will create everything, upload our newly compiled function, and make it ready for testing!
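For reference, the whole sequence from a fresh checkout looks something like this (assuming the justfile from earlier has already produced ./bootstrap):

#from the push-temp directory
just build        # cross-compile and copy the executable to ./bootstrap
terraform init    # download the aws and archive providers
terraform apply   # create the table, IAM role, lambda, and Function URL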

We can call the external endpoint through curl, replacing <endpoint> below with the value that terraform outputs on apply.

curl -X POST https://<endpoint>.lambda-url.us-east-2.on.aws/ \
    -H 'Content-Type: application/json' \
    -d '{"record_date":"2022-02-03T13:22:22","thermostat_on":true,"temperature":65,"thermostat_value":64}'

You can use the DynamoDB console to see your new record in the database.
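If you'd rather check from the command line, a query like this (using the table name and key schema defined above) should return the same record:

aws dynamodb query \
    --table-name Shop_Thermostat \
    --key-condition-expression "Record_Day = :day" \
    --expression-attribute-values '{":day": {"S": "2022-02-03"}}'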

To make application updates to the code after the initial deployment, I created a deploy target in my justfile with the commands needed to deploy an updated application. These commands rely on the AWS CLI being installed and configured for the same region as the Lambda function.

#push-temp/justfile (continued)
deploy: build
    cp target/x86_64-unknown-linux-musl/release/push_temp bootstrap
    zip bootstrap.zip bootstrap
    aws lambda update-function-code --function-name PushTemp --zip-file fileb://./bootstrap.zip

Now that we have a working back end that can accept an HTTP POST with our JSON data and persist it in DynamoDB, we can create the Rust front end that sends that request. Our Cargo.toml in the main application again has a reference to our shared temp-data crate so that we can use the shared struct.

[dependencies]
temp-data = { path="temp-data" }

I created a function store_temp_data() to use whenever new data is available within the Rust application. I pass in the data and the endpoint URL, which comes from runtime configuration elsewhere. I'm using the reqwest crate as the base HTTP client. Our function starts by initializing the client and building the request structure TempData we saw earlier. We also grab the current time and convert it to the RFC3339 format.

//thermostat-pi/src/send_temp.rs

use reqwest;
use reqwest::Error;
use time::format_description::well_known::Rfc3339;
use time::macros::offset;
use time::OffsetDateTime;

extern crate temp_data;
use temp_data::TempData;

pub async fn store_temp_data(
    thermostat_on: bool,
    current_temp: f32,
    thermostat_value: u16,
    aws_url: &str,
) -> Result<(), Error> {
    let client = reqwest::Client::new();

    // Get the current time, offset to my timezone
    let now = OffsetDateTime::now_utc().to_offset(offset!(-6));
    let now = now.format(&Rfc3339).unwrap();

    let body = TempData {
        record_date: now,
        thermostat_on,
        temperature: current_temp,
        thermostat_value,
    };

Next, we send the request to our endpoint, serializing it into JSON along the way, and handle the response. I'm choosing to log the error and return Ok on an error, as this is a non-critical function for our application.

//thermostat-pi/src/send_temp.rs (continued)

    let response = client
        .post(aws_url)
        .json(&body)
        .send()
        .await;

    match response {
        Ok(r) => {
            tracing::debug!("response: {:?}", r);
        }
        Err(e) => {
            tracing::error!("Error sending to AWS, {}", e);
        }
    }

    Ok(())
}

And that's it! For the five things we set out to accomplish, here are our key takeaways:

  1. Defining the TempData struct in a separate crate (directory) with its own Cargo.toml gives us a common, referenceable structure for the API between our client and server applications. Using the struct to define a JSON-based interface between the client and server, and using serde to serialize and deserialize our TempData structure on either end, is easy to set up and keeps the two projects in sync.
  2. The AWS Rust SDK provides easy-to-use Rust interfaces for Lambda definition and DynamoDB access. Rust makes for a great Lambda execution environment with its speed and low memory footprint.
  3. Terraform works great for building out all of the AWS components we need and setting up the permissions pieces required to connect everything together.
  4. Using the AWS CLI is an easy way to update our Lambda executable on demand.
  5. The reqwest crate gives us an easy way to send HTTP requests from our client application.

I hope you find this useful on your Rust journey! If you have any ideas for improvement, please leave feedback in the comments.
