Lesson Details

With our crate in place, we can set up our lambda code for the Netlify Function.

We'll need lambda_http without the default features but with the apigw_rest feature, as well as tokio with at least the macros feature (the other option, which we've covered before, is full).

We've covered these dependencies in the Introduction to Serverless with Netlify workshop, so we'll focus on other aspects here.

[dependencies]
lambda_http = { version = "0.8.1", default-features = false, features = [
    "apigw_rest",
] }
tokio = { version = "1.29.1", features = ["macros"] }

and make use of them in our new pokemon-api/src/main.rs.

use lambda_http::{
    http::header::CONTENT_TYPE, run, service_fn, Body,
    Error, Request, Response,
};

async fn function_handler(
    event: Request,
) -> Result<Response<Body>, Error> {
    dbg!(event);
    let html = "<html><body><h1>hello!</h1></body></html>";
    let resp = Response::builder()
        .status(200)
        .header(CONTENT_TYPE, "text/html")
        .body(Body::Text(html.to_string()))?;
    Ok(resp)
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    run(service_fn(function_handler)).await
}

How lambda functions work

The setup for a lambda function can be thought of as two phases:

  • what happens on a cold start
  • what happens to process a single incoming event

The cold start

In our case, the main function runs on cold start, which means it runs once for each instance of our function that Netlify or AWS brings up. Another instance of the function is generally started when there aren't enough instances available to handle the current number of concurrent requests.

The run function call in main internally bootstraps a Stream and a while loop to process each event, similar to what we did in the Pokemon CSV upload course, except this one never completes.
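Conceptually, that internal loop looks something like the sketch below. This is only an illustration, not the real lambda_runtime source; next_event and post_response are hypothetical stand-ins for the calls the runtime makes against the Lambda runtime API.

// Illustrative sketch only; the real loop lives in lambda_runtime and also
// handles deserialization, error reporting, and context setup.
async fn next_event() -> String {
    // stand-in for long-polling the runtime API for the next invocation
    "example event".to_string()
}

async fn post_response(_response: String) {
    // stand-in for reporting the handler's result back to the runtime API
}

async fn event_loop() {
    // like `run`, this never completes on its own
    loop {
        let event = next_event().await;
        let response = format!("handled: {event}"); // stand-in for our handler
        post_response(response).await;
    }
}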

The run function takes a type that implements the Service trait as an argument and uses that service to process each event. If you track the Service type down through the source code, you'll find that it's a trait re-exported from tower::Service. tower::Service is a common building block supported by a number of different server frameworks in the Rust ecosystem, so it's not surprising to see it here.
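For reference, the shape of that trait looks roughly like this (paraphrased; the authoritative definition lives in the tower crate):

use std::future::Future;
use std::task::{Context, Poll};

// Roughly the shape of tower::Service as re-exported for lambda use.
pub trait Service<Request> {
    type Response;
    type Error;
    type Future: Future<Output = Result<Self::Response, Self::Error>>;

    // is the service ready to accept another request?
    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>>;

    // process a single request, returning a future for the response
    fn call(&mut self, req: Request) -> Self::Future;
}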

lambda_http provides a utility function called service_fn that converts a regular function into a Service. This, again, is a re-export of the same function from tower::service_fn.
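If we wanted to skip service_fn, we could hand-roll an equivalent Service ourselves. The sketch below is one possible version, not what this lesson's code does: HandlerService is an illustrative name, it assumes it sits next to function_handler in main.rs, and it pulls the Service trait in from the tower crate directly (or via the re-export mentioned above).

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

use lambda_http::{Body, Error, Request, Response};
use tower::Service;

// A hand-rolled stand-in for what `service_fn(function_handler)` produces:
// a value whose `call` just invokes our handler. The real tower ServiceFn is
// generic over any compatible closure; this sketch is hard-coded for clarity.
struct HandlerService;

impl Service<Request> for HandlerService {
    type Response = Response<Body>;
    type Error = Error;
    type Future =
        Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;

    fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        // our handler is always ready for another event
        Poll::Ready(Ok(()))
    }

    fn call(&mut self, req: Request) -> Self::Future {
        Box::pin(function_handler(req))
    }
}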

The function_handler function is the one that run uses to process each event.

#[tokio::main]
async fn main() -> Result<(), Error> {
    run(service_fn(function_handler)).await
}

The main function is annotated with the tokio::main macro, which wraps our application in the tokio async runtime. That's what lets us await the lambda runtime and process each event.
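Conceptually, the macro rewrites main into something like the sketch below: build a runtime, then block on our async code until it finishes (which, for run, is never). The real macro expansion differs in its details.

// A simplified sketch of what #[tokio::main] does for us; the macro's actual
// expansion differs, but the effect is the same: build a tokio runtime and
// block the thread on our async main body.
fn main() -> Result<(), Error> {
    tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()?
        .block_on(async { run(service_fn(function_handler)).await })
}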

Altogether this means that when our function is instantiated for the first time, we're setting up an infinite async loop to process each event from the Stream, and then awaiting that infinite loop, which means our code will run forever.

Event processing

Our runtime of choice (Netlify or AWS) will freeze our function after an event is processed, and un-freeze it when a new event comes in. This means that our infinite loop will keep running our function_handler as long as new events keep coming in.

async fn function_handler(
    event: Request,
) -> Result<Response<Body>, Error> {
    dbg!(event);
    let html = "<html><body><h1>hello!</h1></body></html>";
    let resp = Response::builder()
        .status(200)
        .header(CONTENT_TYPE, "text/html")
        .body(Body::Text(html.to_string()))?;
    Ok(resp)
}

The handler function accepts one argument: our Request. A Request includes the request path, the HTTP method used, headers, and more.
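If we did want to read some of that data, a small helper like the sketch below could pull a few pieces out (inspect is just an illustrative name, not something our handler calls yet):

use lambda_http::Request;

// Illustrative only: a few of the pieces available on an incoming Request.
fn inspect(event: &Request) {
    let method = event.method();   // the HTTP method, e.g. POST
    let path = event.uri().path(); // the request path, e.g. /hello/world
    let headers = event.headers(); // the full header map
    println!("{method} {path} {headers:?}");
}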

At this point, the project will build.

cargo build -p pokemon-api

but that doesn't do much for us aside from confirming we set the function up correctly, because if we cargo run the binary there are environment variables that are required for it to run.

thread 'main' panicked at 'Missing AWS_LAMBDA_FUNCTION_NAME env var: NotPresent', /Users/chris/.cargo/registry/src/index.crates.io-6f17d22bba15001f/lambda_runtime-0.8.2/src/lib.rs:66:65
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

These environment variables are handled for us by cargo lambda.

Start the cargo lambda watch process.

cargo lambda watch

and then in a new terminal we can send an example payload to the lambda. Since we aren't using any of the data in the request, it doesn't matter what the values in the request payload are, but it does matter that the payload matches a certain JSON structure.

We'll use a built-in payload example called apigw-request. Using cargo lambda, we can invoke our pokemon-api lambda with the example payload.

❯ cargo lambda invoke pokemon-api --data-example apigw-request
{"statusCode":200,"headers":{"content-type":"text/html"},"multiValueHeaders":{"content-type":["text/html"]},"body":"<html><body><h1>hello!</h1></body></html>","isBase64Encoded":false}

Formatted for readability, the response JSON looks like this. This is a debug-level view of the response; in production our responses won't come back as a JSON object.

{
  "statusCode": 200,
  "headers":
  {
    "content-type": "text/html"
  },
  "multiValueHeaders":
  {
    "content-type":
    [
      "text/html"
    ]
  },
  "body": "<html><body><h1>hello!</h1></body></html>",
  "isBase64Encoded": false
}

And the cargo lambda watch process will have logged the event it saw on the server.

❯ cargo lambda watch
 INFO invoke server listening on [::]:9000
 INFO starting lambda function function="pokemon-api" manifest="Cargo.toml"
   Compiling ...
   Compiling pokemon-api v0.1.0 (/rust-adventure/pokemon-api-netlify/crates/pokemon-api)
    Finished dev [unoptimized + debuginfo] target(s) in 6.53s
     Running `target/debug/pokemon-api`
[crates/pokemon-api/src/main.rs:9] event = Request {
    method: POST,
    uri: https://gy415nuibc.execute-api.us-east-1.amazonaws.com/testStage/hello/world?name=me,
    version: HTTP/1.1,
    headers: {
        "accept": "*/*",
        "accept-encoding": "gzip, deflate",
        "cache-control": "no-cache",
        "cloudfront-forwarded-proto": "https",
        "cloudfront-is-desktop-viewer": "true",
        "cloudfront-is-mobile-viewer": "false",
        "cloudfront-is-smarttv-viewer": "false",
        "cloudfront-is-tablet-viewer": "false",
        "cloudfront-viewer-country": "US",
        "content-type": "application/json",
        "headername": "headerValue",
        "host": "gy415nuibc.execute-api.us-east-1.amazonaws.com",
        "postman-token": "9f583ef0-ed83-4a38-aef3-eb9ce3f7a57f",
        "user-agent": "PostmanRuntime/2.4.5",
        "via": "1.1 d98420743a69852491bbdea73f7680bd.cloudfront.net (CloudFront)",
        "x-amz-cf-id": "pn-PWIJc6thYnZm5P0NMgOUglL1DYtl0gdeJky8tqsg8iS_sgsKD1A==",
        "x-forwarded-for": "54.240.196.186, 54.182.214.83",
        "x-forwarded-port": "443",
        "x-forwarded-proto": "https",
        "x-amzn-trace-id": "Root=1-650bba84-2a31f6a5843f8c820231e2e1;Parent=b3e995b786cda8c2;Sampled=1",
    },
    body: Text(
        "{\r\n\t\"a\": 1\r\n}",
    ),
}