The source of truth for how our function behaves is the production environment. Netlify helps a lot here with its ability to deploy branches into that environment, which means that when we open a PR against our serverless project, the code can be deployed and we can integration test or manually review it before merging.
This is an early version of a "testing in production" workflow that matches common mainstream practice, and it's very powerful. Combining it with feature flags to enable and disable code paths, and other mechanisms like it, can lead to faster and safer deploys.
Before we deploy, though, Rust gives us a few tools we use every time we compile that build confidence our code is operating as we intend. Type checking with cargo check, linting with cargo clippy, benchmarking with cargo bench, and the final compilation itself all contribute to how much confidence we can have in our deployments. We have one more tool we can set up and use: cargo test.
Cargo comes with built-in support for approaches like unit and integration testing.
We can write a unit test for our serverless function that will allow us to build confidence that it handles event objects properly and gives us the return values we expect.
This will let us run our handler code locally, in CI, or elsewhere, simulating one event being processed.
The request fixture
We're going to take advantage of the JSON fixture we previously used when we ran cargo lambda invoke. The file is example-apigw-request.json, which I'll reproduce here in a code block for convenience. Save it as crates/pokemon-api/src/apigw-request.json.
{
  "resource": "/{proxy+}",
  "path": "/hello/world",
  "httpMethod": "POST",
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "cache-control": "no-cache",
    "CloudFront-Forwarded-Proto": "https",
    "CloudFront-Is-Desktop-Viewer": "true",
    "CloudFront-Is-Mobile-Viewer": "false",
    "CloudFront-Is-SmartTV-Viewer": "false",
    "CloudFront-Is-Tablet-Viewer": "false",
    "CloudFront-Viewer-Country": "US",
    "Content-Type": "application/json",
    "headerName": "headerValue",
    "Host": "gy415nuibc.execute-api.us-east-1.amazonaws.com",
    "Postman-Token": "9f583ef0-ed83-4a38-aef3-eb9ce3f7a57f",
    "User-Agent": "PostmanRuntime/2.4.5",
    "Via": "1.1 d98420743a69852491bbdea73f7680bd.cloudfront.net (CloudFront)",
    "X-Amz-Cf-Id": "pn-PWIJc6thYnZm5P0NMgOUglL1DYtl0gdeJky8tqsg8iS_sgsKD1A==",
    "X-Forwarded-For": "54.240.196.186, 54.182.214.83",
    "X-Forwarded-Port": "443",
    "X-Forwarded-Proto": "https"
  },
  "multiValueHeaders": {
    "Accept": [
      "*/*"
    ],
    "Accept-Encoding": [
      "gzip, deflate"
    ],
    "cache-control": [
      "no-cache"
    ],
    "CloudFront-Forwarded-Proto": [
      "https"
    ],
    "CloudFront-Is-Desktop-Viewer": [
      "true"
    ],
    "CloudFront-Is-Mobile-Viewer": [
      "false"
    ],
    "CloudFront-Is-SmartTV-Viewer": [
      "false"
    ],
    "CloudFront-Is-Tablet-Viewer": [
      "false"
    ],
    "CloudFront-Viewer-Country": [
      "US"
    ],
    "Content-Type": [
      "application/json"
    ],
    "headerName": [
      "headerValue"
    ],
    "Host": [
      "gy415nuibc.execute-api.us-east-1.amazonaws.com"
    ],
    "Postman-Token": [
      "9f583ef0-ed83-4a38-aef3-eb9ce3f7a57f"
    ],
    "User-Agent": [
      "PostmanRuntime/2.4.5"
    ],
    "Via": [
      "1.1 d98420743a69852491bbdea73f7680bd.cloudfront.net (CloudFront)"
    ],
    "X-Amz-Cf-Id": [
      "pn-PWIJc6thYnZm5P0NMgOUglL1DYtl0gdeJky8tqsg8iS_sgsKD1A=="
    ],
    "X-Forwarded-For": [
      "54.240.196.186, 54.182.214.83"
    ],
    "X-Forwarded-Port": [
      "443"
    ],
    "X-Forwarded-Proto": [
      "https"
    ]
  },
  "queryStringParameters": {
    "name": "me"
  },
  "multiValueQueryStringParameters": {
    "name": [
      "me"
    ]
  },
  "pathParameters": {
    "proxy": "hello/world"
  },
  "stageVariables": {
    "stageVariableName": "stageVariableValue"
  },
  "requestContext": {
    "accountId": "12345678912",
    "resourceId": "roq9wj",
    "path": "/hello/world",
    "stage": "testStage",
    "domainName": "gy415nuibc.execute-api.us-east-2.amazonaws.com",
    "domainPrefix": "y0ne18dixk",
    "requestId": "deef4878-7910-11e6-8f14-25afc3e9ae33",
    "protocol": "HTTP/1.1",
    "identity": {
      "cognitoIdentityPoolId": "theCognitoIdentityPoolId",
      "accountId": "theAccountId",
      "cognitoIdentityId": "theCognitoIdentityId",
      "caller": "theCaller",
      "apiKey": "theApiKey",
      "apiKeyId": "theApiKeyId",
      "accessKey": "ANEXAMPLEOFACCESSKEY",
      "sourceIp": "192.168.196.186",
      "cognitoAuthenticationType": "theCognitoAuthenticationType",
      "cognitoAuthenticationProvider": "theCognitoAuthenticationProvider",
      "userArn": "theUserArn",
      "userAgent": "PostmanRuntime/2.4.5",
      "user": "theUser"
    },
    "authorizer": {
      "principalId": "admin",
      "clientId": 1,
      "clientName": "Exata"
    },
    "resourcePath": "/{proxy+}",
    "httpMethod": "POST",
    "requestTime": "15/May/2020:06:01:09 +0000",
    "requestTimeEpoch": 1589522469693,
    "apiId": "gy415nuibc"
  },
  "body": "{\r\n\t\"a\": 1\r\n}"
}
Writing a test
We can then use that fixture JSON file to build up the Request argument that our lambda function expects. Notice that we're testing function_handler, not our main function.
#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn accepts_apigw_request() {
        let input = include_str!("apigw-request.json");
        let request = lambda_http::request::from_str(input)
            .expect("failed to create request");
        let _response = function_handler(request)
            .await
            .expect("failed to handle request");
    }
}
Starting at the top: the cfg attribute lets us conditionally include source code in our compilation. In this case, our test code only gets included when we run cargo test, because the test condition is only true when the Rust compiler is in "test mode", which cargo test sets.
There are quite a few conditions we can use with cfg, such as target operating systems, features, or architectures.
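For illustration, here's a small sketch (not part of our handler) showing a few of those conditions; the function names and the "metrics" feature are made up for this example:

// Only compiled when targeting Linux.
#[cfg(target_os = "linux")]
fn linux_only() {}

// Only compiled when a (hypothetical) "metrics" Cargo feature is enabled.
#[cfg(feature = "metrics")]
fn record_metrics() {}

// Only compiled for 64-bit ARM targets.
#[cfg(target_arch = "aarch64")]
fn arm_specific() {}

// Only compiled during `cargo test`, just like our tests module below.
#[cfg(test)]
mod extra_tests {}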
In this case we're conditionally including a submodule, which we've called tests by convention. The name of this submodule doesn't matter. Rust modules don't necessarily reflect the file system, and we can see that here: tests is a submodule that we define entirely inside main.rs.
It's incredibly useful for unit tests to be able to access any items defined in the parent module, so we use super::* to bring all of the items from the parent module, such as function_handler, into scope.
function_handler is an async function, which means we need a tokio runtime. While Rust offers the built-in #[test] attribute to mark tests, the tokio crate provides an async-aware version called #[tokio::test].
The test attribute is basically a flag to the test runner that says "execute this function as a test", since test functions are really no different from regular functions. The #[tokio::test] macro additionally sets up a tokio runtime for us. We can control what kind of runtime tokio uses in our tests if we want, including running single-threaded, multi-threaded, or even starting with time paused.
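Here's a sketch of what that control looks like. These attribute arguments come from tokio itself; start_paused additionally requires tokio's test-util feature, and the test bodies are placeholders:

// Default: a single-threaded (current_thread) runtime.
#[tokio::test]
async fn default_runtime() {}

// Opt into the multi-threaded runtime with a specific worker count.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn multi_threaded() {}

// Start with tokio's clock paused (requires the "test-util" feature);
// timers auto-advance when the runtime is otherwise idle.
#[tokio::test(start_paused = true)]
async fn time_paused() {
    // Completes immediately because paused time is auto-advanced.
    tokio::time::sleep(std::time::Duration::from_secs(60)).await;
}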
function_handler, again, is async, so our test needs to be an async function as well. Test functions accept no arguments and return either () (the default return type of any function) or a Result, which is similar to how main works. Any panic will fail the test.
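As a sketch of the Result-returning form, our test could look like this instead (the alternate function name here is just to distinguish it; failures propagate with ? rather than panicking via expect):

#[tokio::test]
async fn accepts_apigw_request_with_result() -> Result<(), Box<dyn std::error::Error>> {
    let input = include_str!("apigw-request.json");
    // `?` propagates a deserialization error instead of panicking.
    let request = lambda_http::request::from_str(input)?;
    // `?` propagates any handler error, failing the test.
    let _response = function_handler(request).await?;
    Ok(())
}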
Once we're in our test, we can construct an event to mock the first argument to our handler. We use the include_str! macro to pull in the JSON fixture file, and the lambda_http::request::from_str function to turn that string into a Request.
This conversion could fail, so we call .expect on the Result: if the conversion to a Request fails, the panic will fail our test.
let input = include_str!("apigw-request.json");
let request = lambda_http::request::from_str(input)
    .expect("failed to create request");
We can then pass that request into our function_handler directly.
let _response = function_handler(request)
    .await
    .expect("failed to handle request");
We have to await function_handler since it's an async function, and then we expect the Result return value, which will panic and fail the test if it is an Err.
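If we wanted the test to check more than "it didn't error", we could keep the response and assert on it. This sketch assumes function_handler returns an http-style lambda_http Response with a status code, as the cargo lambda HTTP template does; adjust the assertion to whatever your handler actually returns:

let response = function_handler(request)
    .await
    .expect("failed to handle request");

// Assumed: the handler responds with a 200 status for this fixture.
assert_eq!(response.status(), 200);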
Running the test
Then running cargo test -p pokemon-api will find this test and run it for us.
❯ cargo test -p pokemon-api
   Compiling pokemon-api v0.1.0 (/rust-adventure/pokemon-api-netlify/crates/pokemon-api)
    Finished test [unoptimized + debuginfo] target(s) in 0.35s
     Running unittests src/main.rs (target/debug/deps/pokemon_api-af6d6f386c8e99c7)

running 1 test
test tests::accepts_apigw_request ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s