AWS SAM
This is a Dagger package that helps you deploy serverless functions with ease. It wraps AWS SAM, letting you build and deploy Lambda functions. The aim is to integrate Lambda deployment into your existing Dagger pipeline, so you can build and deploy with a single Dagger environment.
⚒️ Prerequisites
Before we can build, test & deploy our example app with Dagger, we need to have Docker Engine running. We also need to install dagger.
🔰 Quickstart
Everyone should be able to develop and deploy their AWS SAM functions using a local pipeline. Having to commit & push in order to test a change slows down iteration.
Locally
An AWS SAM project requires the following environment variables:
```shell
AWS_ACCESS_KEY_ID=<your AWS access key id>
AWS_REGION=<your AWS region>
# if you deploy a .zip archive you have to provide an S3 bucket
AWS_S3_BUCKET=<your S3 bucket>
AWS_SECRET_ACCESS_KEY=<your AWS secret key>
AWS_STACK_NAME=<your stack name>
```
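For a local run, these can be exported in your shell before invoking dagger. The values below are hypothetical placeholders (the access key pair is AWS's documented example), not real credentials:

```shell
# Hypothetical placeholder values -- substitute your own before deploying
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_REGION="eu-central-1"
export AWS_S3_BUCKET="my-sam-artifacts"    # only needed for .zip archives
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_STACK_NAME="my-sam-stack"
```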
Now we are ready to write the plan to build and deploy a SAM function with dagger.
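The plans assume an AWS SAM project in the working directory: a `template.yaml` plus the function code. As a sketch, a minimal hypothetical template for a zip-packaged Python function could look like this (resource and path names are made up):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello/       # directory containing the function code
      Handler: app.handler  # module `app`, function `handler`
      Runtime: python3.9
```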
Plan for a .zip archive
This is the plan for a .zip archive function:
```cue
package samZip

import (
	"dagger.io/dagger"
	"universe.dagger.io/alpha/aws/sam"
)

dagger.#Plan & {
	_common: config: sam.#Config & {
		accessKey: client.env.AWS_ACCESS_KEY_ID
		region:    client.env.AWS_REGION
		bucket:    client.env.AWS_S3_BUCKET
		secretKey: client.env.AWS_SECRET_ACCESS_KEY
		stackName: client.env.AWS_STACK_NAME
	}

	client: {
		filesystem: "./": read: contents: dagger.#FS
		env: {
			AWS_ACCESS_KEY_ID:     string
			AWS_REGION:            string
			AWS_S3_BUCKET:         string
			AWS_SECRET_ACCESS_KEY: dagger.#Secret
			AWS_STACK_NAME:        string
		}
	}

	actions: {
		build: sam.#Package & _common & {
			fileTree: client.filesystem."./".read.contents
		}
		deploy: sam.#DeployZip & _common & {
			input: build.output
		}
	}
}
```
Now we can run `dagger do deploy` to build an AWS SAM function and deploy it to AWS Lambda.
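For reference, the function code behind a SAM template's handler can be a single file; a hypothetical `hello/app.py` for a Python runtime (names are illustrative, not part of this package):

```python
# Hypothetical hello/app.py: the function that AWS Lambda invokes.
# SAM wires this up via a `Handler: app.handler` template property.
def handler(event, context):
    """Return a minimal API Gateway-style response."""
    return {
        "statusCode": 200,
        "body": "Hello from Lambda!",
    }
```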
Plan for a Docker image
This is the plan for a Docker image function.
When building a Docker image we have to define the Docker socket, and we no longer need the S3 bucket.
```cue
package samImage

import (
	"dagger.io/dagger"
	"universe.dagger.io/alpha/aws/sam"
)

dagger.#Plan & {
	_common: config: sam.#Config & {
		accessKey:    client.env.AWS_ACCESS_KEY_ID
		region:       client.env.AWS_REGION
		secretKey:    client.env.AWS_SECRET_ACCESS_KEY
		stackName:    client.env.AWS_STACK_NAME
		clientSocket: client.network."unix:///var/run/docker.sock".connect
	}

	client: {
		filesystem: "./": read: contents: dagger.#FS
		network: "unix:///var/run/docker.sock": connect: dagger.#Socket
		env: {
			AWS_ACCESS_KEY_ID:     string
			AWS_REGION:            string
			AWS_SECRET_ACCESS_KEY: dagger.#Secret
			AWS_STACK_NAME:        string
		}
	}

	actions: {
		build: sam.#Build & _common & {
			fileTree: client.filesystem."./".read.contents
		}
		deploy: sam.#Deployment & _common & {
			input: build.output
		}
	}
}
```
Now we can run `dagger do deploy` to build an AWS SAM function and deploy it to AWS Lambda.
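An image-based deployment assumes the function in `template.yaml` is declared with `PackageType: Image` and points SAM at a Dockerfile. A hypothetical fragment (resource and path names are made up):

```yaml
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
    Metadata:
      Dockerfile: Dockerfile
      DockerContext: ./hello
      DockerTag: latest
```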
GitLab CI
Build & deploy .zip archives with GitLab CI
If we want to run the above plans in a GitLab CI environment, the .zip archive plan works without any changes.
The first step is to create a `.gitlab-ci.yml` with the following content:
```yaml
.docker:
  image: docker:${DOCKER_VERSION}-git
  services:
    - docker:${DOCKER_VERSION}-dind
  variables:
    # See https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#docker-in-docker-with-tls-enabled-in-the-docker-executor
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_VERIFY: '1'
    DOCKER_TLS_CERTDIR: '/certs'
    DOCKER_CERT_PATH: '/certs/client'
    # Faster than the default, apparently
    DOCKER_DRIVER: overlay2
    DOCKER_VERSION: '20.10'

.dagger:
  extends: [.docker]
  variables:
    DAGGER_VERSION: 0.2.27
    DAGGER_LOG_FORMAT: plain
    DAGGER_CACHE_PATH: .dagger-cache
    ARGS: ''
  cache:
    key: dagger-${CI_JOB_NAME}
    paths:
      - ${DAGGER_CACHE_PATH}
  before_script:
    - apk add --no-cache curl
    - |
      # install dagger
      cd /usr/local
      curl -L https://dl.dagger.io/dagger/install.sh | sh
      cd -
  script:
    - dagger project update
    - |
      dagger \
        do \
        ${ARGS} \
        --log-format=plain \
        --log-level debug

build:
  extends: [.dagger]
  variables:
    ARGS: deploy
```
Triggering the pipeline will build our AWS SAM function and deploy it to AWS Lambda.
Remember to set the needed environment variables in your GitLab CI environment.
Build & deploy a Docker image with GitLab CI
If we want to run the plan with the Docker image in a GitLab CI environment, we have to make some small changes.
This is because GitLab runs Docker as a DinD service, so we cannot connect via the Unix Docker socket and have to use a TCP socket instead.
First we have to change the plan itself to use the TCP socket:
```cue
package samImageGitlabCI

import (
	"dagger.io/dagger"
	"universe.dagger.io/alpha/aws/sam"
)

dagger.#Plan & {
	_common: config: sam.#Config & {
		ciKey:     actions.ciKey
		accessKey: client.env.AWS_ACCESS_KEY_ID
		region:    client.env.AWS_REGION
		secretKey: client.env.AWS_SECRET_ACCESS_KEY
		stackName: client.env.AWS_STACK_NAME
		if client.env.DOCKER_PORT_2376_TCP != _|_ {
			host: client.env.DOCKER_PORT_2376_TCP
		}
		if actions.ciKey != null {
			certs: client.filesystem."/certs/client".read.contents
		}
		// only connect the Unix socket when running locally
		if actions.ciKey == null {
			clientSocket: client.network."unix:///var/run/docker.sock".connect
		}
	}

	client: {
		filesystem: {
			"./": read: contents: dagger.#FS
			if actions.ciKey != null {
				"/certs/client": read: contents: dagger.#FS
			}
		}
		if actions.ciKey == null {
			network: "unix:///var/run/docker.sock": connect: dagger.#Socket
		}
		env: {
			AWS_ACCESS_KEY_ID:     string
			AWS_REGION:            string
			AWS_SECRET_ACCESS_KEY: dagger.#Secret
			AWS_STACK_NAME:        string
			DOCKER_PORT_2376_TCP?: string
		}
	}

	actions: {
		ciKey: *null | string
		build: sam.#Build & _common & {
			fileTree: client.filesystem."./".read.contents
		}
		deploy: sam.#Deployment & _common & {
			input: build.output
		}
	}
}
```
Next we have to update our `.gitlab-ci.yml` with the following content:
```yaml
.docker:
  image: docker:${DOCKER_VERSION}-git
  services:
    - docker:${DOCKER_VERSION}-dind
  variables:
    # See https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#docker-in-docker-with-tls-enabled-in-the-docker-executor
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_VERIFY: '1'
    DOCKER_TLS_CERTDIR: '/certs'
    DOCKER_CERT_PATH: '/certs/client'
    # Faster than the default, apparently
    DOCKER_DRIVER: overlay2
    DOCKER_VERSION: '20.10'

.dagger:
  extends: [.docker]
  variables:
    DAGGER_VERSION: 0.2.27
    DAGGER_LOG_FORMAT: plain
    DAGGER_CACHE_PATH: .dagger-cache
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_REGION: $AWS_REGION
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
    AWS_STACK_NAME: $AWS_STACK_NAME
  cache:
    key: dagger-${CI_JOB_NAME}
    paths:
      - ${DAGGER_CACHE_PATH}
  before_script:
    - apk add --no-cache curl
    - |
      # install dagger
      cd /usr/local
      curl -L https://dl.dagger.io/dagger/install.sh | sh
      cd -
  script:
    - dagger project update
    - |
      dagger \
        do \
        ${ARGS} \
        --with 'actions: ciKey: "gitlab"' \
        --log-format=plain \
        --log-level debug

build:
  extends: [.dagger]
  variables:
    ARGS: deploy
```
Notice that we have added `--with 'actions: ciKey: "gitlab"'` to the `dagger do deploy` command.
If we trigger the pipeline, it should build our AWS SAM function and deploy everything to AWS Lambda.
Remember to set the needed environment variables in your GitLab CI environment.
🤝 Contributing
If something doesn't work as expected, please open an issue.
If you intend to contribute, please follow our contributing guidelines! 🚀