# Use Dagger with Argo Workflows

## Introduction
Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. This guide explains how to run Dagger pipelines in Argo Workflows.
## Requirements
This guide assumes that you have a basic understanding of Kubernetes and Argo Workflows, and that your Kubernetes cluster has been configured following the Run Dagger on Kubernetes guide.
## Step 1: Install Argo Workflows
The first step is to install Argo Workflows in the Kubernetes cluster.
Follow the Argo Workflows quickstart steps, adjusting them as needed to your own requirements. Once you've successfully installed Argo Workflows in your cluster, continue to the next step.
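The quickstart essentially boils down to creating a namespace and applying the quick-start manifest. A minimal sketch of those steps (the release version shown is an assumption; check the Argo Workflows releases page for the current one):

```shell
# Create a dedicated namespace for Argo Workflows
kubectl create namespace argo

# Apply the quick-start manifest (v3.5.0 is an assumed version; substitute the latest release)
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.5.0/quick-start-minimal.yaml

# Wait for the workflow controller to become ready
kubectl wait -n argo --for=condition=Available deployment/workflow-controller --timeout=120s
```

The quick-start manifest is intended for evaluation only; for a production cluster, follow the full Argo Workflows installation documentation instead.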
## Step 2: Run a sample workflow
The sample workflow will clone and run the CI for the greetings-api demo project. This project uses the Dagger Go SDK for CI.
Create a file called `workflow.yaml` with the following content:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dagger-in-argo-
spec:
  entrypoint: dagger-workflow
  volumes:
    - name: dagger-conn
      hostPath:
        path: /var/run/dagger
    - name: gomod-cache
      persistentVolumeClaim:
        claimName: gomod-cache
  templates:
    - name: dagger-workflow
      inputs:
        artifacts:
          - name: project-source
            path: /work
            git:
              repo: https://github.com/kpenfound/greetings-api.git
              revision: "main"
          - name: dagger-cli
            path: /usr/local/bin
            mode: 0755
            http:
              url: https://github.com/dagger/dagger/releases/download/v0.8.7/dagger_v0.8.7_linux_amd64.tar.gz
      container:
        image: golang:1.21.0-bookworm
        command: ["sh", "-c"]
        args: ["dagger run go run ./ci ci"]
        workingDir: /work
        env:
          - name: "_EXPERIMENTAL_DAGGER_RUNNER_HOST"
            value: "unix:///var/run/dagger/buildkitd.sock"
          - name: "DAGGER_CLOUD_TOKEN"
            valueFrom:
              secretKeyRef:
                name: dagger-cloud
                key: token
        volumeMounts:
          - name: dagger-conn
            mountPath: /var/run/dagger
          - name: gomod-cache
            mountPath: /go/pkg/mod
```
A few important points to note:

- The workflow uses hardwired artifacts to clone the Git repository and to install the Dagger CLI.
- The Dagger Engine socket `unix:///var/run/dagger/buildkitd.sock` is mounted into the container and specified via the `_EXPERIMENTAL_DAGGER_RUNNER_HOST` environment variable.
- The Dagger CLI archive `dagger_v0.8.7_linux_amd64.tar.gz` is downloaded and installed. Confirm that the version and architecture are accurate for your cluster and project.
- The image `golang:1.21.0-bookworm` is used as the runtime for the pipeline because the example project requires Go.
- Setting the `DAGGER_CLOUD_TOKEN` environment variable is only necessary if integrating with Dagger Cloud.
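The `dagger run go run ./ci ci` command assumes the cloned repository contains a Go program under `./ci` built on the Dagger Go SDK. A minimal sketch of what such a pipeline can look like (this is an illustrative approximation, not the actual greetings-api CI code; the test step and base image are assumptions):

```go
package main

import (
	"context"
	"fmt"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()

	// Connect to the Dagger Engine; inside the Argo workflow this uses
	// the socket specified by _EXPERIMENTAL_DAGGER_RUNNER_HOST.
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Mount the project source from the working directory (/work in the workflow)
	src := client.Host().Directory(".")

	// Run the project's tests in a Go container
	out, err := client.Container().
		From("golang:1.21.0-bookworm").
		WithDirectory("/src", src).
		WithWorkdir("/src").
		WithExec([]string{"go", "test", "./..."}).
		Stdout(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Because the pipeline is plain Go code, the same program runs unchanged on a laptop, in Argo Workflows, or in any other CI system that can reach a Dagger Engine.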
The workflow uses a PersistentVolumeClaim for the runtime dependencies of the pipeline, such as the Dagger Go SDK.
Create the PersistentVolumeClaim configuration in a file called `gomodcache.yaml`:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gomod-cache
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```
Apply the configuration:
```shell
kubectl apply -n argo -f ./gomodcache.yaml
```
When you're satisfied with the workflow configuration, run it with Argo:
```shell
argo submit -n argo --watch ./workflow.yaml
```
The `--watch` argument provides an ongoing status feed of the workflow request in Argo. To see the logs from your workflow, note the pod name and run `kubectl logs -f POD_NAME` in another terminal.

Once the workflow has completed successfully, run it again with `argo submit -n argo --watch ./workflow.yaml`. Dagger's caching should result in a significantly faster second execution.
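To inspect runs after submission, the `argo` CLI can list workflows and stream logs without hunting for pod names (the `@latest` shorthand refers to the most recently submitted workflow):

```shell
# List workflows in the argo namespace
argo list -n argo

# Stream logs from the most recent workflow
argo logs -n argo @latest --follow

# Show detailed status for the most recent workflow
argo get -n argo @latest
```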
## Conclusion
This example demonstrated how to integrate Dagger with Argo Workflows. It is a basic example, however, and you will likely also want to integrate Argo Events into your CI/CD pipeline. These topics are outside the scope of this guide, but numerous third-party tutorials cover them, such as this guide on implementing a CI/CD pipeline using Argo Workflows and Argo Events.
To learn more about Dagger, use the API Key Concepts page and the Go, Node.js and Python SDK References. For more information on Argo Workflows, refer to the official documentation.