Quickstart

Perform a multi-stage build

Now that you have a working Dagger pipeline, let's refine and optimize it.

You may have noticed that the previous listing exported the build artifacts to a directory on the host, and then copied them to a directory in the destination container. While this works, a more efficient approach is to use a multi-stage build, something that Dagger, by virtue of its design, excels at.

This is because Dagger SDK objects like Container and Directory can be thought of as collections of state. You can save this state and reference it elsewhere (even in a different Dagger pipeline or engine). You can also update the state from the point you left off, or use it as an input to another operation, as the short sketch after the following list illustrates.

In the context of a multi-stage build, this means that you can use Dagger to:

  • Perform a build in a container.
  • Obtain and save a Directory object referencing the filesystem state of that container (including the build artifacts) after the build.
  • Pass the saved Directory object as a parameter to a different container or pipeline, thereby transferring the saved filesystem state (and build artifacts) to that container or pipeline.
  • Perform further container or pipeline operations as needed.
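
Before updating the pipeline, here is a minimal, hypothetical sketch of this idea (the function name, alpine image and file paths are placeholders only, and a connected Dagger client is assumed): state produced in one container is captured as a Directory object and then consumed by a completely different container.

// a minimal sketch: pass filesystem state from one container to another
// (assumes a connected *dagger.Client, as in the full listing below)
func copyBetweenContainers(ctx context.Context, client *dagger.Client) (string, error) {
	// capture state: a Directory referencing files produced inside one container
	artifacts := client.Container().
		From("alpine:3.18").
		WithExec([]string{"sh", "-c", "mkdir /out && echo hello > /out/greeting.txt"}).
		Directory("/out")

	// reuse state: mount that Directory into a different container and read it back
	return client.Container().
		From("alpine:3.18").
		WithDirectory("/data", artifacts).
		WithExec([]string{"cat", "/data/greeting.txt"}).
		Stdout(ctx)
}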

Let's now update our pipeline to use a multi-stage build, as described above.

package main

import (
	"context"
	"fmt"
	"math"
	"math/rand"
	"os"

	"dagger.io/dagger"
)

func main() {
	ctx := context.Background()

	// initialize Dagger client
	client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// use a node:16-slim container
	// mount the source code directory on the host
	// at /src in the container
	source := client.Container().
		From("node:16-slim").
		WithDirectory("/src", client.Host().Directory(".", dagger.HostDirectoryOpts{
			Exclude: []string{"node_modules/", "ci/", "build/"},
		}))

	// set the working directory in the container
	// install application dependencies
	runner := source.WithWorkdir("/src").
		WithExec([]string{"npm", "install"})

	// run application tests
	test := runner.WithExec([]string{"npm", "test", "--", "--watchAll=false"})

	// first stage
	// build application
	buildDir := test.WithExec([]string{"npm", "run", "build"}).
		Directory("./build")

	// second stage
	// use an nginx:alpine container
	// copy the build/ directory from the first stage
	// publish the resulting container to a registry
	ref, err := client.Container().
		From("nginx:1.23-alpine").
		WithDirectory("/usr/share/nginx/html", buildDir).
		Publish(ctx, fmt.Sprintf("ttl.sh/hello-dagger-%.0f", math.Floor(rand.Float64()*10000000))) //#nosec
	if err != nil {
		panic(err)
	}

	fmt.Printf("Published image to: %s\n", ref)
}

Run the pipeline by executing the command below from the application directory:

dagger run go run ci/main.go

This revised pipeline produces the same result as before, but does so using a two-stage process:

  • In the first stage, the pipeline installs dependencies, runs tests and builds the application in the node:16-slim container. However, instead of exporting the build/ directory to the host, it saves the corresponding Directory object in the buildDir variable. This object represents the filesystem state of the build/ directory in the container after the build, and is portable to other Dagger pipelines (a minimal sketch of this follows the list).
  • In the second stage, the pipeline uses the saved Directory object as input, thereby transferring the filesystem state (the built React application) to the nginx:alpine container. It then publishes the result to a registry as previously described.
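
To make that portability concrete, here is a minimal, hypothetical sketch (the publish function and its signature are illustrative, not part of the pipeline above) showing how the saved Directory object can be handed to a separate function, or another pipeline entirely, as an ordinary value:

// hypothetical helper: the saved Directory travels as a regular value,
// so any other function or pipeline can consume it as an input
func publish(ctx context.Context, client *dagger.Client, buildDir *dagger.Directory) (string, error) {
	return client.Container().
		From("nginx:1.23-alpine").
		WithDirectory("/usr/share/nginx/html", buildDir).
		Publish(ctx, fmt.Sprintf("ttl.sh/hello-dagger-%.0f", math.Floor(rand.Float64()*10000000))) //#nosec
}

Calling publish(ctx, client, buildDir) from main would replace the second stage above without changing its behavior.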