Overview
Module resolution gave us the vocabulary for reasoning about the compiler’s graph. Now we turn that vocabulary into infrastructure. This chapter digs into std.Build beyond the basics, touring artifacts and composing library/executable workspaces. We will register modules intentionally, compose multi-package workspaces, generate build outputs without touching shell scripts, and drive cross-target matrices from a single build.zig. See Build.zig.
You will learn how named write-files, anonymous modules, and resolveTargetQuery feed the build runner, how to keep vendored code isolated from registry dependencies, and how to wire CI jobs that prove your graph behaves in Debug and Release builds alike. See build_runner.zig.
How the Build System Executes
Before diving into advanced patterns, it’s essential to understand how std.Build executes. The following diagram shows the complete flow from the Zig compiler invoking your build.zig script through to final artifact installation:
Your build.zig is a regular Zig program compiled and executed by the compiler. The build() function is the entry point, receiving a *std.Build instance that provides the API for defining steps, artifacts, and dependencies. Build arguments (-D flags) are parsed by b.option() and flow into your build logic as compile-time constants. The build runner then traverses the step dependency graph you’ve declared, executing only the steps needed to satisfy the requested target (defaulting to the install step). This declarative model ensures reproducibility: the same inputs always produce the same build graph.
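To make that flow concrete, here is a minimal build.zig sketch; the option name and source path are illustrative, not taken from this chapter's examples:
const std = @import("std");

pub fn build(b: *std.Build) void {
    // -D flags are parsed up front, before any step executes.
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});
    const strip = b.option(bool, "strip", "Strip debug info from the binary") orelse false;

    const exe = b.addExecutable(.{
        .name = "demo",
        .root_module = b.createModule(.{
            .root_source_file = b.path("src/main.zig"),
            .target = target,
            .optimize = optimize,
            .strip = strip,
        }),
    });

    // Hanging the artifact off the install step makes it part of the default
    // `zig build` invocation; other steps run only when explicitly requested.
    b.installArtifact(exe);
}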
Learning Goals
- Register reusable modules and anonymous packages explicitly, controlling which names appear in the import namespace. 25
- Generate deterministic artifacts (reports, manifests) from the build graph using named write-files instead of ad-hoc shell scripting.
- Coordinate multi-target builds with resolveTargetQuery, including host sanity checks and cross-compilation pipelines. 22, Compile.zig
- Structure composite workspaces so vendored modules remain private while registry packages stay self-contained. 24
- Capture reproducibility guarantees in CI: install steps, run steps, and generated artifacts all hang off std.Build.Step dependencies.
Building a Workspace Surface
A workspace is just a build graph with clear namespace boundaries. The following example promotes three modules—analytics, reporting, and a vendored adapters helper—and shows how a root executable consumes them. We emphasize which modules are globally registered, which remain anonymous, and how to emit documentation straight from the build graph.
const std = @import("std");
pub fn build(b: *std.Build) void {
// Standard target and optimization options allow the build to be configured
// for different architectures and optimization levels via CLI flags
const target = b.standardTargetOptions(.{});
const optimize = b.standardOptimizeOption(.{});
// Create the analytics module - the foundational module that provides
// core metric calculation and analysis capabilities
const analytics_mod = b.addModule("analytics", .{
.root_source_file = b.path("workspace/analytics/lib.zig"),
.target = target,
.optimize = optimize,
});
// Create the reporting module - depends on analytics to format and display metrics
// Uses addModule() which both creates and registers the module in one step
const reporting_mod = b.addModule("reporting", .{
.root_source_file = b.path("workspace/reporting/lib.zig"),
.target = target,
.optimize = optimize,
// Import analytics module to access metric types and computation functions
.imports = &.{.{ .name = "analytics", .module = analytics_mod }},
});
// Create the adapters module using createModule() - creates but does not register
// This demonstrates an anonymous module that other code can import but won't
// appear in the global module namespace
const adapters_mod = b.createModule(.{
.root_source_file = b.path("workspace/adapters/vendored.zig"),
.target = target,
.optimize = optimize,
// Adapters need analytics to serialize metric data
.imports = &.{.{ .name = "analytics", .module = analytics_mod }},
});
// Create the main application module that orchestrates all dependencies
// This demonstrates how a root module can compose multiple imported modules
const app_module = b.createModule(.{
.root_source_file = b.path("workspace/app/main.zig"),
.target = target,
.optimize = optimize,
.imports = &.{
// Import all three workspace modules to access their functionality
.{ .name = "analytics", .module = analytics_mod },
.{ .name = "reporting", .module = reporting_mod },
.{ .name = "adapters", .module = adapters_mod },
},
});
// Create the executable artifact using the composed app module as its root
// The root_module field replaces the legacy root_source_file approach
const exe = b.addExecutable(.{
.name = "workspace-app",
.root_module = app_module,
});
// Install the executable to zig-out/bin so it can be run after building
b.installArtifact(exe);
// Set up a run command that executes the built executable
const run_cmd = b.addRunArtifact(exe);
// Forward any command-line arguments passed to the build system to the executable
if (b.args) |args| {
run_cmd.addArgs(args);
}
// Create a custom build step "run" that users can invoke with `zig build run`
const run_step = b.step("run", "Run workspace app with registered modules");
run_step.dependOn(&run_cmd.step);
// Create a named write files step to document the module dependency graph
// This is useful for understanding the workspace structure without reading code
const graph_files = b.addNamedWriteFiles("graph");
// Generate a text file documenting the module registration hierarchy
_ = graph_files.add("module-graph.txt",
\\workspace module registration map:
\\ analytics -> workspace/analytics/lib.zig
\\ reporting -> workspace/reporting/lib.zig (imports analytics)
\\ adapters -> (anonymous) workspace/adapters/vendored.zig
\\ exe root -> workspace/app/main.zig
);
// Create a custom build step "graph" that generates module documentation
// Users can invoke this with `zig build graph` to output the dependency map
const graph_step = b.step("graph", "Emit module graph summary to zig-out");
graph_step.dependOn(&graph_files.step);
}
The build() function follows a deliberate cadence:
- b.addModule("analytics", …) registers a public name so the entire workspace can @import("analytics"). Module.zig
- b.createModule creates a private module (adapters) that only the root executable sees; ideal for vendored code that consumers should not reach. 24
- b.addNamedWriteFiles("graph") produces a module-graph.txt file in zig-out/, documenting the namespace mapping without bespoke tooling.
- Every dependency is threaded through .imports, so the compiler never falls back to filesystem guessing. 25
$ zig build --build-file 01_workspace_build.zig run
metric: response_ms
count: 6
mean: 12.95
deviation: 1.82
profile: stable
json export: {
  "name": "response_ms",
  "mean": 12.950,
  "deviation": 1.819,
  "profile": "stable"
}

$ zig build --build-file 01_workspace_build.zig graph
(No stdout expected.)

Named write-files obey the cache: rerunning zig build … graph without changes is instant. Check zig-out/graph/module-graph.txt to see the mapping emitted by the build runner.
Library code for the workspace
To keep this example self-contained, the modules live next to the build script. Feel free to adapt them to your needs or swap in registry dependencies declared in build.zig.zon.
// Analytics library for statistical calculations on metrics
const std = @import("std");
// Represents a named metric with associated numerical values
pub const Metric = struct {
name: []const u8,
values: []const f64,
};
// Calculates the arithmetic mean (average) of all values in a metric
// Returns the sum of all values divided by the count
pub fn mean(metric: Metric) f64 {
var total: f64 = 0;
for (metric.values) |value| {
total += value;
}
return total / @as(f64, @floatFromInt(metric.values.len));
}
// Calculates the standard deviation of values in a metric
// Uses the population standard deviation formula: sqrt(sum((x - mean)^2) / n)
pub fn deviation(metric: Metric) f64 {
const avg = mean(metric);
var accum: f64 = 0;
// Sum the squared differences from the mean
for (metric.values) |value| {
const delta = value - avg;
accum += delta * delta;
}
// Return the square root of the variance
return std.math.sqrt(accum / @as(f64, @floatFromInt(metric.values.len)));
}
// Classifies a metric as "variable" or "stable" based on its standard deviation
// Metrics with deviation > 3.0 are considered variable, otherwise stable
pub fn highlight(metric: Metric) []const u8 {
return if (deviation(metric) > 3.0)
"variable"
else
"stable";
}
//! Reporting module for displaying analytics metrics in various formats.
//! This module provides utilities to render metrics as human-readable text
//! or export them in CSV format for further analysis.
const std = @import("std");
const analytics = @import("analytics");
/// Renders a metric's statistics to a writer in a human-readable format.
/// Outputs the metric name, number of data points, mean, standard deviation,
/// and performance profile label.
///
/// Parameters:
/// - metric: The analytics metric to render
/// - writer: Any writer interface that supports the print() method
///
/// Returns an error if writing to the output fails.
pub fn render(metric: analytics.Metric, writer: anytype) !void {
try writer.print("metric: {s}\n", .{metric.name});
try writer.print("count: {}\n", .{metric.values.len});
try writer.print("mean: {d:.2}\n", .{analytics.mean(metric)});
try writer.print("deviation: {d:.2}\n", .{analytics.deviation(metric)});
try writer.print("profile: {s}\n", .{analytics.highlight(metric)});
}
/// Exports a metric's statistics as a CSV-formatted string.
/// Creates a two-row CSV with headers and a single data row containing
/// the metric's name, mean, deviation, and highlight label.
///
/// Parameters:
/// - metric: The analytics metric to export
/// - allocator: Memory allocator for the resulting string
///
/// Returns a heap-allocated CSV string, or an error if allocation or formatting fails.
/// Caller is responsible for freeing the returned memory.
pub fn csv(metric: analytics.Metric, allocator: std.mem.Allocator) ![]u8 {
return std.fmt.allocPrint(
allocator,
"name,mean,deviation,label\n{s},{d:.3},{d:.3},{s}\n",
.{ metric.name, analytics.mean(metric), analytics.deviation(metric), analytics.highlight(metric) },
);
}
const std = @import("std");
const analytics = @import("analytics");
/// Serializes a metric into a JSON-formatted string representation.
///
/// Creates a formatted JSON object containing the metric's name, calculated mean,
/// standard deviation, and performance profile classification. The caller owns
/// the returned memory and must free it when done.
///
/// Returns an allocated string containing the JSON representation, or an error
/// if allocation fails.
pub fn emitJson(metric: analytics.Metric, allocator: std.mem.Allocator) ![]u8 {
return std.fmt.allocPrint(
allocator,
"{{\n \"name\": \"{s}\",\n \"mean\": {d:.3},\n \"deviation\": {d:.3},\n \"profile\": \"{s}\"\n}}\n",
.{ metric.name, analytics.mean(metric), analytics.deviation(metric), analytics.highlight(metric) },
);
}
// Import standard library for core functionality
const std = @import("std");
// Import analytics module for metric data structures
const analytics = @import("analytics");
// Import reporting module for metric rendering
const reporting = @import("reporting");
// Import adapters module for data format conversion
const adapters = @import("adapters");
/// Application entry point demonstrating workspace dependency usage
/// Shows how to use multiple workspace modules together for metric processing
pub fn main() !void {
// Create a fixed-size buffer for stdout operations to avoid dynamic allocation
var stdout_buffer: [512]u8 = undefined;
// Initialize a buffered writer for stdout to improve I/O performance
var writer_state = std.fs.File.stdout().writer(&stdout_buffer);
const out = &writer_state.interface;
// Create a sample metric with response time measurements in milliseconds
const metric = analytics.Metric{
.name = "response_ms",
.values = &.{ 12.0, 12.4, 11.9, 12.1, 17.0, 12.3 },
};
// Render the metric using the reporting module's formatting
try reporting.render(metric, out);
// Initialize general purpose allocator for JSON serialization
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
// Ensure allocator cleanup on function exit
defer _ = gpa.deinit();
// Convert metric to JSON format using the adapters module
const json = try adapters.emitJson(metric, gpa.allocator());
// Free allocated JSON string when done
defer gpa.allocator().free(json);
// Output the JSON representation of the metric
try out.print("json export: {s}\n", .{json});
// Flush buffered output to ensure all data is written
try out.flush();
}
Dependency hygiene checklist
- Register vendored modules with distinct names and share them only via .imports. Do not leak them through b.addModule unless consumers are expected to import them directly.
- Treat zig-out/graph/module-graph.txt as living documentation. Commit outputs for CI verification or diff them to catch accidental namespace changes.
- For registry dependencies, forward b.dependency() handles exactly once and wrap them in local modules. This keeps upgrade churn isolated (a minimal sketch follows this list). 24
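As a minimal sketch of that last rule, suppose build.zig.zon declares a dependency named logging that exposes a module of the same name (both names are illustrative), and that app_module, target, and optimize from the workspace example are in scope. The handle is fetched once and re-exported through a workspace-owned import, so consumers never reach for the raw dependency:
const logging_dep = b.dependency("logging", .{
    .target = target,
    .optimize = optimize,
});

// Wrap the registry module behind an import name this workspace controls.
// If the upstream package renames its module, only this wiring changes.
app_module.addImport("logging", logging_dep.module("logging"));
Because the handle is created once, every artifact importing logging shares the same compiled module, and upgrade churn stays confined to build.zig.zon.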
Build Options as Configuration
Build options provide a powerful mechanism for making your workspace configurable. The following diagram shows how command-line -D flags flow through b.option(), get added to a generated module via b.addOptions(), and become compile-time constants accessible via @import("build_options"):
This pattern is essential for parameterized workspaces. Use b.option(bool, "feature-x", "Enable feature X") to declare options, then call options.addOption("feature_x", feature_x) to make them available at compile time. The generated module is automatically rebuilt when options change, ensuring your binaries always reflect the current configuration. This technique works for version strings, feature flags, debug settings, and any other build-time constant your code needs.
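A compact sketch of the round trip, reusing app_module from the workspace example as the consumer (the enable_metrics flag itself is illustrative):
// build.zig
const enable_metrics = b.option(bool, "enable-metrics", "Compile in metric collection") orelse false;

const options = b.addOptions();
options.addOption(bool, "enable_metrics", enable_metrics);

// Attach the generated module to whichever module needs the constants.
app_module.addOptions("build_options", options);

// src/main.zig
// const build_options = @import("build_options");
// if (build_options.enable_metrics) { ... }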
Target Matrices and Release Channels
Complex projects often ship multiple binaries: debug utilities for contributors, ReleaseFast builds for production, and WASI artifacts for automation. Rather than duplicating build logic per target, assemble a matrix that iterates over std.Target.Query definitions.
Understanding Target Resolution
Before iterating over targets, it’s important to understand how b.resolveTargetQuery transforms partial specifications into fully-resolved targets. The following diagram shows the resolution process:
When you pass a Target.Query with null CPU or OS fields, the resolver detects your native platform and fills in concrete values. Similarly, if you specify an OS without an ABI, the resolver applies the default ABI for that OS (e.g., .gnu for Linux, .msvc for Windows). This resolution happens once per query and produces a ResolvedTarget containing the fully-specified Target plus metadata about whether values came from native detection. Understanding this distinction is crucial for cross-compilation: a query with .cpu_arch = .x86_64 and .os_tag = .linux yields a different resolved target on each host platform due to CPU model and feature detection.
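A small sketch of the distinction, with an illustrative query:
// A partial query: CPU model, ABI, and OS version range are left to the resolver.
const query: std.Target.Query = .{ .cpu_arch = .x86_64, .os_tag = .linux };
const resolved = b.resolveTargetQuery(query);

// `resolved.result` is the fully specified std.Target (concrete CPU model,
// default .gnu ABI); `resolved.query` preserves the original request so the
// build runner knows which parts came from native detection.
std.debug.assert(resolved.result.abi == .gnu);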
const std = @import("std");
/// Represents a target/optimization combination in the build matrix
/// Each combo defines a unique build configuration with a descriptive name
const Combo = struct {
/// Human-readable identifier for this build configuration
name: []const u8,
/// Target query specifying the CPU architecture, OS, and ABI
query: std.Target.Query,
/// Optimization level (Debug, ReleaseSafe, ReleaseFast, or ReleaseSmall)
optimize: std.builtin.OptimizeMode,
};
pub fn build(b: *std.Build) void {
// Define a matrix of target/optimization combinations to build
// This demonstrates cross-compilation capabilities and optimization strategies
const combos = [_]Combo{
// Native build with debug symbols for development
.{ .name = "native-debug", .query = .{}, .optimize = .Debug },
// Linux x86_64 build optimized for maximum performance
.{ .name = "linux-fast", .query = .{ .cpu_arch = .x86_64, .os_tag = .linux, .abi = .gnu }, .optimize = .ReleaseFast },
// WebAssembly build optimized for minimal binary size
.{ .name = "wasi-small", .query = .{ .cpu_arch = .wasm32, .os_tag = .wasi }, .optimize = .ReleaseSmall },
};
// Create a top-level step that builds all target/optimize combinations
// Users can invoke this with `zig build matrix`
const matrix_step = b.step("matrix", "Build every target/optimize pair");
// Track the run step for the first (host) executable to create a sanity check
var host_run_step: ?*std.Build.Step = null;
// Iterate through each combo to create and configure build artifacts
for (combos, 0..) |combo, index| {
// Resolve the target query into a concrete target specification
// This validates the query and fills in any unspecified fields with defaults
const resolved = b.resolveTargetQuery(combo.query);
// Create a module with the resolved target and optimization settings
// Using createModule allows precise control over compilation parameters
const module = b.createModule(.{
.root_source_file = b.path("matrix/app.zig"),
.target = resolved,
.optimize = combo.optimize,
});
// Create an executable artifact with a unique name for this combo
// The name includes the combo identifier to distinguish build outputs
const exe = b.addExecutable(.{
.name = b.fmt("matrix-{s}", .{combo.name}),
.root_module = module,
});
// Install the executable to zig-out/bin for distribution
b.installArtifact(exe);
// Add this executable's build step as a dependency of the matrix step
// This ensures all executables are built when running `zig build matrix`
matrix_step.dependOn(&exe.step);
// For the first combo (assumed to be the native/host target),
// create a run step for quick testing and validation
if (index == 0) {
// Create a command to run the host executable
const run_cmd = b.addRunArtifact(exe);
// Forward any command-line arguments to the executable
if (b.args) |args| {
run_cmd.addArgs(args);
}
// Create a dedicated step for running the host variant
const run_step = b.step("run-host", "Run host variant for sanity checks");
run_step.dependOn(&run_cmd.step);
// Store the run step for later use in the matrix step
host_run_step = run_step;
}
}
// If a host run step was created, add it as a dependency to the matrix step
// This ensures that building the matrix also runs a sanity check on the host executable
if (host_run_step) |run_step| {
matrix_step.dependOn(run_step);
}
}
Key techniques:
- Predeclare a slice of { name, query, optimize } combos. Queries match zig build -Dtarget semantics but stay type-checked.
- b.resolveTargetQuery converts each query into a ResolvedTarget so the module inherits canonical CPU/OS defaults.
- Aggregating everything under a matrix step keeps CI wiring clean: call zig build matrix and let step dependencies ensure every artifact exists.
- Running the first (host) target as part of the matrix catches regressions without cross-runner emulation. For deeper coverage, enable b.enable_qemu / b.enable_wasmtime before calling addRunArtifact.
$ zig build --build-file 02_multi_target_matrix.zig matrix
target: x86_64-linux-gnu
optimize: Debug
Running Cross-Compiled Targets
When your matrix includes cross-compilation targets, you’ll need external executors to actually run the binaries. The build system automatically selects the appropriate executor based on host/target compatibility:
Enable emulators in your build script by setting b.enable_qemu = true or b.enable_wasmtime = true before calling addRunArtifact. On macOS ARM hosts, x86_64 targets automatically use Rosetta 2. For Linux cross-architecture testing, QEMU user-mode emulation runs ARM/RISC-V/MIPS binaries transparently when the OS matches. WASI targets require Wasmtime, while Windows binaries on Linux can use Wine. If no executor is available, the run step will fail with Executor.bad_os_or_cpu—detect this early by testing matrix coverage on representative CI hosts.
Cross targets that rely on native system libraries (e.g. glibc) need appropriate sysroot packs. Populate ZIG_LIBC or configure b.libc_file before adding those combos to production pipelines.
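As a sketch, assuming a cross-compiled foreign_exe artifact already exists in the build script, the executor opt-ins are plain boolean fields on the builder:
// Opt into external executors before creating run steps for foreign targets.
b.enable_qemu = true; // other-architecture Linux binaries
b.enable_wasmtime = true; // wasm32-wasi binaries
b.enable_wine = true; // Windows binaries on non-Windows hosts

const run_foreign = b.addRunArtifact(foreign_exe);
// Skip (rather than fail) the step when no executor can run this target here.
run_foreign.skip_foreign_checks = true;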
Vendoring vs Registry Dependencies
- Registry-first approach: keep build.zig.zon hashes authoritative, then register each dependency module via b.dependency() and module.addImport(). 24
- Vendor-first approach: drop sources into deps/<name>/ and wire them with b.addAnonymousModule or b.createModule. Document the provenance in module-graph.txt so collaborators know which code is pinned locally.
- Whichever strategy you choose, record a policy in CI: a step that fails if zig-out/graph/module-graph.txt changes unexpectedly (a sketch of such a check follows this list), or a lint that checks vendored directories for LICENSE files.
CI Scenarios and Automation Hooks
Step Dependencies in Practice
CI pipelines benefit from understanding how build steps compose. The following diagram shows a real-world step dependency graph from the Zig compiler’s own build system:
Notice how the default install step (zig build) depends on binary installation, documentation, and library files—but not tests. Meanwhile, the test step depends on compilation plus all test substeps. This separation lets CI run zig build for release artifacts and zig build test for validation in parallel jobs. Each step only executes when its dependencies change, thanks to content-addressed caching. You can inspect this graph locally with zig build --verbose or by adding a custom step that dumps dependencies.
Automation Patterns
- Artifact verification: Add a zig build graph job that uploads module-graph.txt alongside compiled binaries. Consumers can diff namespaces between releases.
- Matrix extension: Parameterize the combos array via build options (-Dinclude-windows=true). Use b.option(bool, "include-windows", …) to let CI toggle extra targets without editing source (see the sketch after this list).
- Security posture: Pipe zig build --fetch (Chapter 24) into the matrix run so caches populate before cross jobs run offline. See 24.
- Reproducibility: Teach CI to run zig build install twice and assert no files change between runs. Because std.Build respects content hashing, the second invocation should no-op unless inputs changed.
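A sketch of the matrix-extension idea against the combos loop above; the flag name is illustrative, and the Windows rows are assumed to have been added to the combos table:
const include_windows = b.option(bool, "include-windows", "Add Windows rows to the matrix") orelse false;

for (combos) |combo| {
    // Windows rows stay out of the default matrix until CI opts in with
    // `zig build matrix -Dinclude-windows=true`.
    const is_windows = if (combo.query.os_tag) |os| os == .windows else false;
    if (is_windows and !include_windows) continue;

    // ...resolve the target and register the executable exactly as before...
}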
Advanced Test Organization
For comprehensive projects, organizing tests into categories with matrix application requires careful step composition. The following diagram shows a production-grade test hierarchy:
The umbrella test step aggregates all test categories, letting you run the full suite with zig build test. Individual categories can be invoked separately (zig build test-fmt, zig build test-modules) for faster iteration. Notice how only the module tests receive matrix configuration—format checking and CLI tests don’t vary by target. Use b.option([]const u8, "test-filter", …) to let CI run subsets, and apply optimization modes selectively based on test type. This pattern scales to hundreds of test files while keeping build times manageable through parallel execution and caching.
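A sketch of the shape this takes in build.zig, assuming analytics_mod from the workspace example is in scope; the step names and filter option are illustrative:
// Umbrella step plus one category step; additional categories hang off test_step.
const test_step = b.step("test", "Run all test suites");
const test_modules_step = b.step("test-modules", "Run module unit tests");

// Optional -Dtest-filter=... narrows which tests run.
const test_filter = b.option([]const u8, "test-filter", "Only run tests whose names match");

const module_tests = b.addTest(.{
    .root_module = analytics_mod,
    .filters = if (test_filter) |f| b.dupeStrings(&.{f}) else &.{},
});
const run_module_tests = b.addRunArtifact(module_tests);

test_modules_step.dependOn(&run_module_tests.step);
test_step.dependOn(test_modules_step);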
Notes & Caveats
- b.addModule registers a name globally for the current build graph; b.createModule keeps the module private. Mixing them up leads to surprising imports or missing symbols. 25
- Named write-files respect the cache. Delete .zig-cache if you need to regenerate them from scratch; otherwise the step can trick you into thinking a change landed when it actually hit the cache.
- When iterating matrices, always prune stale binaries with zig build uninstall (or a custom Step.RemoveDir) to avoid cross-version confusion.
Under the Hood: Dependency Tracking
The build system’s caching and incremental behavior relies on the compiler’s sophisticated dependency tracking infrastructure. Understanding this helps explain why cached builds are so fast and why certain changes trigger broader rebuilds than expected.
The compiler tracks dependencies at multiple granularities: source file hashes (src_hash_deps), navigation values (nav_val_deps), types (nav_ty_deps), interned constants, ZON files, embedded files, and namespace membership. All these maps point into a shared dep_entries array containing DepEntry structures that form linked lists. Each entry participates in two lists: one linking all analysis units that depend on a particular dependee (traversed during invalidation), and one linking all dependees of a particular analysis unit (traversed during cleanup). When you modify a source file, the compiler hashes it, looks up dependents in src_hash_deps, and marks only those analysis units as outdated. This granular tracking is why changing a private function in one file doesn’t rebuild unrelated modules—the dependency graph precisely captures what actually depends on what. The build system leverages this infrastructure through content addressing: step outputs are cached by their input hashes, and reused when inputs haven’t changed.
Exercises
- Extend 01_workspace_build.zig so the graph step emits both a human-readable table and a JSON document. Hint: call graph_files.add("module-graph.json", …) with std.json output. See json.zig.
- Add a -Dtarget-filter option to 02_multi_target_matrix.zig that limits matrix execution to a comma-separated allowlist. Use std.mem.splitScalar to parse the value. 22
- Introduce a registry dependency via b.dependency("logging", .{}) and expose it to the workspace with module.addImport("logging", dep.module("logging")). Document the new namespace in module-graph.txt.
Caveats, alternatives, edge cases
- Large workspaces might exceed default install directory limits. Use b.setInstallPrefix or b.setLibDir before adding artifacts to route outputs into per-target directories.
- On Windows, resolveTargetQuery requires abi = .msvc if you expect MSVC-compatible artifacts; the default .gnu ABI yields MinGW binaries.
- If you supply anonymous modules to dependencies, remember they are not deduplicated. Reuse the same b.createModule instance when multiple artifacts need the same vendored code.
Summary
- Workspaces stay predictable when you register every module explicitly and document the mapping via named write-files.
- resolveTargetQuery and iteration-friendly combos let you scale to multiple targets without copy/pasting build logic.
- CI jobs benefit from std.Build primitives: steps articulate dependencies, run artifacts gate sanity checks, and named artifacts capture reproducible metadata.
Together with Chapters 22–25, you now have the tools to craft deterministic Zig build graphs that scale across packages, targets, and release channels.