Chapter 26: Build System Advanced Topics

Overview

Module resolution gave us the vocabulary for reasoning about the compiler graph. Now we turn that vocabulary into infrastructure. This chapter digs beyond the basics of std.Build, exploring artifact inspection and library/executable workspaces. We will register modules deliberately, compose multi-package workspaces, generate build outputs without touching shell scripts, and drive cross-target matrices from a single build.zig. See Build.zig.

You will learn how named write files, anonymous modules, and resolveTargetQuery feed the build runner, how to isolate vendored code from registry dependencies, and how to wire up CI jobs that prove your graph behaves the same in debug and release builds. See build_runner.zig.

How the Build System Executes

Before diving into advanced patterns, it is essential to understand how std.Build executes. The diagram below shows the complete flow, from the Zig compiler invoking your build.zig script through to final artifact installation:

Mermaid
graph TB
    subgraph "CMake stage (stage2)"
        CMAKE["CMake"]
        ZIG2_C["zig2.c<br/>(generated C code)"]
        ZIGCPP["zigcpp<br/>(C++ LLVM/Clang wrapper)"]
        ZIG2["zig2 executable"]
        CMAKE --> ZIG2_C
        CMAKE --> ZIGCPP
        ZIG2_C --> ZIG2
        ZIGCPP --> ZIG2
    end
    subgraph "Native build system (stage3)"
        BUILD_ZIG["build.zig<br/>native build script"]
        BUILD_FN["build() function"]
        COMPILER_STEP["addCompilerStep()"]
        EXE["std.Build.Step.Compile<br/>(compiler executable)"]
        INSTALL["install step"]
        BUILD_ZIG --> BUILD_FN
        BUILD_FN --> COMPILER_STEP
        COMPILER_STEP --> EXE
        EXE --> INSTALL
    end
    subgraph "Build arguments"
        ZIG_BUILD_ARGS["ZIG_BUILD_ARGS<br/>--zig-lib-dir<br/>-Dversion-string<br/>-Dtarget<br/>-Denable-llvm<br/>-Doptimize"]
    end
    ZIG2 -->|"zig2 build"| BUILD_ZIG
    ZIG_BUILD_ARGS --> BUILD_FN
    subgraph "Outputs"
        STAGE3_BIN["stage3/bin/zig"]
        STD_LIB["stage3/lib/zig/std/"]
        LANGREF["stage3/doc/langref.html"]
    end
    INSTALL --> STAGE3_BIN
    INSTALL --> STD_LIB
    INSTALL --> LANGREF

Your build.zig is an ordinary Zig program that the compiler compiles and executes. The build() function is the entry point; it receives a *std.Build instance that provides the API for defining steps, artifacts, and dependencies. Build arguments (-D flags) are parsed by b.option() and flow into your build logic as constants. The build runner then walks the step dependency graph you declared, executing only the steps required to satisfy the requested targets (the install step by default). This declarative model guarantees reproducibility: the same inputs always produce the same build graph.
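
To make that flow concrete, here is a minimal sketch of the shape every build script shares; the source path, option name, and executable name are illustrative rather than part of any real project:

Zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    // -Dtarget=... and -Doptimize=... arrive here as part of the build configuration.
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    // A project-specific -D flag, parsed by the build runner before any step executes.
    const strip = b.option(bool, "strip", "Strip debug info from the binary") orelse false;

    const exe = b.addExecutable(.{
        .name = "demo",
        .root_module = b.createModule(.{
            .root_source_file = b.path("src/main.zig"),
            .target = target,
            .optimize = optimize,
            .strip = strip,
        }),
    });

    // installArtifact hangs the executable off the default install step,
    // so a plain `zig build` produces zig-out/bin/demo.
    b.installArtifact(exe);
}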

Learning Objectives

  • Register reusable modules and anonymous packages explicitly, controlling which names appear in the import namespace. (See Chapter 25.)
  • Use named write files to produce deterministic artifacts (reports, manifests) from the build graph instead of ad-hoc shell scripts.
  • Coordinate multi-target builds with resolveTargetQuery, including host sanity checks and cross-compilation pipelines. (See Chapter 22 and Compile.zig.)
  • Compose multi-package workspaces that keep vendored modules private while registry packages stay self-contained. (See Chapter 24.)
  • Capture reproducibility guarantees in CI: install steps, run steps, and generated artifacts all hang off std.Build.Step dependencies.

Building the Workspace Interface

A workspace is just a build graph with clear namespace boundaries. The example below promotes three modules (analytics, reporting, and a vendored adapters helper) and shows how a root executable consumes them. We highlight which modules are registered globally, which stay anonymous, and how to emit documentation directly from the build graph.

Zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    // Standard target and optimization options allow the build to be configured
    // for different architectures and optimization levels via CLI flags
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    // Create the analytics module - the foundational module that provides
    // core metric calculation and analysis capabilities
    const analytics_mod = b.addModule("analytics", .{
        .root_source_file = b.path("workspace/analytics/lib.zig"),
        .target = target,
        .optimize = optimize,
    });

    // Create the reporting module - depends on analytics to format and display metrics
    // Uses addModule() which both creates and registers the module in one step
    const reporting_mod = b.addModule("reporting", .{
        .root_source_file = b.path("workspace/reporting/lib.zig"),
        .target = target,
        .optimize = optimize,
        // Import analytics module to access metric types and computation functions
        .imports = &.{.{ .name = "analytics", .module = analytics_mod }},
    });

    // Create the adapters module using createModule() - creates but does not register
    // This demonstrates an anonymous module that other code can import but won't
    // appear in the global module namespace
    const adapters_mod = b.createModule(.{
        .root_source_file = b.path("workspace/adapters/vendored.zig"),
        .target = target,
        .optimize = optimize,
        // Adapters need analytics to serialize metric data
        .imports = &.{.{ .name = "analytics", .module = analytics_mod }},
    });

    // Create the main application module that orchestrates all dependencies
    // This demonstrates how a root module can compose multiple imported modules
    const app_module = b.createModule(.{
        .root_source_file = b.path("workspace/app/main.zig"),
        .target = target,
        .optimize = optimize,
        .imports = &.{
            // Import all three workspace modules to access their functionality
            .{ .name = "analytics", .module = analytics_mod },
            .{ .name = "reporting", .module = reporting_mod },
            .{ .name = "adapters", .module = adapters_mod },
        },
    });

    // Create the executable artifact using the composed app module as its root
    // The root_module field replaces the legacy root_source_file approach
    const exe = b.addExecutable(.{
        .name = "workspace-app",
        .root_module = app_module,
    });

    // Install the executable to zig-out/bin so it can be run after building
    b.installArtifact(exe);

    // Set up a run command that executes the built executable
    const run_cmd = b.addRunArtifact(exe);
    // Forward any command-line arguments passed to the build system to the executable
    if (b.args) |args| {
        run_cmd.addArgs(args);
    }

    // Create a custom build step "run" that users can invoke with `zig build run`
    const run_step = b.step("run", "Run workspace app with registered modules");
    run_step.dependOn(&run_cmd.step);

    // Create a named write files step to document the module dependency graph
    // This is useful for understanding the workspace structure without reading code
    const graph_files = b.addNamedWriteFiles("graph");
    // Generate a text file documenting the module registration hierarchy
    _ = graph_files.add("module-graph.txt",
        \\workspace module registration map:
        \\  analytics  -> workspace/analytics/lib.zig
        \\  reporting  -> workspace/reporting/lib.zig (imports analytics)
        \\  adapters   -> (anonymous) workspace/adapters/vendored.zig
        \\  exe root   -> workspace/app/main.zig
    );

    // Create a custom build step "graph" that generates module documentation
    // Users can invoke this with `zig build graph` to output the dependency map
    const graph_step = b.step("graph", "Emit module graph summary to zig-out");
    graph_step.dependOn(&graph_files.step);
}

The build() function follows a deliberate rhythm:

  • b.addModule("analytics", …) registers a public name so the whole workspace can @import("analytics"). (See Module.zig.)
  • b.createModule creates a private module (adapters) that only the root executable sees, which is ideal for vendored code that consumers should never touch. (See Chapter 24.)
  • b.addNamedWriteFiles("graph") produces a module-graph.txt file under zig-out/ that records the namespace map without bespoke tooling.
  • Every dependency is threaded through .imports, so the compiler never falls back to filesystem-based guessing. (See Chapter 25.)
Running the workspace app
Shell
$ zig build --build-file 01_workspace_build.zig run
Output
Shell
metric: response_ms
count: 6
mean: 12.95
deviation: 1.82
profile: stable
json export: {
  "name": "response_ms",
  "mean": 12.950,
  "deviation": 1.819,
  "profile": "stable"
}
Generating the module graph
Shell
$ zig build --build-file 01_workspace_build.zig graph
Output
Shell
No stdout expected.

Named write files respect the cache: re-running zig build … graph with no changes is instantaneous. Inspect zig-out/graph/module-graph.txt to see the map the build runner emitted.
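
If you want to control exactly where the generated file lands, one option is to install the WriteFile step's directory explicitly. This is a sketch that extends the example above; it assumes the graph_files and graph_step handles from that script, and the install subdirectory name is a choice:

Zig
// Sketch: materialize the named write files under zig-out/graph explicitly.
const install_graph = b.addInstallDirectory(.{
    .source_dir = graph_files.getDirectory(),
    .install_dir = .prefix,
    .install_subdir = "graph",
});
graph_step.dependOn(&install_graph.step);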

Workspace Library Code

To keep this example self-contained, the modules live alongside the build script. Adapt them as needed, or swap in registry dependencies declared in build.zig.zon.

Zig

// Analytics library for statistical calculations on metrics
const std = @import("std");

// Represents a named metric with associated numerical values
pub const Metric = struct {
    name: []const u8,
    values: []const f64,
};

// Calculates the arithmetic mean (average) of all values in a metric
// Returns the sum of all values divided by the count
pub fn mean(metric: Metric) f64 {
    var total: f64 = 0;
    for (metric.values) |value| {
        total += value;
    }
    return total / @as(f64, @floatFromInt(metric.values.len));
}

// Calculates the standard deviation of values in a metric
// Uses the population standard deviation formula: sqrt(sum((x - mean)^2) / n)
pub fn deviation(metric: Metric) f64 {
    const avg = mean(metric);
    var accum: f64 = 0;
    // Sum the squared differences from the mean
    for (metric.values) |value| {
        const delta = value - avg;
        accum += delta * delta;
    }
    // Return the square root of the variance
    return std.math.sqrt(accum / @as(f64, @floatFromInt(metric.values.len)));
}

// Classifies a metric as "variable" or "stable" based on its standard deviation
// Metrics with deviation > 3.0 are considered variable, otherwise stable
pub fn highlight(metric: Metric) []const u8 {
    return if (deviation(metric) > 3.0)
        "variable"
    else
        "stable";
}
Zig
//! Reporting module for displaying analytics metrics in various formats.
//! This module provides utilities to render metrics as human-readable text
//! or export them in CSV format for further analysis.

const std = @import("std");
const analytics = @import("analytics");

/// Renders a metric's statistics to a writer in a human-readable format.
/// Outputs the metric name, number of data points, mean, standard deviation,
/// and performance profile label.
///
/// Parameters:
///   - metric: The analytics metric to render
///   - writer: Any writer interface that supports the print() method
///
/// Returns an error if writing to the output fails.
pub fn render(metric: analytics.Metric, writer: anytype) !void {
    try writer.print("metric: {s}\n", .{metric.name});
    try writer.print("count: {}\n", .{metric.values.len});
    try writer.print("mean: {d:.2}\n", .{analytics.mean(metric)});
    try writer.print("deviation: {d:.2}\n", .{analytics.deviation(metric)});
    try writer.print("profile: {s}\n", .{analytics.highlight(metric)});
}

/// Exports a metric's statistics as a CSV-formatted string.
/// Creates a two-row CSV with headers and a single data row containing
/// the metric's name, mean, deviation, and highlight label.
///
/// Parameters:
///   - metric: The analytics metric to export
///   - allocator: Memory allocator for the resulting string
///
/// Returns a heap-allocated CSV string, or an error if allocation or formatting fails.
/// Caller is responsible for freeing the returned memory.
pub fn csv(metric: analytics.Metric, allocator: std.mem.Allocator) ![]u8 {
    return std.fmt.allocPrint(
        allocator,
        "name,mean,deviation,label\n{s},{d:.3},{d:.3},{s}\n",
        .{ metric.name, analytics.mean(metric), analytics.deviation(metric), analytics.highlight(metric) },
    );
}
Zig
const std = @import("std");
const analytics = @import("analytics");

/// Serializes a metric into a JSON-formatted string representation.
/// 
/// Creates a formatted JSON object containing the metric's name, calculated mean,
/// standard deviation, and performance profile classification. The caller owns
/// the returned memory and must free it when done.
///
/// Returns an allocated string containing the JSON representation, or an error
/// if allocation fails.
pub fn emitJson(metric: analytics.Metric, allocator: std.mem.Allocator) ![]u8 {
    return std.fmt.allocPrint(
        allocator,
        "{{\n  \"name\": \"{s}\",\n  \"mean\": {d:.3},\n  \"deviation\": {d:.3},\n  \"profile\": \"{s}\"\n}}\n",
        .{ metric.name, analytics.mean(metric), analytics.deviation(metric), analytics.highlight(metric) },
    );
}
Zig

// Import standard library for core functionality
const std = @import("std");
// Import analytics module for metric data structures
const analytics = @import("analytics");
// Import reporting module for metric rendering
const reporting = @import("reporting");
// Import adapters module for data format conversion
const adapters = @import("adapters");

/// Application entry point demonstrating workspace dependency usage
/// Shows how to use multiple workspace modules together for metric processing
pub fn main() !void {
    // Create a fixed-size buffer for stdout operations to avoid dynamic allocation
    var stdout_buffer: [512]u8 = undefined;
    // Initialize a buffered writer for stdout to improve I/O performance
    var writer_state = std.fs.File.stdout().writer(&stdout_buffer);
    const out = &writer_state.interface;

    // Create a sample metric with response time measurements in milliseconds
    const metric = analytics.Metric{
        .name = "response_ms",
        .values = &.{ 12.0, 12.4, 11.9, 12.1, 17.0, 12.3 },
    };

    // Render the metric using the reporting module's formatting
    try reporting.render(metric, out);

    // Initialize general purpose allocator for JSON serialization
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    // Ensure allocator cleanup on function exit
    defer _ = gpa.deinit();

    // Convert metric to JSON format using the adapters module
    const json = try adapters.emitJson(metric, gpa.allocator());
    // Free allocated JSON string when done
    defer gpa.allocator().free(json);

    // Output the JSON representation of the metric
    try out.print("json export: {s}\n", .{json});
    // Flush buffered output to ensure all data is written
    try out.flush();
}

std.fmt.allocPrint pairs nicely with allocator pipelines in Zig 0.15.2 when you want build-time helpers to run without heap globals. Prefer it over ad-hoc ArrayList usage when emitting CSV or JSON snapshots. See v0.15.2 and fmt.zig.

Dependency Hygiene Checklist

  • Register vendored modules under distinct names and share them only through .imports. Do not leak them via b.addModule unless consumers are expected to import them directly.
  • Treat zig-out/graph/module-graph.txt as living documentation. Commit the output for CI verification, or diff it to catch unintended namespace changes.
  • For registry dependencies, forward the b.dependency() handle exactly once and wrap it in a local module, as in the sketch below. This keeps upgrade churn contained. (See Chapter 24.)
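
A minimal sketch of that last point, assuming a hypothetical "logging" package declared in build.zig.zon and reusing the target, optimize, and app_module handles from the workspace example; the facade path is illustrative:

Zig
// Fetch the registry dependency exactly once.
const logging_dep = b.dependency("logging", .{
    .target = target,
    .optimize = optimize,
});

// Wrap the upstream module behind a workspace-local facade module, so every
// consumer imports "logging" from a single place.
const logging_mod = b.createModule(.{
    .root_source_file = b.path("workspace/logging_facade.zig"),
    .target = target,
    .optimize = optimize,
    .imports = &.{.{ .name = "upstream_logging", .module = logging_dep.module("logging") }},
});

// Only the app sees the facade; upgrade churn stays confined to this block.
app_module.addImport("logging", logging_mod);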

Build Options as Configuration

Build options give you a powerful mechanism for making a workspace configurable. The diagram below shows how command-line -D flags flow through b.option(), are added to a generated module via b.addOptions(), and become compile-time constants accessible through @import("build_options"):

Mermaid
graph LR
    subgraph "Command line"
        CLI["-Ddebug-allocator<br/>-Denable-llvm<br/>-Dversion-string<br/>etc."]
    end
    subgraph "build.zig"
        PARSE["b.option()<br/>parse options"]
        OPTIONS["exe_options = <br/>b.addOptions()"]
        ADD["exe_options.addOption()"]
        PARSE --> OPTIONS
        OPTIONS --> ADD
    end
    subgraph "Generated module"
        BUILD_OPTIONS["build_options<br/>(auto-generated)"]
        CONSTANTS["pub const mem_leak_frames = 4;<br/>pub const have_llvm = true;<br/>pub const version = '0.16.0';<br/>etc."]
        BUILD_OPTIONS --> CONSTANTS
    end
    subgraph "Compiler source"
        IMPORT["@import('build_options')"]
        USE["if (build_options.have_llvm) { ... }"]
        IMPORT --> USE
    end
    CLI --> PARSE
    ADD --> BUILD_OPTIONS
    BUILD_OPTIONS --> IMPORT

This pattern is essential for parameterized workspaces. Declare an option with b.option(bool, "feature-x", "Enable feature X"), then call options.addOption(bool, "feature_x", feature_x) to make it available at compile time. The generated module is rebuilt automatically whenever an option changes, so your binaries always reflect the current configuration. The technique works for version strings, feature flags, debug settings, and any other build-time constant your code needs.
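
A minimal sketch of that pipeline, assuming a hypothetical feature-x flag and an exe artifact like the one from the workspace example:

Zig
// build.zig side: parse the flag and bake it into a generated module.
const feature_x = b.option(bool, "feature-x", "Enable feature X") orelse false;

const options = b.addOptions();
options.addOption(bool, "feature_x", feature_x);
options.addOption([]const u8, "version_string", "1.2.3");

// Expose the generated module under the conventional "build_options" name.
exe.root_module.addOptions("build_options", options);

On the consumer side, the generated constants are ordinary declarations:

Zig
const build_options = @import("build_options");

pub fn main() void {
    if (build_options.feature_x) {
        // feature-gated code path
    }
}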

Target Matrices and Release Channels

Sophisticated projects often ship several binaries: debug tooling for contributors, ReleaseFast builds for production, and WASI artifacts for automation. Rather than duplicating build logic for each target, assemble a matrix that iterates over std.Target.Query definitions.

Understanding Target Resolution

Before iterating over targets, it is crucial to understand how b.resolveTargetQuery turns a partial specification into a fully resolved target. The diagram below shows the resolution process:

Mermaid
graph LR
    subgraph "User input"
        Query["Target.Query"]
        Query --> QCpu["cpu_arch: ?Cpu.Arch"]
        Query --> QModel["cpu_model: CpuModel"]
        Query --> QOs["os_tag: ?Os.Tag"]
        Query --> QAbi["abi: ?Abi"]
    end
    subgraph "Resolution process"
        Resolve["resolveTargetQuery()"]
        Query --> Resolve
        Detection["native detection"]
        Defaults["apply defaults"]
        Detection --> Resolve
        Defaults --> Resolve
    end
    subgraph "Fully resolved"
        Target["Target"]
        Resolve --> Target
        Target --> TCpu["cpu: Cpu"]
        Target --> TOs["os: Os"]
        Target --> TAbi["abi: Abi"]
        Target --> TOfmt["ofmt: ObjectFormat"]
    end

When you pass a Target.Query with null CPU or OS fields, the resolver detects your native platform and fills in concrete values. Likewise, if you specify an OS without an ABI, the resolver applies that OS's default ABI (for example, .gnu on Linux and .msvc on Windows). Resolution happens once per query and produces a ResolvedTarget containing the fully specified Target plus metadata about whether the values came from native detection. Understanding this distinction matters for cross-compilation: a query with .cpu_arch = .x86_64 and .os_tag = .linux resolves to a different target on each host platform, because CPU model and feature detection differ.
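
A small sketch of that behavior inside build(); the print is only there to show which fields the resolver filled in:

Zig
// Partial query: arch and OS are pinned, CPU model and ABI are left to the resolver.
const resolved = b.resolveTargetQuery(.{
    .cpu_arch = .x86_64,
    .os_tag = .linux,
});

// resolved.result is the fully populated std.Target; resolved.query keeps the
// original request, which is handy for naming artifacts after what was asked for.
std.debug.print("arch={s} os={s} abi={s}\n", .{
    @tagName(resolved.result.cpu.arch),
    @tagName(resolved.result.os.tag),
    @tagName(resolved.result.abi),
});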

Zig
const std = @import("std");

/// Represents a target/optimization combination in the build matrix
/// Each combo defines a unique build configuration with a descriptive name
const Combo = struct {
    /// Human-readable identifier for this build configuration
    name: []const u8,
    /// Target query specifying the CPU architecture, OS, and ABI
    query: std.Target.Query,
    /// Optimization level (Debug, ReleaseSafe, ReleaseFast, or ReleaseSmall)
    optimize: std.builtin.OptimizeMode,
};

pub fn build(b: *std.Build) void {
    // Define a matrix of target/optimization combinations to build
    // This demonstrates cross-compilation capabilities and optimization strategies
    const combos = [_]Combo{
        // Native build with debug symbols for development
        .{ .name = "native-debug", .query = .{}, .optimize = .Debug },
        // Linux x86_64 build optimized for maximum performance
        .{ .name = "linux-fast", .query = .{ .cpu_arch = .x86_64, .os_tag = .linux, .abi = .gnu }, .optimize = .ReleaseFast },
        // WebAssembly build optimized for minimal binary size
        .{ .name = "wasi-small", .query = .{ .cpu_arch = .wasm32, .os_tag = .wasi }, .optimize = .ReleaseSmall },
    };

    // Create a top-level step that builds all target/optimize combinations
    // Users can invoke this with `zig build matrix`
    const matrix_step = b.step("matrix", "Build every target/optimize pair");

    // Track the run step for the first (host) executable to create a sanity check
    var host_run_step: ?*std.Build.Step = null;

    // Iterate through each combo to create and configure build artifacts
    for (combos, 0..) |combo, index| {
        // Resolve the target query into a concrete target specification
        // This validates the query and fills in any unspecified fields with defaults
        const resolved = b.resolveTargetQuery(combo.query);
        
        // Create a module with the resolved target and optimization settings
        // Using createModule allows precise control over compilation parameters
        const module = b.createModule(.{
            .root_source_file = b.path("matrix/app.zig"),
            .target = resolved,
            .optimize = combo.optimize,
        });

        // Create an executable artifact with a unique name for this combo
        // The name includes the combo identifier to distinguish build outputs
        const exe = b.addExecutable(.{
            .name = b.fmt("matrix-{s}", .{combo.name}),
            .root_module = module,
        });

        // Install the executable to zig-out/bin for distribution
        b.installArtifact(exe);
        
        // Add this executable's build step as a dependency of the matrix step
        // This ensures all executables are built when running `zig build matrix`
        matrix_step.dependOn(&exe.step);

        // For the first combo (assumed to be the native/host target),
        // create a run step for quick testing and validation
        if (index == 0) {
            // Create a command to run the host executable
            const run_cmd = b.addRunArtifact(exe);
            
            // Forward any command-line arguments to the executable
            if (b.args) |args| {
                run_cmd.addArgs(args);
            }
            
            // Create a dedicated step for running the host variant
            const run_step = b.step("run-host", "Run host variant for sanity checks");
            run_step.dependOn(&run_cmd.step);
            
            // Store the run step for later use in the matrix step
            host_run_step = run_step;
        }
    }

    // If a host run step was created, add it as a dependency to the matrix step
    // This ensures that building the matrix also runs a sanity check on the host executable
    if (host_run_step) |run_step| {
        matrix_step.dependOn(run_step);
    }
}

Key techniques:

  • Declare a slice of { name, query, optimize } combos up front. The queries match zig build -Dtarget semantics but stay type-checked.
  • b.resolveTargetQuery converts each query into a ResolvedTarget so the modules inherit canonical CPU/OS defaults.
  • Aggregating everything under a single matrix step keeps the CI wiring clean: invoke zig build matrix and let the step dependencies guarantee that every artifact exists.
  • Running the first (host) target as part of the matrix catches regressions without cross-runner emulation. For deeper coverage, enable b.enable_qemu / b.enable_wasmtime before calling addRunArtifact.
Running the matrix build
Shell
$ zig build --build-file 02_multi_target_matrix.zig matrix
Output (host variant)
target: x86_64-linux-gnu
optimize: Debug

Running Cross-Compiled Targets

When your matrix includes cross-compiled targets, you need external executors to actually run the binaries. The build system selects an appropriate executor automatically based on host/target compatibility:

Mermaid
flowchart TD
    Start["getExternalExecutor(host, candidate)"]
    CheckMatch{"OS + CPU\ncompatible?"}
    CheckDL{"link_libc &&\nhas dynamic_linker?"}
    DLExists{"dynamic linker\npresent on host?"}
    Native["Executor.native"]
    CheckRosetta{"macOS + arm64 host\n&& x86_64 target?"}
    Rosetta["Executor.rosetta"]
    CheckQEMU{"OS matches &&\nallow_qemu?"}
    QEMU["Executor.qemu\n(e.g. 'qemu-aarch64')"]
    CheckWasmtime{"target.isWasm() &&\nallow_wasmtime?"}
    Wasmtime["Executor.wasmtime"]
    CheckWine{"target.os == .windows\n&& allow_wine?"}
    Wine["Executor.wine"]
    CheckDarling{"target.os.isDarwin()\n&& allow_darling?"}
    Darling["Executor.darling"]
    BadDL["Executor.bad_dl"]
    BadOsCpu["Executor.bad_os_or_cpu"]
    Start --> CheckMatch
    CheckMatch -->|yes| CheckDL
    CheckMatch -->|no| CheckRosetta
    CheckDL -->|no libc| Native
    CheckDL -->|with libc| DLExists
    DLExists -->|yes| Native
    DLExists -->|no| BadDL
    CheckRosetta -->|yes| Rosetta
    CheckRosetta -->|no| CheckQEMU
    CheckQEMU -->|yes| QEMU
    CheckQEMU -->|no| CheckWasmtime
    CheckWasmtime -->|yes| Wasmtime
    CheckWasmtime -->|no| CheckWine
    CheckWine -->|yes| Wine
    CheckWine -->|no| CheckDarling
    CheckDarling -->|yes| Darling
    CheckDarling -->|no| BadOsCpu

Enable emulators by setting b.enable_qemu = true or b.enable_wasmtime = true in your build script before calling addRunArtifact. On macOS ARM hosts, x86_64 targets automatically use Rosetta 2. For Linux cross-architecture testing, QEMU user-mode emulation runs ARM/RISC-V/MIPS binaries transparently, provided the OS matches. WASI targets need Wasmtime, and Windows binaries on Linux can use Wine. If no executor is available, the run step fails with Executor.bad_os_or_cpu; catch this early by exercising your matrix coverage on a representative CI host.
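
A sketch of that opt-in, assuming a wasi_exe artifact produced by a matrix like the one above and a hypothetical "smoke" step name:

Zig
// Opt into external executors before creating run steps. The flags only take
// effect if the corresponding tool is installed on the host.
b.enable_qemu = true; // run foreign-arch Linux binaries via qemu-user
b.enable_wasmtime = true; // run wasm32-wasi binaries via wasmtime

const run_wasi = b.addRunArtifact(wasi_exe);
// Do not fail the whole build on hosts where no executor is available.
run_wasi.skip_foreign_checks = true;

const smoke_step = b.step("smoke", "Run the WASI artifact under an emulator");
smoke_step.dependOn(&run_wasi.step);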

Cross targets that depend on native system libraries (glibc, for example) need an appropriate sysroot. Populate ZIG_LIBC or configure b.libc_file before adding those combos to a production pipeline.

Vendored vs. Registry Dependencies

  • Registry-first approach: keep the build.zig.zon hashes authoritative, then register each dependency module via b.dependency() and module.addImport(). (See Chapter 24.)
  • Vendor-first approach: drop the sources into deps/<name>/ and wire them up with b.addAnonymousModule or b.createModule. Record the provenance in module-graph.txt so collaborators know which code is pinned locally.
  • Whichever strategy you choose, encode a policy in CI: a step that fails when zig-out/graph/module-graph.txt changes unexpectedly (see the sketch below), or a lint that checks for LICENSE files in vendored directories.
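
As a sketch of the first policy, a CI job can diff the emitted map against a committed copy; the docs/ path is hypothetical and the build-file name follows the workspace example:

Shell
# Regenerate the map, then fail the job if it drifted from the committed copy.
zig build --build-file 01_workspace_build.zig graph
diff -u docs/module-graph.txt zig-out/graph/module-graph.txt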

CI Scenarios and Automation Hooks

Step Dependencies in Practice

CI pipelines benefit from understanding how build steps compose. The diagram below shows a real-world step dependency graph from the Zig compiler's own build system:

Mermaid
graph TB
    subgraph "Install step (default)"
        INSTALL["b.getInstallStep()"]
    end
    subgraph "Compiler artifacts"
        EXE_STEP["exe.step<br/>(compile the compiler)"]
        INSTALL_EXE["install_exe.step<br/>(install the binary)"]
    end
    subgraph "Documentation"
        LANGREF["generateLangRef()"]
        INSTALL_LANGREF["install_langref.step"]
        STD_DOCS_GEN["autodoc_test"]
        INSTALL_STD_DOCS["install_std_docs.step"]
    end
    subgraph "Library files"
        LIB_FILES["installDirectory(lib/)"]
    end
    subgraph "Test steps"
        TEST["test step"]
        FMT["test-fmt step"]
        CASES["test-cases step"]
        MODULES["test-modules step"]
    end
    INSTALL --> INSTALL_EXE
    INSTALL --> INSTALL_LANGREF
    INSTALL --> LIB_FILES
    INSTALL_EXE --> EXE_STEP
    INSTALL_LANGREF --> LANGREF
    INSTALL --> INSTALL_STD_DOCS
    INSTALL_STD_DOCS --> STD_DOCS_GEN
    TEST --> EXE_STEP
    TEST --> FMT
    TEST --> CASES
    TEST --> MODULES
    CASES --> EXE_STEP
    MODULES --> EXE_STEP

Notice how the default install step (zig build) depends on the binary install, documentation, and library files, but not on the tests. The test step, meanwhile, depends on compilation plus all test sub-steps. This separation lets CI run zig build for release artifacts and zig build test for verification as parallel jobs. Thanks to content-addressed caching, each step executes only when its dependencies change. You can inspect this graph locally with zig build --verbose, or by adding a custom step that dumps the dependencies.
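
The same separation is easy to reproduce in your own build.zig. A minimal sketch, reusing app_module from the workspace example; the executable name is illustrative:

Zig
// The install graph: `zig build` compiles and installs, but never runs tests.
const exe = b.addExecutable(.{ .name = "tool", .root_module = app_module });
b.installArtifact(exe);

// The test graph: `zig build test` compiles and runs tests, but installs nothing.
const unit_tests = b.addTest(.{ .root_module = app_module });
const run_unit_tests = b.addRunArtifact(unit_tests);

const test_step = b.step("test", "Run unit tests");
test_step.dependOn(&run_unit_tests.step);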

Automation Patterns

  • Artifact verification: add a zig build graph job that uploads module-graph.txt alongside the compiled binaries. Consumers can diff namespaces between releases.
  • Matrix expansion: parameterize the combos array through build options (-Dinclude-windows=true). Use b.option(bool, "include-windows", …) so CI can toggle extra targets without editing source.
  • Security posture: run zig build --fetch ahead of matrix runs so the cache is populated before cross jobs go offline. (See Chapter 24.)
  • Reproducibility: teach CI to run zig build install twice and assert that no files change between the two runs, as in the sketch below. Because std.Build respects content hashes, the second invocation should be a no-op unless inputs changed.
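
A sketch of that reproducibility check as a shell fragment; it assumes sha256sum is available and zig-out starts clean:

Shell
# Build twice and assert the installed tree is byte-for-byte identical.
zig build install
find zig-out -type f -exec sha256sum {} + | sort > first.sum

zig build install
find zig-out -type f -exec sha256sum {} + | sort > second.sum

# A non-empty diff fails the job.
diff first.sum second.sum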

Advanced Test Organization

For comprehensive projects, organizing tests into categories and applying a matrix requires careful step composition. The diagram below shows a production-grade test hierarchy:

Mermaid
graph TB
    subgraph "Test steps"
        TEST_STEP["test step<br/>(umbrella step)"]
        FMT["test-fmt<br/>format checks"]
        CASES["test-cases<br/>compiler test cases"]
        MODULES["test-modules<br/>module tests per target"]
        UNIT["test-unit<br/>compiler unit tests"]
        STANDALONE["standalone tests"]
        CLI["CLI tests"]
        STACK_TRACE["stack trace tests"]
        ERROR_TRACE["error trace tests"]
        LINK["link tests"]
        C_ABI["C ABI tests"]
        INCREMENTAL["test-incremental<br/>incremental compilation"]
    end
    subgraph "Module tests"
        BEHAVIOR["behavior tests<br/>test/behavior.zig"]
        COMPILER_RT["compiler_rt tests<br/>lib/compiler_rt.zig"]
        ZIGC["zigc tests<br/>lib/c.zig"]
        STD["std tests<br/>lib/std/std.zig"]
        LIBC_TESTS["libc tests"]
    end
    subgraph "Test configuration"
        TARGET_MATRIX["test_targets array<br/>different architectures<br/>different OSes<br/>different ABIs"]
        OPT_MODES["optimize modes:<br/>Debug, ReleaseFast<br/>ReleaseSafe, ReleaseSmall"]
        FILTERS["test-filter<br/>test-target-filter"]
    end
    TEST_STEP --> FMT
    TEST_STEP --> CASES
    TEST_STEP --> MODULES
    TEST_STEP --> UNIT
    TEST_STEP --> STANDALONE
    TEST_STEP --> CLI
    TEST_STEP --> STACK_TRACE
    TEST_STEP --> ERROR_TRACE
    TEST_STEP --> LINK
    TEST_STEP --> C_ABI
    TEST_STEP --> INCREMENTAL
    MODULES --> BEHAVIOR
    MODULES --> COMPILER_RT
    MODULES --> ZIGC
    MODULES --> STD
    TARGET_MATRIX --> MODULES
    OPT_MODES --> MODULES
    FILTERS --> MODULES

The umbrella test step aggregates every test category, letting you run the full suite with zig build test. Individual categories can be invoked on their own (zig build test-fmt, zig build test-modules) for faster iteration. Note that only the module tests receive the matrix configuration; format checks and CLI tests do not vary by target. Use b.option([]const u8, "test-filter", …) to let CI run subsets, and apply optimize modes selectively based on the kind of test. This pattern scales to hundreds of test files while keeping build times manageable through parallel execution and caching.
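
A condensed sketch of that shape: one umbrella test step fanning out over optimize modes, with an optional -Dtest-filter. The source path and the selected modes are illustrative, and the filters field follows recent std.Build.addTest options:

Zig
const maybe_filter = b.option([]const u8, "test-filter", "Only run tests matching this string");
const filters: []const []const u8 = if (maybe_filter) |f|
    b.allocator.dupe([]const u8, &.{f}) catch @panic("OOM")
else
    &.{};

const test_step = b.step("test", "Run the full test suite");
const host = b.resolveTargetQuery(.{});

// Only the module tests get the matrix; a format or CLI check would be a
// single extra dependency on test_step rather than part of this loop.
for ([_]std.builtin.OptimizeMode{ .Debug, .ReleaseSafe, .ReleaseFast }) |mode| {
    const unit = b.addTest(.{
        .root_module = b.createModule(.{
            .root_source_file = b.path("src/main.zig"),
            .target = host,
            .optimize = mode,
        }),
        .filters = filters,
    });
    test_step.dependOn(&b.addRunArtifact(unit).step);
}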

Caveats, Alternatives, Edge Cases

  • b.addModule registers a name globally for the current build graph; b.createModule keeps the module private. Mixing them up leads to surprise imports or missing symbols. (See Chapter 25.)
  • Named write files respect the cache. If you need to regenerate them from scratch, delete .zig-cache; otherwise the step may fool you into thinking a change took effect when it actually hit the cache.
  • When iterating on the matrix, always prune stale binaries with zig build uninstall (or a custom Step.RemoveDir) to avoid confusion across versions.

Under the Hood: Dependency Tracking

The build system's caching and incremental behavior rests on the compiler's sophisticated dependency-tracking infrastructure. Understanding it explains why cached builds are so fast and why certain changes trigger broader rebuilds than expected.

Mermaid
graph TB
    subgraph "InternPool - dependency storage"
        SRCHASHDEPS["src_hash_deps<br/>Map: TrackedInst.Index → DepEntry.Index"]
        NAVVALDEPS["nav_val_deps<br/>Map: Nav.Index → DepEntry.Index"]
        NAVTYDEPS["nav_ty_deps<br/>Map: Nav.Index → DepEntry.Index"]
        INTERNEDDEPS["interned_deps<br/>Map: Index → DepEntry.Index"]
        ZONFILEDEPS["zon_file_deps<br/>Map: FileIndex → DepEntry.Index"]
        EMBEDFILEDEPS["embed_file_deps<br/>Map: EmbedFile.Index → DepEntry.Index"]
        NSDEPS["namespace_deps<br/>Map: TrackedInst.Index → DepEntry.Index"]
        NSNAMEDEPS["namespace_name_deps<br/>Map: NamespaceNameKey → DepEntry.Index"]
        FIRSTDEP["first_dependency<br/>Map: AnalUnit → DepEntry.Index"]
        DEPENTRIES["dep_entries<br/>ArrayListUnmanaged<DepEntry>"]
        FREEDEP["free_dep_entries<br/>ArrayListUnmanaged<DepEntry.Index>"]
    end
    subgraph "DepEntry structure"
        DEPENTRY["DepEntry<br/>{depender: AnalUnit,<br/>next_dependee: DepEntry.Index.Optional,<br/>next_depender: DepEntry.Index.Optional}"]
    end
    SRCHASHDEPS --> DEPENTRIES
    NAVVALDEPS --> DEPENTRIES
    NAVTYDEPS --> DEPENTRIES
    INTERNEDDEPS --> DEPENTRIES
    ZONFILEDEPS --> DEPENTRIES
    EMBEDFILEDEPS --> DEPENTRIES
    NSDEPS --> DEPENTRIES
    NSNAMEDEPS --> DEPENTRIES
    FIRSTDEP --> DEPENTRIES
    DEPENTRIES --> DEPENTRY
    FREEDEP -.->|"reuse indices"| DEPENTRIES

The compiler tracks dependencies at several granularities: source-file hashes (src_hash_deps), nav values (nav_val_deps), types (nav_ty_deps), interned constants, ZON files, embedded files, and namespace membership. All of these maps point into a shared dep_entries array of DepEntry structs that form linked lists. Each entry participates in two lists: one linking every analysis unit that depends on a particular dependee (walked during invalidation), and one linking all dependees of a particular analysis unit (walked during cleanup). When you modify a source file, the compiler hashes it, looks up the dependers in src_hash_deps, and marks only those analysis units as outdated. This fine-grained tracking is why changing a private function in one file does not rebuild unrelated modules: the dependency graph captures exactly what actually depends on what. The build system layers content addressing on top of this infrastructure: step outputs are cached by their input hashes and reused whenever the inputs have not changed.

Exercises

  • Extend 01_workspace_build.zig so the graph step emits both a human-readable table and a JSON document. Hint: call graph_files.add("module-graph.json", …) with output produced via std.json. See json.zig.
  • Add a -Dtarget-filter option to 02_multi_target_matrix.zig that restricts matrix execution to a comma-separated allowlist. Use std.mem.splitScalar to parse the value. (See Chapter 22.)
  • Pull in a registry dependency via b.dependency("logging", .{}) and expose it to the workspace with module.addImport("logging", dep.module("logging")). Record the new namespace in module-graph.txt.

Caveats, Alternatives, Edge Cases

  • Large workspaces can outgrow the default install layout. Use b.setInstallPrefix or b.setLibDir to route outputs into per-target directories before adding artifacts.
  • On Windows, resolveTargetQuery needs abi = .msvc if you expect MSVC-compatible artifacts; the default .gnu ABI produces MinGW binaries.
  • If you hand anonymous modules to dependencies, remember that they are not deduplicated. Reuse the same b.createModule instance when several artifacts need the same vendored code.

Summary

  • Workspaces stay predictable when you register every module explicitly and document the mapping through named write files.
  • resolveTargetQuery and iteration-friendly combos let you scale out to multiple targets without copy/pasting build logic.
  • CI jobs benefit from std.Build primitives: steps spell out dependencies, run artifacts gate sanity checks, and named artifacts capture reproducible metadata.

Combined with Chapters 22 through 25, you now have the tools to craft deterministic Zig build graphs that scale across packages, targets, and release channels.
