Overview
The previous chapter focused on formatting and text, while other chapters introduced basic printing with simple buffered output. This chapter dives into Zig 0.15.2’s streaming primitives: the modern std.Io.Reader / std.Io.Writer interfaces and their supporting adapters (limited views, discarding, duplication, simple counting). These abstractions intentionally expose buffer internals so performance-critical paths (formatting, delimiter scanning, hashing) remain deterministic and allocation-free. Unlike opaque I/O layers found in other languages, Zig’s adapters are ultra-thin—often plain structs whose methods manipulate explicit slices and indices.
You will learn how to create fixed in-memory writers, migrate legacy std.io.fixedBufferStream usage, cap reads with limited, duplicate an input stream (tee), discard output efficiently, and assemble pipelines (e.g., delimiter processing) without hidden allocations. Each example is small, self-contained, and demonstrates a single concept you can reuse when connecting to files, sockets, or future async abstractions.
Learning Goals
- Construct fixed-buffer writers/readers with Writer.fixed / Reader.fixed and inspect buffered data.
- Migrate from legacy std.io.fixedBufferStream to the newer APIs safely.
- Enforce byte limits using Reader.limited to guard parsers against runaway inputs.
- Implement duplication (tee) and discard patterns without extra allocations.
- Stream delimiter-separated data using takeDelimiter and related helpers for line processing.
- Reason about when buffered vs. direct streaming is chosen and its performance implications.
Fundamentals: Fixed Writers & Readers
The cornerstone abstractions are value types representing the state of a stream endpoint. A fixed writer buffers bytes until either full or flushed. A fixed reader exposes slices of its buffered region and offers peek/take semantics, facilitating incremental parsing without copying.
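Before turning to writers, here is a minimal sketch of those peek/take semantics on the reader side, assuming the peek, take, and toss methods as present in Zig 0.15.2:

const std = @import("std");
// Minimal sketch: peek inspects buffered bytes without consuming them,
// take consumes and returns them, toss skips ahead without copying.
pub fn main() !void {
    var r: std.Io.Reader = .fixed("zig streams");
    const head = try r.peek(3); // look ahead, nothing consumed
    std.debug.print("peeked: {s}\n", .{head});
    const word = try r.take(3); // consume the same three bytes
    std.debug.print("taken: {s}\n", .{word});
    r.toss(1); // skip the space
    std.debug.print("rest: {s}\n", .{r.buffered()});
}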
Basic Fixed Writer (reader_writer_basics.zig)
Create an in-memory writer, emit formatted content, then inspect and forward the buffered slice. This mirrors earlier formatting patterns but without allocating an ArrayList or dealing with dynamic capacity.
const std = @import("std");
// Demonstrates basic buffered writing using the new std.Io.Writer API,
// then inspecting the buffered bytes and printing them via std.debug.print.
pub fn main() !void {
var buf: [128]u8 = undefined;
// New streaming Writer backed by a fixed buffer. Writes accumulate until flushed/consumed.
var w: std.Io.Writer = .fixed(&buf);
try w.print("Header: {s}\n", .{"I/O adapters"});
try w.print("Value A: {d}\n", .{42});
try w.print("Value B: {x}\n", .{0xdeadbeef});
// Grab buffered bytes and print them via std.debug.print (stderr)
const buffered = w.buffered();
std.debug.print("{s}", .{buffered});
}
$ zig run reader_writer_basics.zig
Header: I/O adapters
Value A: 42
Value B: deadbeef

The buffer is user-owned; you decide its lifetime and size budget. No implicit heap allocation occurs—critical for tight loops or embedded targets.
Migrating from std.io.fixedBufferStream (fixed_buffer_stream.zig)
Legacy fixedBufferStream (lowercase io) returns wrapper types with reader() / writer() methods. Zig 0.15.2 retains them for compatibility but prefers std.Io.Writer.fixed / Reader.fixed for uniform adapter composition.
const std = @import("std");
// Demonstrates legacy fixedBufferStream (deprecated in favor of std.Io.Writer.fixed)
// to highlight migration paths.
pub fn main() !void {
var backing: [64]u8 = undefined;
var fbs = std.io.fixedBufferStream(&backing);
const w = fbs.writer();
try w.print("Legacy buffered writer example: {s} {d}\n", .{ "answer", 42 });
try w.print("Capacity used: {d}/{d}\n", .{ fbs.getWritten().len, backing.len });
// Echo buffer contents to stdout.
std.debug.print("{s}", .{fbs.getWritten()});
}
$ zig run fixed_buffer_stream.zig
Legacy buffered writer example: answer 42
Capacity used: 42/64

Prefer the new capital Io APIs for future interoperability; fixedBufferStream may eventually phase out as more adapters target the modern interfaces.
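For comparison, here is the same program expressed against the new API. This is a sketch of the migration, assuming fbs.getWritten() maps directly to w.buffered() for in-memory use:

const std = @import("std");
// Migration sketch: the legacy fixedBufferStream program rewritten with
// std.Io.Writer.fixed; getWritten() becomes buffered().
pub fn main() !void {
    var backing: [64]u8 = undefined;
    var w: std.Io.Writer = .fixed(&backing);
    try w.print("Migrated buffered writer example: {s} {d}\n", .{ "answer", 42 });
    std.debug.print("{s}", .{w.buffered()});
}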
Limiting Input (limited_reader.zig)
Wrap a reader with a hard cap to defend against oversized inputs (e.g., header sections, magic prefixes). Once the limit exhausts, subsequent reads indicate end of stream early, protecting downstream logic.
const std = @import("std");
// Reads at most N bytes from an input using std.Io.Reader.Limited
pub fn main() !void {
const input = "Hello, world!\nRest is skipped";
var r: std.Io.Reader = .fixed(input);
var tmp: [8]u8 = undefined; // buffer backing the limited reader
var limited = r.limited(.limited(5), &tmp); // allow only first 5 bytes
var out_buf: [64]u8 = undefined;
var out: std.Io.Writer = .fixed(&out_buf);
// Stream everything the limited reader allows into out (it stops at the 5-byte cap)
_ = limited.interface.streamRemaining(&out) catch |err| {
switch (err) {
error.WriteFailed, error.ReadFailed => unreachable,
}
};
std.debug.print("{s}\n", .{out.buffered()});
}
$ zig run limited_reader.zig
Hello

Use limited(.limited(N), tmp_buffer) for protocol guards; parsing functions can assume bounded consumption and bail out cleanly on premature end.
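As a concrete guard pattern, the sketch below wraps a hypothetical readMagic helper (my name, not a std API) around a limited view so the check can never consume more than four bytes, assuming Reader.readSliceAll as in 0.15.2:

const std = @import("std");
// Hypothetical protocol guard: the limited view caps consumption at 4 bytes
// no matter how the underlying stream behaves.
fn readMagic(r: *std.Io.Reader) ![4]u8 {
    var tmp: [4]u8 = undefined;
    var guarded = r.limited(.limited(4), &tmp);
    var magic: [4]u8 = undefined;
    try guarded.interface.readSliceAll(&magic);
    return magic;
}
pub fn main() !void {
    var r: std.Io.Reader = .fixed("MAGCpayload follows...");
    const magic = try readMagic(&r);
    std.debug.print("magic: {s}\n", .{&magic});
}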
Adapters & Patterns
Higher-level behaviors (counting, tee, discard, delimiter streaming) emerge from simple loops over buffered() and small helper functions rather than heavy inheritance or trait chains.
Counting Bytes (Buffered Length)
For many scenarios, you only need the number of bytes produced so far—reading the writer’s current buffered slice length suffices, avoiding a dedicated counting adapter.
const std = @import("std");
// Simple counting example using Writer.fixed and buffered length.
pub fn main() !void {
var buf: [128]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try w.print("Counting: {s} {d}\n", .{"bytes", 123});
try w.print("And more\n", .{});
const written = w.buffered().len;
std.debug.print("Total bytes logically written: {d}\n", .{written});
}
$ zig run counting_writer.zig
Total bytes logically written: 29

For streaming sinks where buffer length resets after flush, integrate a custom update function (see hashing writer design) to accumulate totals across flush boundaries.
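One way to do that without a dedicated adapter is plain manual accumulation; this sketch records buffered().len before each consume so the total survives flush/consume boundaries:

const std = @import("std");
// Manual accumulation sketch: buffered().len resets after consumeAll, so
// capture it first and add it to a running total.
pub fn main() !void {
    var buf: [16]u8 = undefined;
    var w: std.Io.Writer = .fixed(&buf);
    var total: usize = 0;
    for (0..3) |i| {
        try w.print("chunk {d}\n", .{i});
        total += w.buffered().len; // capture before discarding
        _ = w.consumeAll();
    }
    std.debug.print("total across consumes: {d}\n", .{total});
}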
Discarding Output (discarding_writer.zig)
Benchmarks and dry-runs often need to measure formatting or transformation cost without retaining the result. Consuming the buffer zeros its length; subsequent writes continue normally.
const std = @import("std");
// Demonstrates discarding buffered output by consuming it in place
// (see also the dedicated std.Io.Writer.Discarding adapter).
pub fn main() !void {
var buf: [32]u8 = undefined;
var w: std.Io.Writer = .fixed(&buf);
try w.print("Ephemeral output: {d}\n", .{999});
// Discard content by consuming buffered bytes
_ = std.Io.Writer.consumeAll(&w);
// Show buffer now empty
std.debug.print("Buffer after consumeAll length: {d}\n", .{w.buffered().len});
}
$ zig run discarding_writer.zig
Buffer after consumeAll length: 0

consumeAll is a structural no-allocation operation; it simply adjusts end and (if needed) shifts remaining bytes. Cheap enough for tight inner loops.
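The standard library also ships a dedicated adapter for this. The sketch below assumes std.Io.Writer.Discarding’s init(buffer) constructor and its count field as shipped in 0.15.2; verify against your std version before relying on it:

const std = @import("std");
// Sketch of the dedicated Discarding adapter (API details assumed from
// Zig 0.15.2): writes drain into the void while a byte count accumulates.
pub fn main() !void {
    var scratch: [32]u8 = undefined;
    var discarding: std.Io.Writer.Discarding = .init(&scratch);
    const w = &discarding.writer;
    try w.print("never retained: {d}\n", .{999});
    try w.flush(); // push buffered bytes through the discarding drain
    std.debug.print("bytes discarded: {d}\n", .{discarding.count});
}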
Tee / Duplication
Duplicating a stream ("teeing") can be built manually: peek, write to both targets, toss. This avoids intermediary heap buffers and works for finite or pipelined inputs.
const std = @import("std");
fn tee(r: *std.Io.Reader, a: *std.Io.Writer, b: *std.Io.Writer) !void {
while (true) {
const chunk = r.peekGreedy(1) catch |err| switch (err) {
error.EndOfStream => break,
error.ReadFailed => return err,
};
try a.writeAll(chunk);
try b.writeAll(chunk);
r.toss(chunk.len);
}
}
pub fn main() !void {
const input = "tee me please";
var r: std.Io.Reader = .fixed(input);
var abuf: [64]u8 = undefined;
var bbuf: [64]u8 = undefined;
var a: std.Io.Writer = .fixed(&abuf);
var b: std.Io.Writer = .fixed(&bbuf);
try tee(&r, &a, &b);
std.debug.print("A: {s}\nB: {s}\n", .{ a.buffered(), b.buffered() });
}
$ zig run tee_stream.zig
A: tee me please
B: tee me please

Always peekGreedy(1) (or an appropriate size) before writing; failing to ensure buffered content can cause needless underlying reads or premature termination.
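For very large inputs, the same loop can cap its chunk size to bound copy granularity (see the locality caveat at the end of this chapter); the 8-byte cap below is purely illustrative:

const std = @import("std");
// Chunked tee sketch: slice the peeked region to at most `max` bytes per
// iteration so each copy stays cache-friendly.
fn teeChunked(r: *std.Io.Reader, a: *std.Io.Writer, b: *std.Io.Writer, max: usize) !void {
    while (true) {
        const avail = r.peekGreedy(1) catch |err| switch (err) {
            error.EndOfStream => break,
            error.ReadFailed => return err,
        };
        const chunk = avail[0..@min(avail.len, max)];
        try a.writeAll(chunk);
        try b.writeAll(chunk);
        r.toss(chunk.len);
    }
}
pub fn main() !void {
    var r: std.Io.Reader = .fixed("tee me in small chunks");
    var abuf: [64]u8 = undefined;
    var bbuf: [64]u8 = undefined;
    var a: std.Io.Writer = .fixed(&abuf);
    var b: std.Io.Writer = .fixed(&bbuf);
    try teeChunked(&r, &a, &b, 8);
    std.debug.print("A: {s}\nB: {s}\n", .{ a.buffered(), b.buffered() });
}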
Delimiter Streaming Pipeline
Line- or record-based protocols benefit from takeDelimiter, which returns slices excluding the delimiter. Loop until null to process all logical lines without copying or allocation.
const std = @import("std");
// Demonstrates composing Reader -> Writer pipeline with delimiter streaming.
pub fn main() !void {
const data = "alpha\nbeta\ngamma\n";
var r: std.Io.Reader = .fixed(data);
var out_buf: [128]u8 = undefined;
var out: std.Io.Writer = .fixed(&out_buf);
while (true) {
// Stream one line (excluding the delimiter) then print processed form
const line_opt = r.takeDelimiter('\n') catch |err| switch (err) {
error.StreamTooLong => unreachable,
error.ReadFailed => return err,
};
if (line_opt) |line| {
try out.print("Line({d}): {s}\n", .{ line.len, line });
} else break;
}
std.debug.print("{s}", .{out.buffered()});
}
$ zig run stream_pipeline.zig
Line(5): alpha
Line(4): beta
Line(5): gamma

takeDelimiter yields null after the final segment—even if the underlying data ends with a delimiter—allowing simple termination checks without extra state.
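A quick sketch makes that termination rule concrete, assuming the takeDelimiter semantics described above: inputs with and without a trailing delimiter yield the same segment count:

const std = @import("std");
// Both inputs produce two segments; the trailing '\n' in the first does not
// create an extra empty segment.
pub fn main() !void {
    const inputs = [_][]const u8{ "a\nb\n", "a\nb" };
    for (inputs, 0..) |data, i| {
        var r: std.Io.Reader = .fixed(data);
        var count: usize = 0;
        while (try r.takeDelimiter('\n')) |_| count += 1;
        std.debug.print("input {d}: {d} segments\n", .{ i, count });
    }
}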
Notes & Caveats
- Fixed buffers are finite: exceeding capacity triggers writes that may fail—choose sizes based on worst-case formatted output.
- limited enforces a hard ceiling; any remainder of the original stream remains unread (preventing over-read vulnerabilities).
- Delimiter streaming requires nonzero buffer capacity; extremely tiny buffers can degrade performance due to frequent underlying reads.
- Mixing legacy std.io.fixedBufferStream and new std.Io.* is safe, but prefer consistency for future maintenance.
- Counting via buffered().len excludes flushed data—use a persistent accumulator if you flush mid-pipeline.
Exercises
- Implement a simple line counter that aborts if any single line exceeds 256 bytes using limited wrappers.
- Build a tee that also computes a SHA-256 hash of all streamed bytes using Hasher.update from the hashing writer adapter.
- Write a delimiter- and limit-based reader that extracts only the first M CSV fields from large records without reading the entire line.
- Extend the counting example to track both logical (post-format) and raw content length when using {any} formatting.
Caveats, Alternatives, Edge Cases
- Zero-capacity writers are legal but will immediately force drains—avoid for performance unless intentionally testing error paths.
- A tee loop that copies very large buffered chunks may monopolize cache; consider chunking for huge streams to improve locality.
- takeDelimiter treats end-of-stream similarly to a delimiter; if you must distinguish trailing empty segments, track whether the last byte processed was the delimiter.
- Direct mixing with filesystem APIs (Chapter 28) introduces platform-specific buffering; re-validate limits when wrapping OS file descriptors.
- If future async I/O introduces suspend points, adapters that rely on tight peek/toss loops must ensure invariants across yields—document assumptions early.