Overview
Zig’s approach to dynamic memory is explicit, composable, and testable. Rather than hiding allocation behind implicit globals, APIs accept a std.mem.Allocator and return ownership clearly to their caller. This chapter shows the core allocator interface (alloc, free, realloc, resize, create, destroy), introduces the most common allocator implementations (page allocator, Debug/GPA with leak detection, arenas, and fixed buffers), and establishes patterns for passing allocators through your own APIs (see Allocator.zig and heap.zig).
You’ll learn when to prefer bulk-free arenas, how to use a fixed stack buffer to eliminate heap traffic, and how to grow and shrink allocations safely. These skills underpin the rest of the book—from collections to I/O adapters—and will make the later projects both faster and more robust (see 03).
Learning Goals
- Use std.mem.Allocator to allocate, free, and resize typed slices and single items.
- Choose an allocator: page allocator, Debug/GPA (leak detection), arena, fixed buffer, or a stack-fallback composition.
- Design functions that accept an allocator and return owned memory to the caller (see 08).
The Allocator Interface
Zig’s allocator is a small, value-type interface with methods for typed allocation and explicit deallocation. The wrappers handle sentinels and alignment so you can stay at the []T level most of the time.
alloc/free, create/destroy, and sentinels
The essentials: allocate a typed slice, mutate its elements, then free. For single items, prefer create/destroy. Use allocSentinel (or dupeZ) when you need a null terminator for C interop.
const std = @import("std");
pub fn main() !void {
const allocator = std.heap.page_allocator; // OS-backed; fast & simple
// Allocate a small buffer and fill it.
const buf = try allocator.alloc(u8, 5);
defer allocator.free(buf);
for (buf, 0..) |*b, i| b.* = 'a' + @as(u8, @intCast(i));
std.debug.print("buf: {s}\n", .{buf});
// Create/destroy a single item.
const Point = struct { x: i32, y: i32 };
const p = try allocator.create(Point);
defer allocator.destroy(p);
p.* = .{ .x = 7, .y = -3 };
std.debug.print("point: (x={}, y={})\n", .{ p.x, p.y });
// Allocate a null-terminated string (sentinel). Great for C APIs.
const hello = try allocator.allocSentinel(u8, 5, 0);
defer allocator.free(hello);
@memcpy(hello[0..5], "hello");
std.debug.print("zstr: {s}\n", .{hello});
}
$ zig run alloc_free_basics.zig
buf: abcde
point: (x=7, y=-3)
zstr: hello
Prefer {s} to print []const u8 slices (no terminator required). Use allocSentinel or dupeZ when interoperating with APIs that require a trailing \0.
How the Allocator Interface Works Under the Hood
The std.mem.Allocator type is a type-erased interface: a context pointer plus a vtable of function pointers. Any allocator implementation can be passed through the same interface, giving runtime polymorphism at the cost of a single indirect call per operation.
The vtable contains four fundamental operations:
- alloc: returns a pointer to len bytes with the specified alignment, or null on failure
- resize: attempts to expand or shrink memory in place; returns bool
- remap: attempts to expand or shrink memory, allowing relocation (used by realloc)
- free: frees and invalidates a region of memory
The high-level API (create, destroy, alloc, free, realloc) wraps these vtable functions with type-safe, ergonomic methods. This two-layer design keeps allocator implementations simple while providing convenient typed allocation to users (see Allocator.zig).
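To make the two layers concrete, here is a minimal sketch (assuming the Zig 0.15.x raw-layer signatures, where alignment is passed as a std.mem.Alignment and failure is reported as a null pointer) that first calls the type-erased layer through rawAlloc/rawFree, then the typed wrapper:

```zig
const std = @import("std");

pub fn main() !void {
    const a = std.heap.page_allocator;

    // Raw layer: untyped bytes, explicit alignment, null on failure.
    if (a.rawAlloc(16, .@"8", @returnAddress())) |ptr| {
        a.rawFree(ptr[0..16], .@"8", @returnAddress());
    }

    // Typed wrapper: the same vtable underneath, but it returns a []u64
    // and surfaces failure as error.OutOfMemory.
    const slice = try a.alloc(u64, 2);
    defer a.free(slice);
}
```

Application code should almost never touch the raw layer; it exists so allocator implementations only have to provide four untyped entry points.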
Debug/GPA and Arena Allocators
For whole-program work, a Debug/GPA is the default: it tracks allocations and reports leaks at deinit(). For scoped, scratch allocations, an arena frees everything in one shot at deinit().
const std = @import("std");
pub fn main() !void {
// GeneralPurposeAllocator with leak detection on deinit.
var gpa: std.heap.GeneralPurposeAllocator(.{}) = .init;
defer {
const leaked = gpa.deinit() == .leak;
if (leaked) @panic("leak detected");
}
const alloc = gpa.allocator();
const nums = try alloc.alloc(u64, 4);
defer alloc.free(nums);
for (nums, 0..) |*n, i| n.* = @as(u64, i + 1);
var sum: u64 = 0;
for (nums) |n| sum += n;
std.debug.print("gpa sum: {}\n", .{sum});
// Arena allocator: bulk free with deinit.
var arena_inst = std.heap.ArenaAllocator.init(alloc);
defer arena_inst.deinit();
const arena = arena_inst.allocator();
const msg = try arena.dupe(u8, "temporary allocations live here");
std.debug.print("arena msg len: {}\n", .{msg.len});
}
$ zig run gpa_arena.zig
gpa sum: 10
arena msg len: 31
In Zig 0.15.x, std.heap.GeneralPurposeAllocator is a thin alias for the Debug allocator. Always check the return of deinit(): .leak indicates something wasn’t freed.
Choosing and Composing Allocators
Allocators are regular values: you can pass them, wrap them, and compose them. Two workhorse tools are the fixed buffer allocator (for stack-backed bursts of allocations) and realloc/resize for dynamic growth and shrinkage.
Wrapping Allocators for Safety and Debugging
Because allocators are just values with a common interface, you can wrap one allocator to add functionality. The std.mem.validationWrap function demonstrates this pattern by adding safety checks before delegating to an underlying allocator.
The ValidationAllocator wrapper validates that:
- Allocation sizes are greater than zero
- Returned pointers have correct alignment
- Memory lengths are valid in resize/free operations
This pattern is powerful: you can build custom allocator wrappers that add logging, metrics collection, memory limits, or other cross-cutting concerns without modifying the underlying allocator. The wrapper simply delegates to underlying_allocator after performing its checks or side effects (see mem.zig).
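As a sketch of that wrapper pattern (CountingAllocator is a hypothetical name, and the vtable signatures assume Zig 0.15.x), an allocator that counts allocations while delegating all real work might look like:

```zig
const std = @import("std");

// Hypothetical wrapper: tallies allocations, delegates everything else.
const CountingAllocator = struct {
    underlying: std.mem.Allocator,
    count: usize = 0,

    fn allocator(self: *CountingAllocator) std.mem.Allocator {
        return .{ .ptr = self, .vtable = &.{
            .alloc = allocFn,
            .resize = resizeFn,
            .remap = remapFn,
            .free = freeFn,
        } };
    }

    fn allocFn(ctx: *anyopaque, len: usize, alignment: std.mem.Alignment, ra: usize) ?[*]u8 {
        const self: *CountingAllocator = @ptrCast(@alignCast(ctx));
        self.count += 1; // the cross-cutting concern lives here
        return self.underlying.rawAlloc(len, alignment, ra);
    }
    fn resizeFn(ctx: *anyopaque, memory: []u8, alignment: std.mem.Alignment, new_len: usize, ra: usize) bool {
        const self: *CountingAllocator = @ptrCast(@alignCast(ctx));
        return self.underlying.rawResize(memory, alignment, new_len, ra);
    }
    fn remapFn(ctx: *anyopaque, memory: []u8, alignment: std.mem.Alignment, new_len: usize, ra: usize) ?[*]u8 {
        const self: *CountingAllocator = @ptrCast(@alignCast(ctx));
        return self.underlying.rawRemap(memory, alignment, new_len, ra);
    }
    fn freeFn(ctx: *anyopaque, memory: []u8, alignment: std.mem.Alignment, ra: usize) void {
        const self: *CountingAllocator = @ptrCast(@alignCast(ctx));
        self.underlying.rawFree(memory, alignment, ra);
    }
};

pub fn main() !void {
    var counter = CountingAllocator{ .underlying = std.heap.page_allocator };
    const a = counter.allocator();
    const buf = try a.alloc(u8, 16);
    a.free(buf);
    std.debug.print("allocations served: {}\n", .{counter.count});
}
```

The ValidationAllocator behind std.mem.validationWrap has the same shape, with safety checks where this sketch has a counter.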
Fixed buffer on the stack
Use a FixedBufferAllocator to get fast, zero-syscall allocations from a stack array. When you run out, you’ll get error.OutOfMemory—exactly the signal you need to fall back or trim inputs.
const std = @import("std");
pub fn main() !void {
var backing: [32]u8 = undefined;
var fba = std.heap.FixedBufferAllocator.init(&backing);
const A = fba.allocator();
// 3 small allocations should fit.
const a = try A.alloc(u8, 8);
const b = try A.alloc(u8, 8);
const c = try A.alloc(u8, 8);
_ = a;
_ = b;
_ = c;
// This one should fail (32 total capacity, 24 already used).
if (A.alloc(u8, 16)) |_| {
std.debug.print("unexpected success\n", .{});
} else |err| switch (err) {
// Allocator.Error has only one member, so no else prong is needed
// (an unreachable else prong is a compile error).
error.OutOfMemory => std.debug.print("fixed buffer OOM as expected\n", .{}),
}
}
$ zig run fixed_buffer.zig
fixed buffer OOM as expected
For a graceful fallback, compose a fixed buffer over a slower allocator with std.heap.stackFallback(N, fallback). The returned object has a .get() method that yields a fresh Allocator each time.
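A small sketch of that composition, assuming the std.heap.stackFallback API described above:

```zig
const std = @import("std");

pub fn main() !void {
    // 64-byte stack buffer in front of the page allocator.
    var sfa = std.heap.stackFallback(64, std.heap.page_allocator);
    const a = sfa.get(); // call get() each time you need an Allocator

    const small = try a.alloc(u8, 32); // fits in the stack buffer
    defer a.free(small);

    const big = try a.alloc(u8, 4096); // exceeds it; fallback serves this
    defer a.free(big);

    std.debug.print("small={} big={}\n", .{ small.len, big.len });
}
```

The caller never sees which side served a request; free routes back to whichever allocator owns the memory.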
Growing and shrinking safely with realloc/resize
realloc returns a new slice (and may move the allocation). resize attempts to change length in place and returns bool; remember to also update your slice’s len when it succeeds.
const std = @import("std");
pub fn main() !void {
var gpa: std.heap.GeneralPurposeAllocator(.{}) = .init;
defer { _ = gpa.deinit(); }
const alloc = gpa.allocator();
var buf = try alloc.alloc(u8, 4);
defer alloc.free(buf);
for (buf, 0..) |*b, i| b.* = 'A' + @as(u8, @intCast(i));
std.debug.print("len={} contents={s}\n", .{ buf.len, buf });
// Grow using realloc (may move).
buf = try alloc.realloc(buf, 8);
for (buf[4..], 0..) |*b, i| b.* = 'a' + @as(u8, @intCast(i));
std.debug.print("grown len={} contents={s}\n", .{ buf.len, buf });
// Shrink in-place using resize; remember to slice.
if (alloc.resize(buf, 3)) {
buf = buf[0..3];
std.debug.print("shrunk len={} contents={s}\n", .{ buf.len, buf });
} else {
// Fallback when in-place shrink not supported by allocator.
buf = try alloc.realloc(buf, 3);
std.debug.print("shrunk (realloc) len={} contents={s}\n", .{ buf.len, buf });
}
}
$ zig run resize_and_realloc.zig
len=4 contents=ABCD
grown len=8 contents=ABCDabcd
shrunk (realloc) len=3 contents=ABC
After resize(buf, n) == true, the old buf still has its previous len. Re-slice it (buf = buf[0..n]) so downstream code sees the new length.
How Alignment Works Under the Hood
Zig’s memory system uses a compact power-of-two alignment representation. The std.mem.Alignment enum stores alignment as a log₂ value, allowing efficient storage while providing rich utility methods.
This compact representation provides utility methods for:
- Converting to/from byte units: @"16".toByteUnits() returns 16; fromByteUnits(16) returns @"16"
- Aligning addresses forward: forward(addr) rounds up to the next aligned boundary
- Aligning addresses backward: backward(addr) rounds down to the previous aligned boundary
- Checking alignment: check(addr) returns true if the address meets the alignment requirement
- Type alignment: of(T) returns the alignment of type T
When you see alignedAlloc(T, .@"16", n) or use alignment in custom allocators, you’re working with this log₂ representation. The compact storage allows Zig to track alignment efficiently without wasting space (see mem.zig).
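A short sketch of those utilities in action (method names as listed above, assuming Zig 0.15.x):

```zig
const std = @import("std");

pub fn main() !void {
    const al: std.mem.Alignment = .@"16"; // stored internally as log2(16) = 4
    std.debug.print("bytes={} fwd={} back={} check={}\n", .{
        al.toByteUnits(), // 16
        al.forward(5), // 16: round 5 up to the next 16-byte boundary
        al.backward(37), // 32: round 37 down to the previous boundary
        al.check(32), // true: 32 is 16-byte aligned
    });
    // of(T) derives a type's alignment; fromByteUnits converts back.
    std.debug.print("of(u64)={} roundtrip={}\n", .{
        std.mem.Alignment.of(u64).toByteUnits(), // typically 8
        std.mem.Alignment.fromByteUnits(16) == al, // true
    });
}
```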
Allocator-as-parameter pattern
Your APIs should accept an allocator and return owned memory to the caller. This keeps lifetimes explicit and lets your users pick the right allocator for their context (arena for scratch, GPA for general use, fixed buffer when available).
const std = @import("std");
fn joinSep(allocator: std.mem.Allocator, parts: []const []const u8, sep: []const u8) ![]u8 {
var total: usize = 0;
for (parts) |p| total += p.len;
if (parts.len > 0) total += sep.len * (parts.len - 1);
const out = try allocator.alloc(u8, total);
var i: usize = 0;
for (parts, 0..) |p, idx| {
@memcpy(out[i .. i + p.len], p);
i += p.len;
if (idx + 1 < parts.len) {
@memcpy(out[i .. i + sep.len], sep);
i += sep.len;
}
}
return out;
}
pub fn main() !void {
// Use GPA to build a string, then free.
var gpa: std.heap.GeneralPurposeAllocator(.{}) = .init;
defer { _ = gpa.deinit(); }
const A = gpa.allocator();
const joined = try joinSep(A, &.{ "zig", "likes", "allocators" }, "-");
defer A.free(joined);
std.debug.print("gpa: {s}\n", .{joined});
// Try with a tiny fixed buffer to demonstrate OOM.
var buf: [8]u8 = undefined;
var fba = std.heap.FixedBufferAllocator.init(&buf);
const B = fba.allocator();
if (joinSep(B, &.{ "this", "is", "too", "big" }, ",")) |s| {
// If it somehow fits, free it (impossible here: the backing buffer is only 8 bytes).
B.free(s);
std.debug.print("fba unexpectedly succeeded\n", .{});
} else |err| switch (err) {
// The inferred error set is just error{OutOfMemory}; an else prong
// here would be unreachable and rejected by the compiler.
error.OutOfMemory => std.debug.print("fba: OOM as expected\n", .{}),
}
}
$ zig run allocator_parameter.zig
gpa: zig-likes-allocators
fba: OOM as expected
Returning []u8 (or []T) shifts ownership cleanly to the caller; document that the caller must free. When you can, offer a comptime-friendly variant that writes into a caller-provided buffer (see 04).
Notes & Caveats
- Free what you allocate. In this book, examples use defer allocator.free(buf) immediately after a successful alloc.
- Shrinking: prefer resize for in-place shrink; fall back to realloc if it returns false.
- Arenas: never return arena-owned memory to long-lived callers. Arena memory dies at deinit().
- GPA/Debug: check deinit() and wire leak detection into tests with std.testing (see testing.zig).
- Fixed buffers: great for bounded workloads; combine with stackFallback to degrade gracefully.
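The testing hook mentioned in the GPA/Debug bullet is a one-liner: std.testing.allocator is a Debug allocator instance that fails the test run if anything is still live when the test returns. A minimal sketch:

```zig
const std = @import("std");

test "no leaks" {
    const a = std.testing.allocator;
    const buf = try a.alloc(u8, 8);
    // Removing this defer makes `zig test` report the leak and fail.
    defer a.free(buf);
    try std.testing.expectEqual(@as(usize, 8), buf.len);
}
```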
Exercises
- Implement splitJoin(allocator, s: []const u8, needle: u8) ![]u8 that splits on a byte and rejoins with '-'. Add a variant that writes into a caller buffer.
- Rewrite one of your earlier CLI tools to accept an allocator from main and plumb it through. Try ArenaAllocator for transient buffers (see 06).
- Wrap FixedBufferAllocator with stackFallback and show how the same function succeeds on small inputs but falls back for larger ones.
Alternatives & Edge Cases
- Alignment-sensitive allocations: use alignedAlloc(T, .@"16", n) or typed helpers that propagate alignment.
- Zero-sized types and zero-length slices are supported by the interface; don’t special-case them.
- C interop: when linking libc, consider c_allocator/raw_c_allocator to match foreign allocation semantics; otherwise prefer the page allocator or GPA.
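A minimal sketch of the alignment-sensitive case (assuming the 0.15.x alignedAlloc signature, which takes a std.mem.Alignment):

```zig
const std = @import("std");

pub fn main() !void {
    const a = std.heap.page_allocator;
    // Request 64 bytes at a 16-byte boundary; the slice's pointer type
    // carries the alignment, so SIMD loads and C APIs can rely on it.
    const buf = try a.alignedAlloc(u8, .@"16", 64);
    defer a.free(buf);
    std.debug.assert(@intFromPtr(buf.ptr) % 16 == 0);
    std.debug.print("aligned ok, len={}\n", .{buf.len});
}
```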