### Description
### Go version

```
go version go1.24.2
```
### What did you do?
I read the current b.Loop API doc and draft b.Loop blog. I also used gopls.
### What did you see happen?
Currently, the API docs and the draft b.Loop blog seem to recommend b.Loop as a better approach than the older approach, but seem to do so without mentioning some of the potential drawbacks. gopls seems to also recommend switching over all benchmarks as part of its modernizer work.
I'm also seeing people on BlueSky and elsewhere who seem to recommend universally converting all benchmarks over to b.Loop, with other people then agreeing with that recommendation without discussion of the pros and cons.
### What did you expect to see?
As I understand it, the current implementation prevents inlining of any function called within the for b.Loop() loop. That means arguments to those functions must be treated as ordinary function parameters within the non-inlined function (and hence can't be constant folded away, etc.). Similarly, the return values of the non-inlined function can't be entirely optimized away, so whatever work the function does to generate those return values also can't be optimized away.
However, it seems plausible that it might be desirable to change the exact implementation in the future. For example, if #61179 is accepted, it might be desirable to use testing.Keep in the implementation of b.Loop; even if testing.Keep is not used directly in the implementation, there is at least some chance it would make sense to describe b.Loop in terms of testing.Keep semantics.
Or maybe that's not what ends up happening, but it could be worthwhile to keep that door open.
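To make that concrete, here is a rough sketch of what describing b.Loop in terms of testing.Keep semantics might look like. This is purely hypothetical: testing.Keep does not exist, and the signature assumed here (roughly func Keep[T any](v T) T, per the #61179 discussion) could change or never land.

```go
// Hypothetical sketch only: testing.Keep is a proposal (#61179), not an
// existing API. The docs could perhaps specify that a loop body like
//
//	for b.Loop() {
//		f(x)
//	}
//
// behaves as if it were written as:
for b.Loop() {
	testing.Keep(f(testing.Keep(x)))
}
```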
The current API doc says:
> The compiler never optimizes away calls to functions within the body of a "for b.Loop() { ... }" loop. This prevents surprises that can otherwise occur if the compiler determines that the result of a benchmarked function is unused.
I wonder if it might make sense to insert the word "currently" or hint that it might change in the future, or maybe imply that the compiler won't optimize away the work of the function, or otherwise tweak the language slightly. The 1.24 release notes use different language that seems better to me:
> Function call parameters and results are kept alive, preventing the compiler from fully optimizing away the loop body.
That said, maybe the current API text is ambiguous enough as it stands, and it's not 100% clear to me whether that text is intended to specifically describe inlining. (For example, if I were talking to someone and wanted to point to a function that I think would be inlined, I don't think I would say "the compiler will optimize away calls to that function".)
Separately, while I think b.Loop is an excellent step forward, I am hoping the implementation does change a bit. For me personally, I have not yet embraced b.Loop for new or existing benchmarks because of its current implementation.
For example, in many cases, I'm relying on inlining to happen so that something can be stack allocated.
To pick a trivial but concrete case, suppose you have something like:
```go
// Sample function that relies on mid-stack inlining so that the slice is not always heap allocated.
// (See for example https://words.filippo.io/efficient-go-apis-with-the-inliner.)
func NewX(x int) []byte {
	out := make([]byte, 8)
	return newX(out, x)
}

//go:noinline
func newX(out []byte, x int) []byte {
	binary.LittleEndian.PutUint64(out, uint64(x))
	return out
}
```
Using the old b.N approach, the benchmark avoids the heap allocation, just as normal usage in non-test code would avoid the heap allocation:
```go
b.Run("b.N", func(b *testing.B) {
	for i := 0; i < b.N; i++ {
		// 0 heap allocs in benchmark.
		// Example of old sink-based approach.
		out := NewX(42)
		sink = out[0]
	}
})
```
In contrast, just directly using b.Loop causes a heap allocation, which does not reflect what would happen in normal non-test usage:
```go
b.Run("b.Loop-basic", func(b *testing.B) {
	for b.Loop() {
		// 1 heap alloc in benchmark because b.Loop purposefully prevents NewX from being inlined.
		// This seems to be the recommended approach according to the current docs and draft blog.
		NewX(42)
	}
})
```
I can, for example, move the function under test into a closure:
```go
b.Run("b.Loop-closure", func(b *testing.B) {
	bench := func(x int) {
		// 0 heap allocs in benchmark.
		// Closure is not inlined in b.Loop, but no use of return value,
		// so in theory could get "overly" optimized.
		NewX(x)
	}
	for b.Loop() {
		bench(42)
	}
})
```
But if I just do that, I might open myself up to some of the problems testing.Keep and b.Loop are intended to avoid, so I can use an older sink-style approach with the closure:
```go
b.Run("b.Loop-closure-with-sink", func(b *testing.B) {
	bench := func(x int) {
		// 0 heap allocs in benchmark.
		// Closure with old sink-based approach for the result.
		out := NewX(x)
		sink = out[0]
	}
	for b.Loop() {
		bench(42)
	}
})
```
Or use runtime.KeepAlive as a stand-in for testing.Keep:
```go
b.Run("b.Loop-closure-with-keepalive", func(b *testing.B) {
	bench := func(x int) {
		// 0 heap allocs in benchmark.
		// KeepAlive requires the result of NewX to be computed, I think, even if NewX is inlined.
		runtime.KeepAlive(NewX(x))
	}
	for b.Loop() {
		bench(42)
	}
})
```
All of which is to say:
- It might be worthwhile to mention somewhere that the older b.N approach can still be useful in some cases, or alternatively to mention some of the drawbacks of, and workarounds for, the current b.Loop implementation.
- It would be nice for the API docs to at least keep the door open to future adjustments, or, roughly equivalently, for someone on the core Go team to say publicly that the current language does not really tie our hands for future possible improvements. (Or maybe that statement has effectively already been made.)
As an example alternate implementation, I wonder if the compiler could take what I wrote as:
```go
for b.Loop() {
	NewX(42)
}
```
and automatically transform it into the "b.Loop-closure-with-keepalive" example just above. In that case, the bench closure would not be inlined inside the b.Loop block, but NewX could be inlined inside bench.
(I haven't thought through the implications entirely, but perhaps that would retain roughly all the good aspects of the current approach, including not being an overly complex implementation within cmd/compile, but without forcing extra heap allocations. There would still be function call overhead introduced by b.Loop compared to the older b.N approach, but that is a less dramatic impact, and it would also apply more uniformly; currently, b.Loop can introduce a lot of overhead or a little depending on the functions under test.)
Runnable playground link for above examples: https://go.dev/play/p/1bZGIS9Tcxw
CC @JunyangShao, @aclements, @zeebo