| author | David Finkel <david.finkel@gmail.com> | 2025-08-31 13:34:47 -0400 |
|---|---|---|
| committer | Damien Neil <dneil@google.com> | 2025-09-11 14:24:47 -0700 |
| commit | 0e1b98993ea6574819813cfad89d7fd1d1c47552 (patch) | |
| tree | c6190a7365fde4b0013fce0ffd1166fd77b6a469 /src/testing/loop_test.go | |
| parent | 84e9ab398438bc728683ca68485c6e89526b0441 (diff) | |
| download | go-0e1b98993ea6574819813cfad89d7fd1d1c47552.tar.xz | |
testing: exit B.Loop early upon saturation
There's a cap of 1 billion benchmark iterations because going beyond that
usually doesn't yield more useful data. Unfortunately, the existing
implementation neglected to check whether the 1e9 cap had already been
reached when it adjusted the iteration count in the B.Loop slow path
(stopOrScaleBLoop), even though hitting that cap is exactly the case in
which the loop needs to terminate early.
As a result, for _very_ cheap benchmarks (e.g. testing assembly
implementations that are just a few instructions), B.Loop would stop
increasing the number of iterations but wouldn't terminate early, so it
re-entered the slow path on _every_ iteration until the benchmark time
was exhausted.
This wasn't normally visible with the default -benchtime, but when raised
to 5s, it would cause benchmarks that took <5ns/op to be reported as
exactly 5ns/op (which looks a bit suspicious).
Notably, one can use -count to gather larger groups of runs;
golang.org/x/perf/cmd/benchstat is valuable for coalescing those runs
into more useful statistics.
Add a test which allows for fewer iterations on slow/contended platforms
but guards against reintroducing a bug of this nature.
Fixes #75210
Change-Id: Ie7f0b2e6c737b064448434f3ed565bfef8c4f020
Reviewed-on: https://go-review.googlesource.com/c/go/+/700275
Reviewed-by: Junyang Shao <shaojunyang@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Damien Neil <dneil@google.com>
Reviewed-by: Sean Liao <sean@liao.dev>
Auto-Submit: Sean Liao <sean@liao.dev>
Diffstat (limited to 'src/testing/loop_test.go')
| -rw-r--r-- | src/testing/loop_test.go | 37 |
1 file changed, 37 insertions, 0 deletions
```diff
diff --git a/src/testing/loop_test.go b/src/testing/loop_test.go
index 743cbe64f0..71dbd35dc2 100644
--- a/src/testing/loop_test.go
+++ b/src/testing/loop_test.go
@@ -7,6 +7,7 @@ package testing
 import (
 	"bytes"
 	"strings"
+	"time"
 )
 
 // See also TestBenchmarkBLoop* in other files.
@@ -75,6 +76,42 @@ func TestBenchmarkBLoop(t *T) {
 	}
 }
 
+func TestBenchmarkBLoopCheapEarlyTerminate(t *T) {
+	if Short() {
+		t.Skip("B.Loop test needs to run for > 1s to saturate 1e9 iterations")
+	}
+	runCnt := 0
+	// Set the benchmark time high enough that we're likely to hit the 1B
+	// iteration limit even on very slow hardware.
+	// (on an AMD Ryzen 5900X, this benchmark runs in just over a second)
+	//
+	// Notably, the assertions below shouldn't fail if a test-run is slow
+	// enough that it doesn't saturate the limit.
+	const maxBenchTime = time.Second * 30
+	res := Benchmark(func(b *B) {
+		// Set the benchmark time _much_ higher than required to hit 1e9 iterations.
+		b.benchTime.d = maxBenchTime
+		for b.Loop() {
+			runCnt++
+		}
+	})
+	if runCnt > maxBenchPredictIters {
+		t.Errorf("loop body ran more than max (%d) times: %d", maxBenchPredictIters, runCnt)
+		if res.T >= maxBenchTime {
+			t.Logf("cheap benchmark exhausted time budget: %s; ran for %s", maxBenchTime, res.T)
+		}
+	}
+
+	if res.N != runCnt {
+		t.Errorf("disagreeing loop counts: res.N reported %d, while b.Loop() iterated %d times", res.N, runCnt)
+	}
+
+	if res.N > maxBenchPredictIters {
+		t.Errorf("benchmark result claims more runs than max (%d) times: %d", maxBenchPredictIters, res.N)
+	}
+
+}
+
 func TestBenchmarkBLoopBreak(t *T) {
 	var bState *B
 	var bLog bytes.Buffer
```
