author    Michael Anthony Knyszek <mknyszek@google.com>  2025-08-19 19:29:55 +0000
committer Gopher Robot <gobot@golang.org>  2025-08-19 13:39:08 -0700
commit    ffa882059cfbfc7cd5f16c83d24775c08d63668f (patch)
tree      72a14688df1f38ad04d0e7bcf6d6aae0ae31af5c /src
parent    1f2e8e03e48597367e674138e26432345c685b1c (diff)
download  go-ffa882059cfbfc7cd5f16c83d24775c08d63668f.tar.xz
unique: deflake TestCanonMap/LoadOrStore/ConcurrentUnsharedKeys
I don't know yet what's causing this flake, but I've debugged it enough to be confident that it's not a serious issue; it appears to be a test flake. There is some path through which the tree nodes or keys might still be transiently reachable, but I don't yet know what that is. Details about what I tried and ruled out are in the code.

For #74083.

Change-Id: I97cdaf3f97e8c543fcc2ccde8b7e682893ae2f97
Reviewed-on: https://go-review.googlesource.com/c/go/+/697341
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Diffstat (limited to 'src')
-rw-r--r--  src/unique/canonmap_test.go | 19 +
1 file changed, 19 insertions(+), 0 deletions(-)
diff --git a/src/unique/canonmap_test.go b/src/unique/canonmap_test.go
index e8f56d8e00..9609d7d422 100644
--- a/src/unique/canonmap_test.go
+++ b/src/unique/canonmap_test.go
@@ -108,6 +108,25 @@ func testCanonMap(t *testing.T, newMap func() *canonMap[string]) {
wg.Wait()
}
+ // Run an extra GC cycle to de-flake. Sometimes the cleanups
+ // fail to run in time, despite drainCleanupQueue.
+ //
+ // TODO(mknyszek): Figure out why the extra GC is necessary,
+ // and what is transiently keeping the cleanups live.
+ // * I have confirmed that they are not completely stuck, and
+ // they always eventually run.
+ // * I have also confirmed it's not asynchronous preemption
+ // keeping them around (though that is a possibility).
+ // * I have confirmed that they are not simply sitting on
+ // the queue, and that drainCleanupQueue is just failing
+ // to actually empty the queue.
+ // * I have confirmed that it's not a write barrier that's
+ // keeping it alive, nor is it a weak pointer dereference
+ // (which shades the object during the GC).
+ // The corresponding objects do seem to be transiently truly
+ // reachable, but I have no idea by what path.
+ runtime.GC()
+
// Drain cleanups so everything is deleted.
drainCleanupQueue(t)