path: root/src/runtime/mbitmap.go
Age  Commit message  Author
2025-10-02  runtime,net/http/pprof: goroutine leak detection by using the garbage collector  Vlad Saioc
Proposal #74609 Change-Id: I97a754b128aac1bc5b7b9ab607fcd5bb390058c8 GitHub-Last-Rev: 60f2a192badf415112246de8bc6c0084085314f6 GitHub-Pull-Request: golang/go#74622 Reviewed-on: https://go-review.googlesource.com/c/go/+/688335 LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com> Reviewed-by: thepudds <thepudds1460@gmail.com> Auto-Submit: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Knyszek <mknyszek@google.com> Reviewed-by: Carlos Amedee <carlos@golang.org>
2025-09-18  runtime: make explicit nil check in heapSetTypeSmallHeader  Michael Pratt
This is another case very similar to CL 684015 and #74375.

In spans with type headers, mallocgc always writes to the page before returning the allocated memory. This initial write is done by runtime.heapSetTypeSmallHeader.

Prior to the write, the compiler inserts a nil check, implemented as a dummy instruction reading from memory. On a freshly mapped page, this read triggers a page fault, mapping the zero page read-only. Immediately afterwards, the write triggers another page fault, copying to a writeable page and performing a TLB flush.

This problem is exacerbated as the process scales up. At GOMAXPROCS=6, the tile38 sweet benchmark spends around 0.1% of cycles directly handling these page faults. On the same machine at GOMAXPROCS=192, it spends about 2.7% of cycles directly handling these page faults.

Replacing the read with an explicit nil check reduces the direct cost of these page faults down to around 0.1% at GOMAXPROCS=192. There are additional positive side-effects due to reduced contention, so the overall time spent in page faults drops from around 12.8% to 6.8%. Most of the remaining time in page faults is spent on automatic NUMA page migration (completely unrelated to this issue).

Impact on the tile38 benchmark results:

                      │ baseline    │ cl704755
                      │ sec/op      │ sec/op       vs base
Tile38QueryLoad-192     1.638m ± 3%   1.494m ± 5%  -8.79% (p=0.002 n=6)

                      │ baseline          │ cl704755
                      │ average-RSS-bytes │ average-RSS-bytes  vs base
Tile38QueryLoad-192     5.384Gi ± 3%        5.399Gi ± 3%        ~ (p=0.818 n=6)

                      │ baseline       │ cl704755
                      │ peak-RSS-bytes │ peak-RSS-bytes  vs base
Tile38QueryLoad-192     5.818Gi ± 1%     5.864Gi ± 2%    ~ (p=0.394 n=6)

                      │ baseline      │ cl704755
                      │ peak-VM-bytes │ peak-VM-bytes  vs base
Tile38QueryLoad-192     7.121Gi ± 1%    7.180Gi ± 2%   ~ (p=0.818 n=6)

                      │ baseline        │ cl704755
                      │ p50-latency-sec │ p50-latency-sec  vs base
Tile38QueryLoad-192     343.2µ ± 1%       313.2µ ± 3%      -8.73% (p=0.002 n=6)

                      │ baseline        │ cl704755
                      │ p90-latency-sec │ p90-latency-sec  vs base
Tile38QueryLoad-192     1.662m ± 2%       1.603m ± 5%      ~ (p=0.093 n=6)

                      │ baseline        │ cl704755
                      │ p99-latency-sec │ p99-latency-sec  vs base
Tile38QueryLoad-192     41.56m ± 8%       35.26m ± 18%     -15.17% (p=0.026 n=6)

                      │ baseline    │ cl704755
                      │ ops/s       │ ops/s        vs base
Tile38QueryLoad-192     87.89k ± 3%   96.36k ± 4%  +9.64% (p=0.002 n=6)

Updates #74375.

Change-Id: I6a6a636c1a16261b6d5076f2e1b08524a6544d33
Reviewed-on: https://go-review.googlesource.com/c/go/+/704755
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
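A minimal sketch of the idea, outside the runtime and with an invented setHeader helper standing in for heapSetTypeSmallHeader:

    // An explicit nil check compiles to a compare-and-branch and touches
    // no memory. The compiler's implicit nil check is a dummy load, which
    // on a freshly mapped page faults once for the read (mapping the zero
    // page read-only) and again for the write.
    func setHeader(h *uintptr, v uintptr) {
        if h == nil {
            panic("nil header")
        }
        *h = v // the first access to the page is now the write itself
    }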
2025-07-29  internal/abi: move direct/indirect flag from Kind to TFlag  Keith Randall
This info makes more sense in the flags instead of as a high bit of the kind. This makes kind access simpler because we now don't need to mask anything. Cleaned up most direct field accesses to use methods instead. (reflect making new types is the only remaining direct accessor.) IfaceIndir -> !IsDirectIface everywhere. gocore has been updated to handle the new location. So has delve. TODO: any other tools need updating? Change-Id: I123f97a4d4bdd0bff1641ee7e276d1cc0bd7e8eb Reviewed-on: https://go-review.googlesource.com/c/go/+/681936 Reviewed-by: Keith Randall <khr@google.com> Reviewed-by: David Chase <drchase@google.com> LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
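Roughly what this buys accessors, sketched with illustrative constants (these are not the real internal/abi values):

    const (
        kindMask        = (1 << 5) - 1 // before: kind shared its byte with a flag
        kindDirectIface = 1 << 5       // old home of the direct-iface bit
        tflagDirect     = 1 << 3       // new home, as a TFlag bit
    )

    // Before the CL, every kind read had to strip the flag bit.
    func kindBefore(kind uint8) uint8 { return kind & kindMask }

    // After it, the kind is just the kind, and directness is a flag query.
    func kindAfter(kind uint8) uint8 { return kind }

    func isDirectIface(tflag uint8) bool { return tflag&tflagDirect != 0 }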
2025-07-25  runtime: rename scanobject to scanObject  Michael Anthony Knyszek
This is long overdue. Change-Id: I891b114cb581e82b903c20d1c455bbbdad548fe8 Reviewed-on: https://go-review.googlesource.com/c/go/+/690535 LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com> Reviewed-by: Michael Pratt <mpratt@google.com> Auto-Submit: Michael Knyszek <mknyszek@google.com>
2025-05-20  runtime: prevent unnecessary zeroing of large objects with pointers  Michael Anthony Knyszek
CL 614257 refactored mallocgc but lost an optimization: if a span for a large object is already backed by memory fresh from the OS (and thus zeroed), we don't need to zero it. CL 614257 unconditionally zeroed spans for large objects that contain pointers. This change restores the optimization from before CL 614257, which seems to matter in some real-world programs.

While we're here, let's also fix a hole wherein uninitialized memory of a large object could be observed by the conservative scanner before the object was published. The gory details are in a comment in heapSetTypeLarge. In short, this change makes span.largeType an atomic variable, such that the GC can only observe initialized memory if span.largeType != nil.

Fixes #72991.

Change-Id: I2048aeb220ab363d252ffda7d980b8788e9674dc
Reviewed-on: https://go-review.googlesource.com/c/go/+/659956
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Felix Geisendörfer <felix.geisendoerfer@datadoghq.com>
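The publish-after-initialize pattern this describes, sketched with sync/atomic rather than the runtime's internal atomics (largeSpan and typeMeta are illustrative names):

    package sketch

    import "sync/atomic"

    type typeMeta struct{ size uintptr }

    type largeSpan struct {
        largeType atomic.Pointer[typeMeta] // nil until metadata is ready
    }

    // publish stores the type pointer only after *t and the object's
    // memory are fully initialized; the atomic store is the publication
    // point.
    func (s *largeSpan) publish(t *typeMeta) {
        s.largeType.Store(t)
    }

    // A scanner that loads nil treats the object as not yet published
    // and skips it instead of reading uninitialized memory.
    func (s *largeSpan) typeForScan() *typeMeta {
        return s.largeType.Load()
    }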
2025-05-20  runtime: only update freeIndexForScan outside of the mark phase  Michael Anthony Knyszek
Currently, it's possible for asynchronous preemption to observe a partially initialized object. The sequence of events goes like this:

- The GC is in the mark phase.
- Thread T1 is allocating object O1.
- Thread T1 zeroes the allocation, runs the publication barrier, and updates freeIndexForScan. It has not yet updated the mark bit on O1.
- Thread T2 is conservatively scanning some stack frame. That stack frame has a dead pointer with the same address as O1.
- T2 picks up the pointer, checks isFree (which checks freeIndexForScan without an import barrier), and sees that O1 is allocated. It marks and queues O1.
- T2 then goes to scan O1, and observes uninitialized memory.

Although a publication barrier was executed, T2 did not have an import barrier. T2 may thus observe T1's writes to zero the object out-of-order with the write to freeIndexForScan. Normally this would be impossible if T2 got a pointer to O1 from somewhere written by T1. The publication barrier guarantees that if the read side is data-dependent on the write side then we'd necessarily observe all writes to O1 before T1 published it. However, T2 got the pointer 'out of thin air' by scanning a stack frame with a dead pointer on it.

One fix to this problem would be to add the import barrier in the conservative scanner. We would then also need to put freeIndexForScan behind the publication barrier, or make the write to freeIndexForScan exactly that barrier.

However, there's a simpler way. We don't actually care if conservative scanning observes a stale freeIndexForScan during the mark phase. Newly-allocated memory is always marked at the point of allocation (the allocate-black policy part of the GC's design). So it doesn't actually matter whether the garbage collector scans that memory or not.

This change modifies the allocator to only update freeIndexForScan outside the mark phase. This means freeIndexForScan is essentially a snapshot of freeindex at the point the mark phase started. Because there's no more race between conservative scanning and newly-allocated objects, the complicated scenario above is no longer a possibility.

One thing we do have to be careful of is other callers of isFree. Previously freeIndexForScan would always track freeindex; now it no longer does. This change thus introduces isFreeOrNewlyAllocated, which is used by the conservative scanner and uses freeIndexForScan. Meanwhile isFree goes back to using freeindex like it used to.

This change also documents the requirement on isFree that the caller must have obtained the pointer not 'out of thin air' but after the object was published. isFree is not currently used anywhere particularly sensitive (heap dump and checkmark mode, where the world is stopped in both cases), so using freeindex is both conceptually simple and also safe.

Change-Id: If66b8c536b775971203fb4358c17d711c2944723
Reviewed-on: https://go-review.googlesource.com/c/go/+/672340
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
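A simplified model of the two queries after this change; names follow the CL, but the types and bitmap handling are illustrative:

    type spanSketch struct {
        freeindex        uintptr  // advances with every allocation
        freeIndexForScan uintptr  // snapshot of freeindex at mark-phase start
        allocBits        []uint64 // allocations surviving the last sweep
    }

    func (s *spanSketch) allocBitSet(i uintptr) bool {
        return s.allocBits[i/64]&(1<<(i%64)) != 0
    }

    // isFree reports whether object i is unallocated. The caller must have
    // obtained i from a published pointer, not 'out of thin air'.
    func (s *spanSketch) isFree(i uintptr) bool {
        return i >= s.freeindex && !s.allocBitSet(i)
    }

    // isFreeOrNewlyAllocated also reports true for objects allocated since
    // the mark phase began: they were allocated black, so the conservative
    // scanner may safely ignore them instead of racing on their contents.
    func (s *spanSketch) isFreeOrNewlyAllocated(i uintptr) bool {
        return i >= s.freeIndexForScan && !s.allocBitSet(i)
    }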
2025-05-14  runtime: improve scan inner loop  Keith Randall
On every arch except amd64, it is faster to do x&(x-1) than x^(1<<n). Most archs need 3 instructions for the latter: MOV $1, R; SLL n, R; ANDN R, x. Maybe 4 if there's no ANDN. Most archs need only 2 instructions to do x&(x-1). It takes 3 on x86/amd64 because NEG only works in place. Only amd64 can do x^(1<<n) in a single instruction. (We could on 386 also, but that's currently not implemented.) Change-Id: I3b74b7a466ab972b20a25dbb21b572baf95c3467 Reviewed-on: https://go-review.googlesource.com/c/go/+/672956 Reviewed-by: Michael Knyszek <mknyszek@google.com> Reviewed-by: Keith Randall <khr@google.com> LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
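The two spellings side by side; here n is the (known) position of the lowest set bit, as in the scan loop:

    package main

    import "fmt"

    func main() {
        x := uint64(0b10110000)
        n := uint(4) // position of x's lowest set bit

        a := x & (x - 1)  // 2 instructions on most archs (3 on x86/amd64)
        b := x ^ (1 << n) // typically 3: materialize the mask, then clear

        fmt.Printf("%b %b\n", a, b) // both print 10100000
    }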
2025-05-02  runtime: mark and scan small objects in whole spans [green tea]  Michael Anthony Knyszek
Our current parallel mark algorithm suffers from frequent stalls on memory since its access pattern is essentially random. Small objects are the worst offenders, since each one forces pulling in at least one full cache line to access even when the amount to be scanned is far smaller than that. Each object also requires an independent access to per-object metadata. The purpose of this change is to improve garbage collector performance by scanning small objects in batches to obtain better cache locality than our current approach.

The core idea behind this change is to defer marking and scanning small objects, and then scan them in batches localized to a span.

This change adds scanned bits to each small object (<=512 bytes) span in addition to mark bits. The scanned bits indicate that the object has been scanned. (One way to think of them is "grey" bits and "black" bits in the tri-color mark-sweep abstraction.) Each of these spans is always 8 KiB and if they contain pointers, the pointer/scalar data is already packed together at the end of the span, allowing us to further optimize the mark algorithm for this specific case.

When the GC encounters a pointer, it first checks if it points into a small object span. If so, it is first marked in the mark bits, and then the object is queued on a work-stealing P-local queue. This object represents the whole span, and we ensure that a span can only appear at most once in any queue by maintaining an atomic ownership bit for each span. Later, when the pointer is dequeued, we scan every object with a set mark that doesn't have a corresponding scanned bit. If it turns out that was the only object in the mark bits since the last time we scanned the span, we scan just that object directly, essentially falling back to the existing algorithm. noscan objects have no scan work, so they are never queued.

Each span's mark and scanned bits are co-located together at the end of the span. Since the span is always 8 KiB in size, it can be found with simple pointer arithmetic. Next to the marks and scans we also store the size class, eliminating the need to access the span's mspan altogether.

The work-stealing P-local queue is a new source of GC work. If this queue gets full, half of it is dumped to a global linked list of spans to scan. The regular scan queues are always prioritized over this queue to allow time for darts to accumulate. Stealing work from other Ps is a last resort.

This change also adds a new debug mode under GODEBUG=gctrace=2 that dumps whole-span scanning statistics by size class on every GC cycle.

A future extension to this CL is to use SIMD-accelerated scanning kernels for scanning spans with high mark bit density.

For #19112. (Deadlock averted in GOEXPERIMENT.)
For #73581.

Change-Id: I4bbb4e36f376950a53e61aaaae157ce842c341bc
Reviewed-on: https://go-review.googlesource.com/c/go/+/658036
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
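A hedged sketch of the per-span metadata this describes; field names and the exact layout are illustrative, but the grey/black split is as the message explains:

    // An 8 KiB span of 8-byte objects holds at most 1024 of them, so two
    // 1024-bit bitmaps plus the size class fit in a small footer at the
    // end of the span, reachable from any object pointer with simple
    // pointer arithmetic.
    type spanFooter struct {
        marks     [1024 / 64]uint64 // "grey": reachable, queued for scanning
        scanned   [1024 / 64]uint64 // "black": already scanned
        sizeClass uint8             // avoids touching the mspan at all
    }

    // needsScan reports whether object i is marked but not yet scanned;
    // a whole-span pass visits every such object at once.
    func needsScan(f *spanFooter, i uint) bool {
        w, b := i/64, i%64
        return f.marks[w]&(1<<b) != 0 && f.scanned[w]&(1<<b) == 0
    }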
2025-04-23  runtime: move some malloc constants to internal/runtime/gc  Michael Anthony Knyszek
These constants are needed by some future generator programs. Change-Id: I5dccd009cbb3b2f321523bc0d8eaeb4c82e5df81 Reviewed-on: https://go-review.googlesource.com/c/go/+/655276 Reviewed-by: Cherry Mui <cherryyz@google.com> Auto-Submit: Michael Knyszek <mknyszek@google.com> LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2024-11-18  runtime: get rid of gc programs for types  Keith Randall
Instead, have the runtime build the gc bitmaps on demand at runtime. Change-Id: If7a245bc62e4bce3ce80972410b0ed307d921abe Reviewed-on: https://go-review.googlesource.com/c/go/+/616255 LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com> Reviewed-by: Cherry Mui <cherryyz@google.com> Reviewed-by: Keith Randall <khr@google.com>
2024-10-25  runtime: fix mallocgc for asan  Michael Anthony Knyszek
This change finally fully fixes mallocgc for asan after the recent refactoring. Here is everything that changed:

Fix the accounting for the alloc header; large objects don't have them.

Mask out extra bits set from unrolling the bitmap for slice backing stores in writeHeapBitsSmall. The redzone in asan mode makes it so that dataSize is no longer an exact multiple of typ.Size_ in this case (a new assumption I have recently discovered) but we didn't mask out any extra bits, so we'd accidentally set bits in other allocations. Oops.

Move the initHeapBits optimization for the 8-byte scan sizeclass on 64-bit platforms up to mallocgc, out from writeHeapBitsSmall. So, this actually caused a problem with asan when the optimization first landed, but we missed it. The issue was then masked once we started passing the redzone down into writeHeapBitsSmall, since the optimization would no longer erroneously fire on asan. What happened was that dataSize would be 8 (because that was the user-provided alloc size) so we'd skip writing heap bits, but it would turn out the redzone bumped the size class, so we'd actually *have* to write the heap bits for that size class. This is not really a problem now *but* it caused problems for me when debugging, since I would try to remove the red zone from dataSize and this would trigger this bug again. Ultimately, this whole situation is confusing because the check in writeHeapBitsSmall is *not* the same as the check in initHeapBits. By moving this check up to mallocgc, we can make the checks align better by matching on the sizeclass, so this should be less error-prone in the future.

Change-Id: I1e9819223be23f722f3bf21e63e812f5fb557194
Reviewed-on: https://go-review.googlesource.com/c/go/+/622041
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2024-10-21  runtime: specialize heapSetType  Michael Anthony Knyszek
Last CL we separated mallocgc into several specialized paths. Let's split up heapSetType too. This will make the specialized heapSetType functions inlineable and cut out some branches as well as a function call.

Microbenchmark results at this point in the stack:

                     │ before.out  │ after-5.out
                     │ sec/op      │ sec/op       vs base
Malloc8-4              13.52n ± 3%   12.15n ± 2%  -10.13% (p=0.002 n=6)
Malloc16-4             21.49n ± 2%   18.32n ± 4%  -14.75% (p=0.002 n=6)
MallocTypeInfo8-4      27.12n ± 1%   18.64n ± 2%  -31.30% (p=0.002 n=6)
MallocTypeInfo16-4     28.71n ± 3%   21.63n ± 5%  -24.65% (p=0.002 n=6)
geomean                21.81n        17.31n       -20.64%

Change-Id: I5de9ac5089b9eb49bf563af2a74e6dc564420e05
Reviewed-on: https://go-review.googlesource.com/c/go/+/614795
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
2024-10-21  runtime: optimize 8-byte allocation pointer data writing  Michael Anthony Knyszek
This change brings back a minor optimization lost in the Go 1.22 cycle, wherein spans of the 8-byte pointer-ful size class would have their pointer bitmaps written ahead of time in bulk, because there's only one possible pattern.

                    │ before      │ after
                    │ sec/op      │ sec/op       vs base
MallocTypeInfo8-4     25.13n ± 1%   23.59n ± 2%  -6.15% (p=0.002 n=6)

Change-Id: I135b84bb1d5b7e678b841b56430930bc73c0a038
Reviewed-on: https://go-review.googlesource.com/c/go/+/614256
Reviewed-by: Keith Randall <khr@golang.org>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2024-10-21  runtime: don't call span.heapBits in writeHeapBitsSmall  Michael Anthony Knyszek
For whatever reason, span.heapBits is kind of slow. It accounts for about a quarter of the cost of writeHeapBitsSmall, which is absurd. We get a nice speed improvement for small allocations by eliminating this call.

                     │ before      │ after
                     │ sec/op      │ sec/op       vs base
MallocTypeInfo16-4     29.47n ± 1%   27.02n ± 1%  -8.31% (p=0.002 n=6)

Change-Id: I6270e26902e5a9254cf1503fac81c3c799c59d6a
Reviewed-on: https://go-review.googlesource.com/c/go/+/614255
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
2024-07-23  runtime,internal: move runtime/internal/sys to internal/runtime/sys  David Chase
Cleanup and friction reduction. For #65355. Change-Id: Ia14c9dc584a529a35b97801dd3e95b9acc99a511 Reviewed-on: https://go-review.googlesource.com/c/go/+/600436 Reviewed-by: Keith Randall <khr@google.com> LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com> Reviewed-by: Keith Randall <khr@golang.org>
2024-05-23  all: document legacy //go:linkname for modules with ≥20,000 dependents  Russ Cox
For #67401. Change-Id: Icc10ede72547d8020c0ba45e89d954822a4b2455 Reviewed-on: https://go-review.googlesource.com/c/go/+/587218 Auto-Submit: Russ Cox <rsc@golang.org> Reviewed-by: Cherry Mui <cherryyz@google.com> LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2024-04-09  runtime: make zeroing of large objects containing pointers preemptible  Michael Anthony Knyszek
This change makes it possible for the runtime to preempt the zeroing of large objects that contain pointers. It turns out this is fairly straightforward with allocation headers, since we can just temporarily tell the GC that there's nothing to scan for a large object with a single pointer write (as opposed to trying to zero a whole bunch of bits, as we would've had to do once upon a time). Fixes #31222. Change-Id: I10d0dcfa3938c383282a3eb485a6f00070d07bd2 Reviewed-on: https://go-review.googlesource.com/c/go/+/577495 LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com> Reviewed-by: Cherry Mui <cherryyz@google.com>
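The shape of the trick, as a hedged sketch (zeroLargePreemptible is invented; the real logic lives in mallocgc and friends):

    // With allocation headers, "nothing to scan" is a single nil type
    // pointer, so the memory can be zeroed in chunks with preemption
    // points in between, and the real type published only at the end.
    func zeroLargePreemptible(mem []byte, publish func()) {
        const chunk = 256 << 10
        for len(mem) > 0 {
            n := min(chunk, len(mem))
            clear(mem[:n]) // zero one chunk
            mem = mem[n:]
            // A real implementation would check for pending preemption here.
        }
        publish() // one pointer write makes the object scannable
    }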
2024-04-09  runtime: remove the allocheaders GOEXPERIMENT  Michael Anthony Knyszek
This change removes the allocheaders GOEXPERIMENT, deleting all the old code and merging mbitmap_allocheaders.go back into mbitmap.go. This change also deletes the SetType benchmarks, which were already broken in the new GOEXPERIMENT (it's harder to set up than before). We weren't really watching these benchmarks at all, and they don't provide additional test coverage. Change-Id: I135497201c3259087c5cd3722ed3fbe24791d25d Reviewed-on: https://go-review.googlesource.com/c/go/+/567200 Reviewed-by: Keith Randall <khr@google.com> LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com> Reviewed-by: Cherry Mui <cherryyz@google.com> Reviewed-by: Keith Randall <khr@golang.org> Auto-Submit: Michael Knyszek <mknyszek@google.com>
2024-04-02  all: use kind* of abi  qiulaidongfeng
For #59670 Change-Id: Id66e102f13e529dd041b68ce869026a56f0a1b9b GitHub-Last-Rev: 43aa9376f72bc02a9d86518cdc99494a6b2f8573 GitHub-Pull-Request: golang/go#65564 Reviewed-on: https://go-review.googlesource.com/c/go/+/562298 LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com> Reviewed-by: Austin Clements <austin@google.com> Reviewed-by: Dmitri Shuralyov <dmitshur@google.com> Auto-Submit: Austin Clements <austin@google.com>
2024-03-25  runtime: migrate internal/atomic to internal/runtime  Andy Pan
For #65355 Change-Id: I65dd090fb99de9b231af2112c5ccb0eb635db2be Reviewed-on: https://go-review.googlesource.com/c/go/+/560155 Reviewed-by: David Chase <drchase@google.com> Reviewed-by: Michael Pratt <mpratt@google.com> LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com> Reviewed-by: Ibrahim Bazoka <ibrahimbazoka729@gmail.com> Auto-Submit: Emmanuel Odeke <emmanuel@orijtech.com>
2023-11-29  runtime: docfix countAlloc  Peter Feichtinger
fix typo in `countAlloc` doc Change-Id: I9f0752412b7a7dfae4915870edeab4ac52e38b2d GitHub-Last-Rev: 6080d3c03ba6cacb1874af9724cfeb7cae27b78f GitHub-Pull-Request: golang/go#64357 Reviewed-on: https://go-review.googlesource.com/c/go/+/544755 Reviewed-by: Michael Pratt <mpratt@google.com> Reviewed-by: Hiro Hamada <laciferin@gmail.com> Reviewed-by: Michael Knyszek <mknyszek@google.com> Auto-Submit: Michael Knyszek <mknyszek@google.com> LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2023-11-09  runtime: add the allocation headers GOEXPERIMENT and fork files  Michael Anthony Knyszek
This change adds the allocation headers GOEXPERIMENT, which is currently a no-op. It forks two runtime files temporarily to make the GOEXPERIMENT easier to maintain. The forked files are mbitmap.go and msize.go. Change-Id: I60202c00e614e4517de7dd000029cf80dd0121ef Reviewed-on: https://go-review.googlesource.com/c/go/+/537980 Reviewed-by: Cherry Mui <cherryyz@google.com> LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com> Reviewed-by: Keith Randall <khr@golang.org>
2023-11-07  cmd/compile,runtime: dedup writeBarrier needed  Mauri de Souza Meneguzzo
The writeBarrier "needed" struct member has the exact same value as "enabled", and is used interchangeably. I'm not sure if we plan to make a distinction between the two at some point, but today they are effectively the same, so dedup them and keep only "enabled". Change-Id: I65e596f174e1e820dc471a45ff70c0ef4efbc386 GitHub-Last-Rev: f8c805a91606d42c8d5b178ddd7d0bec7aaf9f55 GitHub-Pull-Request: golang/go#63814 Reviewed-on: https://go-review.googlesource.com/c/go/+/538495 Reviewed-by: Keith Randall <khr@google.com> Reviewed-by: Heschi Kreinick <heschi@google.com> Reviewed-by: Keith Randall <khr@golang.org> Run-TryBot: Mauri de Souza Meneguzzo <mauri870@gmail.com> TryBot-Result: Gopher Robot <gobot@golang.org>
2023-11-02  runtime: move userArenaHeapBitsSetType into mbitmap.go  Michael Anthony Knyszek
This will make the upcoming GOEXPERIMENT easier to implement, since this function relies on a lot of heap bitmap internals. Change-Id: I2ab76e928e7bfd383dcdb5bfe72c9b23c2002a4e Reviewed-on: https://go-review.googlesource.com/c/go/+/537979 Reviewed-by: Cherry Mui <cherryyz@google.com> Auto-Submit: Michael Knyszek <mknyszek@google.com> Reviewed-by: Keith Randall <khr@golang.org> LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2023-11-02  runtime: split out pointer/scalar metadata from heapArena  Michael Anthony Knyszek
We're going to want to fork this data in the near future for a GOEXPERIMENT, so break it out now. Change-Id: Ia7ded850bb693c443fe439c6b7279dcac638512c Reviewed-on: https://go-review.googlesource.com/c/go/+/537978 Reviewed-by: Keith Randall <khr@golang.org> LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com> Reviewed-by: Cherry Mui <cherryyz@google.com> Auto-Submit: Michael Knyszek <mknyszek@google.com>
2023-10-02  runtime: use smaller fields for mspan.freeindex and nelems  Cherry Mui
mspan.freeindex and nelems can fit into uint16 for all possible values. Use uint16 instead of uintptr. Change-Id: Ifce20751e81d5022be1f6b5cbb5fbe4fd1728b1b Reviewed-on: https://go-review.googlesource.com/c/go/+/451359 Reviewed-by: Michael Knyszek <mknyszek@google.com> Reviewed-by: Matthew Dempsky <mdempsky@google.com> LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2023-08-21  runtime: drop stack-allocated pcvalueCaches  Austin Clements
Now that pcvalue keeps its cache on the M, we can drop all of the stack-allocated pcvalueCaches and stop carefully passing them around between lots of operations. This significantly simplifies a fair amount of code and makes several structures smaller. This series of changes has no statistically significant effect on any runtime Stack benchmarks. I also experimented with making the cache larger, now that the impact is limited to the M struct, but wasn't able to measure any improvements. This is a re-roll of CL 515277 Change-Id: Ia27529302f81c1c92fb9c3a7474739eca80bfca1 Reviewed-on: https://go-review.googlesource.com/c/go/+/520064 Auto-Submit: Austin Clements <austin@google.com> Reviewed-by: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Run-TryBot: Austin Clements <austin@google.com>
2023-08-07  Revert "runtime: drop stack-allocated pcvalueCaches"  Austin Clements
This reverts CL 515277 Change-Id: Ie10378eed4993cb69f4a9b43a38af32b9d743155 Reviewed-on: https://go-review.googlesource.com/c/go/+/516855 Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Matthew Dempsky <mdempsky@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Auto-Submit: Austin Clements <austin@google.com> Reviewed-by: Michael Knyszek <mknyszek@google.com>
2023-08-07  runtime: drop stack-allocated pcvalueCaches  Austin Clements
Now that pcvalue keeps its cache on the M, we can drop all of the stack-allocated pcvalueCaches and stop carefully passing them around between lots of operations. This significantly simplifies a fair amount of code and makes several structures smaller. This series of changes has no statistically significant effect on any runtime Stack benchmarks. I also experimented with making the cache larger, now that the impact is limited to the M struct, but wasn't able to measure any improvements. Change-Id: I4719ebf347c7150a05e887e75a238e23647c20cd Reviewed-on: https://go-review.googlesource.com/c/go/+/515277 TryBot-Result: Gopher Robot <gobot@golang.org> Auto-Submit: Austin Clements <austin@google.com> Reviewed-by: Michael Knyszek <mknyszek@google.com> Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Carlos Amedee <carlos@golang.org>
2023-05-19  runtime: implement Pinner API for object pinning  Sven Anderson
Some C APIs require the use of structures that contain pointers to buffers (iovec, io_uring, ...). The pointer passing rules would require that these buffers be allocated in C memory, and to process this data with Go libraries it would need to be copied.

In order to provide a zero-copy way to use these C APIs, this CL implements a Pinner API that allows Go objects to be pinned, which guarantees that the garbage collector does not move these objects while pinned. This allows the pointer passing rules to be relaxed so that pinned pointers can be stored in C allocated memory or can be contained in Go memory that is passed to C functions.

The Pin() method accepts pointers to objects of any type and unsafe.Pointer. Slices and arrays can be pinned by calling Pin() with the pointer to the first element. Pinning of maps is not supported.

If the GC collects an unreachable Pinner that still holds pinned objects, it panics. If Pin() is called with other, non-pointer types, it panics as well.

Performance considerations: This change has no impact on execution time of existing code, because checks are only done in code paths that would panic otherwise. The memory footprint on existing code is one pointer per memory span.

Fixes: #46787

Signed-off-by: Sven Anderson <sven@anderson.de>
Change-Id: I110031fe789b92277ae45a9455624687bd1c54f2
Reviewed-on: https://go-review.googlesource.com/c/go/+/367296
Auto-Submit: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
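The Pinner API as later exported from package runtime (Go 1.21); the buffer and the C-call stand-in below are illustrative:

    package main

    import (
        "runtime"
        "unsafe"
    )

    func main() {
        buf := make([]byte, 64)

        var p runtime.Pinner
        p.Pin(&buf[0]) // pin a slice by pinning its first element
        // While pinned, this pointer may be stored in C memory or inside
        // C-allocated structures (iovec, io_uring, ...) without violating
        // the pointer passing rules.
        passToC(unsafe.Pointer(&buf[0]))
        p.Unpin() // releases every pointer pinned through p
    }

    func passToC(ptr unsafe.Pointer) { _ = ptr } // stand-in for a cgo call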
2023-05-11  runtime: move per-type types to internal/abi  David Chase
Change-Id: I1f031f0f83a94bebe41d3978a91a903dc5bcda66 Reviewed-on: https://go-review.googlesource.com/c/go/+/489276 Reviewed-by: Keith Randall <khr@google.com> Reviewed-by: Keith Randall <khr@golang.org> Run-TryBot: David Chase <drchase@google.com> TryBot-Result: Gopher Robot <gobot@golang.org>
2023-05-11  runtime: redefine _type to abi.Type; add rtype for methods.  David Chase
Change-Id: I1c478b704d84811caa209006c657dda82d9c4cf9 Reviewed-on: https://go-review.googlesource.com/c/go/+/488435 Reviewed-by: Keith Randall <khr@golang.org> Run-TryBot: David Chase <drchase@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Keith Randall <khr@google.com>
2023-05-05  internal/abi: refactor (basic) type struct into one definition  David Chase
This touches a lot of files, which is bad, but it is also good, since N copies of this information are commoned into one. The new files in internal/abi are copied from the end of the stack; ultimately this will all end up being used. Change-Id: Ia252c0055aaa72ca569411ef9f9e96e3d610889e Reviewed-on: https://go-review.googlesource.com/c/go/+/462995 TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Carlos Amedee <carlos@golang.org> Run-TryBot: David Chase <drchase@google.com> Reviewed-by: Keith Randall <khr@golang.org>
2023-03-10  runtime: replace all callback uses of gentraceback with unwinder  Austin Clements
This is a really nice simplification for all of these call sites. It also achieves a nice performance improvement for stack copying:

goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz
                        │ before      │ after
                        │ sec/op      │ sec/op       vs base
StackCopyPtr-48           89.25m ± 1%   79.78m ± 1%  -10.62% (p=0.000 n=20)
StackCopy-48              83.48m ± 2%   71.88m ± 1%  -13.90% (p=0.000 n=20)
StackCopyNoCache-48       2.504m ± 2%   2.195m ± 1%  -12.32% (p=0.000 n=20)
StackCopyWithStkobj-48    21.66m ± 1%   21.02m ± 2%   -2.95% (p=0.000 n=20)
geomean                   25.21m        22.68m       -10.04%

Updates #54466.

Change-Id: I31715b7b6efd65726940041d3052bb1c0a1186f3
Reviewed-on: https://go-review.googlesource.com/c/go/+/468297
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2023-02-17  runtime: remove the restriction that write barrier ptrs come in pairs  Keith Randall
Future CLs will remove the invariant that pointers are always put in the write barrier in pairs. The behavior of the assembly code changes a bit: instead of writing the pointers unconditionally and then checking for overflow, we now check for overflow first and then write the pointers. Also changed the write barrier flush function to not take the src/dst as arguments. Change-Id: I2ef708038367b7b82ea67cbaf505a1d5904c775c Reviewed-on: https://go-review.googlesource.com/c/go/+/447779 Run-TryBot: Keith Randall <khr@golang.org> Reviewed-by: Cherry Mui <cherryyz@google.com> Reviewed-by: Michael Knyszek <mknyszek@google.com> TryBot-Bypass: Keith Randall <khr@golang.org>
2023-02-16  runtime: reimplement GODEBUG=cgocheck=2 as a GOEXPERIMENT  Keith Randall
Move this knob from a binary-startup thing to a build-time thing. This will enable follow-on optimizations to the write barrier. Change-Id: Ic3323348621c76a7dc390c09ff55016b19c43018 Reviewed-on: https://go-review.googlesource.com/c/go/+/447778 Reviewed-by: Michael Knyszek <mknyszek@google.com> Run-TryBot: Keith Randall <khr@golang.org> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Cherry Mui <cherryyz@google.com>
2022-11-15  runtime: make GC see object as allocated after it is initialized  Cherry Mui
When the GC is scanning some memory (possibly conservatively), finding a pointer, while concurrently another goroutine is allocating an object at the same address as the found pointer, the GC may see the pointer before the object and/or the heap bits are initialized. This may cause the GC to see bad pointers and possibly crash. To prevent this, we make it so that the scanner can only see the object as allocated after the object and the heap bits are initialized. Currently the allocator uses freeindex to find the next available slot, and that code is coupled with updating the free index to a new slot past it. The scanner also uses the freeindex to determine if an object is allocated. This is somewhat racy. This CL makes the scanner use a different field, which is only updated after the object initialization (and a memory barrier). Fixes #54596. Change-Id: I2a57a226369926e7192c253dd0d21d3faf22297c Reviewed-on: https://go-review.googlesource.com/c/go/+/449017 Reviewed-by: Austin Clements <austin@google.com> Reviewed-by: Michael Knyszek <mknyszek@google.com> Run-TryBot: Cherry Mui <cherryyz@google.com> TryBot-Result: Gopher Robot <gobot@golang.org>
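The ordering rule this enforces, sketched with sync/atomic in place of the runtime's memory barrier (the variable and helper names are illustrative):

    package sketch

    import "sync/atomic"

    var allocated atomic.Uintptr // allocation index the scanner consults

    // allocate fully initializes the slot's object and heap bits first,
    // and advances the scanner-visible index only afterwards, so a scanner
    // that sees the slot as allocated can never see it uninitialized.
    func allocate(slot uintptr, initialize func()) {
        initialize()
        allocated.Store(slot + 1)
    }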
2022-11-14  Revert "runtime: delay incrementing freeindex in malloc"  Michael Knyszek
This reverts commit bed2b7cf41471e1521af5a83ae28bd643eb3e038. Reason for revert: I clicked submit by accident on the wrong CL. Change-Id: Iddf128cb62f289d472510eb30466e515068271b2 Reviewed-on: https://go-review.googlesource.com/c/go/+/449501 TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Cherry Mui <cherryyz@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com>
2022-11-11  runtime: delay incrementing freeindex in malloc  Cherry Mui
When the GC is scanning some memory (possibly conservatively), finding a pointer, while concurrently another goroutine is allocating an object at the same address as the found pointer, the GC may see the pointer before the object and/or the heap bits are initialized. This may cause the GC to see bad pointers and possibly crash. To prevent this, we make it so that the scanner can only see the object as allocated after the object and the heap bits are initialized. As the scanner uses the freeindex to determine if an object is allocated, we delay the increment of freeindex until after the initialization. Currently, finding the next free index and updating the free index to a new slot past it are coupled in some code paths, so this needs a small refactoring. In the new code mspan.nextFreeIndex returns the next free index but does not update it (although allocCache is updated); mallocgc will update it at a later time. Fixes #54596. Change-Id: I6dd5ccf743f2d2c46a1ed67c6a8237fe09a71260 Reviewed-on: https://go-review.googlesource.com/c/go/+/427619 TryBot-Result: Gopher Robot <gobot@golang.org> Run-TryBot: Cherry Mui <cherryyz@google.com> Reviewed-by: Michael Knyszek <mknyszek@google.com>
2022-10-26  runtime: fix a few function names on comments  cui fliter
Change-Id: I4be0b1e612dcc21ca6bb7d4395f1c0aa52480759 GitHub-Last-Rev: 032480c4c9ddb2bedea26b01bb80b8a079bfdcf3 GitHub-Pull-Request: golang/go#55993 Reviewed-on: https://go-review.googlesource.com/c/go/+/437518 Reviewed-by: hopehook <hopehook@golangcn.org> Reviewed-by: Keith Randall <khr@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Carlos Amedee <carlos@golang.org> Reviewed-by: Keith Randall <khr@golang.org> Run-TryBot: hopehook <hopehook@golangcn.org>
2022-10-18  runtime: replace all uses of CtzXX with TrailingZerosXX  Youlin Feng
Replace all uses of Ctz64/32/8 with TrailingZeros64/32/8, because they are the same and keeping both was needless duplication. Also renamed CtzXX functions in 386 assembly code. Change-Id: I19290204858083750f4be589bb0923393950ae6d Reviewed-on: https://go-review.googlesource.com/c/go/+/438935 Reviewed-by: Keith Randall <khr@golang.org> Reviewed-by: Bryan Mills <bcmills@google.com> Auto-Submit: Keith Randall <khr@golang.org> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Keith Randall <khr@google.com> Run-TryBot: Keith Randall <khr@golang.org>
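Usage is unchanged apart from the spelling; math/bits is the portable home of the operation:

    package main

    import (
        "fmt"
        "math/bits"
    )

    func main() {
        // E.g. scanning an allocation bitmap for the first set bit.
        cache := uint64(0b1010000)
        fmt.Println(bits.TrailingZeros64(cache)) // 4
    }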
2022-10-12  runtime: add safe arena support to the runtime  Michael Anthony Knyszek
This change adds an API to the runtime for arenas. A later CL can potentially export it as an experimental API, but for now, just the runtime implementation will suffice.

The purpose of arenas is to improve efficiency, primarily by allowing an application to manually free memory, thereby delaying garbage collection. It comes with other potential performance benefits, such as better locality, a better allocation strategy, and better handling of interior pointers by the GC.

This implementation is based on one by danscales@google.com with a few significant differences:

* The implementation lives entirely in the runtime (all layers).
* Arena chunks are the minimum of 8 MiB or the heap arena size. This choice is made because in practice 64 MiB appears to be way too large of an area for most real-world use-cases.
* Arena chunks are not unmapped; instead they're placed on an evacuation list, and when there are no pointers left pointing into them, they're allowed to be reused.
* Reusing partially-used arena chunks no longer tries to find one used by the same P first; it just takes the first one available.
* In order to ensure worst-case fragmentation is never worse than 25%, only types and slice backing stores whose sizes are 1/4th the size of a chunk or less may be used. Previously larger sizes, up to the size of the chunk, were allowed.
* ASAN, MSAN, and the race detector are fully supported.
* Arena chunks whose faulting was deferred at the end of mark termination are set to fault (a non-public patch once did this; I don't see a reason not to continue that).

For #51317.

Change-Id: I83b1693a17302554cb36b6daa4e9249a81b1644f
Reviewed-on: https://go-review.googlesource.com/c/go/+/423359
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
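The experimental surface this enables later appeared behind GOEXPERIMENT=arenas; a usage sketch (the arena package is not a committed API):

    package main

    import "arena"

    type point struct{ x, y int }

    func main() {
        a := arena.NewArena()
        p := arena.New[point](a)             // a struct allocated in the arena
        s := arena.MakeSlice[int](a, 0, 128) // backing store <= 1/4 chunk size
        p.x, p.y = 1, 2
        s = append(s, p.x+p.y)
        a.Free() // manual free: delays GC work, as the message describes
    }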
2022-09-02  runtime: make getStackMap a method of stkframe  Austin Clements
This places getStackMap alongside argBytes and argMapInternal as another method of stkframe. For #54466, albeit rather indirectly. Change-Id: I411dda3605dd7f996983706afcbefddf29a68a85 Reviewed-on: https://go-review.googlesource.com/c/go/+/424515 Reviewed-by: Michael Pratt <mpratt@google.com> Reviewed-by: Cherry Mui <cherryyz@google.com> Run-TryBot: Austin Clements <austin@google.com> Auto-Submit: Austin Clements <austin@google.com> TryBot-Result: Gopher Robot <gobot@golang.org>
2022-08-23  runtime: initialize pointer bits of noscan spans  Keith Randall
Some code paths in the runtime (cgo, heapdump) request heap bits without first checking that the span is !noscan. Instead of trying to find and work around all those cases, just set the pointer bits of noscan spans correctly. It's somewhat safer than ensuring we caught all the possible cases. Fixes #54557 Fixes #54558 Change-Id: Ibd476e6cdea77c962e4d15aad26f29df66fd94e8 Reviewed-on: https://go-review.googlesource.com/c/go/+/425194 Reviewed-by: Michael Knyszek <mknyszek@google.com> Run-TryBot: Keith Randall <khr@golang.org> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Keith Randall <khr@golang.org>
2022-08-19  runtime: add and use runtime/internal/sys.NotInHeap  Cuong Manh Le
Updates #46731 Change-Id: Ic2208c8bb639aa1e390be0d62e2bd799ecf20654 Reviewed-on: https://go-review.googlesource.com/c/go/+/421878 Reviewed-by: Keith Randall <khr@google.com> Reviewed-by: Keith Randall <khr@golang.org> Reviewed-by: Matthew Dempsky <mdempsky@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2022-08-17  runtime: gofmt -w -s  hopehook
Change-Id: I1226ff66fd0c64984939793eb8ef96c08d030fa1 Reviewed-on: https://go-review.googlesource.com/c/go/+/424399 Reviewed-by: Robert Griesemer <gri@google.com> Reviewed-by: Michael Pratt <mpratt@google.com> Auto-Submit: Michael Pratt <mpratt@google.com> Run-TryBot: hopehook <hopehook@qq.com> TryBot-Result: Gopher Robot <gobot@golang.org>
2022-08-16  runtime: process ptr bitmaps one word at a time  Keith Randall
[This is a retry of CL 407036 + its revert CL 422394. The only content change is the 1-line change in cmd/internal/obj/objfile.go.]

Read the bitmaps one uintptr at a time instead of one byte at a time.

Performance so far:
  Allocation heavy, no retention: ~30% faster in heapBitsSetType
  Scan heavy, ~no allocation:     ~even in scanobject

Change-Id: I04d899e1dbd23e989e9f552cdc1880318779c14c
Reviewed-on: https://go-review.googlesource.com/c/go/+/422635
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
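The load-width tradeoff, isolated in a sketch (the real code writes heap bits rather than counting, but reads the type's gcdata bitmap the same two ways):

    package sketch

    import "math/bits"

    func countPtrsByByte(bitmap []byte) (n int) {
        for _, b := range bitmap {
            n += bits.OnesCount8(b) // one load per 8 ptr/nonptr bits
        }
        return
    }

    func countPtrsByWord(bitmap []uint64) (n int) {
        for _, w := range bitmap {
            n += bits.OnesCount64(w) // one load per 64 ptr/nonptr bits
        }
        return
    }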
2022-08-16  runtime: redo heap bitmap  Keith Randall
[this is a retry of CL 407035 + its revert CL 422395. The content is unchanged] Use just 1 bit per word to record the ptr/nonptr bitmap. Use word-sized operations to manipulate the bitmap, so we can operate on up to 64 ptr/nonptr bits at a time. Use a separate bitmap, one bit per word of the ptr/nonptr bitmap, to encode a no-more-pointers signal. Since we can check 64 ptr/nonptr bits at once, knowing the exact last pointer location is not necessary. As a follow-on CL, we should make the gcdata bitmap an array of uintptr instead of an array of byte, so we can load 64 bits of it at once. Similarly for the processing of gc programs. Change-Id: Ica5eb622f5b87e647be64f471d67b02732ef8be6 Reviewed-on: https://go-review.googlesource.com/c/go/+/422634 Reviewed-by: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Keith Randall <khr@google.com> Run-TryBot: Keith Randall <khr@golang.org>
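The addressing arithmetic a 1-bit-per-word bitmap implies, as a hedged sketch (arenaBase and bitmap stand in for the real heap arena metadata):

    // One bitmap bit per 8-byte word: each uint64 loaded covers 64 words
    // of heap, which is what lets the scanner check up to 64 ptr/nonptr
    // bits at a time.
    func isPtrWord(bitmap []uint64, arenaBase, addr uintptr) bool {
        word := (addr - arenaBase) / 8
        return bitmap[word/64]&(1<<(word%64)) != 0
    }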
2022-08-09  Revert "runtime: redo heap bitmap"  Keith Randall
This reverts commit b589208c8cc6e08239868f47e12c1449cd797bac. Reason for revert: Bug somewhere in this code, causing wasm and maybe linux/386 to fail. Change-Id: I5e1e501d839584e0219271bb937e94348f83c11f Reviewed-on: https://go-review.googlesource.com/c/go/+/422395 Reviewed-by: Than McIntosh <thanm@google.com> Run-TryBot: Keith Randall <khr@google.com> Reviewed-by: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gopher Robot <gobot@golang.org>
2022-08-09  Revert "runtime: process ptr bitmaps one word at a time"  Keith Randall
This reverts commit c3833a55433f4b2981253f64444fe5c3d1bc910a. Reason for revert: Bug somewhere in this code, causing wasm and maybe linux/386 to fail. Change-Id: I05f7cfa467598ca0c2c84fd4f752cc4ef117cc51 Reviewed-on: https://go-review.googlesource.com/c/go/+/422394 Run-TryBot: Keith Randall <khr@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Than McIntosh <thanm@google.com> Reviewed-by: Michael Knyszek <mknyszek@google.com>