| author | Michael Pratt <mpratt@google.com> | 2026-04-02 13:17:23 -0400 |
|---|---|---|
| committer | Michael Knyszek <mknyszek@google.com> | 2026-04-02 15:26:52 -0700 |
| commit | 40ec033c33802cf6e1236ea8030d882338a457d5 (patch) | |
| tree | 8f6ddc78666c3c36101a62e09de9314a2956da06 /src/runtime/malloc.go | |
| parent | 4ce2612f21d2c32fc8a6f7bbd2c6c6c5b807f4fe (diff) | |
runtime: add sysUnreserve to undo sysReserve
This is inspired by CL 724560 by Bobby Powers, particularly their great
commit message.
When using address sanitizer with leak detection, sysReserve registers
memory regions with LSAN via lsanregisterrootregion. However, several
code paths release this memory using sysFreeOS without first
unregistering from LSAN. This leaves LSAN with stale root region entries
pointing to memory that has been unmapped and may be reallocated for
other purposes.
This bug was latent until glibc 2.42, which changed pthread stack guard
pages from mprotect(PROT_NONE) to madvise(MADV_GUARD_INSTALL). The
difference matters because LSAN filters root region scanning by
intersecting registered regions with readable mappings from
/proc/self/maps:
- mprotect(PROT_NONE) splits the VMA, creating a separate entry with
---p permissions. LSAN's IsReadable() check excludes it from scanning.
- MADV_GUARD_INSTALL operates at the page table level without modifying
the VMA. The region still appears as rw-p in /proc/self/maps, so LSAN
includes it in the scan and crashes with SIGSEGV when accessing the
guard pages.
Address this by adding sysUnreserve to undo sysReserve. sysUnreserve
unregisters the region from LSAN and frees the mapping.
With the addition of sysUnreserve, we have complete coverage of LSAN
unregistration in the mem.go abstraction: sysFree unregisters Ready memory,
sysUnreserve unregisters Reserved memory, and there is no way to free
Prepared memory at all (it must transition to Ready or Reserved first).
The implementation of lsanunregisterrootregion [1] finds the region by
exact match of start and end address. It therefore does not support
splitting a region, and we must extend this requirement to sysUnreserve
and sysFree. I am not completely confident that we always pass the full
region to sysFree, but LSAN aborts if it can't find the region, so we
must not be blatantly violating this.
sysReserveAligned does need to unreserve a subset of a region, so it
cannot use sysUnreserve directly. Rather than breaking the mem.go
abstraction, move sysReserveAligned into mem.go, adding it to the
abstraction.
We should not have any calls to sysFreeOS outside of the mem.go
abstraction. That is now true with this CL.
Fixes #74476.
[1] https://github.com/llvm/llvm-project/blob/3e3e362648fa062038b90ccc21f46a09d6902288/compiler-rt/lib/lsan/lsan_common.cpp#L1157
Change-Id: I8c46a62154b2f23456ffd5086a7b91156a6a6964
Reviewed-on: https://go-review.googlesource.com/c/go/+/762381
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Diffstat (limited to 'src/runtime/malloc.go')
| -rw-r--r-- | src/runtime/malloc.go | 54 |
1 file changed, 1 insertion(+), 53 deletions(-)
diff --git a/src/runtime/malloc.go b/src/runtime/malloc.go
index c08bc7574b..6a4ee5bd42 100644
--- a/src/runtime/malloc.go
+++ b/src/runtime/malloc.go
@@ -788,7 +788,7 @@ func (h *mheap) sysAlloc(n uintptr, hintList **arenaHint, arenaList *[]arenaIdx)
 			// particular, this is already how Windows behaves, so
 			// it would simplify things there.
 			if v != nil {
-				sysFreeOS(v, n)
+				sysUnreserve(v, n)
 			}
 			*hintList = hint.next
 			h.arenaHintAlloc.free(unsafe.Pointer(hint))
@@ -921,58 +921,6 @@ mapped:
 	return
 }
 
-// sysReserveAligned is like sysReserve, but the returned pointer is
-// aligned to align bytes. It may reserve either n or n+align bytes,
-// so it returns the size that was reserved.
-func sysReserveAligned(v unsafe.Pointer, size, align uintptr, vmaName string) (unsafe.Pointer, uintptr) {
-	if isSbrkPlatform {
-		if v != nil {
-			throw("unexpected heap arena hint on sbrk platform")
-		}
-		return sysReserveAlignedSbrk(size, align)
-	}
-	// Since the alignment is rather large in uses of this
-	// function, we're not likely to get it by chance, so we ask
-	// for a larger region and remove the parts we don't need.
-	retries := 0
-retry:
-	p := uintptr(sysReserve(v, size+align, vmaName))
-	switch {
-	case p == 0:
-		return nil, 0
-	case p&(align-1) == 0:
-		return unsafe.Pointer(p), size + align
-	case GOOS == "windows":
-		// On Windows we can't release pieces of a
-		// reservation, so we release the whole thing and
-		// re-reserve the aligned sub-region. This may race,
-		// so we may have to try again.
-		sysFreeOS(unsafe.Pointer(p), size+align)
-		p = alignUp(p, align)
-		p2 := sysReserve(unsafe.Pointer(p), size, vmaName)
-		if p != uintptr(p2) {
-			// Must have raced. Try again.
-			sysFreeOS(p2, size)
-			if retries++; retries == 100 {
-				throw("failed to allocate aligned heap memory; too many retries")
-			}
-			goto retry
-		}
-		// Success.
-		return p2, size
-	default:
-		// Trim off the unaligned parts.
-		pAligned := alignUp(p, align)
-		sysFreeOS(unsafe.Pointer(p), pAligned-p)
-		end := pAligned + size
-		endLen := (p + size + align) - end
-		if endLen > 0 {
-			sysFreeOS(unsafe.Pointer(end), endLen)
-		}
-		return unsafe.Pointer(pAligned), size
-	}
-}
-
 // enableMetadataHugePages enables huge pages for various sources of heap metadata.
 //
 // A note on latency: for sufficiently small heaps (<10s of GiB) this function will take constant
