| author | Michael Pratt <mpratt@google.com> | 2026-04-02 13:17:23 -0400 |
|---|---|---|
| committer | Michael Knyszek <mknyszek@google.com> | 2026-04-02 15:26:52 -0700 |
| commit | 40ec033c33802cf6e1236ea8030d882338a457d5 (patch) | |
| tree | 8f6ddc78666c3c36101a62e09de9314a2956da06 /src/runtime/export_test.go | |
| parent | 4ce2612f21d2c32fc8a6f7bbd2c6c6c5b807f4fe (diff) | |
| download | go-40ec033c33802cf6e1236ea8030d882338a457d5.tar.xz | |
runtime: add sysUnreserve to undo sysReserve
This is inspired by CL 724560 by Bobby Powers, particularly their great
commit message.
When using address sanitizer with leak detection, sysReserve registers
memory regions with LSAN via lsanregisterrootregion. However, several
code paths release this memory using sysFreeOS without first
unregistering from LSAN. This leaves LSAN with stale root region entries
pointing to memory that has been unmapped and may be reallocated for
other purposes.
This bug was latent until glibc 2.42, which changed pthread stack guard
pages from mprotect(PROT_NONE) to madvise(MADV_GUARD_INSTALL). The
difference matters because LSAN filters root region scanning by
intersecting registered regions with readable mappings from
/proc/self/maps:
- mprotect(PROT_NONE) splits the VMA, creating a separate entry with
---p permissions. LSAN's IsReadable() check excludes it from scanning.
- MADV_GUARD_INSTALL operates at the page table level without modifying
the VMA. The region still appears as rw-p in /proc/self/maps, so LSAN
includes it in the scan and crashes with SIGSEGV when accessing the
guard pages.
Address this by adding sysUnreserve to undo sysReserve. sysUnreserve
unregisters the region from LSAN and frees the mapping.
With the addition of sysUnreserve, we have complete coverage of LSAN
unregistration in the mem.go abstraction: sysFree unregisters Ready
memory, sysUnreserve unregisters Reserved memory, and there is no way to
free Prepared memory at all (it must transition to Ready or Reserved
first).
The implementation of lsanunregisterrootregion [1] finds the region by
exact match of start and end address. It therefore does not support
splitting a region, and we must extend this requirement to sysUnreserve
and sysFree. I am not completely confident that we always pass the full
region to sysFree, but LSAN aborts if it can't find the region, so we
must not be blatantly violating this.
sysReserveAligned does need to unreserve a subset of a region, so it
cannot use sysUnreserve directly. Rather than breaking the mem.go
abstraction, move sysReserveAligned into mem.go, making it part of the
abstraction.
We should not have any calls to sysFreeOS outside of the mem.go
abstraction. That is now true with this CL.
Fixes #74476.
[1] https://github.com/llvm/llvm-project/blob/3e3e362648fa062038b90ccc21f46a09d6902288/compiler-rt/lib/lsan/lsan_common.cpp#L1157
Change-Id: I8c46a62154b2f23456ffd5086a7b91156a6a6964
Reviewed-on: https://go-review.googlesource.com/c/go/+/762381
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Diffstat (limited to 'src/runtime/export_test.go')
| -rw-r--r-- | src/runtime/export_test.go | 20 |
1 file changed, 14 insertions, 6 deletions
diff --git a/src/runtime/export_test.go b/src/runtime/export_test.go
index bc471e50a0..931ec7e540 100644
--- a/src/runtime/export_test.go
+++ b/src/runtime/export_test.go
@@ -550,7 +550,7 @@ func MapNextArenaHint() (start, end uintptr, ok bool) {
 	if !ok {
 		// We were unable to get the requested reservation.
 		// Release what we did get and fail.
-		sysFreeOS(got, physPageSize)
+		sysUnreserve(got, physPageSize)
 	}
 	return
 }
@@ -1091,19 +1091,21 @@ func FreePageAlloc(pp *PageAlloc) {
 	// Free all the mapped space for the summary levels.
 	if pageAlloc64Bit != 0 {
 		for l := 0; l < summaryLevels; l++ {
-			sysFreeOS(unsafe.Pointer(&p.summary[l][0]), uintptr(cap(p.summary[l]))*pallocSumBytes)
+			// This isn't quite right, as some of this memory may
+			// be Ready instead of Reserved. The mappedReady and
+			// testSysStat adjustments below correct for the
+			// difference.
+			sysUnreserve(unsafe.Pointer(&p.summary[l][0]), uintptr(cap(p.summary[l]))*pallocSumBytes)
 		}
 	} else {
 		resSize := uintptr(0)
 		for _, s := range p.summary {
 			resSize += uintptr(cap(s)) * pallocSumBytes
 		}
-		sysFreeOS(unsafe.Pointer(&p.summary[0][0]), alignUp(resSize, physPageSize))
+		// See sysUnreserve comment above.
+		sysUnreserve(unsafe.Pointer(&p.summary[0][0]), alignUp(resSize, physPageSize))
 	}

-	// Free extra data structures.
-	sysFreeOS(unsafe.Pointer(&p.scav.index.chunks[0]), uintptr(cap(p.scav.index.chunks))*unsafe.Sizeof(atomicScavChunkData{}))
-
 	// Subtract back out whatever we mapped for the summaries.
 	// sysUsed adds to p.sysStat and memstats.mappedReady no matter what
 	// (and in anger should actually be accounted for), and there's no other
@@ -1111,6 +1113,12 @@ func FreePageAlloc(pp *PageAlloc) {
 	gcController.mappedReady.Add(-int64(p.summaryMappedReady))
 	testSysStat.add(-int64(p.summaryMappedReady))

+	// Free extra data structures.
+	//
+	// TODO(prattmic): As above, some of this may be Ready, so we should
+	// manually adjust mappedReady and testSysStat?
+	sysUnreserve(unsafe.Pointer(&p.scav.index.chunks[0]), uintptr(cap(p.scav.index.chunks))*unsafe.Sizeof(atomicScavChunkData{}))
+
 	// Free the mapped space for chunks.
 	for i := range p.chunks {
 		if x := p.chunks[i]; x != nil {
