| author | Michael Anthony Knyszek <mknyszek@google.com> | 2022-08-12 21:40:46 +0000 |
|---|---|---|
| committer | Michael Knyszek <mknyszek@google.com> | 2022-10-12 20:23:30 +0000 |
| commit | 7866538d250e1693bacb6e5a29c74b01588155d5 | |
| tree | c88c691d49643e604c8fcaf711f7c6e583332a20 /src/runtime/malloc.go | |
| parent | 4c383951b9601b488486add020ad5b7f10fb3d39 | |
| download | go-7866538d250e1693bacb6e5a29c74b01588155d5.tar.xz | |
runtime: add safe arena support to the runtime
This change adds an API to the runtime for arenas. A later CL can
potentially export it as an experimental API, but for now, just the
runtime implementation will suffice.
The purpose of arenas is to improve efficiency, primarily by allowing
an application to free memory manually, thereby delaying garbage
collection. They come with other potential performance benefits, such as
better locality, a better allocation strategy, and better handling of
interior pointers by the GC.
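The arena idea described above can be sketched as a simple bump allocator: allocations are carved out of one chunk, and the whole chunk is released manually in a single step rather than piecemeal by the garbage collector. This is only an illustrative model under that assumption; the names `arena`, `newArena`, `alloc`, and `free` are hypothetical and not the runtime's implementation:

```go
package main

import "fmt"

// arena is a minimal bump-allocator sketch: allocations share one chunk
// that is freed all at once, which is the manual-free step that lets an
// application delay garbage collection.
type arena struct {
	buf []byte
	off int
}

func newArena(size int) *arena { return &arena{buf: make([]byte, size)} }

// alloc carves n bytes out of the chunk; it fails once the chunk is full.
func (a *arena) alloc(n int) ([]byte, bool) {
	if a.off+n > len(a.buf) {
		return nil, false
	}
	p := a.buf[a.off : a.off+n]
	a.off += n
	return p, true
}

// free releases the entire chunk at once.
func (a *arena) free() { a.buf, a.off = nil, 0 }

func main() {
	a := newArena(1 << 10)
	b, ok := a.alloc(64)
	fmt.Println(len(b), ok) // 64 true
	a.free()
	_, ok = a.alloc(64)
	fmt.Println(ok) // false: the chunk was released as a whole
}
```

Allocations made this way also sit next to each other in one chunk, which is where the locality benefit mentioned above comes from.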
This implementation is based on one by danscales@google.com with a few
significant differences:
* The implementation lives entirely in the runtime (all layers).
* Arena chunks are the minimum of 8 MiB or the heap arena size. This
choice is made because in practice 64 MiB appears to be far too large
an area for most real-world use cases.
* Arena chunks are not unmapped; instead, they're placed on an evacuation
list, and once no pointers are left pointing into them, they're
allowed to be reused.
* Reusing partially-used arena chunks no longer tries to find one used
by the same P first; it just takes the first one available.
* In order to ensure worst-case fragmentation is never worse than 25%,
only types and slice backing stores whose sizes are 1/4th the size of
a chunk or less may be used. Previously larger sizes, up to the size
of the chunk, were allowed.
* ASAN, MSAN, and the race detector are fully supported.
* Sets arena chunks to fault that were deferred at the end of mark
termination (a non-public patch once did this; I don't see a reason
not to continue that).
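The 25% fragmentation bound above follows directly from the 1/4-size cap: when an allocation no longer fits in a chunk, the wasted tail must be smaller than the allocation itself, and therefore smaller than a quarter of the chunk. A short sketch of the arithmetic, using the 8 MiB chunk size from this change (the variable names are illustrative):

```go
package main

import "fmt"

func main() {
	// Allocations in a chunk are capped at 1/4 of the chunk size, so
	// whenever an allocation fails to fit, the unusable tail is smaller
	// than that cap, keeping worst-case waste under 25% of the chunk.
	const chunkSize = 8 << 20    // 8 MiB arena chunk
	const maxAlloc = chunkSize / 4 // largest size permitted in a chunk
	worstWaste := maxAlloc - 1     // tail just too small for one more max-size object

	fmt.Println(maxAlloc)                     // 2097152 (2 MiB)
	fmt.Println(100 * worstWaste / chunkSize) // 24 (percent, under the 25% bound)
}
```

Allowing sizes up to the full chunk, as before, would have let a nearly chunk-sized allocation strand almost an entire chunk, with no such bound.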
For #51317.
Change-Id: I83b1693a17302554cb36b6daa4e9249a81b1644f
Reviewed-on: https://go-review.googlesource.com/c/go/+/423359
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Diffstat (limited to 'src/runtime/malloc.go')
| -rw-r--r-- | src/runtime/malloc.go | 27 |
1 file changed, 24 insertions(+), 3 deletions(-)
diff --git a/src/runtime/malloc.go b/src/runtime/malloc.go
index d651cbc14e..53184615a1 100644
--- a/src/runtime/malloc.go
+++ b/src/runtime/malloc.go
@@ -452,6 +452,14 @@ func mallocinit() {
 		//
 		// On AIX, mmaps starts at 0x0A00000000000000 for 64-bit
 		// processes.
+		//
+		// Space mapped for user arenas comes immediately after the range
+		// originally reserved for the regular heap when race mode is not
+		// enabled because user arena chunks can never be used for regular heap
+		// allocations and we want to avoid fragmenting the address space.
+		//
+		// In race mode we have no choice but to just use the same hints because
+		// the race detector requires that the heap be mapped contiguously.
 		for i := 0x7f; i >= 0; i-- {
 			var p uintptr
 			switch {
@@ -477,9 +485,16 @@ func mallocinit() {
 			default:
 				p = uintptr(i)<<40 | uintptrMask&(0x00c0<<32)
 			}
+			// Switch to generating hints for user arenas if we've gone
+			// through about half the hints. In race mode, take only about
+			// a quarter; we don't have very much space to work with.
+			hintList := &mheap_.arenaHints
+			if (!raceenabled && i > 0x3f) || (raceenabled && i > 0x5f) {
+				hintList = &mheap_.userArena.arenaHints
+			}
 			hint := (*arenaHint)(mheap_.arenaHintAlloc.alloc())
 			hint.addr = p
-			hint.next, mheap_.arenaHints = mheap_.arenaHints, hint
+			hint.next, *hintList = *hintList, hint
 		}
 	} else {
 		// On a 32-bit machine, we're much more concerned
@@ -547,6 +562,14 @@ func mallocinit() {
 		hint := (*arenaHint)(mheap_.arenaHintAlloc.alloc())
 		hint.addr = p
 		hint.next, mheap_.arenaHints = mheap_.arenaHints, hint
+
+		// Place the hint for user arenas just after the large reservation.
+		//
+		// While this potentially competes with the hint above, in practice we probably
+		// aren't going to be getting this far anyway on 32-bit platforms.
+		userArenaHint := (*arenaHint)(mheap_.arenaHintAlloc.alloc())
+		userArenaHint.addr = p
+		userArenaHint.next, mheap_.userArena.arenaHints = mheap_.userArena.arenaHints, userArenaHint
 	}
 }
@@ -755,8 +778,6 @@ retry:
 	case p == 0:
 		return nil, 0
 	case p&(align-1) == 0:
-		// We got lucky and got an aligned region, so we can
-		// use the whole thing.
 		return unsafe.Pointer(p), size + align
 	case GOOS == "windows":
 		// On Windows we can't release pieces of a
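The hint-splitting condition in the mallocinit hunk divides the 128 hint addresses (indices 0x00 through 0x7f) between the regular heap and the user-arena list. A standalone sketch of that arithmetic, reusing the constants from the patch (`countHints` is an illustrative helper, not a runtime function), confirms the "about half" and "about a quarter" split the patch comment describes:

```go
package main

import "fmt"

// countHints mirrors the condition in the mallocinit hunk: hint indices
// i in [0, 0x7f] go to the user-arena list when
// (!raceenabled && i > 0x3f) || (raceenabled && i > 0x5f).
func countHints(raceenabled bool) (heap, userArena int) {
	for i := 0x7f; i >= 0; i-- {
		if (!raceenabled && i > 0x3f) || (raceenabled && i > 0x5f) {
			userArena++
		} else {
			heap++
		}
	}
	return
}

func main() {
	h, u := countHints(false)
	fmt.Println(h, u) // 64 64: half the hints go to user arenas
	h, u = countHints(true)
	fmt.Println(h, u) // 96 32: only a quarter in race mode
}
```

Because each hint is prepended to its list (`hint.next, *hintList = *hintList, hint`), the lists end up ordered from low index to high, the reverse of the loop's iteration order.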
