| author | Michael Anthony Knyszek <mknyszek@google.com> | 2019-11-15 23:30:30 +0000 |
|---|---|---|
| committer | Michael Knyszek <mknyszek@google.com> | 2019-12-11 19:37:19 +0000 |
| commit | 9d78e75a0a55fd5ff3d68b4cba2f0395c4b5dc88 (patch) | |
| tree | ce3494c184835c66c8c280b15d2b2ce9b6a0a7ae /src/runtime/mpagealloc.go | |
| parent | ef3ef8fcdfcd5e8a70b4a8feb2f91a82fee1f603 (diff) | |
| download | go-9d78e75a0a55fd5ff3d68b4cba2f0395c4b5dc88.tar.xz | |
runtime: track ranges of address space which are owned by the heap
This change adds a new inUse field to the allocator, which tracks the
ranges of addresses that are owned by the heap. It is updated on each
heap growth.
These ranges are kept in a sorted array. In practice this array
shouldn't exceed its initial allocation except in rare cases, so it
should stay small (ideally exactly 1 element).
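To make the idea concrete, here is a minimal, self-contained sketch of such a sorted, coalescing range set. It borrows the addrRange/addrRanges names from the diff below, but it is an illustrative toy, not the runtime's actual implementation (which manages its own memory and is protected by the heap lock):

```go
package main

import (
	"fmt"
	"sort"
)

// addrRange is a [base, limit) span of address space.
type addrRange struct {
	base, limit uintptr
}

// addrRanges keeps its ranges sorted by base address, with no two
// entries overlapping or abutting, so contiguous growth collapses
// into a single element.
type addrRanges struct {
	ranges []addrRange
}

// add inserts r, merging it with every existing range it overlaps
// or abuts, and keeps the slice sorted.
func (a *addrRanges) add(r addrRange) {
	// i is the first existing range whose limit reaches r.base,
	// i.e. the first range that could touch r.
	i := sort.Search(len(a.ranges), func(i int) bool {
		return a.ranges[i].limit >= r.base
	})
	// Absorb every range from i onward that starts at or before
	// r.limit, widening r to cover it.
	j := i
	for j < len(a.ranges) && a.ranges[j].base <= r.limit {
		if a.ranges[j].base < r.base {
			r.base = a.ranges[j].base
		}
		if a.ranges[j].limit > r.limit {
			r.limit = a.ranges[j].limit
		}
		j++
	}
	// Splice the single merged range in place of ranges[i:j].
	tail := append([]addrRange{r}, a.ranges[j:]...)
	a.ranges = append(a.ranges[:i], tail...)
}

func main() {
	const arena = 4 << 20 // hypothetical 4 MiB arena-sized units
	var inUse addrRanges

	// Three contiguous "grows" coalesce into one element...
	inUse.add(addrRange{0 * arena, 1 * arena})
	inUse.add(addrRange{1 * arena, 2 * arena})
	inUse.add(addrRange{2 * arena, 3 * arena})
	fmt.Println(len(inUse.ranges)) // 1

	// ...while a discontiguous grow starts a second element.
	inUse.add(addrRange{8 * arena, 9 * arena})
	fmt.Println(len(inUse.ranges)) // 2
}
```

Because add merges abutting neighbors, a heap that grows contiguously keeps the array at exactly one element, which is the common case the commit message calls out.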
In a hypothetical worst-case scenario of a 1 TiB heap with 4 MiB
arenas (note that address ranges are never tracked at a finer
granularity than an arena, since arenas are always allocated
contiguously), inUse would use at most 4 MiB of memory if the heap
mappings were completely discontiguous (highly unlikely), with an
additional 2 MiB leaked from previous allocations. Furthermore, the
copies done to keep the inUse array sorted would move at most 4 MiB
of memory in such a scenario, which, assuming a conservative copying
rate of 5 GiB/s, amounts to about 800 µs.
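Spelling out that arithmetic (assuming each tracked range is a pair of 8-byte addresses on a 64-bit platform, i.e. 16 bytes per entry):

```
ranges = 1 TiB / 4 MiB arenas = 2^40 / 2^22 = 262144 entries
memory = 262144 × 16 B        = 4 MiB
copy   = 4 MiB / (5 GiB/s)    ≈ 780 µs ≈ 800 µs
```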
However, note that in practice:
1) Most 64-bit platforms have 64 MiB arenas (see the arithmetic after
this list).
2) The copies should incur few, if any, page faults, so a copy rate
closer to 25-50 GiB/s is expected.
3) Go heaps are almost always mostly contiguous.
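Plugging the 64 MiB arenas from point 1 into the same worst-case arithmetic (16 bytes per entry, as above) shows how much headroom there is in practice:

```
ranges = 2^40 / 2^26  = 16384 entries
memory = 16384 × 16 B = 256 KiB
```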
Updates #35514.
Change-Id: I3ad07f1c2b5b9340acf59ecc3b9ae09e884814fe
Reviewed-on: https://go-review.googlesource.com/c/go/+/207757
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Diffstat (limited to 'src/runtime/mpagealloc.go')
| -rw-r--r-- | src/runtime/mpagealloc.go | 20 |
1 file changed, 20 insertions, 0 deletions
```diff
diff --git a/src/runtime/mpagealloc.go b/src/runtime/mpagealloc.go
index f48b9faec3..10d547296e 100644
--- a/src/runtime/mpagealloc.go
+++ b/src/runtime/mpagealloc.go
@@ -245,6 +245,19 @@ type pageAlloc struct {
 	// currently ready to use.
 	start, end chunkIdx
 
+	// inUse is a slice of ranges of address space which are
+	// known by the page allocator to be currently in-use (passed
+	// to grow).
+	//
+	// This field is currently unused on 32-bit architectures but
+	// is harmless to track. We care much more about having a
+	// contiguous heap in these cases and take additional measures
+	// to ensure that, so in nearly all cases this should have just
+	// 1 element.
+	//
+	// All access is protected by the mheapLock.
+	inUse addrRanges
+
 	// mheap_.lock. This level of indirection makes it possible
 	// to test pageAlloc independently of the runtime allocator.
 	mheapLock *mutex
@@ -268,6 +281,9 @@ func (s *pageAlloc) init(mheapLock *mutex, sysStat *uint64) {
 	}
 	s.sysStat = sysStat
 
+	// Initialize s.inUse.
+	s.inUse.init(sysStat)
+
 	// System-dependent initialization.
 	s.sysInit()
 
@@ -381,6 +397,10 @@ func (s *pageAlloc) grow(base, size uintptr) {
 	if end > s.end {
 		s.end = end
 	}
+	// Note that [base, limit) will never overlap with any existing
+	// range inUse because grow only ever adds never-used memory
+	// regions to the page allocator.
+	s.inUse.add(addrRange{base, limit})
 
 	// A grow operation is a lot like a free operation, so if our
 	// chunk ends up below the (linearized) s.searchAddr, update
```
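The two addrRanges methods the diff calls, init and add, are defined elsewhere in this change (the diffstat above is filtered to mpagealloc.go). As a rough, hypothetical sketch of init, continuing the toy example earlier: the real version allocates its backing array off-heap and accounts it against sysStat, neither of which a toy can meaningfully reproduce.

```go
// init is a hypothetical sketch, continuing the toy addrRanges above.
// It reserves the small initial allocation the commit message says is
// rarely exceeded; the assumed capacity of 16 is illustrative only.
// The real runtime version also charges this memory to sysStat.
func (a *addrRanges) init(sysStat *uint64) {
	a.ranges = make([]addrRange, 0, 16)
	_ = sysStat // the runtime would record the allocation here
}
```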
