path: root/src/runtime/mpagealloc.go
author    Michael Anthony Knyszek <mknyszek@google.com>  2022-04-10 20:34:17 +0000
committer Michael Knyszek <mknyszek@google.com>  2022-05-03 15:13:53 +0000
commit    91f863013e6b5ba870f6bfbfda0b735cf54fb3ca (patch)
tree      bbb4959f2ef79e59601cdf25fc882ea3ba426977 /src/runtime/mpagealloc.go
parent    b4d81147d8dc26c8f7d6822b6249311d569af1de (diff)
download  go-91f863013e6b5ba870f6bfbfda0b735cf54fb3ca.tar.xz
runtime: redesign scavenging algorithm
Currently the runtime's scavenging algorithm involves running from the top of the heap address space to the bottom (or as far as it gets) once per GC cycle. Once it treads some ground, it doesn't tread it again until the next GC cycle.

This works just fine for the background scavenger, for heap-growth scavenging, and for debug.FreeOSMemory. However, it breaks down in the face of a memory limit for small heaps in the tens of MiB. Because the scavenger never retreads old ground, it's completely oblivious to new memory it could scavenge, and that it really *should* scavenge in the face of a memory limit. Also, every time some thread goes to scavenge in the runtime, it reserves what could be a considerable amount of address space, hiding it from other scavengers.

This change modifies and simplifies the implementation overall. It's less code with complexities that are much better encapsulated.

The current implementation iterates optimistically over the address space looking for memory to scavenge, keeping track of what it last saw. The new implementation does the same, but instead of directly iterating over pages, it iterates over chunks. It maintains an index of chunks (as a bitmap over the address space) that indicates which chunks may contain scavenge work. The page allocator populates this index, while scavengers consume it and iterate over it optimistically.

This has two key benefits:

1. Scavenging is much simpler: find a candidate chunk and check it, essentially just using the scavengeOne fast path. There's no need for the complexity of iterating beyond one chunk, because the index is lock-free and already maintains that information.

2. If pages are freed to the page allocator (always guaranteed to be unscavenged), the page allocator immediately notifies all scavengers of the new source of work, avoiding the hiding issues of the old implementation.
One downside of the new implementation, however, is that it's potentially more expensive to find pages to scavenge. In the past, if a single page became free high up in the address space, the runtime's scavengers would ignore it. Now that scavengers won't, one or more scavengers may need to iterate across potentially the whole heap to find the next source of work.

For the background scavenger, this just means a potentially less reactive scavenger -- overall it should still use the same amount of CPU. It means worse overheads for memory limit scavenging, but that's not exactly something with a baseline yet.

In practice, this shouldn't be too bad, hopefully, since the chunk index is extremely compact. For a 48-bit address space, the index is only 8 MiB in size at worst, and even just one physical page of the index is able to support up to 128 GiB heaps, provided they aren't terribly sparse. On 32-bit platforms, the index is only 128 bytes in size.

For #48409.

Change-Id: I72b7e74365046b18c64a6417224c5d85511194fb
Reviewed-on: https://go-review.googlesource.com/c/go/+/399474
Reviewed-by: Michael Pratt <mpratt@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
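The size figures quoted above follow from back-of-the-envelope arithmetic. The sketch below assumes one index bit per 4 MiB chunk and a 4 KiB physical page; both values are assumptions for illustration, not pulled from the runtime source.

```go
package main

import "fmt"

// Check the index-size claims: one bit per chunk of address space,
// so index bytes = (address space / chunk size) / 8.
func main() {
	const (
		chunkBytes = 4 << 20 // assumed chunk size (4 MiB)
		pageBytes  = 4 << 10 // assumed physical page size (4 KiB)
	)

	// 48-bit address space: number of chunks, 8 bits per byte.
	idx48 := (uint64(1) << 48 / chunkBytes) / 8
	fmt.Printf("48-bit index: %d MiB\n", idx48>>20) // 8 MiB

	// Heap covered by a single physical page of index bits.
	covered := uint64(pageBytes) * 8 * chunkBytes
	fmt.Printf("one page covers: %d GiB\n", covered>>30) // 128 GiB

	// 32-bit address space.
	idx32 := (uint64(1) << 32 / chunkBytes) / 8
	fmt.Printf("32-bit index: %d bytes\n", idx32) // 128 bytes
}
```

Under these assumptions the arithmetic reproduces all three numbers in the commit message: 8 MiB, 128 GiB, and 128 bytes.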
Diffstat (limited to 'src/runtime/mpagealloc.go')
-rw-r--r--  src/runtime/mpagealloc.go  48
1 file changed, 7 insertions(+), 41 deletions(-)
diff --git a/src/runtime/mpagealloc.go b/src/runtime/mpagealloc.go
index 3881974742..c85da15ff2 100644
--- a/src/runtime/mpagealloc.go
+++ b/src/runtime/mpagealloc.go
@@ -262,45 +262,20 @@ type pageAlloc struct {
// All access is protected by the mheapLock.
inUse addrRanges
+ _ uint32 // Align scav so it's easier to reason about alignment within scav.
+
// scav stores the scavenger state.
scav struct {
- lock mutex
-
- // inUse is a slice of ranges of address space which have not
- // yet been looked at by the scavenger.
- //
- // Protected by lock.
- inUse addrRanges
-
- // gen is the scavenge generation number.
- //
- // Protected by lock.
- gen uint32
-
- // reservationBytes is how large of a reservation should be made
- // in bytes of address space for each scavenge iteration.
- //
- // Protected by lock.
- reservationBytes uintptr
+ // index is an efficient index of chunks that have pages available to
+ // scavenge.
+ index scavengeIndex
// released is the amount of memory released this generation.
//
// Updated atomically.
released uintptr
- // scavLWM is the lowest (offset) address that the scavenger reached this
- // scavenge generation.
- //
- // Protected by lock.
- scavLWM offAddr
-
- // freeHWM is the highest (offset) address of a page that was freed to
- // the page allocator this scavenge generation.
- //
- // Protected by mheapLock.
- freeHWM offAddr
-
- _ uint32 // Align assistTime for atomics.
+ _ uint32 // Align assistTime for atomics on 32-bit platforms.
// scavengeAssistTime is the time spent scavenging in the last GC cycle.
//
@@ -348,12 +323,6 @@ func (p *pageAlloc) init(mheapLock *mutex, sysStat *sysMemStat) {
// Set the mheapLock.
p.mheapLock = mheapLock
-
- // Initialize p.scav.inUse.
- p.scav.inUse.init(sysStat)
-
- // Initialize scavenge tracking state.
- p.scav.scavLWM = maxSearchAddr
}
// tryChunkOf returns the bitmap data for the given chunk.
@@ -903,10 +872,7 @@ func (p *pageAlloc) free(base, npages uintptr, scavenged bool) {
}
limit := base + npages*pageSize - 1
if !scavenged {
- // Update the free high watermark for the scavenger.
- if offLimit := (offAddr{limit}); p.scav.freeHWM.lessThan(offLimit) {
- p.scav.freeHWM = offLimit
- }
+ p.scav.index.mark(base, limit+1)
}
if npages == 1 {
// Fast path: we're clearing a single bit, and we know exactly