| field | value | date |
|---|---|---|
| author | Michael Anthony Knyszek <mknyszek@google.com> | 2021-10-18 18:22:02 +0000 |
| committer | Michael Knyszek <mknyszek@google.com> | 2021-11-04 20:01:22 +0000 |
| commit | 6d1fffac6388d965616520eb23f36885760d5b66 | |
| tree | 74b13bf960c9d206bb85656f9695dcec1c16d2b6 /src/runtime/mpagecache.go | |
| parent | fc5e8cd6c9de00f8d7da645343934c548e62223e | |
| download | go-6d1fffac6388d965616520eb23f36885760d5b66.tar.xz | |
runtime: set and clear only the relevant bits in allocToCache
Currently allocToCache ham-handedly calls pageAlloc.allocRange on the
full size of the cache. This is fine as long as scavenged bits are never
set when alloc bits are set. This is true right now, but won't be true
as of the next CL.
This change makes allocToCache more carefully set the bits. Note that in
the allocToCache path, we were also calling update *twice*, erroneously.
The first time, with contig=true! Luckily today there's no correctness
error there because the page cache is small enough that the contig=true
logic doesn't matter, but this should at least improve allocation
performance a little bit.
Change-Id: I3ff9590ac86d251e4c5063cfd633570238b0cdbf
Reviewed-on: https://go-review.googlesource.com/c/go/+/356609
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Diffstat (limited to 'src/runtime/mpagecache.go')
| -rw-r--r-- | src/runtime/mpagecache.go | 12 |
1 file changed, 8 insertions, 4 deletions
```diff
diff --git a/src/runtime/mpagecache.go b/src/runtime/mpagecache.go
index 4b5c66d8d6..7206e2dbdb 100644
--- a/src/runtime/mpagecache.go
+++ b/src/runtime/mpagecache.go
@@ -123,9 +123,10 @@ func (p *pageAlloc) allocToCache() pageCache {
 	}
 	c := pageCache{}
 	ci := chunkIndex(p.searchAddr.addr()) // chunk index
+	var chunk *pallocData
 	if p.summary[len(p.summary)-1][ci] != 0 {
 		// Fast path: there's free pages at or near the searchAddr address.
-		chunk := p.chunkOf(ci)
+		chunk = p.chunkOf(ci)
 		j, _ := chunk.find(1, chunkPageIndex(p.searchAddr.addr()))
 		if j == ^uint(0) {
 			throw("bad summary data")
@@ -146,7 +147,7 @@ func (p *pageAlloc) allocToCache() pageCache {
 			return pageCache{}
 		}
 		ci := chunkIndex(addr)
-		chunk := p.chunkOf(ci)
+		chunk = p.chunkOf(ci)
 		c = pageCache{
 			base:  alignDown(addr, 64*pageSize),
 			cache: ^chunk.pages64(chunkPageIndex(addr)),
@@ -154,8 +155,11 @@ func (p *pageAlloc) allocToCache() pageCache {
 		}
 	}
 
-	// Set the bits as allocated and clear the scavenged bits.
-	p.allocRange(c.base, pageCachePages)
+	// Set the page bits as allocated and clear the scavenged bits, but
+	// be careful to only set and clear the relevant bits.
+	cpi := chunkPageIndex(c.base)
+	chunk.allocPages64(cpi, c.cache)
+	chunk.scavenged.clearBlock64(cpi, c.cache&c.scav /* free and scavenged */)
 
 	// Update as an allocation, but note that it's not contiguous.
 	p.update(c.base, pageCachePages, false, true)
```
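The core of the change is replacing a whole-range `allocRange` call with masked 64-bit bitmap operations, so only the bits belonging to the new page cache's block are touched. The sketch below illustrates that idea with simplified stand-ins (`allocPages64` and `clearBlock64` here are hypothetical single-word versions, not the runtime's actual `pallocData` methods; it assumes one `uint64` covers a 64-page aligned block, bit i = page i):

```go
package main

import "fmt"

// allocPages64 marks exactly the pages in mask as allocated,
// leaving all other bits in the block untouched.
func allocPages64(pages *uint64, mask uint64) {
	*pages |= mask
}

// clearBlock64 clears exactly the bits in mask (Go's AND NOT),
// leaving all other bits in the block untouched.
func clearBlock64(scavenged *uint64, mask uint64) {
	*scavenged &^= mask
}

func main() {
	// pages: bits 4-7 already allocated. scavenged: bits 0-3 and
	// 8-11 scavenged (disjoint from allocated pages, as the commit
	// message says is required today).
	var pages, scavenged uint64 = 0x00F0, 0x0F0F

	// The cache handed to a P is the complement of the allocated
	// bits: every free page in this 64-page block.
	cache := ^pages

	// Mark only the cached (previously free) pages as allocated...
	allocPages64(&pages, cache)
	// ...and clear scavenged bits only for pages that were both
	// free and scavenged, mirroring c.cache & c.scav in the CL.
	clearBlock64(&scavenged, cache&scavenged)

	fmt.Printf("pages=%#x scavenged=%#x\n", pages, scavenged)
	// Every free page is now cached/allocated, and no cached page
	// is still marked scavenged.
}
```

The point of masking rather than ranging is that once scavenged and alloc bits are allowed to coexist (the next CL in the message), a blanket clear over the full cache size would wipe scavenged state for pages the cache never took.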
