| author | Austin Clements <austin@google.com> | 2016-06-23 14:25:50 -0600 |
|---|---|---|
| committer | Austin Clements <austin@google.com> | 2018-02-15 21:12:16 +0000 |
| commit | 29e9c4d4a4064fcd5edcb47d4782bd96082a068e (patch) | |
| tree | 36da49ce46d6fef82bdcdcc23d9d0cbde7135cfa /src/runtime/malloc.go | |
| parent | 4de468621a54bc7816ae978a55cb347b6f60352d (diff) | |
| download | go-29e9c4d4a4064fcd5edcb47d4782bd96082a068e.tar.xz | |
runtime: lay out heap bitmap forward in memory
Currently the heap bitmap is laid out in reverse order in memory relative
to the heap itself. This was originally done out of "excessive
cleverness" so that computing a bitmap pointer could load only the
arena_start field and so that heaps could be more contiguous by
growing the arena and the bitmap out from a common center point.
However, this appears to have no actual performance benefit; it
complicates nearly every use of the bitmap and makes already
confusing code more confusing. Furthermore, it's still possible to use
a single field (the new bitmap_delta) for the bitmap pointer
computation by employing slightly different excessive cleverness.
Hence, this CL puts the bitmap into forward order.
This is a (very) updated version of CL 9404.
Change-Id: I743587cc626c4ecd81e660658bad85b54584108c
Reviewed-on: https://go-review.googlesource.com/85881
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Diffstat (limited to 'src/runtime/malloc.go')
| -rw-r--r-- | src/runtime/malloc.go | 15 |
1 files changed, 14 insertions, 1 deletions
diff --git a/src/runtime/malloc.go b/src/runtime/malloc.go
index 72b8f40b96..4122b7ba23 100644
--- a/src/runtime/malloc.go
+++ b/src/runtime/malloc.go
@@ -369,7 +369,7 @@ func mallocinit() {
 	spansStart := p1
 	p1 += spansSize
-	mheap_.bitmap = p1 + bitmapSize
+	mheap_.bitmap_start = p1
 	p1 += bitmapSize
 	if sys.PtrSize == 4 {
 		// Set arena_start such that we can accept memory
@@ -383,6 +383,19 @@ func mallocinit() {
 	mheap_.arena_alloc = p1
 	mheap_.arena_reserved = reserved
+	// Pre-compute the value heapBitsForAddr can use to directly
+	// map a heap address to a bitmap address. The obvious
+	// computation is:
+	//
+	//	bitp = bitmap_start + (addr - arena_start)/ptrSize/4
+	//
+	// We can shuffle this to
+	//
+	//	bitp = (bitmap_start - arena_start/ptrSize/4) + addr/ptrSize/4
+	//
+	// bitmap_delta is the value of the first term.
+	mheap_.bitmap_delta = mheap_.bitmap_start - mheap_.arena_start/heapBitmapScale
+
 	if mheap_.arena_start&(_PageSize-1) != 0 {
 		println("bad pagesize", hex(p), hex(p1), hex(spansSize), hex(bitmapSize), hex(_PageSize), "start", hex(mheap_.arena_start))
 		throw("misrounded allocation in mallocinit")
