| author | Austin Clements <austin@google.com> | 2016-02-09 17:53:07 -0500 |
|---|---|---|
| committer | Austin Clements <austin@google.com> | 2017-04-28 22:50:31 +0000 |
| commit | 1a033b1a70668eb8b3832bd06512d0a8d2e59f57 (patch) | |
| tree | 057cb53dc298374cde8df697ac280ebb3b06025d /src/runtime/mcentral.go | |
| parent | 390fdead0be0087d10e2e4faff7cb0a12b6a3ec8 (diff) | |
| download | go-1a033b1a70668eb8b3832bd06512d0a8d2e59f57.tar.xz | |
runtime: separate spans of noscan objects
Currently, we mix objects with pointers and objects without pointers
("noscan" objects) together in memory. As a result, for every object
we grey, we have to check that object's heap bits to find out if it's
noscan, which adds to the per-object cost of GC. This also hurts the
TLB footprint of the garbage collector because it decreases the
density of scannable objects at the page level.
This commit improves the situation by using separate spans for noscan
objects. This will allow a much simpler noscan check (in a follow-up
CL), eliminate the need to clear the bitmap of noscan objects (in a
follow-up CL), and improve the TLB footprint by increasing the density
of scannable objects.
This is also a step toward eliminating dead bits, since the current
noscan check depends on checking the dead bit of the first word.
This has no effect on the heap size of the garbage benchmark.
We'll measure the performance change of this after the follow-up
optimizations.
This is a cherry-pick from dev.garbage commit d491e550c3. The only
non-trivial merge conflict was in updatememstats in mstats.go, where
we now have to separate the per-spanclass stats from the per-sizeclass
stats.
Change-Id: I13bdc4869538ece5649a8d2a41c6605371618e40
Reviewed-on: https://go-review.googlesource.com/41251
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Diffstat (limited to 'src/runtime/mcentral.go')
| -rw-r--r-- | src/runtime/mcentral.go | 14 |
1 file changed, 7 insertions, 7 deletions
```diff
diff --git a/src/runtime/mcentral.go b/src/runtime/mcentral.go
index 5302dd8e3d..eaabcb9c29 100644
--- a/src/runtime/mcentral.go
+++ b/src/runtime/mcentral.go
@@ -19,7 +19,7 @@ import "runtime/internal/atomic"
 //go:notinheap
 type mcentral struct {
 	lock      mutex
-	sizeclass int32
+	spanclass spanClass
 	nonempty  mSpanList // list of spans with a free object, ie a nonempty free list
 	empty     mSpanList // list of spans with no free objects (or cached in an mcache)
@@ -30,8 +30,8 @@ type mcentral struct {
 }
 
 // Initialize a single central free list.
-func (c *mcentral) init(sizeclass int32) {
-	c.sizeclass = sizeclass
+func (c *mcentral) init(spc spanClass) {
+	c.spanclass = spc
 	c.nonempty.init()
 	c.empty.init()
 }
@@ -39,7 +39,7 @@ func (c *mcentral) init(sizeclass int32) {
 // Allocate a span to use in an MCache.
 func (c *mcentral) cacheSpan() *mspan {
 	// Deduct credit for this span allocation and sweep if necessary.
-	spanBytes := uintptr(class_to_allocnpages[c.sizeclass]) * _PageSize
+	spanBytes := uintptr(class_to_allocnpages[c.spanclass.sizeclass()]) * _PageSize
 	deductSweepCredit(spanBytes, 0)
 
 	lock(&c.lock)
@@ -225,11 +225,11 @@ func (c *mcentral) freeSpan(s *mspan, preserve bool, wasempty bool) bool {
 // grow allocates a new empty span from the heap and initializes it for c's size class.
 func (c *mcentral) grow() *mspan {
-	npages := uintptr(class_to_allocnpages[c.spanclass.sizeclass()])
-	size := uintptr(class_to_size[c.spanclass.sizeclass()])
+	npages := uintptr(class_to_allocnpages[c.spanclass.sizeclass()])
+	size := uintptr(class_to_size[c.spanclass.sizeclass()])
 	n := (npages << _PageShift) / size
 
-	s := mheap_.alloc(npages, c.sizeclass, false, true)
+	s := mheap_.alloc(npages, c.spanclass, false, true)
 	if s == nil {
 		return nil
 	}
```
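The diff replaces `mcentral`'s plain `sizeclass int32` with a `spanClass` value whose `sizeclass()` accessor is called wherever the size class is needed. A minimal sketch of how such a type can pack the size class together with a noscan bit, so the GC's noscan check becomes a single bit test (the helper names follow the diff; the exact layout here is an assumption based on the design described in the commit message, not a verbatim copy of the runtime source):

```go
package main

import "fmt"

// spanClass packs a size class and a noscan flag into one byte:
// the size class in the high bits, the noscan flag in bit 0.
type spanClass uint8

// makeSpanClass combines a size class and a noscan flag.
func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
	sc := spanClass(sizeclass) << 1
	if noscan {
		sc |= 1
	}
	return sc
}

// sizeclass recovers the size class, as used for the
// class_to_allocnpages and class_to_size lookups in the diff.
func (sc spanClass) sizeclass() uint8 { return uint8(sc >> 1) }

// noscan reports whether the span holds pointer-free objects.
func (sc spanClass) noscan() bool { return sc&1 != 0 }

func main() {
	// Two spans of the same size class differ only in the low bit.
	scan := makeSpanClass(5, false)
	noscan := makeSpanClass(5, true)
	fmt.Println(scan.sizeclass(), scan.noscan())     // 5 false
	fmt.Println(noscan.sizeclass(), noscan.noscan()) // 5 true
}
```

Because scan and noscan spans of the same size now have distinct span classes, each gets its own `mcentral`, which is what lets the collector segregate noscan objects at the span level.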
