When growing a slice, take into account the size of the allocated memory block.
Also apply the same optimization to string->[]byte conversion.
Fixes #6307.
benchmark old ns/op new ns/op delta
BenchmarkAppendGrowByte 4541036 4434108 -2.35%
BenchmarkAppendGrowString 59885673 44813604 -25.17%
LGTM=khr
R=khr
CC=golang-codereviews, iant, rsc
https://golang.org/cl/53340044
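The optimization above can be sketched as follows: after the usual capacity doubling, the byte size is rounded up to the size of the block malloc will actually hand back, so the otherwise-wasted tail bytes become usable capacity. The size-class table and helper names below are illustrative stand-ins for the runtime's internals, not its real code.

```go
package main

import "fmt"

// Hypothetical size classes; a simplified stand-in for the runtime's
// malloc size-class table.
var sizeClasses = []int{8, 16, 32, 48, 64, 80, 96, 128}

// roundupsize mimics the runtime helper of the same name: it rounds a
// requested byte size up to the size of the block malloc would return.
func roundupsize(n int) int {
	for _, c := range sizeClasses {
		if n <= c {
			return c
		}
	}
	return n
}

// growCap picks the new capacity when a slice of capacity old must grow.
// The key point of this change: after doubling, round the byte size up
// to the allocated block size so the "free" tail bytes become capacity
// instead of being wasted.
func growCap(old, elemSize int) int {
	newcap := old * 2
	if newcap == 0 {
		newcap = 1
	}
	return roundupsize(newcap*elemSize) / elemSize
}

func main() {
	// Doubling a 40-byte slice asks for 80 bytes, already a class: cap 80.
	fmt.Println(growCap(40, 1))
	// Doubling a 20-byte slice asks for 40 bytes, but a 48-byte block is
	// allocated, so all 48 bytes become capacity.
	fmt.Println(growCap(20, 1))
}
```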
|
|
Combine NoScan allocations < 16 bytes into a single memory block.
Reduces the number of allocations on the json/garbage benchmarks by 10+%.
json-1
allocated 8039872 7949194 -1.13%
allocs 105774 93776 -11.34%
cputime 156200000 100700000 -35.53%
gc-pause-one 4908873 3814853 -22.29%
gc-pause-total 2748969 2899288 +5.47%
rss 52674560 43560960 -17.30%
sys-gc 3796976 3256304 -14.24%
sys-heap 43843584 35192832 -19.73%
sys-other 5589312 5310784 -4.98%
sys-stack 393216 393216 +0.00%
sys-total 53623088 44153136 -17.66%
time 156193436 100886714 -35.41%
virtual-mem 256548864 256540672 -0.00%
garbage-1
allocated 2996885 2932982 -2.13%
allocs 62904 55200 -12.25%
cputime 17470000 17400000 -0.40%
gc-pause-one 932757485 925806143 -0.75%
gc-pause-total 4663787 4629030 -0.75%
rss 1151074304 1133670400 -1.51%
sys-gc 66068352 65085312 -1.49%
sys-heap 1039728640 1024065536 -1.51%
sys-other 38038208 37485248 -1.45%
sys-stack 8650752 8781824 +1.52%
sys-total 1152485952 1135417920 -1.48%
time 17478088 17418005 -0.34%
virtual-mem 1343709184 1324204032 -1.45%
LGTM=iant, bradfitz
R=golang-codereviews, dave, iant, rsc, bradfitz
CC=golang-codereviews, khr
https://golang.org/cl/38750047
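The idea of the change above can be illustrated with a bump-pointer sketch: small pointer-free requests are packed into a shared 16-byte block instead of each taking a block of its own. The type and field names here are illustrative, not the runtime's.

```go
package main

import "fmt"

const tinyBlockSize = 16

// tinyAllocator packs small pointer-free ("NoScan") allocations into a
// shared block, in the spirit of this change.
type tinyAllocator struct {
	block  []byte // current shared block
	offset int    // next free byte within block
	blocks int    // how many shared blocks were taken (for stats)
}

// alloc returns n bytes. Sizes < 16 are carved out of the shared block;
// larger sizes get their own allocation.
func (a *tinyAllocator) alloc(n int) []byte {
	if n >= tinyBlockSize {
		return make([]byte, n)
	}
	if a.block == nil || a.offset+n > len(a.block) {
		a.block = make([]byte, tinyBlockSize)
		a.offset = 0
		a.blocks++
	}
	p := a.block[a.offset : a.offset+n]
	a.offset += n
	return p
}

func main() {
	var a tinyAllocator
	for i := 0; i < 8; i++ {
		a.alloc(4) // eight 4-byte allocations...
	}
	fmt.Println(a.blocks) // ...consume only two 16-byte blocks
}
```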
|
|
Breaks darwin and freebsd.
««« original CL description
runtime: increase page size to 8K
Tcmalloc uses 8K, 32K and 64K pages, and in custom setups 256K pages.
Only Chromium uses 4K pages today (in "slow but small" configuration).
The general tendency is to increase page size, because it reduces
metadata size and DTLB pressure.
This change reduces GC pause by ~10% and slightly improves other metrics.
json-1
allocated 8037492 8038689 +0.01%
allocs 105762 105573 -0.18%
cputime 158400000 155800000 -1.64%
gc-pause-one 4412234 4135702 -6.27%
gc-pause-total 2647340 2398707 -9.39%
rss 54923264 54525952 -0.72%
sys-gc 3952624 3928048 -0.62%
sys-heap 46399488 46006272 -0.85%
sys-other 5597504 5290304 -5.49%
sys-stack 393216 393216 +0.00%
sys-total 56342832 55617840 -1.29%
time 158478890 156046916 -1.53%
virtual-mem 256548864 256593920 +0.02%
garbage-1
allocated 2991113 2986259 -0.16%
allocs 62844 62652 -0.31%
cputime 16330000 15860000 -2.88%
gc-pause-one 789108229 725555211 -8.05%
gc-pause-total 3945541 3627776 -8.05%
rss 1143660544 1132253184 -1.00%
sys-gc 65609600 65806208 +0.30%
sys-heap 1032388608 1035599872 +0.31%
sys-other 37501632 22777664 -39.26%
sys-stack 8650752 8781824 +1.52%
sys-total 1144150592 1132965568 -0.98%
time 16364602 15891994 -2.89%
virtual-mem 1327296512 1313746944 -1.02%
R=golang-codereviews, dave, khr, rsc, khr
CC=golang-codereviews
https://golang.org/cl/45770044
»»»
R=golang-codereviews
CC=golang-codereviews
https://golang.org/cl/56060043
|
|
Tcmalloc uses 8K, 32K and 64K pages, and in custom setups 256K pages.
Only Chromium uses 4K pages today (in "slow but small" configuration).
The general tendency is to increase page size, because it reduces
metadata size and DTLB pressure.
This change reduces GC pause by ~10% and slightly improves other metrics.
json-1
allocated 8037492 8038689 +0.01%
allocs 105762 105573 -0.18%
cputime 158400000 155800000 -1.64%
gc-pause-one 4412234 4135702 -6.27%
gc-pause-total 2647340 2398707 -9.39%
rss 54923264 54525952 -0.72%
sys-gc 3952624 3928048 -0.62%
sys-heap 46399488 46006272 -0.85%
sys-other 5597504 5290304 -5.49%
sys-stack 393216 393216 +0.00%
sys-total 56342832 55617840 -1.29%
time 158478890 156046916 -1.53%
virtual-mem 256548864 256593920 +0.02%
garbage-1
allocated 2991113 2986259 -0.16%
allocs 62844 62652 -0.31%
cputime 16330000 15860000 -2.88%
gc-pause-one 789108229 725555211 -8.05%
gc-pause-total 3945541 3627776 -8.05%
rss 1143660544 1132253184 -1.00%
sys-gc 65609600 65806208 +0.30%
sys-heap 1032388608 1035599872 +0.31%
sys-other 37501632 22777664 -39.26%
sys-stack 8650752 8781824 +1.52%
sys-total 1144150592 1132965568 -0.98%
time 16364602 15891994 -2.89%
virtual-mem 1327296512 1313746944 -1.02%
R=golang-codereviews, dave, khr, rsc, khr
CC=golang-codereviews
https://golang.org/cl/45770044
|
|
R=khr, dvyukov
CC=golang-codereviews
https://golang.org/cl/51830043
|
|
Record finalizers and heap profile info. Enables
removing the special bit from the heap bitmap. Also
provides a generic mechanism for annotating occasional
heap objects.
finalizers    overhead    per obj
  old         680 B       80 B avg
  new         16 B/span   48 B
profile       overhead    per obj
  old         32 KB       24 B + hash tables
  new         16 B/span   24 B
R=cshapiro, khr, dvyukov, gobot
CC=golang-codereviews
https://golang.org/cl/13314053
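The "generic mechanism for annotating occasional heap objects" can be sketched as a per-span list of records keyed by the object's offset, which is what makes the per-object special bit unnecessary. All names below are illustrative, not the runtime's actual types.

```go
package main

import "fmt"

// special is an annotation attached to one object in a span
// (a finalizer, a heap profile record, ...).
type special struct {
	next   *special
	offset uintptr // object's offset within its span
	kind   string  // "finalizer", "profile", ...
}

// span holds its specials in a list sorted by offset; this is the
// 16 B/span overhead mentioned in the table above.
type span struct {
	specials *special
}

// addSpecial inserts a record for the object at off, keeping the list sorted.
func (s *span) addSpecial(off uintptr, kind string) {
	rec := &special{offset: off, kind: kind}
	p := &s.specials
	for *p != nil && (*p).offset < off {
		p = &(*p).next
	}
	rec.next = *p
	*p = rec
}

// lookup reports the kinds attached to the object at off; because the
// list is sorted, the scan stops early for objects with no specials.
func (s *span) lookup(off uintptr) []string {
	var kinds []string
	for r := s.specials; r != nil && r.offset <= off; r = r.next {
		if r.offset == off {
			kinds = append(kinds, r.kind)
		}
	}
	return kinds
}

func main() {
	var s span
	s.addSpecial(96, "finalizer")
	s.addSpecial(32, "profile")
	fmt.Println(s.lookup(32))
}
```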
|
|
On the plus side, we don't need to change the bits when mallocing
pointerless objects. On the other hand, we need to mark objects in the
free lists during GC. But the free lists are small at GC time, so it
should be a net win.
benchmark old ns/op new ns/op delta
BenchmarkMalloc8 40 33 -17.65%
BenchmarkMalloc16 45 38 -15.72%
BenchmarkMallocTypeInfo8 58 59 +0.85%
BenchmarkMallocTypeInfo16 63 64 +1.10%
R=golang-dev, rsc, dvyukov
CC=cshapiro, golang-dev
https://golang.org/cl/41040043
|
|
Output for an allocation and free (sweep) follows
MProf_Malloc(p=0xc2100210a0, size=0x50, type=0x0 <single object>)
#0 0x46ee15 runtime.mallocgc /usr/local/google/home/cshapiro/go/src/pkg/runtime/malloc.goc:141
#1 0x47004f runtime.settype_flush /usr/local/google/home/cshapiro/go/src/pkg/runtime/malloc.goc:612
#2 0x45f92c gc /usr/local/google/home/cshapiro/go/src/pkg/runtime/mgc0.c:2071
#3 0x45f89e mgc /usr/local/google/home/cshapiro/go/src/pkg/runtime/mgc0.c:2050
#4 0x45258b runtime.mcall /usr/local/google/home/cshapiro/go/src/pkg/runtime/asm_amd64.s:179
MProf_Free(p=0xc2100210a0, size=0x50)
#0 0x46ee15 runtime.mallocgc /usr/local/google/home/cshapiro/go/src/pkg/runtime/malloc.goc:141
#1 0x47004f runtime.settype_flush /usr/local/google/home/cshapiro/go/src/pkg/runtime/malloc.goc:612
#2 0x45f92c gc /usr/local/google/home/cshapiro/go/src/pkg/runtime/mgc0.c:2071
#3 0x45f89e mgc /usr/local/google/home/cshapiro/go/src/pkg/runtime/mgc0.c:2050
#4 0x45258b runtime.mcall /usr/local/google/home/cshapiro/go/src/pkg/runtime/asm_amd64.s:179
R=golang-dev, dvyukov, rsc, cshapiro
CC=golang-dev
https://golang.org/cl/21990045
|
|
Currently lots of sys allocations are not accounted in any of the XxxSys stats,
including the GC bitmap, spans table, GC roots blocks, GC finalizer blocks,
iface table, netpoll descriptors and more. Up to ~20% can be unaccounted.
This change introduces 2 new stats: GCSys and OtherSys for GC metadata
and all other misc allocations, respectively.
Also ensures that all XxxSys indeed sum up to Sys. All sys memory allocation
functions require the stat for accounting, so that it's impossible to miss something.
Also fix updating of mcache_sys/inuse; they were not updated after deallocation.
test/bench/garbage/parser before:
Sys 670064344
HeapSys 610271232
StackSys 65536
MSpanSys 14204928
MCacheSys 16384
BuckHashSys 1439992
after:
Sys 670064344
HeapSys 610271232
StackSys 65536
MSpanSys 14188544
MCacheSys 16384
BuckHashSys 3194304
GCSys 39198688
OtherSys 3129656
Fixes #5799.
R=rsc, dave, alex.brainman
CC=golang-dev
https://golang.org/cl/12946043
|
|
Remove all hashtable-specific GC code.
Fixes bug 6119.
R=cshapiro, dvyukov, khr
CC=golang-dev
https://golang.org/cl/13078044
|
|
the use of the flag, especially for objects which actually do have
pointers but we don't want the GC to scan them.
R=golang-dev, cshapiro
CC=golang-dev
https://golang.org/cl/13181045
|
|
Originally the requirement was f(x) where f's argument is
exactly x's type.
CL 11858043 relaxed the requirement in a non-standard
way: f's argument must be exactly x's type or interface{}.
If we're going to relax the requirement, it should be done
in a way consistent with the rest of Go. This CL allows f's
argument to have any type for which x is assignable;
that's the same requirement the compiler would impose
if compiling f(x) directly.
Fixes #5368.
R=dvyukov, bradfitz, pieter
CC=golang-dev
https://golang.org/cl/12895043
|
|
Fixes #5584.
R=golang-dev, chaishushan, alex.brainman
CC=golang-dev
https://golang.org/cl/12720043
|
|
Remove dead code related to allocation of type metadata with SysAlloc.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12311045
|
|
Fixes #5368.
R=golang-dev, dvyukov
CC=golang-dev, rsc
https://golang.org/cl/11858043
|
|
Make it accept the type, and combine the flags.
Several reasons for the change:
1. mallocgc and settype must be atomic wrt GC
2. settype is called from only one place now
3. it will help performance (eventually settype
functionality must be combined with markallocated)
4. flags are easier to read now (no mallocgc(sz, 0, 1, 0) anymore)
R=golang-dev, iant, nightlyone, rsc, dave, khr, bradfitz, r
CC=golang-dev
https://golang.org/cl/10136043
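Point 4 above, flag readability, can be illustrated with named bit flags replacing positional int arguments, so a call site reads mallocgc(size, typ, flagNoScan|flagNoZero) instead of mallocgc(sz, 0, 1, 0). The flag names and values below are illustrative:

```go
package main

import "fmt"

// Illustrative allocation flags in the style of this change.
const (
	flagNoScan = 1 << iota // object contains no pointers
	flagNoZero             // memory need not be zeroed
	flagNoProfiling        // skip profiling for this allocation
)

// describe decodes a flag word, showing why named bits read better
// than positional 0/1 arguments at call sites.
func describe(flags int) string {
	s := ""
	if flags&flagNoScan != 0 {
		s += "noscan "
	}
	if flags&flagNoZero != 0 {
		s += "nozero "
	}
	if flags&flagNoProfiling != 0 {
		s += "noprof "
	}
	return s
}

func main() {
	fmt.Println(describe(flagNoScan | flagNoZero))
}
```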
|
|
Also reduce FixAlloc allocation granularity from 128k to 16k;
small programs do not need that much memory for MCache's and MSpan's.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/10140044
|
|
Count only the number of frees; everything else is derivable
and does not need to be counted on every malloc.
benchmark old ns/op new ns/op delta
BenchmarkMalloc8 68 66 -3.07%
BenchmarkMalloc16 75 70 -6.48%
BenchmarkMallocTypeInfo8 102 97 -4.80%
BenchmarkMallocTypeInfo16 108 105 -2.78%
R=golang-dev, dave, rsc
CC=golang-dev
https://golang.org/cl/9776043
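The accounting trick is that the total malloc count is derivable on demand as frees plus currently-live objects, so the malloc fast path no longer touches a counter. A minimal sketch with illustrative names:

```go
package main

import "fmt"

// sizeClassStats sketches the derived-counter idea: only frees are
// counted eagerly; live objects can be recovered by scanning spans
// when stats are actually read.
type sizeClassStats struct {
	nfree uint64 // incremented on free (off the malloc fast path)
	live  uint64 // objects currently allocated, computed on demand
}

// nmalloc derives the total allocation count instead of counting it
// on every malloc: every object ever allocated is either freed or live.
func (s *sizeClassStats) nmalloc() uint64 {
	return s.nfree + s.live
}

func main() {
	s := sizeClassStats{nfree: 7, live: 3}
	fmt.Println(s.nmalloc()) // 10 objects were ever allocated
}
```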
|
|
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/10010046
|
|
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
Also can be used for type info allocation, itab allocation.
There are also several places in GC where we do the same thing,
they can be changed to use persistentalloc().
Also can be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
Reincarnation of the committed and rolled-back https://golang.org/cl/9805043
The latent bugs that it revealed are fixed:
https://golang.org/cl/9837049
https://golang.org/cl/9778048
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9778049
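The structure of persistentalloc(), a caching wrapper that carves small never-freed allocations out of larger chunks from the page allocator, can be sketched as below. The chunk size and sysAlloc stand-in are illustrative; in the runtime, SysAlloc goes to the OS.

```go
package main

import "fmt"

// Carve never-freed allocations out of cached chunks so that each small
// request does not pay a full trip to the underlying page allocator.
const chunkSize = 64 << 10 // illustrative chunk size

var (
	chunk    []byte
	chunkOff int
	sysCalls int // trips to the underlying allocator (for demonstration)
)

// sysAlloc stands in for the runtime's SysAlloc (an OS memory request).
func sysAlloc(n int) []byte { sysCalls++; return make([]byte, n) }

// persistentalloc returns n bytes that will never be freed.
func persistentalloc(n int) []byte {
	if n > chunkSize {
		return sysAlloc(n)
	}
	if chunk == nil || chunkOff+n > len(chunk) {
		chunk = sysAlloc(chunkSize)
		chunkOff = 0
	}
	p := chunk[chunkOff : chunkOff+n]
	chunkOff += n
	return p
}

func main() {
	for i := 0; i < 1000; i++ {
		persistentalloc(256) // 1000 small allocations...
	}
	fmt.Println(sysCalls) // ...cost only 4 trips to sysAlloc
}
```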
|
|
As was discussed in cl/9791044.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/9853046
|
|
This depends on 9791044 (runtime: allocate page table lazily).
Once the page table is moved out of the heap, the heap becomes small.
This removes unnecessary dereferences during heap access.
No logical changes.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9802043
|
|
This removes the 256MB memory allocation at startup,
which conflicts with ulimit.
It will also allow eliminating an unnecessary memory dereference in the GC,
because the page table is usually mapped at a known address.
Update #5049.
Update #5236.
R=golang-dev, khr, r, khr, rsc
CC=golang-dev
https://golang.org/cl/9791044
|
|
multiple failures on amd64
««« original CL description
runtime: introduce helper persistentalloc() function
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
Also can be used for type info allocation, itab allocation.
There are also several places in GC where we do the same thing,
they can be changed to use persistentalloc().
Also can be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
R=golang-dev, daniel.morsing, khr
CC=golang-dev
https://golang.org/cl/9805043
»»»
R=golang-dev
CC=golang-dev
https://golang.org/cl/9822043
|
|
benchmark old ns/op new ns/op delta
BenchmarkMalloc8 68 62 -8.63%
BenchmarkMalloc16 75 69 -7.94%
BenchmarkMallocTypeInfo8 102 98 -3.73%
BenchmarkMallocTypeInfo16 108 103 -4.63%
R=golang-dev, dave, khr
CC=golang-dev
https://golang.org/cl/9790043
|
|
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
Also can be used for type info allocation, itab allocation.
There are also several places in GC where we do the same thing,
they can be changed to use persistentalloc().
Also can be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
R=golang-dev, daniel.morsing, khr
CC=golang-dev
https://golang.org/cl/9805043
|
|
R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/9648044
|
|
Currently per-sizeclass stats are lost for destroyed MCache's; this patch fixes that.
Also, only update mstats.heap_alloc on heap operations, because that's the only
stat that needs to be promptly updated. Everything else needs to be up to date only in ReadMemStats().
R=golang-dev, remyoudompheng, dave, iant
CC=golang-dev
https://golang.org/cl/9207047
|
|
The nlistmin/size thresholds are copied from tcmalloc,
but are unnecessary for Go malloc. We do not do explicit
frees into MCache. For the sparse cases when we do (mainly hashmap),
simpler logic will do.
R=rsc, dave, iant
CC=gobot, golang-dev, r, remyoudompheng
https://golang.org/cl/9373043
|
|
Finer-grained transfers were relevant with per-M caches;
with per-P caches they are not relevant and are harmful for performance.
For the few small size classes where it makes a difference,
it's fine to grab the whole span (4K).
benchmark old ns/op new ns/op delta
BenchmarkMalloc 42 40 -4.45%
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/9374043
|
|
Also change table type from int32[] to int8[] to save space in L1$.
benchmark old ns/op new ns/op delta
BenchmarkMalloc 42 40 -4.68%
R=golang-dev, bradfitz, r
CC=golang-dev
https://golang.org/cl/9199044
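The table shrink works because there are far fewer than 128 size classes, so each entry of the size-to-class lookup table fits in an int8, quartering its footprint in the L1 cache versus int32. A sketch with illustrative classes and granularity:

```go
package main

import "fmt"

// Illustrative size classes (class 0 is unused).
var classToSize = []int{0, 8, 16, 32, 48, 64}

// sizeToClass8[i] gives the class for a request of i*8 bytes.
// int8 entries suffice because there are far fewer than 128 classes;
// the smaller table is friendlier to the L1 cache.
var sizeToClass8 = []int8{0, 1, 2, 3, 3, 4, 4, 5, 5}

// sizeClass maps a request size to its size class via the compact table.
func sizeClass(n int) int {
	return int(sizeToClass8[(n+7)/8])
}

func main() {
	// A 20-byte object lands in the 32-byte class.
	fmt.Println(classToSize[sizeClass(20)])
}
```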
|
|
Update #5236
Update #5402
This CL reduces gofmt's committed memory from 545864 KiB to 139568 KiB.
Note: Go 1.0.3 uses about 70MiB.
R=golang-dev, r, iant, nightlyone
CC=golang-dev
https://golang.org/cl/9245043
|
|
Unions break precise GC.
Update #5193.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/8368044
|
|
This changeset adds a mostly-precise garbage collection of channels.
The garbage collection support code in the linker isn't recognizing
channel types yet.
Fixes issue http://stackoverflow.com/questions/14712586/memory-consumption-skyrocket
R=dvyukov, rsc, bradfitz
CC=dave, golang-dev, minux.ma, remyoudompheng
https://golang.org/cl/7307086
|
|
Step 1 of http://golang.org/s/go11func.
R=golang-dev, r, daniel.morsing, remyoudompheng
CC=golang-dev
https://golang.org/cl/7393045
|
|
Before, the mheap structure was in the bss,
but it's quite large (today, 256 MB, much of
which is never actually paged in), and it makes
Go binaries run afoul of exec-time bss size
limits on some BSD systems.
Fixes #4447.
R=golang-dev, dave, minux.ma, remyoudompheng, iant
CC=golang-dev
https://golang.org/cl/7307122
|
|
Fixes #4090.
R=golang-dev, iant, bradfitz, dsymonds
CC=golang-dev
https://golang.org/cl/7229070
|
|
For gccgo runtime and Darwin where -fno-common is the default.
R=iant, dave
CC=golang-dev
https://golang.org/cl/7094061
|
|
R=rsc, dvyukov, remyoudompheng, dave, minux.ma, bradfitz
CC=golang-dev
https://golang.org/cl/6945069
|
|
Incorporates code from CL 6828055.
Fixes #2142.
R=golang-dev, iant, devon.odell
CC=golang-dev
https://golang.org/cl/6826088
|
|
R=golang-dev
CC=golang-dev, remyoudompheng, rsc
https://golang.org/cl/6780059
|
|
R=golang-dev, minux.ma, dave, rsc
CC=golang-dev
https://golang.org/cl/6643046
|
|
R=rsc
CC=golang-dev
https://golang.org/cl/6569057
|
|
PauseNs is a circular buffer of recent pause times, and the
most recent one is at [((NumGC-1)+256)%256].
Also fix comments cross-linking the Go and C definition of
various structs.
R=golang-dev, rsc, bradfitz
CC=golang-dev
https://golang.org/cl/6657047
|
|
R=rsc
CC=golang-dev
https://golang.org/cl/6554060
|
|
This CL makes the runtime understand that the type of
the len or cap of a map, slice, or string is 'int', not 'int32',
and it is also careful to distinguish between function arguments
and results of type 'int' vs type 'int32'.
In the runtime, the new typedefs 'intgo' and 'uintgo' refer
to Go int and uint. The C types int and uint continue to be
unavailable (cause intentional compile errors).
This CL does not change the meaning of int, but it should make
the eventual change of the meaning of int on amd64 a bit
smoother.
Update #2188.
R=iant, r, dave, remyoudompheng
CC=golang-dev
https://golang.org/cl/6551067
|
|
It will be required for a scheduler that maintains
GOMAXPROCS MCache's.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/6350062
|
|
linux/arm OMAP4 pandaboard
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 68723297000 37026214000 -46.12%
BenchmarkFannkuch11 34962402000 35958435000 +2.85%
BenchmarkGobDecode 137298600 124182150 -9.55%
BenchmarkGobEncode 60717160 60006700 -1.17%
BenchmarkGzip 5647156000 5550873000 -1.70%
BenchmarkGunzip 1196350000 1198670000 +0.19%
BenchmarkJSONEncode 863012800 782898000 -9.28%
BenchmarkJSONDecode 3312989000 2781800000 -16.03%
BenchmarkMandelbrot200 45727540 45703120 -0.05%
BenchmarkParse 74781800 59990840 -19.78%
BenchmarkRevcomp 140043650 139462300 -0.42%
BenchmarkTemplate 6467682000 5832153000 -9.83%
benchmark old MB/s new MB/s speedup
BenchmarkGobDecode 5.59 6.18 1.11x
BenchmarkGobEncode 12.64 12.79 1.01x
BenchmarkGzip 3.44 3.50 1.02x
BenchmarkGunzip 16.22 16.19 1.00x
BenchmarkJSONEncode 2.25 2.48 1.10x
BenchmarkJSONDecode 0.59 0.70 1.19x
BenchmarkParse 0.77 0.97 1.26x
BenchmarkRevcomp 18.15 18.23 1.00x
BenchmarkTemplate 0.30 0.33 1.10x
darwin/386 core duo
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 10591616577 9678245733 -8.62%
BenchmarkFannkuch11 10758473315 10749303846 -0.09%
BenchmarkGobDecode 34379785 34121250 -0.75%
BenchmarkGobEncode 23523721 23475750 -0.20%
BenchmarkGzip 2486191492 2446539568 -1.59%
BenchmarkGunzip 444179328 444250293 +0.02%
BenchmarkJSONEncode 221138507 219757826 -0.62%
BenchmarkJSONDecode 1056034428 1048975133 -0.67%
BenchmarkMandelbrot200 19862516 19868346 +0.03%
BenchmarkRevcomp 3742610872 3724821662 -0.48%
BenchmarkTemplate 960283112 944791517 -1.61%
benchmark old MB/s new MB/s speedup
BenchmarkGobDecode 22.33 22.49 1.01x
BenchmarkGobEncode 32.63 32.69 1.00x
BenchmarkGzip 7.80 7.93 1.02x
BenchmarkGunzip 43.69 43.68 1.00x
BenchmarkJSONEncode 8.77 8.83 1.01x
BenchmarkJSONDecode 1.84 1.85 1.01x
BenchmarkRevcomp 67.91 68.24 1.00x
BenchmarkTemplate 2.02 2.05 1.01x
R=rsc, 0xe2.0x9a.0x9b, mirtchovski
CC=golang-dev, minux.ma
https://golang.org/cl/6297047
|
|
Also bump MaxGcproc to 8.
benchmark old ns/op new ns/op delta
Parser 3796323000 3763880000 -0.85%
Parser-2 3591752500 3518560250 -2.04%
Parser-4 3423825250 3334955250 -2.60%
Parser-8 3304585500 3267014750 -1.14%
Parser-16 3313615750 3286160500 -0.83%
Tree 984128500 942501166 -4.23%
Tree-2 932564444 883266222 -5.29%
Tree-4 835831000 799912777 -4.30%
Tree-8 819238500 789717333 -3.73%
Tree-16 880837833 837840055 -5.13%
Tree2 604698100 579716900 -4.13%
Tree2-2 372414500 356765200 -4.20%
Tree2-4 187488100 177455900 -5.56%
Tree2-8 136315300 102086700 -25.11%
Tree2-16 93725900 76705800 -22.18%
ParserPause 157441210 166202783 +5.56%
ParserPause-2 93842650 85199900 -9.21%
ParserPause-4 56844404 53535684 -5.82%
ParserPause-8 35739446 30767613 -16.15%
ParserPause-16 32718255 27212441 -16.83%
TreePause 29610557 29787725 +0.60%
TreePause-2 24001659 20674421 -13.86%
TreePause-4 15114887 12842781 -15.03%
TreePause-8 13128725 10741747 -22.22%
TreePause-16 16131360 12506901 -22.47%
Tree2Pause 2673350920 2651045280 -0.83%
Tree2Pause-2 1796999200 1709350040 -4.88%
Tree2Pause-4 1163553320 1090706480 -6.67%
Tree2Pause-8 987032520 858916360 -25.11%
Tree2Pause-16 864758560 809567480 -6.81%
ParserLastPause 280537000 289047000 +3.03%
ParserLastPause-2 183030000 166748000 -8.90%
ParserLastPause-4 105817000 91552000 -13.48%
ParserLastPause-8 65127000 53288000 -18.18%
ParserLastPause-16 45258000 38334000 -15.30%
TreeLastPause 45072000 51449000 +12.39%
TreeLastPause-2 39269000 37866000 -3.57%
TreeLastPause-4 23564000 20649000 -12.37%
TreeLastPause-8 20881000 15807000 -24.30%
TreeLastPause-16 23297000 17309000 -25.70%
Tree2LastPause 6046912000 5797120000 -4.13%
Tree2LastPause-2 3724034000 3567592000 -4.20%
Tree2LastPause-4 1874831000 1774524000 -5.65%
Tree2LastPause-8 1363108000 1020809000 -12.79%
Tree2LastPause-16 937208000 767019000 -22.18%
R=rsc, 0xe2.0x9a.0x9b
CC=golang-dev
https://golang.org/cl/6223050
|
|
Parallel GC needs to know in advance how many helper threads will be there.
Hopefully it's the last patch before I can tackle the parallel sweep phase.
The benchmarks are unaffected.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6200064
|