In ASAN mode, extra memory allocations alter the object size and impact the scansize computation. Skip this test when ASAN is enabled.
Fixes #77857.
Fixes #77858.
Fixes #77859.
Change-Id: I21027ff20c411a0fda7a50446b2202871baa2262
Reviewed-on: https://go-review.googlesource.com/c/go/+/750140
Reviewed-by: Keith Randall <khr@golang.org>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
When allocating arrays, scan size should be the position of the last pointer in the last object.
Small array allocations containing only pointers (for example, the backing store for a []*int)
have a fast path in the allocator for unrolling their bits. However, this fast path doesn't correctly
calculate the scan size for the object, and the result is that only the first pointer is actually counted.
This might lead the GC to think it has less work to do than it actually does.
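In miniature, the corrected rule looks like this (a standalone sketch, not the runtime's actual code):

// scanSizePtrArray sketches the fix: for a backing store of n
// pointer-sized slots, the scan size must extend through the end of
// the last pointer. The buggy fast path effectively returned ptrSize.
func scanSizePtrArray(n, ptrSize uintptr) uintptr {
	return n * ptrSize // e.g. 4 pointers on 64-bit => 32 bytes scanned
}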
Fixes #77573
Change-Id: I4b9db6db069e3fb64e4fcbbbc89b68585a71d483
Reviewed-on: https://go-review.googlesource.com/c/go/+/744840
Reviewed-by: Mark Freeman <markfreeman@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Wayne Zuo <wdvxdr1123@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
This failure mode seems to have become more common on Windows. I
suspect the randomized heap base address has something to do with it,
but I'm not 100% sure.
What is certain is that we're running out of hints, since we're seeing
failures where mheap_.arenaHints is nil and GetNextArenaHint doesn't
actually check for that.
At the very least we can check for that and skip the test; in this case
there's not much else we can do.
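A minimal sketch of the guard, with hypothetical names standing in for the runtime internals:

type arenaHint struct {
	addr uintptr
	next *arenaHint
}

var arenaHints *arenaHint // stand-in for mheap_.arenaHints

// getNextArenaHint sketches the missing check: return ok=false instead
// of dereferencing a nil hint list, so the test can skip.
func getNextArenaHint() (addr uintptr, ok bool) {
	if arenaHints == nil {
		return 0, false // out of hints; nothing useful to test
	}
	return arenaHints.addr, true
}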
Fixes #76566.
Change-Id: I8ccc8994806b6c95e3157eb296b09705637564b3
Reviewed-on: https://go-review.googlesource.com/c/go/+/726527
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
This CL is part of a set of CLs that attempt to reduce how much work the
GC must do. See the design in https://go.dev/design/74299-runtime-freegc
This CL updates the smallNoScanStub stub in malloc_stubs.go to reuse
heap objects that have been freed by runtime.freegc calls, and generates
the corresponding size-specialized code in malloc_generated.go.
This CL only adds support in the specialized mallocs for noscan
heap objects (objects without pointers). A later CL handles objects
with pointers.
While we are here, we leave a couple of breadcrumbs in mkmalloc.go on
how to do the generation.
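A standalone sketch of the reuse idea under stated assumptions (the real code is generated, size-specialized, and runtime-internal):

// sizeClassCache models one size class. freegc-style frees push
// addresses; the allocation fast path pops them before falling back
// to the normal refill path.
type sizeClassCache struct {
	reusable []uintptr
}

func (c *sizeClassCache) alloc(refill func() uintptr) uintptr {
	if n := len(c.reusable); n > 0 {
		p := c.reusable[n-1] // reuse the most recently freed object
		c.reusable = c.reusable[:n-1]
		return p
	}
	return refill() // normal bump/refill allocation
}

func (c *sizeClassCache) free(p uintptr) {
	c.reusable = append(c.reusable, p) // immediately reusable, no GC cycle
}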
Updates #74299
Change-Id: I2657622601a27211554ee862fce057e101767a70
Reviewed-on: https://go-review.googlesource.com/c/go/+/715761
Reviewed-by: Junyang Shao <shaojunyang@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
|
|
This CL is part of a set of CLs that attempt to reduce how much work the
GC must do. See the design in https://go.dev/design/74299-runtime-freegc
This CL adds a better test of assist credit handling when heap objects
are being reused after a runtime.freegc call.
The main approach is bracketing alloc/free pairs with measurements
of the assist credit before and after, and hoping to see a net zero
change in the assist credit.
However, validating the desired behavior is perhaps a bit subtle.
To help stabilize the measurements, we do acquirem in the test code
to avoid being preempted during the measurements to reduce other code's
ability to adjust the assist credit while we are measuring, and
we also reduce GOMAXPROCS to 1.
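In outline, one bracketed measurement looks roughly like this (all helpers here are hypothetical stand-ins for the runtime-internal hooks the real test uses):

func checkNetZeroCredit(t *testing.T, size uintptr) {
	acquirem() // pin to the M so preemption can't perturb the numbers
	defer releasem()
	before := assistCredit()   // assumed accessor for g.gcAssistBytes
	p := allocNoScan(size)     // allocate...
	freegc(p, size, true)      // ...and free, making the object reusable
	after := assistCredit()
	if before != after {
		t.Errorf("assist credit drifted: before=%d after=%d", before, after)
	}
}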
This test currently does fail if we deliberately introduce bugs
in the runtime.freegc implementation such as if we:
- never adjust the assist credit when reusing an object, or
- always adjust the assist credit when reusing an object, or
- deliberately mishandle internal fragmentation.
The two main cases of current interest for testing runtime.freegc
are when, over the course of our bracketed measurements, gcBlackenEnabled
is either true or false. The test attempts to exercise both of those
cases by running the GC continually in the background (which appears
effective based on logging and on how our deliberately introduced bugs fail).
This passes ~10K test executions locally via stress.
A small note to the future: a previous incarnation of this test (circa
patchset 11 of this CL) did not do acquirem but had an approach of
ignoring certain measurements, which also was able to pass ~10K runs
via stress. The current version in this CL is simpler, but
recording the existence of the prior version here in case it is
useful in the future. (Hopefully not.)
Updates #74299
Change-Id: I46c7e0295d125f5884fee0cc3d3d31aedc7e5ff4
Reviewed-on: https://go-review.googlesource.com/c/go/+/717520
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Junyang Shao <shaojunyang@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
This CL is part of a set of CLs that attempt to reduce how much work the
GC must do. See the design in https://go.dev/design/74299-runtime-freegc
This CL adds runtime.freegc:
func freegc(ptr unsafe.Pointer, size uintptr, noscan bool)
Memory freed via runtime.freegc is made immediately reusable for
the next allocation in the same size class, without waiting for a
GC cycle, and hence can dramatically reduce pressure on the GC. A sample
microbenchmark included below shows strings.Builder operating roughly
2x faster.
An experimental modification to reflect to use runtime.freegc
and then using that reflect with json/v2 gave reported memory
allocation reductions of -43.7%, -32.9%, -21.9%, -22.0%, -1.0%
for the 5 official real-world unmarshalling benchmarks from
go-json-experiment/jsonbench by the authors of json/v2, covering
the CanadaGeometry through TwitterStatus datasets.
Note: there is no intent to modify the standard library to have
explicit calls to runtime.freegc, and of course such an ability
would never be exposed to end-user code.
Later CLs in this stack teach the compiler how to automatically
insert runtime.freegc calls when it can prove it is safe to do so.
(The reflect modification and other experimental changes to
the standard library were just that -- experiments. It was
very helpful while initially developing runtime.freegc to see
more complex uses and closer-to-real-world benchmark results
prior to updating the compiler.)
This CL only addresses noscan span classes (heap objects without
pointers), such as the backing memory for a []byte or string. A
follow-on CL adds support for heap objects with pointers.
If we update strings.Builder to explicitly call runtime.freegc on its
internal buf after a resize operation (but without freeing the usually
final incarnation of buf that will be returned to the user as a string),
we can see some nice benchmark results on the existing strings
benchmarks that call Builder.Write N times and then call Builder.String.
Here, the (uncommon) case of a single Builder.Write is not helped (given
it never resizes after first alloc if there is only one Write), but the
impact grows such that it is up to ~2x faster as there are more resize
operations due to more strings.Builder.Write calls:
│ disabled.out │ new-free-20.txt │
│ sec/op │ sec/op vs base │
BuildString_Builder/1Write_36Bytes_NoGrow-4 55.82n ± 2% 55.86n ± 2% ~ (p=0.794 n=20)
BuildString_Builder/2Write_36Bytes_NoGrow-4 125.2n ± 2% 115.4n ± 1% -7.86% (p=0.000 n=20)
BuildString_Builder/3Write_36Bytes_NoGrow-4 224.0n ± 1% 188.2n ± 2% -16.00% (p=0.000 n=20)
BuildString_Builder/5Write_36Bytes_NoGrow-4 239.1n ± 9% 205.1n ± 1% -14.20% (p=0.000 n=20)
BuildString_Builder/8Write_36Bytes_NoGrow-4 422.8n ± 3% 325.4n ± 1% -23.04% (p=0.000 n=20)
BuildString_Builder/10Write_36Bytes_NoGrow-4 436.9n ± 2% 342.3n ± 1% -21.64% (p=0.000 n=20)
BuildString_Builder/100Write_36Bytes_NoGrow-4 4.403µ ± 1% 2.381µ ± 2% -45.91% (p=0.000 n=20)
BuildString_Builder/1000Write_36Bytes_NoGrow-4 48.28µ ± 2% 21.38µ ± 2% -55.71% (p=0.000 n=20)
See the design document for more discussion of the strings.Builder case.
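Heavily simplified, the Builder experiment looks like this (runtime_freegc stands in for the internal hook; the real grow logic differs):

import "unsafe"

func (b *Builder) grow(n int) {
	old := b.buf
	buf := make([]byte, len(old), 2*cap(old)+n)
	copy(buf, old)
	b.buf = buf
	if cap(old) != 0 {
		// old is now provably dead: return it to the allocator
		// immediately instead of waiting for a GC cycle.
		runtime_freegc(unsafe.Pointer(unsafe.SliceData(old)), uintptr(cap(old)), true)
	}
}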
For testing, we add tests that attempt to exercise different aspects
of the underlying freegc and mallocgc behavior on the reuse path.
Validating the assist credit manipulations turned out to be subtle,
so a test for that is added in the next CL. There are also
invariant checks added, controlled by consts (primarily the
doubleCheckReusable const currently).
This CL also adds support in runtime.freegc for GODEBUG=clobberfree=1
to immediately overwrite freed memory with 0xdeadbeef, which
can help a higher-level test fail faster in the event of a bug,
and also the GC specifically looks for that pattern and throws
a fatal error if it unexpectedly finds it.
A later CL (currently experimental) adds GODEBUG=clobberfree=2,
which uses mprotect (or VirtualProtect on Windows) to set
freed memory to fault if read or written, until the runtime
later unprotects the memory on the mallocgc reuse path.
For the cases where a normal allocation is happening without any reuse,
some initial microbenchmarks suggest the impact of these changes could
be small to negligible (at least with GOAMD64=v3):
goos: linux
goarch: amd64
pkg: runtime
cpu: AMD EPYC 7B13
│ base-512M-v3.bench │ ps16-512M-goamd64-v3.bench │
│ sec/op │ sec/op vs base │
Malloc8-16 11.01n ± 1% 10.94n ± 1% -0.68% (p=0.038 n=20)
Malloc16-16 17.15n ± 1% 17.05n ± 0% -0.55% (p=0.007 n=20)
Malloc32-16 18.65n ± 1% 18.42n ± 0% -1.26% (p=0.000 n=20)
MallocTypeInfo8-16 18.63n ± 0% 18.36n ± 0% -1.45% (p=0.000 n=20)
MallocTypeInfo16-16 22.32n ± 0% 22.65n ± 0% +1.50% (p=0.000 n=20)
MallocTypeInfo32-16 23.37n ± 0% 23.89n ± 0% +2.23% (p=0.000 n=20)
geomean 18.02n 18.01n -0.05%
These last benchmark results include the runtime updates to support
span classes with pointers (which was originally part of this CL,
but later split out for ease of review).
Updates #74299
Change-Id: Icceaa0f79f85c70cd1a718f9a4e7f0cf3d77803c
Reviewed-on: https://go-review.googlesource.com/c/go/+/673695
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Junyang Shao <shaojunyang@google.com>
|
|
This CL adds a generator function in runtime/_mkmalloc to generate
specialized mallocgc functions for sizes up through 512 bytes. (That's
the limit where it's possible to end up in the no header case when there
are scan bits, and where the benefits of the specialized functions
significantly diminish according to microbenchmarks.) If the
specializedmalloc GOEXPERIMENT is turned on, mallocgc will call one of
these functions in the no header case.
malloc_generated.go is the generated file containing the specialized
malloc functions.
malloc_stubs.go contains the templates that will be stamped to create
the specialized malloc functions.
malloc_tables_generated contains the tables that mallocgc will use to
select the specialized function to call.
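The dispatch has roughly this shape (a sketch with assumed names and granularity; the real tables and functions are generated):

const specializedMax = 512

// One entry per 8-byte size step is an assumption for illustration.
var mallocTable [specializedMax/8 + 1]func(size uintptr) unsafe.Pointer

func mallocFast(size uintptr) unsafe.Pointer {
	if size <= specializedMax {
		if fn := mallocTable[(size+7)/8]; fn != nil {
			return fn(size) // size-specialized, no header path
		}
	}
	return mallocSlow(size) // general mallocgc path
}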
I've had to update the two stdlib_test.go files to account for the new
submodule mkmalloc is in. mprof_test accounts for the changes in the
stacks since different functions can be called in some cases.
I still need to investigate heapsampling.go.
Change-Id: Ia0f68dccdf1c6a200554ae88657cf4d686ace819
Reviewed-on: https://go-review.googlesource.com/c/go/+/665835
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Matloob <matloob@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
In test files, using testenv.Executable is more reliable than
os.Executable or os.Args[0].
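A minimal sketch of the pattern:

func TestHelper(t *testing.T) {
	exe := testenv.Executable(t) // reliable even when os.Args[0] is relative
	cmd := testenv.Command(t, exe, "-test.run=^TestHelper$")
	cmd.Env = append(cmd.Environ(), "GO_WANT_HELPER_PROCESS=1")
	if err := cmd.Run(); err != nil {
		t.Fatal(err)
	}
}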
Change-Id: I88e577efeabc20d02ada27bf706ae4523129128e
Reviewed-on: https://go-review.googlesource.com/c/go/+/651955
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
|
|
First, skip all the allocation count tests.
In some cases this aligns with existing skips for -race, but in others
we've got new issues. These are debug modes, so some performance loss is
expected, and this is clearly no worse than today where the tests fail.
Next, skip internal linking and static linking tests for msan and asan.
With asan we get an explicit failure that neither are supported by the C
and/or Go compilers. With msan, we only get the Go compiler telling us
internal linking is unavailable. With static linking, we segfault
instead. Filed #70080 to track that.
Next, skip some malloc tests with asan that don't quite work because of
the redzone.
This is because of some sizeclass assumptions that get broken with the
redzone and the fact that the tiny allocator is effectively disabled
(again, due to the redzone).
Next, skip some runtime/pprof tests with asan, because of extra
allocations.
Next, skip some malloc tests with asan that also fail because of extra
allocations.
Next, fix up memstats accounting for arenas when asan is enabled. There
is a bug where more is added to the stats than subtracted. This also
simplifies the accounting a little.
Next, skip race tests with msan or asan enabled; they're mutually
incompatible.
Fixes #70054.
Fixes #64256.
Fixes #64257.
For #70079.
For #70080.
Change-Id: I99c02a0b9d621e44f1f918b307aa4a4944c3ec60
Cq-Include-Trybots: luci.golang.try:gotip-linux-amd64-asan-clang15,gotip-linux-amd64-msan-clang15
Reviewed-on: https://go-review.googlesource.com/c/go/+/622855
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Bypass: Michael Knyszek <mknyszek@google.com>
|
|
Use ^ and $ in the -run flag regular expression value when the intention
is to invoke a single named test. This removes the reliance on there not
being another similarly named test to achieve the intended result.
In particular, package syscall has tests named TestUnshareMountNameSpace
and TestUnshareMountNameSpaceChroot that both trigger themselves setting
GO_WANT_HELPER_PROCESS=1 to run alternate code in a helper process. As a
consequence of overlap in their test names, the former was inadvertently
triggering one too many helpers.
Spotted while reviewing CL 525196. Apply the same change in other places
to make it easier for code readers to see that said tests aren't running
extraneous tests. The unlikely cases of -run=TestSomething intentionally
being used to run all tests that have the TestSomething substring in the
name can be better written as -run=^.*TestSomething.*$ or with a comment
so it is clear it wasn't an oversight.
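The fix in miniature (using os/exec, as the helper-process tests do):

// Before: the unanchored pattern also matches
// TestUnshareMountNameSpaceChroot, spawning an extra helper.
cmd := exec.Command(os.Args[0], "-test.run=TestUnshareMountNameSpace")
// After: anchored, so only the intended test re-runs in the child.
cmd = exec.Command(os.Args[0], "-test.run=^TestUnshareMountNameSpace$")
cmd.Env = append(os.Environ(), "GO_WANT_HELPER_PROCESS=1")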
Change-Id: Iba208aba3998acdbf8c6708e5d23ab88938bfc1e
Reviewed-on: https://go-review.googlesource.com/c/go/+/524948
Reviewed-by: Tobias Klauser <tobias.klauser@gmail.com>
Auto-Submit: Dmitri Shuralyov <dmitshur@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Reviewed-by: Kirill Kolyshkin <kolyshkin@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
The compiler is too clever, so the allocations are currently optimized
away. Rewrite the tests so they actually allocate.
Change-Id: I9542e1365120b2ace318360883b0b01ed5670da7
Reviewed-on: https://go-review.googlesource.com/c/go/+/449476
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
|
|
If TestArenaCollision cannot reserve the address range it expects to
reserve, it currently fails somewhat mysteriously. Detect this case
and skip the test. This could lead to test rot if we wind up always
skipping this test, but it's not clear that there's a better answer.
If the test does fail, we now also log what it thinks it reserved so
the failure message is more useful in debugging any issues.
Fixes #49415
Fixes #54597
Change-Id: I05cf27258c1c0a7a3ac8d147f36bf8890820d59b
Reviewed-on: https://go-review.googlesource.com/c/go/+/446877
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Bryan Mills <bcmills@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
|
|
There are several tests in the runtime that need to force various
things to escape to the heap. This CL centralizes this functionality
into runtime.Escape, defined in export_test.
Change-Id: I2de2519661603ad46c372877a9c93efef8e7a857
Reviewed-on: https://go-review.googlesource.com/c/go/+/402178
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Austin Clements <austin@google.com>
Auto-Submit: Austin Clements <austin@google.com>
|
|
And then revert the bootstrap cmd directories and certain testdata.
And adjust tests as needed.
Not reverting the changes in std that are bootstrapped,
because some of those changes would appear in API docs,
and we want to use any consistently.
Instead, rewrite 'any' to 'interface{}' in cmd/dist for those directories
when preparing the bootstrap copy.
A few files changed as a result of running gofmt -w
not because of interface{} -> any but because they
hadn't been updated for the new //go:build lines.
Fixes #49884.
Change-Id: Ie8045cba995f65bd79c694ec77a1b3d1fe01bb09
Reviewed-on: https://go-review.googlesource.com/c/go/+/368254
Trust: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
|
|
TestTinyAllocIssue37262 assumes that all of its allocations will come
from the same tiny allocator (that is, the same P), and that nothing
else will allocate from that tiny allocator while it's running. It can
fail incorrectly if these assumptions aren't met.
Fix this potential test flakiness by disabling preemption during this
test.
As far as I know, this has never happened on the builders. It was
found by mayMoreStackPreempt.
Change-Id: I59f993e0bdbf46a9add842d0e278415422c3f804
Reviewed-on: https://go-review.googlesource.com/c/go/+/366994
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
|
|
Top align allocations in tinyalloc buckets when in race mode.
This will make checkptr checks more reliable, because any code
that modifies a pointer past the end of the object will trigger
a checkptr error.
No test, because we need -race for this to actually kick in. We could
add it to the race detector tests, but the race detector tests are all
geared towards race detector reports, not checkptr reports. Mucking
with parsing reports is more than a test is worth.
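The arithmetic in miniature (a sketch, not the runtime's exact code):

// topAlignedOffset places a size-byte object at the highest aligned
// offset inside a 16-byte tiny block, so a pointer computed past the
// end of the object leaves the block and trips checkptr.
func topAlignedOffset(size, align uintptr) uintptr {
	const tinySize = 16
	return (tinySize - size) &^ (align - 1)
}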
Fixes #38872
Change-Id: Ie56f0fbd1a9385539f6631fd1ac40c3de5600154
Reviewed-on: https://go-review.googlesource.com/c/go/+/315029
Trust: Keith Randall <khr@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
|
|
Currently on 32-bit systems 8-byte fields in a struct have an alignment
of 4 bytes, which means that atomic instructions may fault. This issue
is tracked in #36606.
Our current workaround is to allocate memory and put any such atomically
accessed fields at the beginning of the object. This workaround fails
because the tiny allocator might not align the object right. This case
specifically only happens with 12-byte objects because a type's size is
rounded up to its alignment. So if e.g. we have a type like:
type obj struct {
a uint64
b byte
}
then its size will be 12 bytes, because "a" will require a 4 byte
alignment. This argument may be extended to all objects of size 9-15
bytes.
So, make this workaround work by specifically aligning such objects to 8
bytes on 32-bit systems. This change leaves a TODO to remove the code
once #36606 gets resolved. It also adds a test which will presumably no
longer be necessary (the compiler should enforce the right alignment)
when it gets resolved as well.
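Sketched as a standalone rule (the exact condition used by the runtime may differ):

// tinyAlign bumps the alignment of 9..15 byte objects to 8 on 32-bit
// systems so a leading uint64 field can be accessed atomically.
func tinyAlign(size, align, ptrSize uintptr) uintptr {
	if ptrSize == 4 && size >= 9 && size <= 15 {
		return 8
	}
	return align
}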
Fixes #37262.
Change-Id: I3a34e5b014b3c37ed2e5e75e62d71d8640aa42bc
Reviewed-on: https://go-review.googlesource.com/c/go/+/254057
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com>
|
|
Go 1.14 will drop support for macOS 10.10, see #23011
This reverts CL 155097
Updates #26475
Updates #29340
Change-Id: I64d0275141407313b73068436ee81d13eacc4c76
Reviewed-on: https://go-review.googlesource.com/c/go/+/214058
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
|
|
This change adds a per-p free page cache which the page allocator may
allocate out of without a lock. The change also introduces a completely
lockless page allocator fast path.
Although the cache contains at most 64 pages (and usually less), the
vast majority (85%+) of page allocations are exactly 1 page in size.
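A sketch of the cache's shape (assumed layout, close in spirit to the change; uses math/bits):

const pageSize = 8192 // assumption for illustration

// pageCache holds a 64-page aligned base and a free bitmap that a P
// can allocate from without taking the heap lock.
type pageCache struct {
	base uintptr
	free uint64 // bit i set => page i is free
}

func (c *pageCache) allocPage() uintptr {
	if c.free == 0 {
		return 0 // cache empty: fall back to the locked allocator
	}
	i := uintptr(bits.TrailingZeros64(c.free))
	c.free &^= 1 << i
	return c.base + i*pageSize
}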
Updates #35112.
Change-Id: I170bf0a9375873e7e3230845eb1df7e5cf741b78
Reviewed-on: https://go-review.googlesource.com/c/go/+/195701
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>
|
|
This change removes the old page allocator from the runtime.
Updates #35112.
Change-Id: Ib20e1c030f869b6318cd6f4288a9befdbae1b771
Reviewed-on: https://go-review.googlesource.com/c/go/+/195700
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
|
|
This change integrates all the bits and pieces of the new page allocator
into the runtime, behind a global constant.
Updates #35112.
Change-Id: I6696bde7bab098a498ab37ed2a2caad2a05d30ec
Reviewed-on: https://go-review.googlesource.com/c/go/+/201764
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
|
|
This change disables the test TestArenaCollision on Darwin in race mode
to deal with the fact that Darwin 10.10 must use MAP_FIXED in race mode
to ensure we retain our heap in a particular portion of the address
space which the race detector needs. The test specifically checks to
make sure a manually mapped region's space isn't re-used, which is
definitely possible with MAP_FIXED because it replaces whatever mapping
already exists at a given address.
This change then also makes it so that MAP_FIXED is only used in race
mode and on Darwin, not all BSDs, because using MAP_FIXED breaks this
test for FreeBSD in addition to Darwin.
Updates #26475.
Fixes #29340.
Change-Id: I1c59349408ccd7eeb30c4bf2593f48316b23ab2f
Reviewed-on: https://go-review.googlesource.com/c/155097
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
|
|
This change introduces a test to malloc_test which checks for overuse
of physical memory in the large object treap. Due to fragmentation,
there may be many pages of physical memory that are sitting unused in
large-object space.
For #14045.
Change-Id: I3722468f45063b11246dde6301c7ad02ae34be55
Reviewed-on: https://go-review.googlesource.com/c/138918
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
|
|
Fixes #22696
Change-Id: Ibe4628f71d64a2b36b655ea69710a925924b12a3
Reviewed-on: https://go-review.googlesource.com/122586
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
|
|
Currently, the runtime falls back to asking for any address the OS can
offer for the heap when it runs out of hint addresses. However, the
race detector assumes the heap lives in [0x00c000000000,
0x00e000000000), and will fail in a non-obvious way if we go outside
this region.
Fix this by actively throwing a useful error if we run out of heap
hints in race mode.
This problem is currently being triggered by TestArenaCollision, which
intentionally triggers this fallback behavior. Fix the test to look
for the new panic message in race mode.
Fixes #24670.
Updates #24133.
Change-Id: I57de6d17a3495dc1f1f84afc382cd18a6efc2bf7
Reviewed-on: https://go-review.googlesource.com/104717
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
|
|
Replace the test for nacl with testenv.MustHaveExec to also skip
test on iOS.
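The guard in use:

func TestSomething(t *testing.T) {
	testenv.MustHaveExec(t) // skips on platforms that cannot exec new processes
	// ... re-run the test binary as a helper process ...
}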
Change-Id: I6822714f6d71533d1b18bbb7894f6ad339d8aea1
Reviewed-on: https://go-review.googlesource.com/94755
Run-TryBot: Elias Naur <elias.naur@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
|
|
This replaces the contiguous heap arena mapping with a potentially
sparse mapping that can support heap mappings anywhere in the address
space.
This has several advantages over the current approach:
* There is no longer any limit on the size of the Go heap. (Currently
it's limited to 512GB.) Hence, this fixes #10460.
* It eliminates many failure modes of heap initialization and
growing. In particular it eliminates any possibility of panicking
with an address space conflict. This can happen for many reasons and
even causes a low but steady rate of TSAN test failures because of
conflicts with the TSAN runtime. See #16936 and #11993.
* It eliminates the notion of "non-reserved" heap, which was added
because creating huge address space reservations (particularly on
64-bit) led to huge process VSIZE. This was at best confusing and at
worst conflicted badly with ulimit -v. However, the non-reserved
heap logic is complicated, can race with other mappings in non-pure
Go binaries (e.g., #18976), and requires that the entire heap be
either reserved or non-reserved. We currently maintain the latter
property, but it's quite difficult to convince yourself of that, and
hence difficult to keep correct. This logic is still present, but
will be removed in the next CL.
* It fixes problems on 32-bit where skipping over parts of the address
space leads to mapping huge (and never-to-be-used) metadata
structures. See #19831.
This also completely rewrites and significantly simplifies
mheap.sysAlloc, which has been a source of many bugs. E.g., #21044,
#20259, #18651, and #13143 (and maybe #23222).
This change also makes it possible to allocate individual objects
larger than 512GB. As a result, a few tests that expected huge
allocations to fail needed to be changed to make even larger
allocations. However, at the moment attempting to allocate a humongous
object may cause the program to freeze for several minutes on Linux as
we fall back to probing every page with addrspace_free. That logic
(and this failure mode) will be removed in the next CL.
Fixes #10460.
Fixes #22204 (since it rewrites the code involved).
This slightly slows down compilebench and the x/benchmarks garbage
benchmark.
name old time/op new time/op delta
Template 184ms ± 1% 185ms ± 1% ~ (p=0.065 n=10+9)
Unicode 86.9ms ± 3% 86.3ms ± 1% ~ (p=0.631 n=10+10)
GoTypes 599ms ± 0% 602ms ± 0% +0.56% (p=0.000 n=10+9)
Compiler 2.87s ± 1% 2.89s ± 1% +0.51% (p=0.002 n=9+10)
SSA 7.29s ± 1% 7.25s ± 1% ~ (p=0.182 n=10+9)
Flate 118ms ± 2% 118ms ± 1% ~ (p=0.113 n=9+9)
GoParser 147ms ± 1% 148ms ± 1% +1.07% (p=0.003 n=9+10)
Reflect 401ms ± 1% 404ms ± 1% +0.71% (p=0.003 n=10+9)
Tar 175ms ± 1% 175ms ± 1% ~ (p=0.604 n=9+10)
XML 209ms ± 1% 210ms ± 1% ~ (p=0.052 n=10+10)
(https://perf.golang.org/search?q=upload:20171231.4)
name old time/op new time/op delta
Garbage/benchmem-MB=64-12 2.23ms ± 1% 2.25ms ± 1% +0.84% (p=0.000 n=19+19)
(https://perf.golang.org/search?q=upload:20171231.3)
Relative to the start of the sparse heap changes (starting at and
including "runtime: fix various contiguous bitmap assumptions"),
overall slowdown is roughly 1% on GC-intensive benchmarks:
name old time/op new time/op delta
Template 183ms ± 1% 185ms ± 1% +1.32% (p=0.000 n=9+9)
Unicode 84.9ms ± 2% 86.3ms ± 1% +1.65% (p=0.000 n=9+10)
GoTypes 595ms ± 1% 602ms ± 0% +1.19% (p=0.000 n=9+9)
Compiler 2.86s ± 0% 2.89s ± 1% +0.91% (p=0.000 n=9+10)
SSA 7.19s ± 0% 7.25s ± 1% +0.75% (p=0.000 n=8+9)
Flate 117ms ± 1% 118ms ± 1% +1.10% (p=0.000 n=10+9)
GoParser 146ms ± 2% 148ms ± 1% +1.48% (p=0.002 n=10+10)
Reflect 398ms ± 1% 404ms ± 1% +1.51% (p=0.000 n=10+9)
Tar 173ms ± 1% 175ms ± 1% +1.17% (p=0.000 n=10+10)
XML 208ms ± 1% 210ms ± 1% +0.62% (p=0.011 n=10+10)
[Geo mean] 369ms 373ms +1.17%
(https://perf.golang.org/search?q=upload:20180101.2)
name old time/op new time/op delta
Garbage/benchmem-MB=64-12 2.22ms ± 1% 2.25ms ± 1% +1.51% (p=0.000 n=20+19)
(https://perf.golang.org/search?q=upload:20180101.3)
Change-Id: I5daf4cfec24b252e5a57001f0a6c03f22479d0f0
Reviewed-on: https://go-review.googlesource.com/85887
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
|
|
These functions all serve essentially the same purpose. mlookup is
used in only one place and findObject in only three. Use
heapBitsForObject instead, which is the most optimized implementation.
(This may seem slightly silly because none of these uses care about
the heap bits, but we're about to split up the functionality of
heapBitsForObject anyway. At that point, findObject will rise from the
ashes.)
Change-Id: I906468c972be095dd23cf2404a7d4434e802f250
Reviewed-on: https://go-review.googlesource.com/85877
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
|
|
This no longer appears to be reproducible on windows/386. Try putting
it back and we'll see if the builders still don't like it.
Fixes #19319.
Change-Id: Ia47b034e18d0a5a1951125c00542b021aacd5e8d
Reviewed-on: https://go-review.googlesource.com/47936
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
|
|
Now that we have a nice predicate system, improve the tests performed
by TestMemStats. We add some more non-zero checks (now that we force a
GC, things like NumGC must be non-zero), checks for trivial boolean
fields, and a few more range checks.
Change-Id: I6da46d33fa0ce5738407ee57d587825479413171
Reviewed-on: https://go-review.googlesource.com/37513
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
|
|
Currently most TestMemStats failures dump the whole MemStats object if
anything is amiss without telling you what is amiss, or even which
field is wrong. This makes it hard to figure out what the actual
problem is.
Replace this with a reflection walk over MemStats and a map of
predicates to check. If one fails, we can construct a detailed and
descriptive error message. The predicates are a direct translation of
the current tests.
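A sketch of the approach (field set abbreviated; the real predicate table is larger):

// import "reflect"
checks := map[string]func(v uint64) bool{
	"NumGC":        func(v uint64) bool { return v > 0 },
	"PauseTotalNs": func(v uint64) bool { return v > 0 },
	"HeapAlloc":    func(v uint64) bool { return v > 0 },
}
st := reflect.ValueOf(memStats) // a runtime.MemStats value
for name, ok := range checks {
	if f := st.FieldByName(name); f.IsValid() && !ok(f.Uint()) {
		t.Errorf("MemStats.%s = %d: failed predicate", name, f.Uint())
	}
}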
Change-Id: I5a7cafb8e6a1eeab653d2e18bb74e2245eaa5444
Reviewed-on: https://go-review.googlesource.com/37512
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
|
|
This adds a counter for the number of times the application forced a
GC by, e.g., calling runtime.GC(). This is useful for detecting
applications that are overusing/abusing runtime.GC() or
debug.FreeOSMemory().
Fixes #18217.
Change-Id: I990ab7a313c1b3b7a50a3d44535c460d7c54f47d
Reviewed-on: https://go-review.googlesource.com/34067
Reviewed-by: Russ Cox <rsc@golang.org>
|
|
TestMemStats currently requires that NumGC != 0, but GC may
legitimately not have run (for example, if this test runs first, or
GOGC is set high, etc). Accept NumGC == 0 and instead sanity check
NumGC by making sure that all pause times after NumGC are 0.
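The sanity check in miniature:

// PauseNs is a circular buffer; with NumGC pauses recorded (and fewer
// than len(PauseNs) GCs so far), every later slot must still be zero.
for i := int(st.NumGC); i < len(st.PauseNs); i++ {
	if st.PauseNs[i] != 0 {
		t.Errorf("PauseNs[%d] = %d, want 0 (NumGC = %d)", i, st.PauseNs[i], st.NumGC)
	}
}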
Fixes #11989.
Change-Id: I4203859fbb83292d59a509f2eeb24d6033e7aabc
Reviewed-on: https://go-review.googlesource.com/17830
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
|
|
When a new tiny block is allocated because we're allocating an object
that won't fit into the current block, mallocgc saves the new block if
it has more space leftover than the old block. However, the logic for
this was subtly broken in golang.org/cl/2814, resulting in never
saving (or consequently reusing) a tiny block.
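The intended rule, sketched with illustrative names:

// After carving size bytes from a fresh tiny block, keep the new block
// only if it has more space left than the old one (i.e. its offset is
// smaller), or if there is no old block at all.
if size < c.tinyoffset || c.tiny == 0 {
	c.tiny = newBlock
	c.tinyoffset = size
}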
Change-Id: Ib5f6769451fb82877ddeefe75dfe79ed4a04fd40
Reviewed-on: https://go-review.googlesource.com/16330
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
|
|
These memstats are currently being computed by gcMark, which was
appropriate in Go 1.4, but gcMark is now just one part of a bigger
picture. In particular, it can't account for the sweep termination
pause time, it can't account for all of the mark termination pause
time, and the reported "pause end" and "last GC" times will be
slightly earlier than they really are.
Lift computing of these statistics into func gc, which has the
appropriate visibility into the process to compute them correctly.
Fixes one of the issues in #10323. This does not add new statistics
appropriate to the concurrent collector; it simply fixes existing
statistics that are being misreported.
Change-Id: I670cb16594a8641f6b27acf4472db15b6e8e086e
Reviewed-on: https://go-review.googlesource.com/11794
Reviewed-by: Russ Cox <rsc@golang.org>
|
|
Currently we report MemStats.PauseEnd in nanoseconds, but with no
particular 0 time. On Linux, the 0 time is when the host started. On
Darwin, it's the UNIX epoch. This is also inconsistent with the other
absolute time in MemStats, LastGC, which is always reported in
nanoseconds since 1970.
Fix PauseEnd so it's always reported in nanoseconds since 1970, like
LastGC.
Fixes one of the issues raised in #10323.
Change-Id: Ie2fe3169d45113992363a03b764f4e6c47e5c6a8
Reviewed-on: https://go-review.googlesource.com/11801
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
|
|
Consider the following code:
s := "(" + string(byteSlice) + ")"
Currently we allocate a new string during the []byte->string conversion
and pass it to concatstrings. The string allocation is unnecessary in
this case, because concatstrings does not retain the strings for later
use. This change uses slicebytetostringtmp to construct a temp string
directly from the []byte buffer and passes it to concatstrings.
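The temp conversion itself is just a header reinterpretation (sketch; the real function lives in the runtime):

// import "unsafe"
func slicebytetostringtmp(b []byte) string {
	// The result aliases b's memory and allocates nothing; it is only
	// safe because concatstrings copies the bytes before returning.
	return *(*string)(unsafe.Pointer(&b))
}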
I've found a few such cases in the standard library:
s += string(msg[off:off+c]) + "."
buf.WriteString("Sec-WebSocket-Accept: " + string(c.accept) + "\r\n")
bw.WriteString("Sec-WebSocket-Key: " + string(nonce) + "\r\n")
err = xml.Unmarshal([]byte("<Top>"+string(data)+"</Top>"), &logStruct)
d.err = d.syntaxError("invalid XML name: " + string(b))
return m, ProtocolError("malformed MIME header line: " + string(kv))
But there are many more in our internal code base.
Change-Id: I42f401f317131237ddd0cb9786b0940213af16fb
Reviewed-on: https://go-review.googlesource.com/3163
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
|
|
Preparation was in CL 134570043.
This CL contains only the effect of 'hg mv src/pkg/* src'.
For more about the move, see golang.org/s/go14nopkg.
|