path: root/src/runtime
Age | Commit message | Author
2015-07-29 | runtime: set invalidptr=1 by default, as documented (tag: go1.5beta3) | Russ Cox

Also make invalidptr control the recently added GC pointer check, as documented.

Change-Id: Iccfdf49480219d12be8b33b8f03d8312d8ceabed
Reviewed-on: https://go-review.googlesource.com/12857
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
2015-07-29 | runtime/trace: remove existing Skips | Russ Cox

The skips added in CL 12579, based on incorrect time stamps, should be sufficient to identify and exclude all the time-related flakiness on these systems. If there is other flakiness, we want to find out.

For #10512.

Change-Id: I5b588ac1585b2e9d1d18143520d2d51686b563e3
Reviewed-on: https://go-review.googlesource.com/12746
Reviewed-by: Austin Clements <austin@google.com>
2015-07-29 | runtime/trace: record event sequence numbers explicitly | Russ Cox

Nearly all the flaky failures we've seen in trace tests have been due to the use of time stamps to determine relative event ordering. This is tricky for many reasons, including:

- different cores might not have exactly synchronized clocks
- VMs are worse than real hardware
- non-x86 chips have different timer resolution than x86 chips
- on fast systems two events can end up with the same time stamp

Stop trying to make time reliable. It's clearly not going to be for Go 1.5. Instead, record an explicit event sequence number for ordering. Using our own counter solves all of the above problems.

The trace still contains time stamps, of course. The sequence number is just used for ordering.

Should alleviate #10554 somewhat. Then tickDiv can be chosen to be a useful time unit instead of having to be exact for ordering.

Separating ordering and time stamps lets the trace parser diagnose systems where the time stamp order and actual order do not match for one reason or another. This CL adds that check to the end of trace.Parse, after all other sequence order-based checking. If that error is found, we skip the test instead of failing it. Putting the check in trace.Parse means that cmd/trace will pick up the same check, refusing to display a trace where the time stamps do not match actual ordering.

Using net/http's BenchmarkClientServerParallel4 on various CPU counts, not tracing vs tracing:

name                      old time/op  new time/op  delta
ClientServerParallel4     50.4µs ± 4%  80.2µs ± 4%  +59.06%  (p=0.000 n=10+10)
ClientServerParallel4-2   33.1µs ± 7%  57.8µs ± 5%  +74.53%  (p=0.000 n=10+10)
ClientServerParallel4-4   18.5µs ± 4%  32.6µs ± 3%  +75.77%  (p=0.000 n=10+10)
ClientServerParallel4-6   12.9µs ± 5%  24.4µs ± 2%  +89.33%  (p=0.000 n=10+10)
ClientServerParallel4-8   11.4µs ± 6%  21.0µs ± 3%  +83.40%  (p=0.000 n=10+10)
ClientServerParallel4-12  14.4µs ± 4%  23.8µs ± 4%  +65.67%  (p=0.000 n=10+10)

Fixes #10512.

Change-Id: I173eecf8191e86feefd728a5aad25bf1bc094b12
Reviewed-on: https://go-review.googlesource.com/12579
Reviewed-by: Austin Clements <austin@google.com>
2015-07-29 | runtime: ignore arguments in cgocallback_gofunc frame | Russ Cox

Otherwise the GC may see uninitialized memory there, which might be old pointers that are retained, or it might trigger the invalid pointer check.

Fixes #11907.

Change-Id: I67e306384a68468eef45da1a8eb5c9df216a77c0
Reviewed-on: https://go-review.googlesource.com/12852
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2015-07-29 | runtime: fix darwin/amd64 assembly frame sizes | Russ Cox

Change-Id: I2f0ecdc02ce275feadf07e402b54f988513e9b49
Reviewed-on: https://go-review.googlesource.com/12855
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-29 | runtime: reenable bad pointer check in GC | Russ Cox

The last time we tried this, linux/arm64 broke. The series of CLs leading to this one fixes that problem. Let's try again.

Fixes #9880.

Change-Id: I67bc1d959175ec972d4dcbe4aa6f153790f74251
Reviewed-on: https://go-review.googlesource.com/12849
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2015-07-29 | runtime, reflect: use correctly aligned stack frame sizes on arm64 | Russ Cox

arm64 requires either no stack frame or a frame with a size that is 8 mod 16 (adding the saved LR will make it 16-aligned).

The cmd/internal/obj/arm64 has been silently aligning frames, but it led to a terrible bug when the compiler and obj disagreed on the frame size, and it's just generally confusing, so we're going to make misaligned frames an error instead of something that is silently changed.

This CL prepares by updating assembly files. Note that the changes in this CL are already being done silently by cmd/internal/obj/arm64, so there is no semantic effect here, just a clarity effect.

For #9880.

Change-Id: Ibd6928dc5fdcd896c2bacd0291bf26b364591e28
Reviewed-on: https://go-review.googlesource.com/12845
Reviewed-by: Austin Clements <austin@google.com>
2015-07-29 | runtime: report GC CPU utilization in MemStats | Austin Clements

This adds a GCCPUFraction field to MemStats that reports the cumulative fraction of the program's execution time spent in the garbage collector. This is equivalent to the utilization percent shown in the gctrace output and makes this available programmatically.

This does have one small effect on the gctrace output: we now report the duration of mark termination up to just before the final start-the-world, rather than up to just after. However, unlike stop-the-world, I don't believe there's any way that start-the-world can block, so it should take negligible time.

While there are many statistics one might want to expose via MemStats, this is one of the few that will undoubtedly remain meaningful regardless of future changes to the memory system.

The diff for this change is larger than the actual change. Mostly it lifts the code for computing the GC CPU utilization out of the debug.gctrace path.

Updates #10323.

Change-Id: I0f7dc3fdcafe95e8d1233ceb79de606b48acd989
Reviewed-on: https://go-review.googlesource.com/12844
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-29 | runtime: always capture GC phase transition times | Austin Clements

Currently we only capture GC phase transition times if debug.gctrace>0, but we're about to compute GC CPU utilization regardless of whether debug.gctrace is set, so we need these regardless of debug.gctrace.

Change-Id: If3acf16505a43d416e9a99753206f03287180660
Reviewed-on: https://go-review.googlesource.com/12843
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-07-29 | runtime: avoid race between SIGPROF traceback and stack barriers | Austin Clements

The following sequence of events can lead to the runtime attempting an out-of-bounds access on a stack barrier slice:

1. A SIGPROF comes in on a thread while the G on that thread is in _Gsyscall. The sigprof handler calls gentraceback, which saves a local copy of the G's stkbar slice. Currently the G has no stack barriers, so this slice is empty.

2. On another thread, the GC concurrently scans the stack of the goroutine being profiled (it considers it stopped because it's in _Gsyscall) and installs stack barriers.

3. Back on the sigprof thread, gentraceback comes across a stack barrier in the stack and attempts to look it up in its (zero length) copy of G's old stkbar slice, which causes an out-of-bounds access.

This commit fixes this by adding a simple cas spin to synchronize the SIGPROF handler with stack barrier insertion.

In general I would prefer that this synchronization be done through the G status, since that's how stack scans are otherwise synchronized, but adding a new lock is a much smaller change and G statuses are full of subtlety.

Fixes #11863.

Change-Id: Ie89614a6238bb9c6a5b1190499b0b48ec759eaf7
Reviewed-on: https://go-review.googlesource.com/12748
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-29 | runtime: force mutator to give work buffer to GC | Rick Hudson

The scheduler, work buffer's dispose, and write barriers can conspire to hide a pointer from the GC's concurrent mark phase. If this pointer is the only path to a large amount of marking, the STW mark termination phase may take a lot of time.

Consider the following:
1) dispose places a work buffer on the partial queue
2) the GC is busy, so it does not immediately remove and process the work buffer
3) the scheduler runs a mutator whose write barrier dequeues the work buffer from the partial queue, so the GC won't see it

This repeats until the GC reaches the mark termination phase, where the GC finally discovers the pointer along with a lot of work to do.

This CL fixes the problem by having the mutator dispose of the buffer to the full queue instead of the partial queue. Since the write barrier never asks for full buffers, the conspiracy described above is not possible.

Updates #11694.

Change-Id: I2ce832f9657a7570f800e8ce4459cd9e304ef43b
Reviewed-on: https://go-review.googlesource.com/12840
Reviewed-by: Austin Clements <austin@google.com>
2015-07-28 | runtime: fix out-of-bounds in stack debugging | Dmitry Vyukov

Currently stackDebug=4 crashes as:

panic: runtime error: index out of range
fatal error: panic on system stack

runtime stack:
runtime.throw(0x607470, 0x15)
        src/runtime/panic.go:527 +0x96
runtime.gopanic(0x5ada00, 0xc82000a1d0)
        src/runtime/panic.go:354 +0xb9
runtime.panicindex()
        src/runtime/panic.go:12 +0x49
runtime.adjustpointers(0xc820065ac8, 0x7ffe58b56100, 0x7ffe58b56318, 0x0)
        src/runtime/stack1.go:428 +0x5fb
runtime.adjustframe(0x7ffe58b56200, 0x7ffe58b56318, 0x1)
        src/runtime/stack1.go:542 +0x780
runtime.gentraceback(0x487760, 0xc820065ac0, 0x0, 0xc820001080, 0x0, 0x0, 0x7fffffff, 0x6341b8, 0x7ffe58b56318, 0x0, ...)
        src/runtime/traceback.go:336 +0xa7e
runtime.copystack(0xc820001080, 0x1000)
        src/runtime/stack1.go:616 +0x3b1
runtime.newstack()
        src/runtime/stack1.go:801 +0xdde

Change-Id: If2d60960231480a9dbe545d87385fe650d6db808
Reviewed-on: https://go-review.googlesource.com/12763
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-28 | runtime: use 64k page rounding on arm64 | Russ Cox

Fixes #11886.

Change-Id: I9392fd2ef5951173ae275b3ab42db4f8bd2e1d7a
Reviewed-on: https://go-review.googlesource.com/12747
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2015-07-28 | runtime: fix x86 stack trace for call to heap memory on Plan 9 | David du Colombier

Russ Cox fixed this issue for other systems in CL 12026, but the Plan 9 part was forgotten.

Fixes #11656.

Change-Id: I91c033687987ba43d13ad8f42e3fe4c7a78e6075
Reviewed-on: https://go-review.googlesource.com/12762
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-28 | runtime: don't define libc_getpid in os3_solaris.go | Ian Lance Taylor

The function is already defined between syscall_solaris.go and syscall2_solaris.go.

Change-Id: I034baf7c8531566bebfdbc5a4061352cbcc31449
Reviewed-on: https://go-review.googlesource.com/12773
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-07-28 | runtime: fix definitions of getpid and kill on Solaris | Ian Lance Taylor

A further attempt to fix raiseproc on Solaris.

Change-Id: I8d8000d6ccd0cd9f029ebe1f211b76ecee230cd0
Reviewed-on: https://go-review.googlesource.com/12771
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-07-28 | runtime: correct implementation of raiseproc on Solaris | Ian Lance Taylor

I forgot that the libc raise function only sends the signal to the current thread. We need to actually use kill and getpid here, as we do on other systems.

Change-Id: Iac34af822c93468bf68cab8879db3ee20891caaf
Reviewed-on: https://go-review.googlesource.com/12704
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-27 | runtime/cgo: remove TMPDIR logic for iOS | David Crawshaw

Seems like the simplest solution for 1.5. All the parts of the test suite I can run on my current device (for which my exception handler fix no longer works, apparently) pass without this code. I'll move it into x/mobile/app.

Fixes #11884.

Change-Id: I2da40c8c7b48a4c6970c4d709dd7c148a22e8727
Reviewed-on: https://go-review.googlesource.com/12721
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-07-27 | runtime: close window that hides GC work from concurrent mark | Austin Clements

Currently we enter mark 2 by first flushing all existing gcWork caches and then setting gcBlackenPromptly, which disables further gcWork caching. However, if a worker or assist pulls a work buffer in to its gcWork cache after that cache has been flushed but before caching is disabled, that work may remain in that cache until mark termination. If that work represents a heap bottleneck (e.g., a single pointer that is the only way to reach a large amount of the heap), this can force mark termination to do a large amount of work, resulting in a long STW.

Fix this by reversing the order of these steps: first disable caching, then flush all existing caches.

Rick Hudson <rlh> did the hard work of tracking this down. This CL combined with CL 12672 and CL 12646 distills the critical parts of his fix from CL 12539.

Fixes #11694.

Change-Id: Ib10d0a21e3f6170a80727d0286f9990df049fed2
Reviewed-on: https://go-review.googlesource.com/12688
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-07-27 | runtime: enable GC assists ASAP | Austin Clements

Currently the GC coordinator enables GC assists at the same time it enables background mark workers, after the concurrent scan phase is done. However, this means a rapidly allocating mutator has the entire scan phase during which to allocate beyond the heap trigger and potentially beyond the heap goal with no back-pressure from assists. This prevents the feedback system that's supposed to keep the heap size under the heap goal from doing its job.

Fix this by enabling mutator assists during the scan phase. This is safe because the write barrier is already enabled and globally acknowledged at this point.

There's still a very small window between when the heap size reaches the heap trigger and when the GC coordinator is able to stop the world during which the mutator can allocate unabated. This allows *very* rapidly allocating mutators like TestTraceStress to still occasionally exceed the heap goal by a small amount (~20 MB at most for TestTraceStress). However, this seems like a corner case.

Fixes #11677.

Change-Id: I0f80d949ec82341cd31ca1604a626efb7295a819
Reviewed-on: https://go-review.googlesource.com/12674
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-27 | runtime: allow GC drain whenever write barrier is enabled | Austin Clements

Currently we hand-code a set of phases when draining is allowed. However, this set of phases is conservative. The critical invariant is simply that the write barrier must be enabled if we're draining.

Shortly we're going to enable mutator assists during the scan phase, which means we may drain during the scan phase. In preparation, this commit generalizes these assertions to check the fundamental condition that the write barrier is enabled, rather than checking that we're in any particular phase.

Change-Id: I0e1bec1ca823d4a697a0831ec4c50f5dd3f2a893
Reviewed-on: https://go-review.googlesource.com/12673
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-27 | runtime: don't start workers between mark 1 & 2 | Austin Clements

Currently we clear both the mark 1 and mark 2 signals at the beginning of concurrent mark. If either of these is clear, it acts as a signal to the scheduler that it should start background workers. However, this means that in the interim *between* mark 1 and mark 2, the scheduler basically loops starting up new workers only to have them return with nothing to do. In addition to harming performance and delaying mutator work, this approach has a race where workers started for mark 1 can mistakenly signal mark 2, causing it to complete prematurely. This approach also interferes with starting assists earlier to fix #11677.

Fix this by initially setting both mark 1 and mark 2 to "signaled". The scheduler will not start background mark workers, though assists can still run. When we're ready to enter mark 1, we clear the mark 1 signal and wait for it. Then, when we're ready to enter mark 2, we clear the mark 2 signal and wait for it.

This structure also lets us deal cleanly with the situation where all work is drained *prior* to the mark 2 wait, meaning that there may be no workers to signal completion. Currently we deal with this using a racy (and possibly incorrect) check for work in the coordinator itself to skip the mark 2 wait if there's no work. This change makes the coordinator unconditionally wait for mark completion and makes the scheduler itself signal completion by slightly extending the logic it already has to determine that there's no work and hence no use in starting a new worker.

This is a prerequisite to fixing the remaining component of #11677, which will require enabling assists during the scan phase. However, we don't want to enable background workers until the mark phase because they will compete with the scan. This change lets us use bgMark1 and bgMark2 to indicate when it's okay to start background workers independent of assists.

This is also a prerequisite to fixing #11694. It significantly reduces the occurrence of long mark termination pauses in #11694 (from 64 out of 1000 to 2 out of 1000 in one experiment).

Coincidentally, this also reduces the final heap size (and hence run time) of TestTraceStress from ~100 MB and ~1.9 seconds to ~14 MB and ~0.4 seconds because it significantly shortens concurrent mark duration.

Rick Hudson <rlh> did the hard work of tracking this down.

Change-Id: I12ea9ee2db9a0ae9d3a90dde4944a75fcf408f4c
Reviewed-on: https://go-review.googlesource.com/12672
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-27 | runtime: retry GC assist until debt is paid off | Austin Clements

Currently, there are three ways to satisfy a GC assist: 1) the mutator steals credit from background GC, 2) the mutator actually does GC work, and 3) there is no more work available. 3 was never really intended as a way to satisfy an assist, and it causes problems: there are periods when it's expected that the GC won't have any work, such as when transitioning from mark 1 to mark 2 and from mark 2 to mark termination. During these periods, there's no back-pressure on rapidly allocating mutators, which lets them race ahead of the heap goal.

For example, test/init1.go and the runtime/trace test both have small reachable heaps and contain loops that rapidly allocate large garbage byte slices. This bug lets these tests exceed the heap goal by several orders of magnitude.

Fix this by forcing the assist (and hence the allocation) to block until it can satisfy its debt via either 1 or 2, or the GC cycle terminates.

This fixes one of the causes of #11677. It's still possible to overshoot the GC heap goal, but with this change the overshoot is almost exactly by the amount of allocation that happens during the concurrent scan phase, between when the heap passes the GC trigger and when the GC enables assists.

Change-Id: I5ef4edcb0d2e13a1e432e66e8245f2bd9f8995be
Reviewed-on: https://go-review.googlesource.com/12671
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-27 | runtime: yield to GC coordinator after assist completion | Austin Clements

Currently it's possible for the GC assist to signal completion of the mark phase, which puts the GC coordinator goroutine on the current P's run queue, and then return to mutator code that delays until the next forced preemption before actually yielding control to the GC coordinator, dragging out completion of the mark phase. This delay can be further exacerbated if the mutator makes other goroutines runnable before yielding control, since this will push the GC coordinator on the back of the P's run queue.

To fix this, this adds a Gosched to the assist if it completed the mark phase. This immediately and directly yields control to the GC coordinator. This already happens implicitly in the background mark workers because they park immediately after completing the mark.

This is one of the reasons completion of the mark phase is being dragged out and allowing the mutator to allocate without assisting, leading to the large heap goal overshoot in issue #11677. This is also a prerequisite to making the assist block when it can't pay off its debt.

Change-Id: I586adfbecb3ca042a37966752c1dc757f5c7fc78
Reviewed-on: https://go-review.googlesource.com/12670
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-27 | runtime: disallow GC assists in non-preemptible contexts | Austin Clements

Currently it's possible to perform GC work on a system stack or when locks are held if there's an allocation that triggers an assist. This is generally a bad idea because of the fragility of these contexts, and it's incompatible with two changes we're about to make: one is to yield after signaling mark completion (which we can't do from a non-preemptible context) and the other is to make assists block if there's no other way for them to pay off the assist debt.

This commit simply skips the assist if it's called from a non-preemptible context. The allocation will still count toward the assist debt, so it will be paid off by a later assist. There should be little allocation from non-preemptible contexts, so this shouldn't harm the overall assist mechanism.

Change-Id: I7bf0e6c73e659fe6b52f27437abf39d76b245c79
Reviewed-on: https://go-review.googlesource.com/12649
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-27 | runtime: make notetsleep_internal nowritebarrier | Austin Clements

When notetsleep_internal is called from notetsleepg, notetsleepg has just given up the P, so write barriers are not allowed in notetsleep_internal.

Change-Id: I1b214fa388b1ea05b8ce2dcfe1c0074c0a3c8870
Reviewed-on: https://go-review.googlesource.com/12647
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-27 | runtime: fix mark 2 completion in fractional/idle workers | Austin Clements

Currently fractional and idle mark workers dispose of their gcWork cache during mark 2 after incrementing work.nwait and after checking whether there are any workers or any work available. This creates a window for two races:

1) If the only remaining work is in this worker's gcWork cache, it will see that there are no more workers and no more work on the global lists (since it has not yet flushed its own cache) and prematurely signal mark 2 completion.

2) After this worker has incremented work.nwait but before it has flushed its cache, another worker may observe that there are no more workers and no more work and prematurely signal mark 2 completion.

We can fix both of these by simply moving the cache flush above the increment of nwait and the test of the completion condition.

This is probably contributing to #11694, though this alone is not enough to fix it.

Change-Id: Idcf9656e5c460c5ea0d23c19c6c51e951f7716c3
Reviewed-on: https://go-review.googlesource.com/12646
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-27 | runtime: steal the correct amount of GC assist credit | Austin Clements

GC assists are supposed to steal at most the amount of background GC credit available so that background GC credit doesn't go negative. However, they are instead stealing the *total* amount of their debt but only claiming up to the amount of credit that was available. This results in draining the background GC credit pool too quickly, which results in unnecessary assist work.

The fix is trivial: steal the amount of work we meant to steal (which is already computed).

Change-Id: I837fe60ed515ba91c6baf363248069734a7895ef
Reviewed-on: https://go-review.googlesource.com/12643
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-27 | runtime: document gctrace format | Austin Clements

Fixes #10348.

Change-Id: I3eea9738e3f6fdc1998d04a601dc9b556dd2db72
Reviewed-on: https://go-review.googlesource.com/12453
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-27 | runtime: always report starting heap size in gctrace | Austin Clements

Currently the gctrace output reports the trigger heap size, rather than the actual heap size at the beginning of GC. Often these are the same, or at least very close. However, it's possible for the heap to already have exceeded this trigger when we first check the trigger and start GC; in this case, this output is very misleading. We've encountered this confusion a few times when debugging and this behavior is difficult to document succinctly.

Change the gctrace output to report the actual heap size when GC starts, rather than the trigger.

Change-Id: I246b3ccae4c4c7ea44c012e70d24a46878d7601f
Reviewed-on: https://go-review.googlesource.com/12452
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-27 | runtime: remove # from gctrace line | Austin Clements

Whenever someone pastes gctrace output into GitHub, it helpfully turns the GC cycle number into a link to some unrelated issue. Prevent this by removing the pound before the cycle number. The fact that this is a cycle number is probably more obvious at a glance than most of the other numbers.

Change-Id: Ifa5fc7fe6c715eac50e639f25bc36c81a132ffea
Reviewed-on: https://go-review.googlesource.com/12413
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-27 | runtime: log all thread stack traces during GODEBUG=crash on Unix | Ian Lance Taylor

This extends https://golang.org/cl/2811, which only applied to Darwin and GNU/Linux, to all Unix systems.

Fixes #9591.

Change-Id: Iec3fb438564ba2924b15b447c0480f87c0bfd009
Reviewed-on: https://go-review.googlesource.com/12661
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-27 | runtime/pprof: document content of heap profile | Russ Cox

Fixes #11343.

Change-Id: I46efc24b687b9d060ad864fbb238c74544348e38
Reviewed-on: https://go-review.googlesource.com/12556
Reviewed-by: Rob Pike <r@golang.org>
2015-07-27 | runtime/cgo: move TMPDIR magic out of os | Russ Cox

It's not clear this really belongs anywhere at all, but this is a better place for it than package os. This way package os can avoid importing "C".

Fixes #10455.

Change-Id: Ibe321a93bf26f478951c3a067d75e22f3d967eb7
Reviewed-on: https://go-review.googlesource.com/12574
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
2015-07-27 | runtime: pass a smaller buffer to sched_getaffinity on ARM | Michael Hudson-Doyle

The system stack is only around 8kb on ARM, so one can't put an 8kb buffer on the stack. More than 1024 ARM cores seems sufficiently unlikely for the foreseeable future.

Fixes #11853.

Change-Id: I7cb27c1250a6153f86e269c172054e9dfc218c72
Reviewed-on: https://go-review.googlesource.com/12622
Reviewed-by: Austin Clements <austin@google.com>
2015-07-24 | runtime: require gdb version 7.9 for gdb test | Ian Lance Taylor

Issue 11214 reports problems with older versions of gdb. It does work with gdb 7.9 on my Ubuntu Trusty system, so take that as the minimum required version.

Fixes #11214.

Change-Id: I61b732895506575be7af595f81fc1bcf696f58c2
Reviewed-on: https://go-review.googlesource.com/12626
Reviewed-by: Austin Clements <austin@google.com>
2015-07-24 | runtime: fix runtime·raise for dragonfly amd64 | Ian Lance Taylor

Fixes #11847.

Change-Id: I21736a4c6f6fb2f61aec1396ce2c965e3e329e92
Reviewed-on: https://go-review.googlesource.com/12621
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
2015-07-23 | runtime: make pcln table check not trigger next to foreign code | Russ Cox

Foreign code can be arbitrarily aligned, so the function before it can have arbitrarily much padding. We can't call pcvalue on values in the padding.

Fixes #11653.

Change-Id: I7d57f813ae5a2409d1520fcc909af3eeef2da131
Reviewed-on: https://go-review.googlesource.com/12550
Reviewed-by: Rob Pike <r@golang.org>
2015-07-23 | runtime/trace: fix TestTraceSymbolize networking | Russ Cox

We use 127.0.0.1 instead of localhost in Go networking tests. The reporter of #11774 has localhost defined to be 120.192.83.162, for reasons unknown.

Also, if TestTraceSymbolize calls Fatalf (for example because Listen fails) then we need to stop the trace for future tests to work. See failure log in #11774.

Fixes #11774.

Change-Id: Iceddb03a72d31e967acd2d559ecb78051f9c14b7
Reviewed-on: https://go-review.googlesource.com/12521
Reviewed-by: Rob Pike <r@golang.org>
2015-07-22 | runtime: handle linux CPU masks up to 64k CPUs | Russ Cox

Fixes #11823.

Change-Id: Ic949ccb9657478f8ca34fdf1a6fe88f57db69f24
Reviewed-on: https://go-review.googlesource.com/12535
Reviewed-by: Austin Clements <austin@google.com>
2015-07-22 | runtime/cgo: make compatible with race detector | Russ Cox

Some routines run without an m or g and cannot invoke the race detector runtime. They must be opaque to the runtime. That used to be true because they were written in C. Now that they are written in Go, disable the race detector annotations for those functions explicitly.

Add test.

Fixes #10874.

Change-Id: Ia8cc28d51e7051528f9f9594b75634e6bb66a785
Reviewed-on: https://go-review.googlesource.com/12534
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-07-22 | runtime/pprof: ignore too few samples on Windows test | Russ Cox

Fixes #10842.

Change-Id: I7de98f3073a47911863a252b7a74d8fdaa48c86f
Reviewed-on: https://go-review.googlesource.com/12529
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-07-22 | runtime: if we don't handle a signal on a non-Go thread, raise it | Ian Lance Taylor

In the past badsignal would crash the program. In https://golang.org/cl/10757044 badsignal was changed to call sigsend, to fix issue #3250. The effect of this was that when a non-Go thread received a signal, and os/signal.Notify was not being used to check for occurrences of the signal, the signal was ignored.

This changes the code so that if os/signal.Notify is not being used, then the signal handler is reset to what it was, and the signal is raised again. This lets non-Go threads handle the signal as they wish. In particular, it means that a segmentation violation in a non-Go thread will ordinarily crash the process, as it should.

Fixes #10139.
Update #11794.

Change-Id: I2109444aaada9d963ad03b1d071ec667760515e5
Reviewed-on: https://go-review.googlesource.com/12503
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
2015-07-22 | runtime: disable TestGoroutineParallelism on uniprocessor | Russ Cox

It's a bad test and it's worst on uniprocessors.

Fixes #11143.

Change-Id: I0164231ada294788d7eec251a2fc33e02a26c13b
Reviewed-on: https://go-review.googlesource.com/12522
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-07-22 | runtime: fix comments referring to trace functions in runtime/pprof | Austin Clements

ae1ea2a moved trace-related functions from runtime/pprof to runtime/trace, but missed a doc comment and a code comment. Update these to reflect the move.

Change-Id: I6e1e8861e5ede465c08a2e3f80b976145a8b32d8
Reviewed-on: https://go-review.googlesource.com/12525
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
2015-07-22 | runtime/trace: add new package | Dmitry Vyukov

Move tracing functions from runtime/pprof to the new runtime/trace package.

Fixes #9710.

Change-Id: I718bcb2ae3e5959d9f72cab5e6708289e5c8ebd5
Reviewed-on: https://go-review.googlesource.com/12511
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-20 | cmd/compile: define func value symbols at declaration | Michael Hudson-Doyle

This is mostly Russ's https://golang.org/cl/12145 but with some extra fixes to account for the fact that function declarations without implementations now break shared libraries, and including my test case.

Fixes #11480.

Change-Id: Iabdc2934a0378e5025e4e7affadb535eaef2c8f1
Reviewed-on: https://go-review.googlesource.com/12340
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-07-19 | runtime: clarify runtime.GC blocking behavior | Austin Clements

The runtime.GC documentation was rewritten in df2809f to make it clear that it blocks until GC is complete, but the re-rewrite in ed9a4c9 and e28a679 lost this property when clarifying that it may also block the entire program and not just the caller. Try to arrive at wording that conveys both of these properties.

Change-Id: I1e255322aa28a21a548556ecf2a44d8d8ac524ef
Reviewed-on: https://go-review.googlesource.com/12392
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
2015-07-18 | runtime: check for findmoduledatap returning nil | Ian Lance Taylor

The findmoduledatap function will not return nil in ordinary use, but check for nil to try to avoid crashing when we are already crashing.

Update #11783.

Change-Id: If7b1adb51efab13b4c1a37b6f3c9ad22641a0b56
Reviewed-on: https://go-review.googlesource.com/12391
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-07-18 | runtime: skip TestReturnAfterStackGrowInCallback if gcc is not found | Alex Brainman

Fixes #11754.

Change-Id: Ifa423ca6eea46d1500278db290498724a9559d14
Reviewed-on: https://go-review.googlesource.com/12347
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>