mp.isExtraInC is intended to indicate that this M has no Go frames at
all; it is entirely executing in C.
If there was a cgocallback to Go and then a cgocall to C, such that the
leaf frames are C, that is fine: traceback can handle it with
SetCgoTraceback (or by simply skipping the C frames).
However, we currently mismanage isExtraInC, unconditionally setting it
on return from cgocallback. This means that if there are two levels of
cgocallback, we end up running Go code with isExtraInC set.
1. C-created thread calls into Go function 1 (via cgocallback).
2. Go function 1 calls into C function 1 (via cgocall).
3. C function 1 calls into Go function 2 (via cgocallback).
4. Go function 2 returns back to C function 1 (returning via the remainder of cgocallback).
5. C function 1 returns back to Go function 1 (returning via the remainder of cgocall).
6. Go function 1 is now running with mp.isExtraInC == true.
The fix is simple: only set isExtraInC on return from cgocallback if
there are no more Go frames. There can't be more Go frames unless there
is an active cgocall out of the Go frames.
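A hedged sketch of the resulting condition (mp.ncgo counts active
cgocalls; this is an illustration, not the CL's exact code):

    // On return from cgocallback: mark this extra M as entirely-in-C
    // only if no Go frames remain. mp.ncgo > 0 would mean an active
    // cgocall out of Go frames, i.e. Go frames still on the stack.
    if mp.isextra && mp.ncgo == 0 {
        mp.isExtraInC = true
    }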
Fixes #72870.
Cq-Include-Trybots: luci.golang.try:gotip-linux-amd64-longtest
Change-Id: I6a6a636c4e7ba75a29639d7036c5af3738033467
Reviewed-on: https://go-review.googlesource.com/c/go/+/658035
Reviewed-by: Cherry Mui <cherryyz@google.com>
Commit-Queue: Michael Pratt <mpratt@google.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
CL 652276 reduced the m struct by 8 bytes, which has changed the
allocation class on 64-bit OpenBSD platforms. This results in build
failures due to:
    M structure uses sizeclass 1792/0x700 bytes; incompatible with mutex flag mask 0x3ff
Add 128 bytes of padding when spinbitmutex is enabled on 64-bit
architectures, moving the size to the halfway point between the
1792 and 2048 allocation sizes.
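A minimal sketch of the idea (the padding field shown is illustrative,
not the CL's actual code):

    type m struct {
        // ... existing fields ...

        // 128 == (1792+2048)/2 - 1792: land halfway between the two
        // allocation sizes, in a class compatible with mask 0x3ff.
        _ [128]byte
    }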
Change-Id: I71623a1f75714543c302217e619d20cf0e717aeb
Reviewed-on: https://go-review.googlesource.com/c/go/+/653335
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
|
|
It's not used for anything.
Change-Id: I031b3cdfe52b6b1cff4b3cb6713ffe588084542f
Reviewed-on: https://go-review.googlesource.com/c/go/+/652276
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Change the output printed when crashing with a reraised panic value
to not duplicate that value.
Changes output of panicking with "PANIC", recovering, and reraising
from:
panic: PANIC [recovered]
panic: PANIC
to:
panic: PANIC [recovered, reraised]
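For illustration, a minimal program that produces the new output:

    package main

    func main() {
        defer func() {
            // Recover and reraise the same value; the crash output now
            // reads "panic: PANIC [recovered, reraised]" once, instead
            // of printing the value twice.
            panic(recover())
        }()
        panic("PANIC")
    }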
Fixes #71517
Change-Id: Id59938c4ea0df555b851ffc650fe6f94c0845499
Reviewed-on: https://go-review.googlesource.com/c/go/+/645916
Reviewed-by: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Add an internal (for now) implementation of testing/synctest.
The synctest.Run function executes a tree of goroutines in an
isolated environment using a fake clock. The synctest.Wait function
allows a test to wait for all other goroutines within the test
to reach a blocking point.
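A hedged usage sketch (the package is internal-only at this point, so
this is illustrative rather than importable):

    synctest.Run(func() {
        ch := make(chan int)
        go func() { ch <- 42 }()

        // Wait returns once every other goroutine started inside Run
        // is blocked (here: the send on the unbuffered channel).
        synctest.Wait()

        <-ch // the sender is known to be blocked on ch, so this pairs up
    })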
For #67434
For #69687
Change-Id: Icb39e54c54cece96517e58ef9cfb18bf68506cfc
Reviewed-on: https://go-review.googlesource.com/c/go/+/591997
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
The tri-state mutex implementation (unlocked, locked, sleeping) avoids
sleep/wake syscalls when contention is low or absent, but its
performance degrades when many threads are contending for a mutex to
execute a fast critical section.
A fast critical section means frequent unlock2 calls. Each of those
finds the mutex in the "sleeping" state and so wakes a sleeping thread,
even if many other threads are already awake and in the spin loop of
lock2 attempting to acquire the mutex for themselves. Many spinning
threads means wasting energy and CPU time that could be used by other
processes on the machine. Many threads all spinning on the same cache
line leads to performance collapse.
Merge the futex- and semaphore-based mutex implementations by using a
semaphore abstraction for futex platforms. Then, add a bit to the mutex
state word that communicates whether one of the waiting threads is awake
and spinning. When threads in lock2 see the new "spinning" bit, they can
sleep immediately. In unlock2, the "spinning" bit means we can save a
syscall and not wake a sleeping thread.
This brings up the real possibility of starvation: waiting threads are
able to enter a deeper sleep than before, since one of their peers can
volunteer to be the sole "spinning" thread and thus cause unlock2 to
skip the semawakeup call. Additionally, the waiting threads form a LIFO
stack so any wakeups that do occur will target threads that have gone to
sleep most recently. Counteract those effects by periodically waking the
thread at the bottom of the stack and allowing it to spin.
Exempt sched.lock from most of the new behaviors; it's often used by
several threads in sequence to do thread-specific work, so low-latency
handoff is a priority over improved throughput.
Gate use of this implementation behind GOEXPERIMENT=spinbitmutex, so
it's easy to disable. Enable it by default on supported platforms (the
most efficient implementation requires atomic.Xchg8).
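A standalone toy model of the "spinning" bit (illustrative only, not the
runtime's code; a channel stands in for the semaphore, and the
starvation countermeasures described above are omitted):

    package spinbit

    import (
        "runtime"
        "sync/atomic"
    )

    type mutex struct {
        locked   atomic.Bool
        spinning atomic.Bool   // at most one waiter spins
        sleepers chan struct{} // stand-in for semasleep/semawakeup
    }

    func newMutex() *mutex {
        return &mutex{sleepers: make(chan struct{}, 1024)}
    }

    func (m *mutex) lock() {
        for {
            if m.locked.CompareAndSwap(false, true) {
                return
            }
            if m.spinning.CompareAndSwap(false, true) {
                // We are the designated spinner; everyone else sleeps.
                // (The real implementation bounds this spin.)
                for !m.locked.CompareAndSwap(false, true) {
                    runtime.Gosched()
                }
                m.spinning.Store(false)
                return
            }
            <-m.sleepers // a peer is spinning: sleep immediately
        }
    }

    func (m *mutex) unlock() {
        m.locked.Store(false)
        if !m.spinning.Load() {
            // No spinner awake: wake one sleeper, if any (best effort).
            select {
            case m.sleepers <- struct{}{}:
            default:
            }
        }
    }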
Fixes #68578
goos: linux
goarch: amd64
pkg: runtime
cpu: 13th Gen Intel(R) Core(TM) i7-13700H
│ old │ new │
│ sec/op │ sec/op vs base │
MutexContention 17.82n ± 0% 17.74n ± 0% -0.42% (p=0.000 n=10)
MutexContention-2 22.17n ± 9% 19.85n ± 12% ~ (p=0.089 n=10)
MutexContention-3 26.14n ± 14% 20.81n ± 13% -20.41% (p=0.000 n=10)
MutexContention-4 29.28n ± 8% 21.19n ± 10% -27.62% (p=0.000 n=10)
MutexContention-5 31.79n ± 2% 21.98n ± 10% -30.83% (p=0.000 n=10)
MutexContention-6 34.63n ± 1% 22.58n ± 5% -34.79% (p=0.000 n=10)
MutexContention-7 44.16n ± 2% 23.14n ± 7% -47.59% (p=0.000 n=10)
MutexContention-8 53.81n ± 3% 23.66n ± 6% -56.04% (p=0.000 n=10)
MutexContention-9 65.58n ± 4% 23.91n ± 9% -63.54% (p=0.000 n=10)
MutexContention-10 77.35n ± 3% 26.06n ± 9% -66.31% (p=0.000 n=10)
MutexContention-11 89.62n ± 1% 25.56n ± 9% -71.47% (p=0.000 n=10)
MutexContention-12 102.45n ± 2% 25.57n ± 7% -75.04% (p=0.000 n=10)
MutexContention-13 111.95n ± 1% 24.59n ± 8% -78.04% (p=0.000 n=10)
MutexContention-14 123.95n ± 3% 24.42n ± 6% -80.30% (p=0.000 n=10)
MutexContention-15 120.80n ± 10% 25.54n ± 6% -78.86% (p=0.000 n=10)
MutexContention-16 128.10n ± 25% 26.95n ± 4% -78.96% (p=0.000 n=10)
MutexContention-17 139.80n ± 18% 24.96n ± 5% -82.14% (p=0.000 n=10)
MutexContention-18 141.35n ± 7% 25.05n ± 8% -82.27% (p=0.000 n=10)
MutexContention-19 151.35n ± 18% 25.72n ± 6% -83.00% (p=0.000 n=10)
MutexContention-20 153.30n ± 20% 24.75n ± 6% -83.85% (p=0.000 n=10)
MutexHandoff/Solo-20 13.54n ± 1% 13.61n ± 4% ~ (p=0.206 n=10)
MutexHandoff/FastPingPong-20 141.3n ± 209% 164.8n ± 49% ~ (p=0.436 n=10)
MutexHandoff/SlowPingPong-20 1.572µ ± 16% 1.804µ ± 19% +14.76% (p=0.015 n=10)
geomean 74.34n 30.26n -59.30%
goos: darwin
goarch: arm64
pkg: runtime
cpu: Apple M1
│ old │ new │
│ sec/op │ sec/op vs base │
MutexContention 13.86n ± 3% 12.09n ± 3% -12.73% (p=0.000 n=10)
MutexContention-2 15.88n ± 1% 16.50n ± 2% +3.94% (p=0.001 n=10)
MutexContention-3 18.45n ± 2% 16.88n ± 2% -8.54% (p=0.000 n=10)
MutexContention-4 20.01n ± 2% 18.94n ± 18% ~ (p=0.469 n=10)
MutexContention-5 22.60n ± 1% 17.51n ± 9% -22.50% (p=0.000 n=10)
MutexContention-6 23.93n ± 2% 17.35n ± 2% -27.48% (p=0.000 n=10)
MutexContention-7 24.69n ± 1% 17.15n ± 3% -30.54% (p=0.000 n=10)
MutexContention-8 25.01n ± 1% 17.33n ± 2% -30.69% (p=0.000 n=10)
MutexHandoff/Solo-8 13.96n ± 4% 12.04n ± 4% -13.78% (p=0.000 n=10)
MutexHandoff/FastPingPong-8 68.89n ± 4% 64.62n ± 2% -6.20% (p=0.000 n=10)
MutexHandoff/SlowPingPong-8 9.698µ ± 22% 9.646µ ± 35% ~ (p=0.912 n=10)
geomean 38.20n 32.53n -14.84%
Change-Id: I0058c75eadf282d08eea7fce0d426f0518039f7c
Reviewed-on: https://go-review.googlesource.com/c/go/+/620435
Reviewed-by: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Junyang Shao <shaojunyang@google.com>
Auto-Submit: Rhys Hiltner <rhys.hiltner@gmail.com>
|
|
Implement sema{create,sleep,wakeup} in terms of the futex syscall when
available. Split the lock2/unlock2 implementations out of lock_sema.go
and lock_futex.go (which they shared with runtime.note) to allow
swapping in new implementations of those.
Let futex-based platforms use the semaphore-based mutex implementation.
Control that via the new "spinbitmutex" GOEXPERIMENT value, disabled by
default.
This lays the groundwork for a "spinbit" mutex implementation; it does
not include the new mutex implementation.
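A hedged sketch of the futex-backed semaphore shape (the uint32 field
and the details are assumptions; the real code also handles timeouts
precisely):

    func semasleep(ns int64) int32 {
        mp := getg().m
        for {
            if atomic.Cas(&mp.waitsema, 1, 0) {
                return 0 // consumed a wakeup
            }
            futexsleep(&mp.waitsema, 0, ns) // sleep while the word is 0
            if ns >= 0 {
                return -1 // simplified: treat any timed return as timeout
            }
        }
    }

    func semawakeup(mp *m) {
        atomic.Xchg(&mp.waitsema, 1)
        futexwakeup(&mp.waitsema, 1)
    }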
For #68578.
Change-Id: I091289c85124212a87abec7079ecbd9e610b4270
Reviewed-on: https://go-review.googlesource.com/c/go/+/622996
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Currently it's possible for weak->strong conversions to create more GC
work during mark termination. When a weak->strong conversion happens
during the mark phase, we need to mark the newly-strong pointer, since
it may now be the only pointer to that object. In other words, the
object could be white.
But queueing new white objects creates GC work, and if this happens
during mark termination, we could end up violating mark termination
invariants. In the parlance of the mark termination algorithm, the
weak->strong conversion is a non-monotonic source of GC work, unlike the
write barriers (which will eventually only see black objects).
This change fixes the problem by forcing weak->strong conversions to
block during mark termination. We can do this efficiently by setting a
global flag before the ragged barrier that is checked at each
weak->strong conversion. If the flag is set, then the conversions block.
The ragged barrier ensures that all Ps have observed the flag and that
any weak->strong conversions which completed before the ragged barrier
have their newly-minted strong pointers visible in GC work queues if
necessary. We later unset the flag and wake all the blocked goroutines
during the mark termination STW.
There are a few subtleties that we need to account for. For one, it's
possible that a goroutine which blocked in a weak->strong conversion
wakes up only to find it's mark termination time again, so we need to
recheck the global flag on wake. We should also stay non-preemptible
while performing the check, so that if the check *does* appear as true,
it cannot switch back to false while we're actively trying to block. If
it switches to false while we try to block, then we'll be stuck in the
queue until the following GC.
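A hedged sketch of the conversion-side check (the flag, queue, and
helper names are invented for illustration):

    for {
        mp := acquirem() // stay non-preemptible across the check: a flag
                         // observed true cannot flip back to false while
                         // we are actively trying to block
        if !gcBlockWeakToStrong.Load() {
            p := markStrongFromWeak(wp) // may create GC work: safe, flag false
            releasem(mp)
            return p
        }
        // Flag is set: park until mark termination wakes us. The helper
        // releases the m as it parks. On wake, loop and recheck, since
        // it may already be mark termination time again.
        blockOnWeakToStrongQueue(mp)
    }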
All-in-all, this CL is more complicated than I would have liked, but
it's the only idea so far that is clearly correct to me at a high level.
This change adds a test which is somewhat invasive as it manipulates
mark termination, but hopefully that infrastructure will be useful for
debugging, fixing, and regression testing mark termination whenever we
do fix it.
Fixes #69803.
Change-Id: Ie314e6fd357c9e2a07a9be21f217f75f7aba8c4a
Reviewed-on: https://go-review.googlesource.com/c/go/+/623615
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
|
|
Currently, at a cgo callback where there is already a Go frame on
the stack (i.e. C->Go->C->Go), we require that at the inner Go
callback the SP is within the g0's stack bounds set by a previous
callback. This is to prevent the C code from switching stacks while
having a Go frame on the stack, which we don't really support. But
this could also happen when we cannot get accurate stack bounds,
e.g. when pthread_getattr_np is not available. Since the stack
bounds are just estimates based on the current SP, if there are
multiple C->Go callbacks with various stack depths, it is possible
that the SP of a later callback falls outside a previous call's
estimate. This leads to a runtime throw in a seemingly reasonable
program.
This CL changes it to save the old g0 stack bounds at cgocallback,
update the bounds, and restore the old bounds at return. So each
callback will get its own stack bounds based on the current SP,
and when it returns, the outer callback has its old stack
bounds restored.
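A hedged sketch of the save/restore shape (callbackUpdateSystemStack is
the real helper; the surrounding code here is illustrative):

    g0 := mp.g0
    oldBounds := g0.stack                    // outer callback's g0 bounds
    callbackUpdateSystemStack(mp, sp, false) // re-derive bounds from SP
    runGoCallback()                          // hypothetical: the Go callback
    g0.stack = oldBounds                     // outer callback sees its own
                                             // bounds again on return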
Also, at a cgo callback when there is no Go frame on the stack,
we currently always get new stack bounds. We do this because if
we can only get estimated bounds based on the SP, and the stack
depth varies a lot between two C->Go calls, the previous
estimates may be off and we fall out or nearly fall out of the
previous bounds. But this causes a performance problem: the
pthread API to get accurate stack bounds (pthread_getattr_np) is
very slow when called on the main thread. Getting the stack bounds
every time significantly slows down repeated C->Go calls on the
main thread.
This CL fixes it by "caching" the stack bounds if they are
accurate. I.e., the second time C calls into Go, if the previous
stack bounds are accurate, and the current SP is in bounds, we can
be sure it is the same stack and we don't need to update the bounds.
This avoids the repeated calls to pthread_getattr_np. If we cannot
get the accurate bounds, we continue to update the stack bounds
based on the SP, and that operation is very cheap.
On a Linux/AMD64 machine with glibc:
name old time/op new time/op delta
CgoCallbackMainThread-8 96.4µs ± 3% 0.1µs ± 2% -99.92% (p=0.000 n=10+9)
Fixes #68285.
Fixes #68587.
Change-Id: I3422badd5ad8ff63e1a733152d05fb7a44d5d435
Reviewed-on: https://go-review.googlesource.com/c/go/+/600296
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
|
|
Placed the fipsIndicator field in some 64-bit alignment padding in the g
struct to avoid growing per-goroutine memory requirements on 64-bit
targets.
Fixes #69911
Updates #69536
Change-Id: I176419d0e3814574758cb88a47340a944f405604
Reviewed-on: https://go-review.googlesource.com/c/go/+/620795
Reviewed-by: Roland Shoemaker <roland@golang.org>
Reviewed-by: Daniel McCarney <daniel@binaryparadox.net>
Reviewed-by: Michael Pratt <mpratt@google.com>
Auto-Submit: Filippo Valsorda <filippo@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Derek Parker <parkerderek86@gmail.com>
|
|
notes are used in sensitive locations in the runtime, such as those with
write barriers forbidden. Maps aren't designed for this sort of internal
use.
Notably, newm -> notewakeup doesn't allow write barriers, but mapaccess1
-> panic contains write barriers. The js runtime only builds right now
because the map access is optimized to mapaccess1_fast64, which happens
to not have a panic call.
The initial swisstable map implementation doesn't have a fast64 variant.
While we could add one, it is a bad idea in general to use a map in such
a fragile location. Simplify the implementation by storing the metadata
directly in the note, and using a linked list for checkTimeouts.
For #54766.
Cq-Include-Trybots: luci.golang.try:gotip-js-wasm
Change-Id: Ib9d39f064ae4ad32dcc873f799428717eb6c2d5a
Reviewed-on: https://go-review.googlesource.com/c/go/+/595558
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
|
|
Cleanup and friction reduction
For #65355.
Change-Id: Ia14c9dc584a529a35b97801dd3e95b9acc99a511
Reviewed-on: https://go-review.googlesource.com/c/go/+/600436
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
|
|
This reverts commit be0b569caa0eab1a7f30edf64e550bbf5f6ff235 (CL 585635).
Reason for revert: This is part of a patch series that changed the
handling of contended lock2/unlock2 calls, reducing the maximum
throughput of contended runtime.mutex values, and causing a performance
regression on applications where that is (or became) the bottleneck.
Updates #66999
Updates #67585
Change-Id: I7843ccaecbd273b7ceacfa0f420dd993b4b15a0a
Reviewed-on: https://go-review.googlesource.com/c/go/+/589117
Auto-Submit: Rhys Hiltner <rhys.hiltner@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Than McIntosh <thanm@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
|
|
Add linknames for most modules with ≥50 dependents.
Add linknames for a few other modules that we know
are important but are below 50.
Remove linknames from badlinkname.go that do not merit
inclusion (very small number of dependents).
We can add them back later if the need arises.
Fixes #67401. (For now.)
Change-Id: I1e49fec0292265256044d64b1841d366c4106002
Reviewed-on: https://go-review.googlesource.com/c/go/+/587756
Auto-Submit: Russ Cox <rsc@golang.org>
TryBot-Bypass: Russ Cox <rsc@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
|
|
Move the nextwaitm field into a small struct, in preparation for
additional metadata to track how long Ms need to wait for locks.
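A hedged sketch of the new container (details may differ from the CL):

    // mWaitList is the new home of what was m.nextwaitm, leaving room
    // for future metadata about how long Ms wait for locks.
    type mWaitList struct {
        next muintptr // next m waiting for the same lock
    }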
For #66999
Change-Id: Ib40e43c15cde22f7e35922641107973d99439ecd
Reviewed-on: https://go-review.googlesource.com/c/go/+/585635
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Rhys Hiltner <rhys.hiltner@gmail.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
|
|
CL 544455, which added atomic And/Or APIs, raced with CL 585556, which
enabled stricter linkname checking. This caused linkname-related
failures on ARM and MIPS. Fix this by adding the necessary linknames.
We fix one other linkname that got overlooked in CL 585556.
Updates #61395.
Change-Id: I454f0767ce28188e550a61bc39b7e398239bc10e
Reviewed-on: https://go-review.googlesource.com/c/go/+/586516
Reviewed-by: Mauri de Souza Meneguzzo <mauri870@gmail.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Austin Clements <austin@google.com>
|
|
The page tracer's functionality is now captured by the regular execution
tracer as an experimental GODEBUG variable. This is a lot more usable
and maintainable than the page tracer, which is likely to have bitrotted
by this point. There's also no tooling available for the page tracer.
Change-Id: I2408394555e01dde75a522e9a489b7e55cf12c8e
Reviewed-on: https://go-review.googlesource.com/c/go/+/583379
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Move profiling pc buffers from being stack allocated to an m field.
This is motivated by the next patch, which will increase the default
stack depth to 128, which might lead to undesirable stack growth for
goroutines that produce profiling events.
Additionally, this change paves the way to make the stack depth
configurable via GODEBUG.
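A hedged sketch of the destination (the field name is an assumption):

    // In type m: a reusable buffer for profiling stack traces, replacing
    // the stack-allocated pc buffers in the profilers.
    profStack []uintptr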
Change-Id: Ifa407f899188e2c7c0a81de92194fdb627cb4b36
Reviewed-on: https://go-review.googlesource.com/c/go/+/574699
Reviewed-by: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
|
|
While researching Go structs with the pahole tool, which detects byte
holes and padding, I found a couple of structs in the runtime package
whose sizes can be reduced. Overall, 80 bytes are saved.
Change-Id: I398e5ed6f5b199394307741981cb5ad5b875e98f
Reviewed-on: https://go-review.googlesource.com/c/go/+/578795
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Joedian Reid <joedian@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Currently the runtime only tracks the PC and SP upon entering a syscall,
but not the FP (BP). This is mainly for historical reasons, and because
the tracer (which uses the frame pointer unwinder) does not need it.
Until it did, of course, in CL 567076, where the tracer tries to take a
stack trace of a goroutine that's in a syscall from afar. It tries to
use gp.sched.bp and lots of things go wrong. It *really* should be using
the equivalent of gp.syscallbp, which doesn't exist before this CL.
This change introduces gp.syscallbp and tracks it. It also introduces
getcallerfp which is nice for simplifying some code. Because we now have
gp.syscallbp, we can also delete the frame skip count computation in
traceLocker.GoSysCall, because it's now the same regardless of whether
frame pointer unwinding is used.
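A sketch of the new bookkeeping at syscall entry (shape only; the
surrounding code is elided):

    // In entersyscall, alongside the existing pc/sp bookkeeping:
    gp.syscallbp = getcallerfp() // lets the tracer unwind from afar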
Fixes #66889.
Change-Id: Ib6d761c9566055e0a037134138cb0f81be73ecf7
Cq-Include-Trybots: luci.golang.try:gotip-linux-amd64-nocgo
Reviewed-on: https://go-review.googlesource.com/c/go/+/580255
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
|
|
The previous CL, CL 570257, made it so that STW time no longer
overlapped with other CPU time tracking. However, what we lost was
insight into the CPU time spent _stopping_ the world, which can be just
as important. There's pretty much no easy way to measure this
indirectly, so this CL implements a direct measurement: whenever a P
enters _Pgcstop, it writes down what time it did so. stopTheWorld then
accumulates all the time deltas between when it finished stopping the
world and each P's stop time into a total additional pause time. The GC
pause cases then accumulate this number into the metrics.
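A conceptual sketch of the accumulation (pp.gcStopTime as the recorded
stamp is an assumption, not the CL's exact code):

    // Each P records nanotime() as it enters _Pgcstop. Once all Ps have
    // stopped, sum how long each one sat stopped before the last did.
    finish := nanotime()
    var additionalPause int64
    for _, pp := range allp {
        additionalPause += finish - pp.gcStopTime
    }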
This should cause minimal additional overhead in stopping the world. GC
STWs already take on the order of 10s to 100s of microseconds. Even for
100 Ps, the extra `nanotime` call per P is only 1500ns of additional CPU
time. This is likely to be much less in actual pause latency, since it
all happens concurrently.
Change-Id: Icf190ffea469cd35ebaf0b2587bf6358648c8554
Reviewed-on: https://go-review.googlesource.com/c/go/+/574215
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Nicolas Hillegeer <aktau@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
|
|
Currently, the execution tracer may attempt to take a stack trace of a
goroutine whose stack it does not own. For example, if the goroutine is
in _Grunnable or _Gwaiting. This is easily fixed in all cases by simply
moving the emission of GoStop and GoBlock events to before the
casgstatus happens. The goroutine status is what is used to signal stack
ownership, and the GC may shrink a goroutine's stack if it can acquire
the scan bit.
Although this is easily fixed, the interaction here is very subtle,
because stack ownership is only implicit in the goroutine's scan status.
To make this invariant more maintainable and less error-prone in the
future, this change adds a GODEBUG setting that checks, at the point of
taking a stack trace, whether the caller owns the goroutine. This check
is not quite perfect because there's no way for the stack tracing code
to know that the _Gscan bit was acquired by the caller, so for
simplicity it assumes that it was the caller that acquired the scan bit.
In all other cases however, we can check for ownership precisely. At the
very least, this check is sufficient to catch the issue this change is
fixing.
To make sure this debug check doesn't bitrot, it's always enabled during
trace testing. This new mode has actually caught a few other issues
already, so this change fixes them.
One issue that this debug mode caught was that it's not safe to take a
stack trace of a _Gwaiting goroutine that's being unparked.
Another much bigger issue this debug mode caught was the fact that the
execution tracer could try to take a stack trace of a G that was in
_Gwaiting solely to avoid a deadlock in the GC. The execution tracer
already has a partial list of these cases since they're modeled as the
goroutine just executing as normal in the tracer, but this change takes
the list and makes it more formal. In this specific case, we now prevent
the GC from shrinking the stacks of goroutines in this state if tracing
is enabled. The stack traces from these scenarios are too useful to
discard, but there is indeed a race here between the tracer and any
attempt to shrink the stack by the GC.
Change-Id: I019850dabc8cede202fd6dcc0a4b1f16764209fb
Cq-Include-Trybots: luci.golang.try:gotip-linux-amd64-longtest,gotip-linux-amd64-longtest-race
Reviewed-on: https://go-review.googlesource.com/c/go/+/573155
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
|
|
Windows syscall.SyscallN currently calls lockOSThread for every syscall.
This can be expensive and produce unnecessary context switches,
especially when the syscall is called frequently under high contention.
The lockOSThread was necessary to ensure that cgocall wouldn't
reschedule the goroutine to a different M, as the syscall return values
are reported back in the M struct.
This CL instructs cgocall to copy the syscall return values into
the M that will see the caller on return, so the caller no longer needs
to call lockOSThread.
Updates #58336.
Cq-Include-Trybots: luci.golang.try:gotip-windows-arm64,gotip-windows-amd64-longtest
Change-Id: If6644fd111dbacab74e7dcee2afa18ca146735da
Reviewed-on: https://go-review.googlesource.com/c/go/+/562915
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Auto-Submit: Emmanuel Odeke <emmanuel@orijtech.com>
Reviewed-by: Than McIntosh <thanm@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Run-TryBot: Emmanuel Odeke <emmanuel@orijtech.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
For #65355
Change-Id: I65dd090fb99de9b231af2112c5ccb0eb635db2be
Reviewed-on: https://go-review.googlesource.com/c/go/+/560155
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Ibrahim Bazoka <ibrahimbazoka729@gmail.com>
Auto-Submit: Emmanuel Odeke <emmanuel@orijtech.com>
|
|
The timers had evolved to the point where the state was stored as follows:
    if timer in heap:
        state has timerHeaped set
        if heap timer is stale:
            heap deadline in t.when
            real deadline in t.nextWhen
            state has timerNextWhen set
        else:
            real deadline in t.when
            t.nextWhen unset
    else:
        real deadline in t.when
        t.nextWhen unset
That made it hard to find the real deadline and just hard to think about everything.
The new state is:
    real deadline in t.when (always)
    if timer in heap:
        state has timerHeaped set
        heap deadline in t.whenHeap
        if heap timer is stale:
            state has timerModified set
Separately, the 'state' word itself was being used as a lock
and state bits because the code started with CAS loops,
which we abstracted into the lock/unlock methods step by step.
At this point, we can switch to a real lock, making sure to
publish the one boolean needed by timers fast paths
at each unlock.
All this simplifies various logic considerably.
Change-Id: I35766204f7a26d999206bd56cc0db60ad1b17cbe
Reviewed-on: https://go-review.googlesource.com/c/go/+/570335
Auto-Submit: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
|
|
The comment in updateTimerPMask is wrong. It says:
    // Looks like there are no timers, however another P
    // may be adding one at this very moment.
    // Take the lock to synchronize.
This was my incorrect simplification of the original comment
from CL 264477 when I was renaming all the things it mentioned:
    // Looks like there are no timers, however another P may transiently
    // decrement numTimers when handling a timerModified timer in
    // checkTimers. We must take timersLock to serialize with these changes.
updateTimerPMask is being called by pidleput, so the P in question
is not in use. And other P's cannot add to this P.
As the original comment more precisely noted, the problem was
that other P's might be calling timers.check, which updates ts.len
occasionally while ts is locked, and one of those updates might
"leak" an ephemeral len==0 even when the heap is not going to
be empty when the P is finally unlocked. The lock/unlock in
updateTimerPMask synchronizes to avoid that. But this defeats
most of the purpose of using ts.len in the first place.
Instead of requiring that synchronization, we can arrange that
ts.len only ever shows a "publishable" length, meaning the len(ts.heap)
we leave behind during ts.unlock.
Having done that, updateTimerPMask can be inlined into pidleput.
The big comment on updateTimerPMask explaining how timerpMask
works is better placed as the doc comment for timerpMask itself,
so move it there.
Change-Id: I5442c9bb7f1473b5fd37c43165429d087012e73f
Reviewed-on: https://go-review.googlesource.com/c/go/+/568336
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Russ Cox <rsc@golang.org>
|
|
Continuing conversion from C to Go, introduce type timers
encapsulating all timer heap state, with methods for operations.
This should at least be easier to think about, instead of having
these fields strewn through the P struct. It should also be easier
to test.
I am skeptical about the pair of atomic int64 deadlines:
I think there are missed wakeups lurking.
Having the code in an abstracted API should make it easier
to reason through and fix if needed.
[This is one CL in a refactoring stack making very small changes
in each step, so that any subtle bugs that we miss can be more
easily pinpointed to a small change.]
Change-Id: If5ea3e0b946ca14076f44c85cbb4feb9eddb4f95
Reviewed-on: https://go-review.googlesource.com/c/go/+/564132
Reviewed-by: Austin Clements <austin@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Russ Cox <rsc@golang.org>
|
|
Change-Id: I44293452764dc4bc4de8d386153c6402a9cbe409
Reviewed-on: https://go-review.googlesource.com/c/go/+/549435
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
|
|
Due to a recent change, this experiment does not compile at all. This
simply fixes it to pass in the new required parameter.
Change-Id: Idce0e72fa436a7acf4923717913deb3a37847fe2
Reviewed-on: https://go-review.googlesource.com/c/go/+/551415
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
The exported API is only available with GOEXPERIMENT=rangefunc.
This will let Go 1.22 users who want to experiment with rangefuncs
access an efficient implementation of iter.Pull and iter.Pull2.
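A usage sketch of the exported API (on Go 1.22 with
GOEXPERIMENT=rangefunc; the sequence shown is a placeholder):

    package main

    import (
        "fmt"
        "iter"
    )

    func main() {
        seq := iter.Seq[int](func(yield func(int) bool) {
            for i := 0; i < 3; i++ {
                if !yield(i) {
                    return
                }
            }
        })
        next, stop := iter.Pull(seq)
        defer stop() // releases the paused iterator
        for {
            v, ok := next()
            if !ok {
                break
            }
            fmt.Println(v)
        }
    }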
For #61897.
Change-Id: I6ef5fa8f117567efe4029b7b8b0f4d9b85697fb7
Reviewed-on: https://go-review.googlesource.com/c/go/+/543319
Reviewed-by: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Move ChaCha8 code into internal/chacha8rand and use it to implement
runtime.rand, which is used for the unseeded global source for
both math/rand and math/rand/v2. This also affects the calculation of
the start point for iteration over very very large maps (when the
32-bit fastrand is not big enough).
The benefit is that misuse of the global random number generators
in math/rand and math/rand/v2 in contexts where non-predictable
randomness is important for security reasons is no longer a
security problem, removing a common mistake among programmers
who are unaware of the different kinds of randomness.
The cost is an extra 304 bytes per thread stored in the m struct
plus 2-3ns more per random uint64 due to the more sophisticated
algorithm. Using PCG looks like it would cost about the same,
although I haven't benchmarked that.
Before this, the math/rand and math/rand/v2 global generator
was wyrand (https://github.com/wangyi-fudan/wyhash).
For math/rand, using wyrand instead of the Mitchell/Reeds/Thompson
ALFG was justifiable, since the latter was not any better.
But for math/rand/v2, the global generator really should be
at least as good as one of the well-studied, specific algorithms
provided directly by the package, and it's not.
(Wyrand is still reasonable for scheduling and cache decisions.)
Good randomness does have a cost: about twice wyrand.
Also rationalize the various runtime rand references.
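A hedged sketch of the shape of runtime.rand over a per-M ChaCha8 state
(the m field name is an assumption):

    func rand() uint64 {
        c := &getg().m.chacha8
        for {
            if x, ok := c.Next(); ok {
                return x
            }
            c.Refill() // run the block function to produce another batch
        }
    }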
goos: linux
goarch: amd64
pkg: math/rand/v2
cpu: AMD Ryzen 9 7950X 16-Core Processor
│ bbb48afeb7.amd64 │ 5cf807d1ea.amd64 │
│ sec/op │ sec/op vs base │
ChaCha8-32 1.862n ± 2% 1.861n ± 2% ~ (p=0.825 n=20)
PCG_DXSM-32 1.471n ± 1% 1.460n ± 2% ~ (p=0.153 n=20)
SourceUint64-32 1.636n ± 2% 1.582n ± 1% -3.30% (p=0.000 n=20)
GlobalInt64-32 2.087n ± 1% 3.663n ± 1% +75.54% (p=0.000 n=20)
GlobalInt64Parallel-32 0.1042n ± 1% 0.2026n ± 1% +94.48% (p=0.000 n=20)
GlobalUint64-32 2.263n ± 2% 3.724n ± 1% +64.57% (p=0.000 n=20)
GlobalUint64Parallel-32 0.1019n ± 1% 0.1973n ± 1% +93.67% (p=0.000 n=20)
Int64-32 1.771n ± 1% 1.774n ± 1% ~ (p=0.449 n=20)
Uint64-32 1.863n ± 2% 1.866n ± 1% ~ (p=0.364 n=20)
GlobalIntN1000-32 3.134n ± 3% 4.730n ± 2% +50.95% (p=0.000 n=20)
IntN1000-32 2.489n ± 1% 2.489n ± 1% ~ (p=0.683 n=20)
Int64N1000-32 2.521n ± 1% 2.516n ± 1% ~ (p=0.394 n=20)
Int64N1e8-32 2.479n ± 1% 2.478n ± 2% ~ (p=0.743 n=20)
Int64N1e9-32 2.530n ± 2% 2.514n ± 2% ~ (p=0.193 n=20)
Int64N2e9-32 2.501n ± 1% 2.494n ± 1% ~ (p=0.616 n=20)
Int64N1e18-32 3.227n ± 1% 3.205n ± 1% ~ (p=0.101 n=20)
Int64N2e18-32 3.647n ± 1% 3.599n ± 1% ~ (p=0.019 n=20)
Int64N4e18-32 5.135n ± 1% 5.069n ± 2% ~ (p=0.034 n=20)
Int32N1000-32 2.657n ± 1% 2.637n ± 1% ~ (p=0.180 n=20)
Int32N1e8-32 2.636n ± 1% 2.636n ± 1% ~ (p=0.763 n=20)
Int32N1e9-32 2.660n ± 2% 2.638n ± 1% ~ (p=0.358 n=20)
Int32N2e9-32 2.662n ± 2% 2.618n ± 2% ~ (p=0.064 n=20)
Float32-32 2.272n ± 2% 2.239n ± 2% ~ (p=0.194 n=20)
Float64-32 2.272n ± 1% 2.286n ± 2% ~ (p=0.763 n=20)
ExpFloat64-32 3.762n ± 1% 3.744n ± 1% ~ (p=0.171 n=20)
NormFloat64-32 3.706n ± 1% 3.655n ± 2% ~ (p=0.066 n=20)
Perm3-32 32.93n ± 3% 34.62n ± 1% +5.13% (p=0.000 n=20)
Perm30-32 202.9n ± 1% 204.0n ± 1% ~ (p=0.482 n=20)
Perm30ViaShuffle-32 115.0n ± 1% 114.9n ± 1% ~ (p=0.358 n=20)
ShuffleOverhead-32 112.8n ± 1% 112.7n ± 1% ~ (p=0.692 n=20)
Concurrent-32 2.107n ± 0% 3.725n ± 1% +76.75% (p=0.000 n=20)
goos: darwin
goarch: arm64
pkg: math/rand/v2
│ bbb48afeb7.arm64 │ 5cf807d1ea.arm64 │
│ sec/op │ sec/op vs base │
ChaCha8-8 2.480n ± 0% 2.429n ± 0% -2.04% (p=0.000 n=20)
PCG_DXSM-8 2.531n ± 0% 2.530n ± 0% ~ (p=0.877 n=20)
SourceUint64-8 2.534n ± 0% 2.533n ± 0% ~ (p=0.732 n=20)
GlobalInt64-8 2.172n ± 1% 4.794n ± 0% +120.67% (p=0.000 n=20)
GlobalInt64Parallel-8 0.4320n ± 0% 0.9605n ± 0% +122.32% (p=0.000 n=20)
GlobalUint64-8 2.182n ± 0% 4.770n ± 0% +118.58% (p=0.000 n=20)
GlobalUint64Parallel-8 0.4307n ± 0% 0.9583n ± 0% +122.51% (p=0.000 n=20)
Int64-8 4.107n ± 0% 4.104n ± 0% ~ (p=0.416 n=20)
Uint64-8 4.080n ± 0% 4.080n ± 0% ~ (p=0.052 n=20)
GlobalIntN1000-8 2.814n ± 2% 5.643n ± 0% +100.50% (p=0.000 n=20)
IntN1000-8 4.141n ± 0% 4.139n ± 0% ~ (p=0.140 n=20)
Int64N1000-8 4.140n ± 0% 4.140n ± 0% ~ (p=0.313 n=20)
Int64N1e8-8 4.140n ± 0% 4.139n ± 0% ~ (p=0.103 n=20)
Int64N1e9-8 4.139n ± 0% 4.140n ± 0% ~ (p=0.761 n=20)
Int64N2e9-8 4.140n ± 0% 4.140n ± 0% ~ (p=0.636 n=20)
Int64N1e18-8 5.266n ± 0% 5.326n ± 1% +1.14% (p=0.001 n=20)
Int64N2e18-8 6.052n ± 0% 6.167n ± 0% +1.90% (p=0.000 n=20)
Int64N4e18-8 8.826n ± 0% 9.051n ± 0% +2.55% (p=0.000 n=20)
Int32N1000-8 4.127n ± 0% 4.132n ± 0% +0.12% (p=0.000 n=20)
Int32N1e8-8 4.126n ± 0% 4.131n ± 0% +0.12% (p=0.000 n=20)
Int32N1e9-8 4.127n ± 0% 4.132n ± 0% +0.12% (p=0.000 n=20)
Int32N2e9-8 4.132n ± 0% 4.131n ± 0% ~ (p=0.017 n=20)
Float32-8 4.109n ± 0% 4.105n ± 0% ~ (p=0.379 n=20)
Float64-8 4.107n ± 0% 4.106n ± 0% ~ (p=0.867 n=20)
ExpFloat64-8 5.339n ± 0% 5.383n ± 0% +0.82% (p=0.000 n=20)
NormFloat64-8 5.735n ± 0% 5.737n ± 1% ~ (p=0.856 n=20)
Perm3-8 26.65n ± 0% 26.80n ± 1% +0.58% (p=0.000 n=20)
Perm30-8 194.8n ± 1% 197.0n ± 0% +1.18% (p=0.000 n=20)
Perm30ViaShuffle-8 156.6n ± 0% 157.6n ± 1% +0.61% (p=0.000 n=20)
ShuffleOverhead-8 124.9n ± 0% 125.5n ± 0% +0.52% (p=0.000 n=20)
Concurrent-8 2.434n ± 3% 5.066n ± 0% +108.09% (p=0.000 n=20)
goos: linux
goarch: 386
pkg: math/rand/v2
cpu: AMD Ryzen 9 7950X 16-Core Processor
│ bbb48afeb7.386 │ 5cf807d1ea.386 │
│ sec/op │ sec/op vs base │
ChaCha8-32 11.295n ± 1% 4.748n ± 2% -57.96% (p=0.000 n=20)
PCG_DXSM-32 7.693n ± 1% 7.738n ± 2% ~ (p=0.542 n=20)
SourceUint64-32 7.658n ± 2% 7.622n ± 2% ~ (p=0.344 n=20)
GlobalInt64-32 3.473n ± 2% 7.526n ± 2% +116.73% (p=0.000 n=20)
GlobalInt64Parallel-32 0.3198n ± 0% 0.5444n ± 0% +70.22% (p=0.000 n=20)
GlobalUint64-32 3.612n ± 0% 7.575n ± 1% +109.69% (p=0.000 n=20)
GlobalUint64Parallel-32 0.3168n ± 0% 0.5403n ± 0% +70.51% (p=0.000 n=20)
Int64-32 7.673n ± 2% 7.789n ± 1% ~ (p=0.122 n=20)
Uint64-32 7.773n ± 1% 7.827n ± 2% ~ (p=0.920 n=20)
GlobalIntN1000-32 6.268n ± 1% 9.581n ± 1% +52.87% (p=0.000 n=20)
IntN1000-32 10.33n ± 2% 10.45n ± 1% ~ (p=0.233 n=20)
Int64N1000-32 10.98n ± 2% 11.01n ± 1% ~ (p=0.401 n=20)
Int64N1e8-32 11.19n ± 2% 10.97n ± 1% ~ (p=0.033 n=20)
Int64N1e9-32 11.06n ± 1% 11.08n ± 1% ~ (p=0.498 n=20)
Int64N2e9-32 11.10n ± 1% 11.01n ± 2% ~ (p=0.995 n=20)
Int64N1e18-32 15.23n ± 2% 15.04n ± 1% ~ (p=0.973 n=20)
Int64N2e18-32 15.89n ± 1% 15.85n ± 1% ~ (p=0.409 n=20)
Int64N4e18-32 18.96n ± 2% 19.34n ± 2% ~ (p=0.048 n=20)
Int32N1000-32 10.46n ± 2% 10.44n ± 2% ~ (p=0.480 n=20)
Int32N1e8-32 10.46n ± 2% 10.49n ± 2% ~ (p=0.951 n=20)
Int32N1e9-32 10.28n ± 2% 10.26n ± 1% ~ (p=0.431 n=20)
Int32N2e9-32 10.50n ± 2% 10.44n ± 2% ~ (p=0.249 n=20)
Float32-32 13.80n ± 2% 13.80n ± 2% ~ (p=0.751 n=20)
Float64-32 23.55n ± 2% 23.87n ± 0% ~ (p=0.408 n=20)
ExpFloat64-32 15.36n ± 1% 15.29n ± 2% ~ (p=0.316 n=20)
NormFloat64-32 13.57n ± 1% 13.79n ± 1% +1.66% (p=0.005 n=20)
Perm3-32 45.70n ± 2% 46.99n ± 2% +2.81% (p=0.001 n=20)
Perm30-32 399.0n ± 1% 403.8n ± 1% +1.19% (p=0.006 n=20)
Perm30ViaShuffle-32 349.0n ± 1% 350.4n ± 1% ~ (p=0.909 n=20)
ShuffleOverhead-32 322.3n ± 1% 323.8n ± 1% ~ (p=0.410 n=20)
Concurrent-32 3.331n ± 1% 7.312n ± 1% +119.50% (p=0.000 n=20)
For #61716.
Change-Id: Ibdddeed85c34d9ae397289dc899e04d4845f9ed2
Reviewed-on: https://go-review.googlesource.com/c/go/+/516860
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Filippo Valsorda <filippo@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Change-Id: I3ad7a50707486ebdbbd676b3581df6e3ed0fd3a1
Reviewed-on: https://go-review.googlesource.com/c/go/+/543476
Auto-Submit: Dmitri Shuralyov <dmitshur@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Run-TryBot: shuang cui <imcusg@gmail.com>
|
|
Add runtime-internal locks to the mutex contention profile.
Store up to one call stack responsible for lock contention on the M,
until it's safe to contribute its value to the mprof table. Try to use
that limited local storage space for a relatively large source of
contention, and attribute any contention in stacks we're not able to
store to a sentinel _LostContendedLock function.
Avoid ballooning lock contention while manipulating the mprof table by
attributing to that sentinel function any lock contention experienced
while reporting lock contention.
Guard collecting real call stacks with GODEBUG=profileruntimelocks=1,
since the available data has mixed semantics; we can easily capture an
M's own wait time, but we'd prefer for the profile entry of each
critical section to describe how long it made the other Ms wait. It's
too late in the Go 1.22 cycle to make the required changes to
futex-based locks. When not enabled, attribute the time to the sentinel
function instead.
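A hedged usage sketch: with GODEBUG=profileruntimelocks=1 set,
runtime-internal contention appears in the ordinary mutex profile:

    runtime.SetMutexProfileFraction(1) // sample every contention event
    defer runtime.SetMutexProfileFraction(0)
    // ... run the contended workload here ...
    pprof.Lookup("mutex").WriteTo(os.Stdout, 1)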
Fixes #57071
This is a roll-forward of https://go.dev/cl/528657, which was reverted
in https://go.dev/cl/543660
Reason for revert: de-flakes tests (reduces dependence on fine-grained
timers, correctly identifies contention on big-endian futex locks,
attempts to measure contention in the semaphore implementation but only
uses that secondary measurement to finish the test early, skips tests on
single-processor systems)
Change-Id: I31389f24283d85e46ad9ba8d4f514cb9add8dfb0
Reviewed-on: https://go-review.googlesource.com/c/go/+/544195
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Than McIntosh <thanm@google.com>
Auto-Submit: Rhys Hiltner <rhys@justin.tv>
Run-TryBot: Rhys Hiltner <rhys@justin.tv>
|
|
This change introduces new options to set the floating point
mode on ARM targets. The GOARM version number can optionally be
followed by ',hardfloat' or ',softfloat' to select whether to
use hardware instructions or software emulation for floating
point computations, respectively. For example,
GOARM=7,softfloat.
Previously, software floating point support was limited to
GOARM=5. With these options, software floating point is now
extended to all ARM versions, including GOARM=6 and 7. This
change also extends hardware floating point to GOARM=5.
GOARM=5 defaults to softfloat and GOARM=6 and 7 default to
hardfloat.
For #61588
Change-Id: I23dc86fbd0733b262004a2ed001e1032cf371e94
Reviewed-on: https://go-review.googlesource.com/c/go/+/514907
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
|
|
This reverts commit go.dev/cl/528657.
Reason for revert: broke a lot of builders.
Change-Id: I70c33062020e997c4df67b3eaa2e886cf0da961e
Reviewed-on: https://go-review.googlesource.com/c/go/+/543660
Reviewed-by: Than McIntosh <thanm@google.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Add runtime-internal locks to the mutex contention profile.
Store up to one call stack responsible for lock contention on the M,
until it's safe to contribute its value to the mprof table. Try to use
that limited local storage space for a relatively large source of
contention, and attribute any contention in stacks we're not able to
store to a sentinel _LostContendedLock function.
Avoid ballooning lock contention while manipulating the mprof table by
attributing to that sentinel function any lock contention experienced
while reporting lock contention.
Guard collecting real call stacks with GODEBUG=profileruntimelocks=1,
since the available data has mixed semantics; we can easily capture an
M's own wait time, but we'd prefer for the profile entry of each
critical section to describe how long it made the other Ms wait. It's
too late in the Go 1.22 cycle to make the required changes to
futex-based locks. When not enabled, attribute the time to the sentinel
function instead.
Fixes #57071
Change-Id: I3eee0ccbfc20f333b56f20d8725dfd7f3a526b41
Reviewed-on: https://go-review.googlesource.com/c/go/+/528657
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Auto-Submit: Rhys Hiltner <rhys@justin.tv>
Reviewed-by: Than McIntosh <thanm@google.com>
|
|
This CL adds four new time histogram metrics:
/sched/pauses/stopping/gc:seconds
/sched/pauses/stopping/other:seconds
/sched/pauses/total/gc:seconds
/sched/pauses/total/other:seconds
The "stopping" metrics measure the time taken to start a stop-the-world
pause. i.e., how long it takes stopTheWorldWithSema to stop all Ps.
This can be used to detect STW struggling to preempt Ps.
The "total" metrics measure the total duration of a stop-the-world
pause, from starting to stop-the-world until the world is started again.
This includes the time spent in the "stopping" phase.
The "gc" metrics are used for GC-related STW pauses. The "other" metrics
are used for all other STW pauses.
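A reading sketch via the standard runtime/metrics API:

    package main

    import (
        "fmt"
        "runtime/metrics"
    )

    func main() {
        samples := []metrics.Sample{
            {Name: "/sched/pauses/stopping/gc:seconds"},
            {Name: "/sched/pauses/total/gc:seconds"},
        }
        metrics.Read(samples)
        for _, s := range samples {
            h := s.Value.Float64Histogram() // pause-duration histogram
            fmt.Println(s.Name, "buckets:", len(h.Counts))
        }
    }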
All of these metrics start timing in stopTheWorldWithSema only after
successfully acquiring sched.lock, thus excluding lock contention on
sched.lock. The reasoning behind this is that while waiting on
sched.lock the world is not stopped at all (all other Ps can run), so
the impact of this contention is primarily limited to the goroutine
attempting to stop-the-world. Additionally, we already have some
visibility into sched.lock contention via contention profiles (#57071).
/sched/pauses/total/gc:seconds is conceptually equivalent to
/gc/pauses:seconds, so the latter is marked as deprecated and returns
the same histogram as the former.
In the implementation, there are a few minor differences:
* For both mark and sweep termination stops, /gc/pauses:seconds started
timing prior to calling stopTheWorldWithSema, thus including lock
contention.
These details are minor enough that I do not believe the slight change
in reporting will matter. For mark termination stops, moving the timing
stop into startTheWorldWithSema does have the side effect of requiring
other GC metric calculations to move outside of the STW, as they depend
on the same end time.
Fixes #63340
Change-Id: Iacd0bab11bedab85d3dcfb982361413a7d9c0d05
Reviewed-on: https://go-review.googlesource.com/c/go/+/534161
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
This change mostly implements the design described in #60773 and
includes a new scalable parser for the new trace format, available in
internal/trace/v2. I'll leave this commit message short because this is
clearly an enormous CL with a lot of detail.
This change does not hook up the new tracer into cmd/trace yet. A
follow-up CL will handle that.
For #60773.
Cq-Include-Trybots: luci.golang.try:gotip-linux-amd64-longtest,gotip-linux-amd64-longtest-race
Change-Id: I5d2aca2cc07580ed3c76a9813ac48ec96b157de0
Reviewed-on: https://go-review.googlesource.com/c/go/+/494187
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Currently any thread that tries to get the attention of all Ps (e.g.
stopTheWorldWithSema and forEachP) ends up in a non-preemptible state
waiting to preempt another thread. Thing is, that other thread might
also be in a non-preemptible state, trying to preempt the first thread,
resulting in a deadlock.
This is a general problem, but in practice it only boils down to one
specific scenario: a thread in GC is blocked trying to preempt a
goroutine to scan its stack while that goroutine is blocked in a
non-preemptible state to get the attention of all Ps.
There's currently a hack in a few places in the runtime to move the
calling goroutine into _Gwaiting before it goes into a non-preemptible
state to preempt other threads. This lets the GC scan its stack because
the goroutine is trivially preemptible. The only restriction is that
forEachP and stopTheWorldWithSema absolutely cannot reference the
calling goroutine's stack. This is generally not necessary, so things
are good.
Anyway, to avoid exposing the details of this hack, this change creates
a safer wrapper around forEachP (and then renames it to forEachP and the
existing one to forEachPInternal) that performs the goroutine status
change, just like stopTheWorld does. We're going to need to use this
hack with forEachP in the new tracer, so this avoids propagating the
hack further and leaves it as an implementation detail.
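A hedged sketch of the wrapper's shape (per the description above;
helper names follow the runtime's conventions but are not verbatim):

    func forEachP(reason waitReason, fn func(*p)) {
        systemstack(func() {
            gp := getg().m.curg
            // Make the caller trivially preemptible while it waits,
            // exactly as stopTheWorld does, so GC can scan its stack.
            casGToWaiting(gp, _Grunning, reason)
            forEachPInternal(fn)
            casgstatus(gp, _Gwaiting, _Grunning)
        })
    }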
Change-Id: I51f02e8d8e0a3172334d23787e31abefb8a129ab
Reviewed-on: https://go-review.googlesource.com/c/go/+/533455
Auto-Submit: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
|
|
The function WriteTabs has been renamed WritePluginTable.
Change-Id: I5f04b99b91498c41121f898cb7774334a730d7b4
GitHub-Last-Rev: c98ab3f87210cef9146a1aec97e2f1f4a2bc0593
GitHub-Pull-Request: golang/go#63595
Reviewed-on: https://go-review.googlesource.com/c/go/+/535996
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Auto-Submit: Ian Lance Taylor <iant@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
|
|
[This is an unmodified redo of CL 527056.]
Standard Ms set g0.stackguard1 to the same value as stackguard0 in
mstart0. For consistency, extra Ms should do the same for their g0. Do
this in needm -> callbackUpdateSystemStack.
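The change itself is tiny; a sketch of its shape:

    // In callbackUpdateSystemStack, mirror what mstart0 does:
    g0.stackguard1 = g0.stackguard0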
Background: getg().stackguard1 is used as the stack guard for the stack
growth prologue in functions marked //go:systemstack [1]. User Gs set
stackguard1 to ^uintptr(0) so that the check always fails, calling
morestackc, which throws to report a //go:systemstack function call on a
user stack.
g0 setting stackguard1 is unnecessary for this functionality. 0 would be
sufficient, as g0 is always allowed to call //go:systemstack functions.
However, since we have the check anyway, setting stackguard1 to the
actual stack bound is useful to detect actual stack overflows on g0
(though morestackc doesn't detect this case and would report a
misleading message about user stacks).
[1] cmd/internal/obj calls //go:systemstack functions AttrCFunc. This is
a holdover from when the runtime contained actual C functions. But since
CL 2275, it has simply meant "pretend this is a C function, which would
thus need to use the system stack". Hence the name morestackc. At this
point, this terminology is pretty far removed from reality and should
probably be updated to something more intuitive.
Change-Id: If315677217354465fbbfbd0d406d79be20db0cc3
Reviewed-on: https://go-review.googlesource.com/c/go/+/527716
Auto-Submit: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
This reverts CL 527056.
CL 525455 breaks darwin, alpine, and android. This CL must be reverted
in order to revert that CL.
For #62440.
Change-Id: I4e1b16e384b475a605e0214ca36c918d50faa22c
Reviewed-on: https://go-review.googlesource.com/c/go/+/527316
Reviewed-by: Cherry Mui <cherryyz@google.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
Reviewed-by: Bryan Mills <bcmills@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Standard Ms set g0.stackguard1 to the same value as stackguard0 in
mstart0. For consistency, extra Ms should do the same for their g0. Do
this in needm -> callbackUpdateSystemStack.
Background: getg().stackguard1 is used as the stack guard for the stack
growth prologue in functions marked //go:systemstack [1]. User Gs set
stackguard1 to ^uintptr(0) so that the check always fails, calling
morestackc, which throws to report a //go:systemstack function call on a
user stack.
g0 setting stackguard1 is unnecessary for this functionality. 0 would be
sufficient, as g0 is always allowed to call //go:systemstack functions.
However, since we have the check anyway, setting stackguard1 to the
actual stack bound is useful to detect actual stack overflows on g0
(though morestackc doesn't detect this case and would report a
misleading message about user stacks).
[1] cmd/internal/obj calls //go:systemstack functions AttrCFunc. This is
a holdover from when the runtime contained actual C functions. But since
CL 2275, it has simply meant "pretend this is a C function, which would
thus need to use the system stack". Hence the name morestackc. At this
point, this terminology is pretty far removed from reality and should
probably be updated to something more intuitive.
Change-Id: I8d0e5628ce31ac6a189a7d7a4124be85aef89862
Reviewed-on: https://go-review.googlesource.com/c/go/+/527056
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
|
|
When passing pointers to Go objects from Go to C, the cgo command
generates _Cgo_use(pN) for the unsafe.Pointer-typed arguments, so that
the Go compiler escapes these objects to the heap.
Since the C function may call back into Go, the Go stack might grow or
shrink, which would invalidate the pointers the C function holds.
After adding the #cgo noescape annotation for a C function, the cgo
command won't generate _Cgo_use(pN), and the Go compiler won't force
the objects to escape to the heap.
After adding the #cgo nocallback annotation for a C function, the C
function is declared to never call back into Go; if it does call back
into Go, the Go process will crash.
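An illustrative use of the two annotations (do_work is a hypothetical
C function):

    package main

    /*
    #cgo noescape do_work
    #cgo nocallback do_work
    void do_work(void* p);
    */
    import "C"

    import "unsafe"

    func work(p unsafe.Pointer) {
        C.do_work(p) // p need not escape; do_work must not call back into Go
    }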
Fixes #56378
Change-Id: Ifdca070584e0d349c7b12276270e50089e481f7a
GitHub-Last-Rev: f1a17b08b0590eca2670e404bbfedad5461df72f
GitHub-Pull-Request: golang/go#60399
Reviewed-on: https://go-review.googlesource.com/c/go/+/497837
Reviewed-by: Ian Lance Taylor <iant@google.com>
Reviewed-by: Bryan Mills <bcmills@google.com>
Run-TryBot: Bryan Mills <bcmills@google.com>
Auto-Submit: Bryan Mills <bcmills@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
|
|
Currently, the pcvalue cache is stack allocated for each operation
that needs to look up a lot of pcvalues. It's not always clear where
to put it, a lot of the time we just pass a nil cache, it doesn't get
reused across operations, and we put a surprising amount of effort
into threading these caches around.
This CL moves it to the M, where it can be long-lived and used by all
pcvalue lookups, and we don't have to carefully thread it across
operations.
This is a re-roll of CL 515276 with a fix for reentrant use of the
pcvalue cache from the signal handler.
Change-Id: Id94c0c0fb3004d1fda1b196790eebd949c621f28
Reviewed-on: https://go-review.googlesource.com/c/go/+/520063
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Auto-Submit: Austin Clements <austin@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
|
|
Change-Id: If93b6cfa5a598a5f4101c879a0cd88a194e4a6aa
Reviewed-on: https://go-review.googlesource.com/c/go/+/518116
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Andy Pan <panjf2000@gmail.com>
|
|
The pprof mutex profile was meant to match the Google C++ (now Abseil)
mutex profiler, originally designed and implemented by Mike Burrows.
When we worked on the Go version, pjw and I missed that C++ counts the
time each thread is blocked, even if multiple threads are blocked on a
mutex. That is, if 100 threads are blocked on the same mutex for the
same 10ms, that still counts as 1000ms of contention in C++. In Go, to
date, /debug/pprof/mutex has counted that as only 10ms of contention.
If 100 goroutines are blocked on one mutex and only 1 goroutine is
blocked on another mutex, we probably do want to see the first mutex
as being more contended, so the Abseil approach is the more useful one.
This CL adopts "contention scales with number of goroutines blocked",
to better match Abseil [1]. However, it still makes sure to attribute the
time to the unlock that caused the backup, not subsequent innocent
unlocks that were affected by the congestion. In this way it still gives
more accurate profiles than Abseil does.
[1] https://github.com/abseil/abseil-cpp/blob/lts_2023_01_25/absl/synchronization/mutex.cc#L2390
Fixes #61015.
Change-Id: I7eb9e706867ffa8c0abb5b26a1b448f6eba49331
Reviewed-on: https://go-review.googlesource.com/c/go/+/506415
Run-TryBot: Russ Cox <rsc@golang.org>
Auto-Submit: Russ Cox <rsc@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
|
|
This is subtle, and the compiler and runtime must be in sync.
It is easier to develop the rest of the changes (especially when using
toolstash save/restore) if this change is separated out and done first.
Preparation for proposal #61405. The actual logic in the
compiler will be guarded by a GOEXPERIMENT, but it is
easier not to have GOEXPERIMENT-specific data structures
in the runtime, so just add the field unconditionally.
Change-Id: I7ec7049b99ae98bf0db365d42966baeec56e3774
Reviewed-on: https://go-review.googlesource.com/c/go/+/510539
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
|
|
This reverts CL 515276.
This broke the longtest builders. For example:
https://build.golang.org/log/351a1a198a6b843b1881c1fb6cdef51f3e413e8b
Change-Id: Ie79067464fe8e226da31721cf127f3efb6011452
Reviewed-on: https://go-review.googlesource.com/c/go/+/516856
Auto-Submit: Austin Clements <austin@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
|
|
Currently, the pcvalue cache is stack allocated for each operation
that needs to look up a lot of pcvalues. It's not always clear where
to put it, a lot of the time we just pass a nil cache, it doesn't get
reused across operations, and we put a surprising amount of effort
into threading these caches around.
This CL moves it to the M, where it can be long-lived and used by all
pcvalue lookups, and we don't have to carefully thread it across
operations.
Change-Id: I675e583e0daac887c8ef77a402ba792648d96027
Reviewed-on: https://go-review.googlesource.com/c/go/+/515276
Run-TryBot: Austin Clements <austin@google.com>
Auto-Submit: Austin Clements <austin@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>
|