delete unused parameter userForced
Change-Id: I71c26ab5e3fadc532b6b1f266212c6f620769efd
GitHub-Last-Rev: 6a32c8ff6e63f8b149c0803e8b6636022bd9d066
GitHub-Pull-Request: golang/go#76732
Reviewed-on: https://go-review.googlesource.com/c/go/+/727680
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Tony Tang <jianfeng.tony@gmail.com>
|
|
Change-Id: I5dec35b1432705b3a52859c38e758220282226af
Reviewed-on: https://go-review.googlesource.com/c/go/+/726700
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Sean Liao <sean@liao.dev>
Auto-Submit: Dmitri Shuralyov <dmitshur@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
|
|
Implement secret.Do.
- When secret.Do returns:
  - Clear the stack used by the argument function.
  - Clear all the registers that might contain secrets.
- On stack growth in secret mode, clear the old stack.
- When objects are allocated in secret mode, mark them and then zero
  the marked objects immediately when they are freed.
- If the argument function panics, raise that panic as if it originated
  from secret.Do. This removes anything about the secret function
  from tracebacks.
For now, this is only implemented on linux for arm64 and amd64.
This is a rebased version of Keith Randall's initial implementation at
CL 600635. I have added arm64 support, signal handling, and preemption
handling, and have dealt with vDSOs spilling onto system stacks.
Fixes #21865
Change-Id: I6fbd5a233beeaceb160785e0c0199a5c94d8e520
Co-authored-by: Keith Randall <khr@golang.org>
Reviewed-on: https://go-review.googlesource.com/c/go/+/704615
Reviewed-by: Roland Shoemaker <roland@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Filippo Valsorda <filippo@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
|
|
The first part, assignWaitingGCWorker, selects a mark worker (if any)
and assigns it to the P. The second part, findRunnableGCWorker, is
responsible for actually marking the worker as runnable and updating the
CPU limiter.
The advantage of this split is that assignWaitingGCWorker is safe to do
during STW, which will allow the next CL to make selections during
procresize.
This change is a semantic no-op in preparation for the next CL.
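A minimal sketch of the split (the types and bodies here are
illustrative, not the real runtime code):

    package sched

    type g struct{}
    type p struct{ gcWorker *g }

    // Phase 1: pick a waiting mark worker and park it on the P. No
    // scheduler state changes, so this is safe during a stop-the-world.
    func assignWaitingGCWorker(pp *p, waiting *[]*g) {
        n := len(*waiting)
        if pp.gcWorker == nil && n > 0 {
            pp.gcWorker = (*waiting)[n-1]
            *waiting = (*waiting)[:n-1]
        }
    }

    // Phase 2: hand the assigned worker to the scheduler as runnable;
    // in the real runtime this is also where the CPU limiter updates.
    func findRunnableGCWorker(pp *p) *g {
        gp := pp.gcWorker
        pp.gcWorker = nil
        return gp
    }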
For #65694.
Change-Id: I6a6a636c8beb212185829946cfa1e49f706ac31a
Reviewed-on: https://go-review.googlesource.com/c/go/+/721001
Reviewed-by: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
|
|
atomic.Int64 automatically maintains proper alignment, avoiding the need
to manually adjust alignment back and forth as fields above change.
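For example (a sketch; the struct and field names are illustrative):

    package example

    import "sync/atomic"

    // atomic.Int64 guarantees 8-byte alignment of its value even on
    // 32-bit platforms, so no manual padding field is needed before
    // it, and adding or removing fields above it stays safe.
    type stats struct {
        flags uint32
        count atomic.Int64 // correctly aligned by the type itself
    }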
Change-Id: I6a6a636c4c3c366353f6dc8ecac473c075dd5cd9
Reviewed-on: https://go-review.googlesource.com/c/go/+/716700
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
|
|
Now that we have a reusable list type, use it to replace the custom
linked list code for spanSPMCs.
Change-Id: I6a6a636ca54f2ba4b5c7dddba607c94ebf3c3ac8
Reviewed-on: https://go-review.googlesource.com/c/go/+/714021
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Currently this lock is treated like a leaf lock, but it's not one. It
can acquire the globalAlloc lock via fixalloc and persistentalloc.
This means we need to figure out where it can be acquired. In general,
it can be acquired by any write barrier, which is already incredibly
broad. AFAICT, it can't be acquired directly otherwise, except when
destroying a spanSPMC during procresize, in which case we'll be holding
sched.lock.
Fixes #75916.
Change-Id: I2da1f5b82c750bf4e8d6a8a562046b9a17fd44be
Reviewed-on: https://go-review.googlesource.com/c/go/+/717500
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Added in CL 700496, freeSomeSpanSPMCs attempts to bound tail latency by
processing at most 64 entries at a time, as well as returning early if
it notices a preemption request. Both of those are attempts to reduce
tail latency, as we cannot preempt the function while it holds the lock.
This scheme is based on a similar scheme in freeSomeWbufs.
freeSomeWbufs has a key difference: all workbufs in its list are
unconditionally freed. So freeSomeWbufs will always make forward
progress in each call (unless it is constantly preempted).
In contrast, freeSomeSpanSPMCs only frees "dead" entries. If the list
contains >64 live entries, a call may make no progress, and the caller
will simply keep calling in a loop forever until the GC ends, at which
point it returns success early. The infinite loop likely restarts at
the next GC cycle.
The queues are used on each P, so it is easy to have 64 permanently live
queues if GOMAXPROCS >= 64. If GOMAXPROCS < 64, it is possible to
transiently have more queues, but spanQueue.drain increases queue size
in an attempt to reach a steady state of one queue per P.
We must drop work.spanSPMCs.lock to allow preemption, but dropping the
lock allows mutation of the linked list, meaning we cannot simply
continue iteration after retaking the lock. Since there is no
straightforward resolution to this, and we expect the list to generally
hold only around one entry per P, simply remove the batching and
process the entire list without preemption. We may want to revisit this
in the future for very high GOMAXPROCS, or if applications otherwise
regularly create very long lists.
Fixes #75771.
Change-Id: I6a6a636cd3be443aacde5a678c460aa7066b4c4a
Reviewed-on: https://go-review.googlesource.com/c/go/+/709575
Reviewed-by: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Proposal #74609
Change-Id: I97a754b128aac1bc5b7b9ab607fcd5bb390058c8
GitHub-Last-Rev: 60f2a192badf415112246de8bc6c0084085314f6
GitHub-Pull-Request: golang/go#74622
Reviewed-on: https://go-review.googlesource.com/c/go/+/688335
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: thepudds <thepudds1460@gmail.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>
|
|
This change removes the locked global span queue and replaces the
fixed-size local span queue with a variable-sized local span queue. The
variable-sized local span queue grows as needed to accommodate local
work. With no global span queue either, GC workers balance work amongst
themselves by stealing from each other.
The new variable-sized local span queues are inspired by the P-local
deque underlying sync.Pool. Unlike the sync.Pool deque, however, both
the owning P and stealing Ps take spans from the tail, making this
incarnation a strict queue, not a deque. This is intentional, since we
want a queue-like order to encourage objects to accumulate on each span.
These variable-sized local span queues are crucial to mark termination,
just like the global span queue was. To avoid hitting the ragged barrier
too often, we must check whether any Ps have any spans on their
variable-sized local span queues. We maintain a per-P atomic bitmask
(another pMask) that contains this state. We can also use this to speed
up stealing by skipping Ps that don't have any local spans.
The variable-sized local span queues are slower than the old fixed-size
local span queues because of the additional indirection, so this change
adds a non-atomic local fixed-size queue. This risks getting work stuck
on it, so, similarly to how workbufs work, each worker will occasionally
dump some spans onto its local variable-sized queue. This scales much
more nicely than dumping to a global queue, but is still visible to all
other Ps.
For #73581.
Change-Id: I814f54d9c3cc7fa7896167746e9823f50943ac22
Reviewed-on: https://go-review.googlesource.com/c/go/+/700496
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Right now, gcMarkWorkAvailable is used in two scenarios. The first is
critical: we use it to determine whether we're approaching mark
termination, and it's crucial to reaching a fixed point across the
ragged barrier in gcMarkDone. The second is a heuristic: should we spin
up another GC worker?
This change splits gcMarkWorkAvailable into these two separate
conditions. This change also deduplicates the logic for updating
work.nwait into more abstract helpers "gcBeginWork" and "gcEndWork."
This change is solely refactoring, and should be a no-op. There are only
two functional changes:
- work.nwait is incremented after setting pp.gcMarkWorkerMode in the
background worker code. I don't believe this change is observable
except if the code fails to update work.nwait (either it results in a
nonsensical number, or the stack is corrupted), in which case the
goroutine may not be labeled as a mark worker in the resulting stack
trace (it should be obvious from the stack frames though).
- endCheckmarks also checks work.nwait == work.nproc, which should be
fine since we never mutate work.nwait on that path. That extra check
should be a no-op.
Splitting these two use-cases for gcMarkWorkAvailable is conceptually
helpful, and the checks may also diverge from Green Tea once we get rid
of the global span queue.
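A sketch of the two helpers (the bodies are assumed from the
description above):

    package example

    import "sync/atomic"

    var work struct {
        nwait atomic.Int32 // workers currently without work
        nproc atomic.Int32 // total workers
    }

    // gcBeginWork and gcEndWork centralize the nwait bookkeeping that
    // call sites previously open-coded.
    func gcBeginWork() { work.nwait.Add(-1) }

    // gcEndWork reports whether this was the last working worker, the
    // signal that we may be approaching mark termination.
    func gcEndWork() bool {
        return work.nwait.Add(1) == work.nproc.Load()
    }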
Change-Id: I0bec244a14ee82919c4deb7c1575589c0dca1089
Reviewed-on: https://go-review.googlesource.com/c/go/+/701176
Reviewed-by: Michael Pratt <mpratt@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Currently isShrinkStackSafe returns false if a goroutine is put into
_Gwaiting while it is actually executing on the system stack.
For a long time, we needed to be robust to the goroutine's stack
shrinking while we're executing on the system stack.
Unfortunately, this has become harder and harder to do over time. First,
the execution tracer might be invoked in these contexts and it may wish
to take a stack trace. We cannot take the stack trace if the garbage
collector might concurrently shrink the stack of the user goroutine we
want to trace. So, isShrinkStackSafe grew the condition that we wouldn't
try to shrink the stack in these cases if execution tracing was enabled.
Today, runtime.mutex may wish to take a stack trace for the mutex
profile, and it can happen in a very similar context. Taking the stack
trace is no longer safe.
This change takes the stance that we stop trying to make this work at
all, and instead guarantee that the stack won't move while we're in
these sensitive contexts.
Change-Id: Ibfad2d7a335ee97cecaa48001df0db9812deeab1
Reviewed-on: https://go-review.googlesource.com/c/go/+/692716
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
|
|
Almost everywhere we stop the world we casGToWaitingForGC to prevent
mutual deadlock with the GC trying to scan our stack. This historically
was only necessary if we weren't stopping the world to change the GC
phase, because what we were worried about was mutual deadlock with mark
workers' use of suspendG. And, they were the only users of suspendG.
In Go 1.22 this changed. The execution tracer began using suspendG, too.
This leads to the possibility of mutual deadlock between the execution
tracer and a goroutine trying to start or end the GC mark phase. The fix
is simple: make the stop-the-world calls for the GC also call
casGToWaitingForGC. This way, suspendG is guaranteed to make progress in
this circumstance, and once it completes, the stop-the-world can
complete as well.
We can take this a step further, though, and move casGToWaitingForGC
into stopTheWorldWithSema, since there's no longer really a place we can
afford to skip this detail.
While we're here, rename casGToWaitingForGC to casGToWaitingForSuspendG,
since the GC is now not the only potential source of mutual deadlock.
Fixes #72740.
Change-Id: I5e3739a463ef3e8173ad33c531e696e46260692f
Reviewed-on: https://go-review.googlesource.com/c/go/+/681501
Reviewed-by: Carlos Amedee <carlos@golang.org>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
|
|
There are two additional sources of memory overhead per P that come from
greenteagc. One is for ptrBuf, but on platforms other than Windows it
doesn't actually cost anything due to demand-paging (Windows also
demand-pages, but the memory is 'committed' so it still counts against
OS RSS metrics). The other is for per-sizeclass scan stats. However when
greenteagc is disabled, most of these scan stats are completely unused.
The worst-case memory overhead from these two sources is relatively
small (about 10 KiB per P), but for programs with a small memory
footprint running on a machine with a lot of cores, this can be
significant (single-digit percent).
This change does two things. First, it puts ptrBuf initialization behind
the greenteagc experiment, so now that memory is never allocated by
default. Second, it abstracts the implementation details of scan stat
collection and emission, such that we can have two different
implementations depending on the build tag. This lets us remove all the
unused stats when the greenteagc experiment is disabled, reducing the
memory overhead of the stats from ~2.6 KiB per P to 536 bytes per P.
This is enough to make the difference no longer noticeable in our
benchmark suite.
Fixes #73931.
Change-Id: I4351f1cbb3f6743d8f5922d757d73442c6d6ad3f
Reviewed-on: https://go-review.googlesource.com/c/go/+/678535
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
|
|
This change adds tracking for approximate finalizer and cleanup queue
lengths. These lengths are reported once every GC cycle as a single line
printed to stderr when GODEBUG=checkfinalizer>0.
This change lays the groundwork for runtime/metrics metrics to produce
the same values.
For #72948.
For #72950.
Change-Id: I081721238a0fc4c7e5bee2dbaba6cfb4120d1a33
Reviewed-on: https://go-review.googlesource.com/c/go/+/671437
Reviewed-by: Michael Pratt <mpratt@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
This new debug mode detects cleanup/finalizer leaks using checkmark
mode. It runs a partial GC using only specials as roots. If the GC can
find a path from one of these roots back to the object the special is
attached to, then the object might never be reclaimed. (The cycle could
be broken in the future, but it's almost certainly a bug.)
This debug mode is very barebones. It contains no type information and
no stack location for where the finalizer or cleanup was created.
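A sketch of the kind of bug this mode detects (the type and sizes here
are illustrative):

    package example

    import "runtime"

    type conn struct{ buf []byte }

    func leak() {
        c := &conn{buf: make([]byte, 1<<20)}
        // The cleanup closure captures c, so c is reachable from its
        // own special and can never be reclaimed: exactly the
        // root-to-object path the checkmark pass reports.
        runtime.AddCleanup(c, func(int) { _ = c.buf }, 0)
    }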
For #72949.
Change-Id: Ibffd64c1380b51f281950e4cfe61f677385d42a5
Reviewed-on: https://go-review.googlesource.com/c/go/+/634599
Reviewed-by: Carlos Amedee <carlos@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
|
|
ncpu is the total logical CPU count at startup. It is never updated. For
#73193, we will start using updated CPU counts for updated GOMAXPROCS,
making the ncpu name a bit ambiguous. Change to a less ambiguous name.
While we're at it, give the OS-specific lookup functions a common name
so they can be used outside of osinit later.
For #73193.
Change-Id: I6a6a636cf21cc60de36b211f3c374080849fc667
Reviewed-on: https://go-review.googlesource.com/c/go/+/672277
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
|
|
This change splits the finalizer and cleanup queues and implements a new
lock-free blocking queue for cleanups. The basic design is as follows:
The cleanup queue is organized in fixed-sized blocks. Individual cleanup
functions are queued, but only whole blocks are dequeued.
Enqueuing cleanups places them in P-local cleanup blocks. These are
flushed to the full list as they get full. Cleanups can only be enqueued
by an active sweeper.
Dequeuing cleanups always dequeues entire blocks from the full list.
Cleanup blocks can be dequeued and executed at any time.
The very last active sweeper in the sweep phase is responsible for
flushing all local cleanup blocks to the full list. It can do this
without any synchronization because the next GC can't start yet, so we
can be very certain that nobody else will be accessing the local blocks.
Cleanup blocks are stored off-heap because they need to be allocated
by the sweeper, which is called from heap allocation paths. As a result,
the GC treats cleanup blocks as roots, just like finalizer blocks.
Flushes to the full list signal to the scheduler that cleanup goroutines
should be awoken. Every time the scheduler goes to wake a cleanup
goroutine and there are more signals than goroutines to wake, it
forwards the surplus signal to runtime.AddCleanup, which then creates
another goroutine the next time it is called, up to gomaxprocs
goroutines.
The signals here are a little convoluted, but exist because the sweeper
and the scheduler cannot safely create new goroutines.
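A sketch of the queue's shape (the block size, names, and flushing are
simplified; the real version is lock-free):

    package example

    type cleanupBlock struct {
        next *cleanupBlock
        n    int
        fns  [512]func()
    }

    // full is the list that whole blocks are flushed to and dequeued
    // from; cleanup goroutines take entire blocks at a time.
    var full *cleanupBlock

    // pLocal is the P-local block that an active sweeper enqueues into.
    type pLocal struct{ block *cleanupBlock }

    func (pl *pLocal) enqueue(fn func()) {
        if pl.block == nil {
            pl.block = new(cleanupBlock)
        }
        b := pl.block
        b.fns[b.n] = fn
        b.n++
        if b.n == len(b.fns) { // block is full: flush it
            b.next, full = full, b
            pl.block = nil
        }
    }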
For #71772.
For #71825.
Change-Id: Ie839fde2b67e1b79ac1426be0ea29a8d923a62cc
Reviewed-on: https://go-review.googlesource.com/c/go/+/650697
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
|
|
We've settled on calling the group of goroutines started by
synctest.Run a "bubble". At the time the runtime implementation
was written, I was still calling this a "group". Update the code
to match the current terminology.
Change-Id: I31b757f31d804b5d5f9564c182627030a9532f4a
Reviewed-on: https://go-review.googlesource.com/c/go/+/670135
Reviewed-by: Michael Pratt <mpratt@google.com>
Auto-Submit: Damien Neil <dneil@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
This change moves the unique package away from using a concurrent map
and instead toward a bespoke concurrent canonicalization map. The map
holds all its keys weakly, though keys may be looked up by value. The
result is the strong pointer for the canonical value. Entries in the map
are automatically cleaned up once the canonical reference no longer
exists.
Why do this? There's a problem with the current implementation when it
comes to chains of unique.Handle: because the unique map will have a
unique.Handle stored in its keys, each nested handle must be cleaned up
1 GC at a time. It takes N GC cycles, at minimum, to clean up a nested
chain of N handles. This implementation, where the *only* value in the
set is weakly-held, does not have this problem. The entire chain is
dropped at once.
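For instance, with the public API, a nested chain looks like this
(before this change, the outer handle had to be collected in one cycle
before the inner one could be collected in the next):

    package example

    import "unique"

    func nested() unique.Handle[unique.Handle[string]] {
        h1 := unique.Make("gopher")
        // h1 is stored inside the outer handle's key, so under the old
        // map each level required its own GC cycle to be cleaned up.
        return unique.Make(h1)
    }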
The canon map implementation is a stripped-down version of HashTrieMap.
The weak set implementation also has lower memory overheads by virtue of
the fact that keys are all stored weakly. Whereas the previous map had
both a T and a weak.Pointer[T], this *only* has a weak.Pointer[T].
The canonicalization map is a better abstraction overall and
dramatically simplifies the unique.Make code.
While we're here, delete the background goroutine and switch to
runtime.AddCleanup. This is a step toward fixing #71772. We still need
some kind of back-pressure mechanism, which will be implemented in a
follow-up CL.
For #71772.
Fixes #71846.
Change-Id: I5b2ee04ebfc7f6dd24c2c4a959dd0f6a8af24ca4
Reviewed-on: https://go-review.googlesource.com/c/go/+/650256
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: David Chase <drchase@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
|
|
Our current parallel mark algorithm suffers from frequent stalls on
memory since its access pattern is essentially random. Small objects
are the worst offenders, since each one forces pulling in at least one
full cache line to access even when the amount to be scanned is far
smaller than that. Each object also requires an independent access to
per-object metadata.
The purpose of this change is to improve garbage collector performance
by scanning small objects in batches to obtain better cache locality
than our current approach. The core idea behind this change is to defer
marking and scanning small objects, and then scan them in batches
localized to a span.
This change adds scanned bits to each small object (<=512 bytes) span in
addition to mark bits. The scanned bits indicate that the object has
been scanned. (One way to think of them is "grey" bits and "black" bits
in the tri-color mark-sweep abstraction.) Each of these spans is always
8 KiB and if they contain pointers, the pointer/scalar data is already
packed together at the end of the span, allowing us to further optimize
the mark algorithm for this specific case.
When the GC encounters a pointer, it first checks if it points into a
small object span. If so, it is first marked in the mark bits, and then
the object is queued on a work-stealing P-local queue. This object
represents the whole span, and we ensure that a span can only appear at
most once in any queue by maintaining an atomic ownership bit for each
span. Later, when the pointer is dequeued, we scan every object with a
set mark that doesn't have a corresponding scanned bit. If it turns out
that was the only object in the mark bits since the last time we scanned
the span, we scan just that object directly, essentially falling back to
the existing algorithm. noscan objects have no scan work, so they are
never queued.
Each span's mark and scanned bits are co-located together at the end of
the span. Since the span is always 8 KiB in size, it can be found with
simple pointer arithmetic. Next to the marks and scans we also store the
size class, eliminating the need to access the span's mspan altogether.
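A sketch of the resulting metadata lookup (the footer size here is an
assumption for illustration):

    package example

    const spanSize = 8 << 10 // small-object spans are always 8 KiB

    func spanBase(addr uintptr) uintptr {
        return addr &^ (spanSize - 1)
    }

    // metaAddr locates the mark/scanned bits and size class packed at
    // the end of the span, with no access to the span's mspan at all.
    func metaAddr(addr uintptr) uintptr {
        const footerSize = 64 // assumed size of the metadata footer
        return spanBase(addr) + spanSize - footerSize
    }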
The work-stealing P-local queue is a new source of GC work. If this
queue gets full, half of it is dumped to a global linked list of spans
to scan. The regular scan queues are always prioritized over this queue
to allow time for darts to accumulate. Stealing work from other Ps is a
last resort.
This change also adds a new debug mode under GODEBUG=gctrace=2 that
dumps whole-span scanning statistics by size class on every GC cycle.
A future extension to this CL is to use SIMD-accelerated scanning
kernels for scanning spans with high mark bit density.
For #19112. (Deadlock averted in GOEXPERIMENT.)
For #73581.
Change-Id: I4bbb4e36f376950a53e61aaaae157ce842c341bc
Reviewed-on: https://go-review.googlesource.com/c/go/+/658036
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Currently we assume alignment to 8 bytes, so we can steal the low 3 bits.
This CL assumes alignment to 512 bytes, so we can steal the low 9 bits.
That's 6 extra bits!
Aligning to 512 bytes wastes a bit of space but it is not egregious.
Most of the objects that we make tagged pointers to are pretty big.
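In sketch form (the names are illustrative):

    package example

    const tagBits = 9 // was 3 with 8-byte alignment
    const tagMask = 1<<tagBits - 1

    // pack stores a tag in the low bits that 512-byte alignment
    // guarantees are zero.
    func pack(addr uintptr, tag uint16) uintptr {
        return addr | uintptr(tag)&tagMask
    }

    func addrOf(tp uintptr) uintptr { return tp &^ tagMask }
    func tagOf(tp uintptr) uint16   { return uint16(tp & tagMask) }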
Update #49405
Change-Id: I66fc7784ac1be5f12f285de1d7851d5a6871fb75
Reviewed-on: https://go-review.googlesource.com/c/go/+/665815
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Before this CL, all users of gQueue and gList stored the size of the
structure in a separate variable, updated it manually, and passed it as
a separate argument to various functions. This CL adds a size field to
the gQueue and gList structures and moves the size calculation into the
implementation of their API. This eliminates a source of errors from
manual size bookkeeping and simplifies function signatures.
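A sketch of the new shape (the field names are assumed):

    package example

    type g struct{ schedlink *g }

    type gQueue struct {
        head, tail *g
        size       int32 // maintained by the API, never by callers
    }

    func (q *gQueue) pushBack(gp *g) {
        gp.schedlink = nil
        if q.tail != nil {
            q.tail.schedlink = gp
        } else {
            q.head = gp
        }
        q.tail = gp
        q.size++
    }

    func (q *gQueue) pop() *g {
        gp := q.head
        if gp != nil {
            q.head = gp.schedlink
            if q.head == nil {
                q.tail = nil
            }
            q.size--
        }
        return gp
    }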
Change-Id: I087da2dfaec4925e4254ad40fce5ccb4c175ec41
Reviewed-on: https://go-review.googlesource.com/c/go/+/664777
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Auto-Submit: Keith Randall <khr@golang.org>
|
|
This change moves finBlockSize into mfinal.go and renames finblock to
finBlock.
Change-Id: I20a0bc3907e7b028a2caa5d2fe8cf3f76332c871
Reviewed-on: https://go-review.googlesource.com/c/go/+/650695
Auto-Submit: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
|
|
This change deduplicates trace wire format definitions between the
runtime and the trace parser by making the internal/trace/tracev2
package the source of truth.
Change-Id: Ia0721d3484a80417e40ac473ec32870bee73df09
Reviewed-on: https://go-review.googlesource.com/c/go/+/644221
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
This change fixes GODEBUG=gccheckmark=1 which seems to have bit-rotted.
Because the root jobs weren't being reset, it wasn't doing anything.
Then, it turned out that checkmark mode would queue up noscan objects in
workbufs, which caused it to fail. Then it turned out checkmark mode was
broken with user arenas, since their heap arenas are not registered
anywhere. Then, it turned out that checkmark mode could just not run
properly if the goroutine's preemption flag was set (since
sched.gcwaiting is true during the STW). And lastly, it turned out that
async preemption could cause erroneous checkmark failures.
This change fixes all these issues and adds a simple smoke test to dist
to run the runtime tests under gccheckmark, which exercises all of these
issues.
Fixes #69074.
Fixes #69377.
Fixes #69376.
Change-Id: Iaa0bb7b9e63ed4ba34d222b47510d6292ce168bc
Cq-Include-Trybots: luci.golang.try:gotip-linux-amd64-longtest
Reviewed-on: https://go-review.googlesource.com/c/go/+/608915
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>
|
|
Add an internal (for now) implementation of testing/synctest.
The synctest.Run function executes a tree of goroutines in an
isolated environment using a fake clock. The synctest.Wait function
allows a test to wait for all other goroutines within the test
to reach a blocking point.
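A sketch of the intended use (the package was internal at this point,
so the import path is for illustration only):

    package example

    import "testing/synctest" // exported later; internal at this CL

    func demo() {
        synctest.Run(func() {
            done := false
            go func() { done = true }()
            // Wait returns once every other goroutine in the bubble
            // is durably blocked or has exited, so the write above is
            // guaranteed to have happened.
            synctest.Wait()
            if !done {
                panic("unreachable")
            }
        })
    }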
For #67434
For #69687
Change-Id: Icb39e54c54cece96517e58ef9cfb18bf68506cfc
Reviewed-on: https://go-review.googlesource.com/c/go/+/591997
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
This change introduces AddCleanup to the runtime package. AddCleanup
attaches a cleanup function to a pointer to an object.
The Stop method on Cleanups will be implemented in a follow-up CL.
AddCleanup is intended to be an incremental improvement over
SetFinalizer and will result in SetFinalizer being deprecated.
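A minimal usage sketch based on the description above (the types here
are illustrative):

    package example

    import (
        "fmt"
        "runtime"
    )

    type resource struct{ id int }

    func open(id int) *resource {
        r := &resource{id: id}
        // The cleanup runs some time after r becomes unreachable. The
        // argument must not reference r itself, or r would never
        // become unreachable.
        runtime.AddCleanup(r, func(id int) {
            fmt.Println("released resource", id)
        }, id)
        return r
    }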
For #67535
Change-Id: I99645152e3fdcee85fcf42a4f312c6917e8aecb1
Reviewed-on: https://go-review.googlesource.com/c/go/+/627695
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
|
|
Currently it's possible for weak->strong conversions to create more GC
work during mark termination. When a weak->strong conversion happens
during the mark phase, we need to mark the newly-strong pointer, since
it may now be the only pointer to that object. In other words, the
object could be white.
But queueing new white objects creates GC work, and if this happens
during mark termination, we could end up violating mark termination
invariants. In the parlance of the mark termination algorithm, the
weak->strong conversion is a non-monotonic source of GC work, unlike the
write barriers (which will eventually only see black objects).
This change fixes the problem by forcing weak->strong conversions to
block during mark termination. We can do this efficiently by setting a
global flag before the ragged barrier that is checked at each
weak->strong conversion. If the flag is set, then the conversions block.
The ragged barrier ensures that all Ps have observed the flag and that
any weak->strong conversions which completed before the ragged barrier
have their newly-minted strong pointers visible in GC work queues if
necessary. We later unset the flag and wake all the blocked goroutines
during the mark termination STW.
There are a few subtleties that we need to account for. For one, it's
possible that a goroutine which blocked in a weak->strong conversion
wakes up only to find it's mark termination time again, so we need to
recheck the global flag on wake. We should also stay non-preemptible
while performing the check, so that if the check *does* appear as true,
it cannot switch back to false while we're actively trying to block. If
it switches to false while we try to block, then we'll be stuck in the
queue until the following GC.
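In sketch form, the conversion-side gate looks roughly like this (all
names here are assumed):

    package example

    import "sync/atomic"

    var blockWeakToStrong atomic.Bool
    var weakWake = make(chan struct{}) // signaled during the mark termination STW

    func strongFromWeak(resolve func() uintptr) uintptr {
        for {
            // In the runtime this check runs non-preemptibly, so the
            // flag cannot flip after we observe false but before we
            // finish resolving and marking the pointer.
            if !blockWeakToStrong.Load() {
                return resolve()
            }
            <-weakWake // on wake, recheck: it may be mark termination again
        }
    }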
All-in-all, this CL is more complicated than I would have liked, but
it's the only idea so far that is clearly correct to me at a high level.
This change adds a test which is somewhat invasive as it manipulates
mark termination, but hopefully that infrastructure will be useful for
debugging, fixing, and regression testing mark termination whenever we
do fix it.
Fixes #69803.
Change-Id: Ie314e6fd357c9e2a07a9be21f217f75f7aba8c4a
Reviewed-on: https://go-review.googlesource.com/c/go/+/623615
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
|
|
Change-Id: If5132400aac0ef00e467958beeaab5e64d053d10
Reviewed-on: https://go-review.googlesource.com/c/go/+/619099
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Austin Clements <austin@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
frugal no longer uses these methods as of the next Go version
Fixes #69222
Change-Id: Ie71de0752cabef7d5584d3392d6e5920ba742350
Reviewed-on: https://go-review.googlesource.com/c/go/+/609918
Auto-Submit: Ian Lance Taylor <iant@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
For #67401.
Change-Id: I015408a3f437c1733d97160ef2fb5da6d4efcc5c
Reviewed-on: https://go-review.googlesource.com/c/go/+/587598
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Russ Cox <rsc@golang.org>
|
|
Ignored these linknames which have not worked for a while:
github.com/xtls/xray-core:
context.newCancelCtx removed in CL 463999 (Feb 2023)
github.com/u-root/u-root:
funcPC removed in CL 513837 (Jul 2023)
tinygo.org/x/drivers:
net.useNetdev never existed
For #67401.
Change-Id: I9293f4ef197bb5552b431de8939fa94988a060ce
Reviewed-on: https://go-review.googlesource.com/c/go/+/587576
Auto-Submit: Russ Cox <rsc@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
For #67401.
Change-Id: I7dd28c3b01a1a647f84929d15412aa43ab0089ee
Reviewed-on: https://go-review.googlesource.com/c/go/+/587575
Reviewed-by: Cherry Mui <cherryyz@google.com>
Auto-Submit: Russ Cox <rsc@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
For #67401.
Change-Id: Icc10ede72547d8020c0ba45e89d954822a4b2455
Reviewed-on: https://go-review.googlesource.com/c/go/+/587218
Auto-Submit: Russ Cox <rsc@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
allocfreetrace prints all allocations and frees to stderr. It's not
terribly useful because its overhead is so large that it is only
feasible to use for the most trivial programs. A follow-up CL will
replace it with something that is both more thorough and lower
overhead.
Change-Id: I1d668fee8b6aaef5251a5aea3054ec2444d75eb6
Reviewed-on: https://go-review.googlesource.com/c/go/+/583376
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
|
|
This change adds the unique package for canonicalizing values, as
described by the proposal in #62483.
Fixes #62483.
Change-Id: I1dc3d34ec12351cb4dc3838a8ea29a5368d59e99
Reviewed-on: https://go-review.googlesource.com/c/go/+/574355
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Ingo Oeser <nightlyone@googlemail.com>
Reviewed-by: David Chase <drchase@google.com>
|
|
The previous CL, CL 570257, made it so that STW time no longer
overlapped with other CPU time tracking. However, what we lost was
insight into the CPU time spent _stopping_ the world, which can be just
as important. There's pretty much no easy way to measure this
indirectly, so this CL implements a direct measurement: whenever a P
enters _Pgcstop, it writes down what time it did so. stopTheWorld then
accumulates all the time deltas between when it finished stopping the
world and each P's stop time into a total additional pause time. The GC
pause cases then accumulate this number into the metrics.
This should cause minimal additional overhead in stopping the world. GC
STWs already take on the order of 10s to 100s of microseconds. Even for
100 Ps, the extra `nanotime` call per P is only 1500ns of additional CPU
time. This is likely to be much less in actual pause latency, since it
all happens concurrently.
Change-Id: Icf190ffea469cd35ebaf0b2587bf6358648c8554
Reviewed-on: https://go-review.googlesource.com/c/go/+/574215
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Nicolas Hillegeer <aktau@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
|
|
Currently the GC CPU pause time metrics start measuring before the STW
is complete. This results in a slightly less accurate measurement and
creates some overlap with other timings (for example, the idle time of
idle Ps) that will cause double-counting.
This CL adds a field to worldStop to track the point at which the world
actually stopped and uses that as the basis for the GC CPU pause time
metrics, basically eliminating this overlap.
Note that this will cause Ps in _Pgcstop before the world is fully
stopped to be counted as user time. A follow-up CL will fix this
discrepancy.
Change-Id: I287731f08415ffd97d327f582ddf7e5d2248a6f5
Reviewed-on: https://go-review.googlesource.com/c/go/+/570258
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Nicolas Hillegeer <aktau@google.com>
|
|
This change fixes a possible race with updating metrics and reading
them. The update is intended to be protected by the world being stopped,
but here, it clearly isn't.
Fixing this lets us lower the thresholds in the metrics tests by an
order of magnitude, because the only thing we have to worry about now is
floating point error (the tests were previously written assuming the
floating point error was much higher than it actually was; that turns
out not to be the case, and this bug was the problem instead). However,
this still isn't that tight of a bound; we still want to catch any and
all problems of exactness. For this purpose, this CL adds a test to
check the source-of-truth (in uint64 nanoseconds) that ensures the
totals exactly match.
This means we unfortunately have to take another time measurement, but
for now let's prioritize correctness. A few additional nanoseconds of
STW time won't be terribly noticeable.
Fixes #66212.
Change-Id: Id02c66e8a43c13b1f70e9b268b8a84cc72293bfd
Reviewed-on: https://go-review.googlesource.com/c/go/+/570257
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Nicolas Hillegeer <aktau@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Currently we use stwprocs as the multiplier for the STW CPU time
computation, but this isn't the same as GOMAXPROCS, which is used for
the total time in the CPU metrics. The two numbers need to be
comparable, so this change switches to using maxprocs to make it so.
Change-Id: I423e3c441d05b1bd656353368cb323289661e302
Reviewed-on: https://go-review.googlesource.com/c/go/+/570256
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Nicolas Hillegeer <aktau@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Currently this is done manually in two places. Replace these manual
updates with a method that also forces the caller to be mindful that the
number will be multiplied (and that it needs to be). This will make
follow-up changes simpler too.
Change-Id: I81ea844b47a40ff3470d23214b4b2fb5b71a4abe
Reviewed-on: https://go-review.googlesource.com/c/go/+/570255
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Currently, the execution tracer may attempt to take a stack trace of a
goroutine whose stack it does not own. For example, if the goroutine is
in _Grunnable or _Gwaiting. This is easily fixed in all cases by simply
moving the emission of GoStop and GoBlock events to before the
casgstatus happens. The goroutine status is what is used to signal stack
ownership, and the GC may shrink a goroutine's stack if it can acquire
the scan bit.
Although this is easily fixed, the interaction here is very subtle,
because stack ownership is only implicit in the goroutine's scan status.
To make this invariant more maintainable and less error-prone in the
future, this change adds a GODEBUG setting that checks, at the point of
taking a stack trace, whether the caller owns the goroutine. This check
is not quite perfect because there's no way for the stack tracing code
to know that the _Gscan bit was acquired by the caller, so for
simplicity it assumes that it was the caller that acquired the scan bit.
In all other cases however, we can check for ownership precisely. At the
very least, this check is sufficient to catch the issue this change is
fixing.
To make sure this debug check doesn't bitrot, it's always enabled during
trace testing. This new mode has actually caught a few other issues
already, so this change fixes them.
One issue that this debug mode caught was that it's not safe to take a
stack trace of a _Gwaiting goroutine that's being unparked.
Another much bigger issue this debug mode caught was the fact that the
execution tracer could try to take a stack trace of a G that was in
_Gwaiting solely to avoid a deadlock in the GC. The execution tracer
already has a partial list of these cases since they're modeled as the
goroutine just executing as normal in the tracer, but this change takes
the list and makes it more formal. In this specific case, we now prevent
the GC from shrinking the stacks of goroutines in this state if tracing
is enabled. The stack traces from these scenarios are too useful to
discard, but there is indeed a race here between the tracer and any
attempt to shrink the stack by the GC.
Change-Id: I019850dabc8cede202fd6dcc0a4b1f16764209fb
Cq-Include-Trybots: luci.golang.try:gotip-linux-amd64-longtest,gotip-linux-amd64-longtest-race
Reviewed-on: https://go-review.googlesource.com/c/go/+/573155
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
|
|
For #65355
Change-Id: I65dd090fb99de9b231af2112c5ccb0eb635db2be
Reviewed-on: https://go-review.googlesource.com/c/go/+/560155
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Ibrahim Bazoka <ibrahimbazoka729@gmail.com>
Auto-Submit: Emmanuel Odeke <emmanuel@orijtech.com>
|
|
Change-Id: Icb6d9ca996b4119d8636d9f7f6a56e510d74d059
GitHub-Last-Rev: 08178e8ff798f4a51860573788c9347a0fb6bc40
GitHub-Pull-Request: golang/go#66188
Reviewed-on: https://go-review.googlesource.com/c/go/+/569979
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: qiulaidongfeng <2645477756@qq.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
|
|
I ran go fmt to fix formatting across the entire repository.
Change-Id: I2f09166b6b8ba0ffb0ba27f6500efb0ea4cf21ff
Reviewed-on: https://go-review.googlesource.com/c/go/+/566835
Reviewed-by: Ian Lance Taylor <iant@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>
Auto-Submit: Ian Lance Taylor <iant@golang.org>
|
|
gcBgMarkStartWorkers currently starts workers one at a time, using a
note to communicate readiness back from the worker.
However, this is a pretty standard goroutine, so we can just use a
channel to communicate between the goroutines.
In addition to being conceptually simpler, using channels has the
additional advantage of coordinating with the scheduler. Notes use OS
locks and sleep the entire thread, requiring other threads to run the
other goroutines. Waiting on a channel allows the scheduler to directly
run another goroutine. When the worker sends to the channel, the
scheduler can use runnext to run gcBgMarkStartWorker immediately,
reducing latency.
We could additionally batch-start all workers and then wait only once;
however, this would defeat runnext switching between the workers and
gcBgMarkStartWorkers, so in a heavily loaded system we expect the
direct switches to reduce latency.
Change-Id: Iedf0d2ad8ad796b43fd8d32ccb1e815cfe010cb4
Reviewed-on: https://go-review.googlesource.com/c/go/+/558535
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
|
|
Change-Id: I2eac85b502df9851df294f8d46c7845f635dde9b
GitHub-Last-Rev: 3c8382442a0fadb355be9e4656942c2e03db2391
GitHub-Pull-Request: golang/go#64198
Reviewed-on: https://go-review.googlesource.com/c/go/+/542697
Run-TryBot: Jes Cok <xigua67damn@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
|
|
This CL adds four new time histogram metrics:
/sched/pauses/stopping/gc:seconds
/sched/pauses/stopping/other:seconds
/sched/pauses/total/gc:seconds
/sched/pauses/total/other:seconds
The "stopping" metrics measure the time taken to start a stop-the-world
pause. i.e., how long it takes stopTheWorldWithSema to stop all Ps.
This can be used to detect STW struggling to preempt Ps.
The "total" metrics measure the total duration of a stop-the-world
pause, from starting to stop-the-world until the world is started again.
This includes the time spent in the "start" phase.
The "gc" metrics are used for GC-related STW pauses. The "other" metrics
are used for all other STW pauses.
All of these metrics start timing in stopTheWorldWithSema only after
successfully acquiring sched.lock, thus excluding lock contention on
sched.lock. The reasoning behind this is that while waiting on
sched.lock the world is not stopped at all (all other Ps can run), so
the impact of this contention is primarily limited to the goroutine
attempting to stop-the-world. Additionally, we already have some
visibility into sched.lock contention via contention profiles (#57071).
/sched/pauses/total/gc:seconds is conceptually equivalent to
/gc/pauses:seconds, so the latter is marked as deprecated and returns
the same histogram as the former.
In the implementation, there are a few minor differences:
* For both mark and sweep termination stops, /gc/pauses:seconds started
timing prior to calling stopTheWorldWithSema, thus including lock
contention.
These details are minor enough that I do not believe the slight change
in reporting will matter. For mark termination stops, moving the timing
stop into startTheWorldWithSema does have the side effect of requiring
other GC metric calculations to move outside of the STW, as they depend
on the same end time.
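The new histograms can be read with runtime/metrics; the metric names
below are taken verbatim from the list above:

    package main

    import (
        "fmt"
        "runtime/metrics"
    )

    func main() {
        samples := []metrics.Sample{
            {Name: "/sched/pauses/stopping/gc:seconds"},
            {Name: "/sched/pauses/total/gc:seconds"},
        }
        metrics.Read(samples)
        for _, s := range samples {
            h := s.Value.Float64Histogram()
            fmt.Printf("%s: %d buckets\n", s.Name, len(h.Counts))
        }
    }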
Fixes #63340
Change-Id: Iacd0bab11bedab85d3dcfb982361413a7d9c0d05
Reviewed-on: https://go-review.googlesource.com/c/go/+/534161
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|
|
Most of the uses of work.pauseStart are completely useless; it could
simply be a local variable. One use passes a parameter from gcMarkDone
to gcMarkTermination, but that could simply be an argument.
Keeping this field in workType makes it seems more important than it
really is, so just drop it.
Change-Id: I2fdc0b21f8844e5e7be47148c3e10f13e49815c6
Reviewed-on: https://go-review.googlesource.com/c/go/+/542075
Reviewed-by: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
|