path: root/src/runtime/stack1.go
2015-10-17  runtime: merge stack{1,2}.go -> stack.go  (Nodir Turakulov)

* rename stack1.go -> stack.go
* prepend contents of stack2.go to stack.go

Updates #12952

Change-Id: I60d409af37162a5a7596c678dfebc2cea89564ff
Reviewed-on: https://go-review.googlesource.com/16008
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-10-15  runtime: use unsafe.Pointer(x) instead of (unsafe.Pointer)(x)  (Matthew Dempsky)

This isn't C anymore. No binary change to pkg/linux_amd64/runtime.a.

Change-Id: I24d66b0f5ac888f432b874aac684b1395e7c8345
Reviewed-on: https://go-review.googlesource.com/15903
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
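Both spellings are legal Go; the change is purely stylistic. A minimal illustration (variable names are mine, not from the CL):

    package main

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        x := new(int)
        p1 := unsafe.Pointer(x)   // idiomatic Go spelling of the conversion
        p2 := (unsafe.Pointer)(x) // legal but C-style; the form this CL removes
        fmt.Println(p1 == p2)     // true: both mean the same conversion
    }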
2015-08-28  runtime: check explicitly for short unwinding of stacks  (Russ Cox)

Right now we only find out about a short unwind implicitly, when stack barriers or defers happen to be in place. This change makes sure we always find out about short unwinds.

Change-Id: Ibdde1ba9c79eb792660dcb7aa6f186e4e4d559b3
Reviewed-on: https://go-review.googlesource.com/13966
Reviewed-by: Austin Clements <austin@google.com>
2015-08-25  runtime: add a missing hex conversion  (Shenghou Ma)

gobuf.g is a guintptr, so without hex() it will be printed as a decimal, which is not very helpful and is inconsistent with how other pointers are printed.

Change-Id: I7c0432e9709e90a5c3b3e22ce799551a6242d017
Reviewed-on: https://go-review.googlesource.com/13879
Reviewed-by: Ian Lance Taylor <iant@golang.org>
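The runtime's internal print shows integers in decimal unless they are wrapped in its hex type. A rough stdlib analogue of why that matters (the address below is made up):

    package main

    import "fmt"

    func main() {
        var gp uintptr = 0xc208032200 // hypothetical goroutine pointer value
        fmt.Println(gp)               // large decimal: hard to match against other output
        fmt.Printf("%#x\n", gp)       // 0xc208032200: lines up with other pointer prints
    }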
2015-07-28  runtime: fix out-of-bounds in stack debugging  (Dmitry Vyukov)

Currently stackDebug=4 crashes as:

    panic: runtime error: index out of range
    fatal error: panic on system stack

    runtime stack:
    runtime.throw(0x607470, 0x15)
        src/runtime/panic.go:527 +0x96
    runtime.gopanic(0x5ada00, 0xc82000a1d0)
        src/runtime/panic.go:354 +0xb9
    runtime.panicindex()
        src/runtime/panic.go:12 +0x49
    runtime.adjustpointers(0xc820065ac8, 0x7ffe58b56100, 0x7ffe58b56318, 0x0)
        src/runtime/stack1.go:428 +0x5fb
    runtime.adjustframe(0x7ffe58b56200, 0x7ffe58b56318, 0x1)
        src/runtime/stack1.go:542 +0x780
    runtime.gentraceback(0x487760, 0xc820065ac0, 0x0, 0xc820001080, 0x0, 0x0, 0x7fffffff, 0x6341b8, 0x7ffe58b56318, 0x0, ...)
        src/runtime/traceback.go:336 +0xa7e
    runtime.copystack(0xc820001080, 0x1000)
        src/runtime/stack1.go:616 +0x3b1
    runtime.newstack()
        src/runtime/stack1.go:801 +0xdde

Change-Id: If2d60960231480a9dbe545d87385fe650d6db808
Reviewed-on: https://go-review.googlesource.com/12763
Reviewed-by: Russ Cox <rsc@golang.org>
2015-06-29  runtime: don't free stack spans during GC  (Austin Clements)

Memory for stacks is manually managed by the runtime and, currently (with one exception), we free stack spans immediately when the last stack on a span is freed. However, the garbage collector assumes that spans can never transition from non-free to free during scan or mark. This disagreement makes it possible for the garbage collector to mark uninitialized objects and is blocking us from re-enabling the bad pointer test in the garbage collector (issue #9880).

For example, the following sequence will result in marking an uninitialized object:

1. scanobject loads a pointer slot out of the object it's scanning. This happens to be one of the special pointers from the heap into a stack. Call the pointer p and suppose it points into X's stack.

2. X, running on another thread, grows its stack and frees its old stack.

3. The old stack happens to be large or was the last stack in its span, so X frees this span, setting it to state _MSpanFree.

4. The span gets reused as a heap span.

5. scanobject calls heapBitsForObject, which loads the span containing p, which is now in state _MSpanInUse, but doesn't necessarily have an object at p. The not-object at p gets marked, and at this point all sorts of things can go wrong.

We already have a partial solution to this. When shrinking a stack, we put the old stack on a queue to be freed at the end of garbage collection. This was done to address exactly this problem, but wasn't a complete solution.

This commit generalizes this solution to both shrinking and growing stacks. For stacks that fit in the stack pool, we simply don't free the span, even if its reference count reaches zero. It's fine to reuse the span for other stacks, and this enables that. At the end of GC, we sweep for cached stack spans with a zero reference count and free them.

For larger stacks, we simply queue the stack span to be freed at the end of GC. Ideally, we would reuse these large stack spans the way we can small stack spans, but that's a more invasive change that will have to wait until after the freeze.

Fixes #11267.

Change-Id: Ib7f2c5da4845cc0268e8dc098b08465116972a71
Reviewed-on: https://go-review.googlesource.com/11502
Reviewed-by: Russ Cox <rsc@golang.org>
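The shape of the fix, as a minimal sketch; the types and the gcRunning flag are illustrative, not the runtime's mspan machinery:

    package main

    import "fmt"

    type span struct{ id, refs int }

    var gcRunning bool
    var deferredFrees []*span

    // releaseStack drops a stack's reference to its span. During GC the
    // span is queued rather than freed, so its state cannot change from
    // in-use to free while the garbage collector is scanning.
    func releaseStack(s *span) {
        s.refs--
        if s.refs == 0 {
            if gcRunning {
                deferredFrees = append(deferredFrees, s)
                return
            }
            free(s)
        }
    }

    // gcSweepStacks runs at the end of GC and frees everything queued.
    func gcSweepStacks() {
        for _, s := range deferredFrees {
            free(s)
        }
        deferredFrees = nil
    }

    func free(s *span) { fmt.Println("freeing span", s.id) }

    func main() {
        gcRunning = true
        releaseStack(&span{id: 1, refs: 1}) // queued, not freed
        gcRunning = false
        gcSweepStacks() // freed now
    }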
2015-06-17  runtime: fix races in stack scan  (Russ Cox)

This fixes a hang during runtime.TestTraceStress. It also fixes double-scan of stacks, which leads to stack barrier installation failures. Both of these have shown up as flaky failures on the dashboard.

Fixes #10941.

Change-Id: Ia2a5991ce2c9f43ba06ae1c7032f7c898dc990e0
Reviewed-on: https://go-review.googlesource.com/11089
Reviewed-by: Austin Clements <austin@google.com>
2015-06-16  runtime: account for stack guard when shrinking the stack  (Austin Clements)

Currently, when shrinkstack computes whether the halved stack allocation will have enough room for the stack, it accounts for the stack space that's actively in use but fails to leave extra room for the stack guard space. As a result, *if* the minimum stack size is small enough or the guard large enough, it may shrink the stack and leave less than enough room to run nosplit functions. If the next function called after the stack shrink is a nosplit function, it may overflow the stack without noticing and overwrite non-stack memory.

We don't think this is happening under normal conditions right now. The minimum stack allocation is 2K and the guard is 640 bytes. The "worst case" stack shrink is from 4K (4048 bytes after stack barrier array reservation) to 2K (2016 bytes after stack barrier array reservation), which means the largest "used" size that will qualify for shrinking is 4048/4 - 8 = 1004 bytes. After copying, that leaves 2016 - 1004 = 1012 bytes of available stack, which is significantly more than the guard space. If we were to reduce the minimum stack size to 1K or raise the guard space above 1012 bytes, the logic in shrinkstack would no longer leave enough space.

It's also possible to trigger this problem by setting firstStackBarrierOffset to 0, which puts stack barriers in a debug mode that steals away *half* of the stack for the stack barrier array reservation. Then, the largest "used" size that qualifies for shrinking is (4096/2)/4 - 8 = 504 bytes. After copying, that leaves (2048/2) - 504 = 520 bytes of available stack; less than the required 640-byte guard space. This causes failures like those in issue #11027 because func gc() shrinks its own stack and then immediately calls casgstatus (a nosplit function), which overflows the stack and overwrites a free list pointer in the neighboring span. However, since this seems to require the special debug mode, we don't think it's responsible for issue #11027.

To forestall all of these subtle issues, this commit modifies shrinkstack to correctly account for the guard space when considering whether to halve the stack allocation.

Change-Id: I7312584addc63b5bfe55cc384a1012f6181f1b9d
Reviewed-on: https://go-review.googlesource.com/10714
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
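A hedged sketch of the corrected decision; the real shrinkstack has additional conditions, and the guard constant here is illustrative:

    package main

    import "fmt"

    const stackGuard = 640 // illustrative guard size, in bytes

    // okToShrink reports whether halving the allocation still leaves room
    // for both the in-use portion of the stack and the full guard space.
    // The pre-fix logic accounted only for the in-use portion.
    func okToShrink(used, alloc uintptr) bool {
        newSize := alloc / 2
        return used+stackGuard < newSize
    }

    func main() {
        fmt.Println(okToShrink(1004, 4096)) // true: 1004+640 < 2048
        fmt.Println(okToShrink(1500, 4096)) // false: the guard would not fit
    }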
2015-06-15  runtime: add GODEBUG gcshrinkstackoff, gcstackbarrieroff, and gcstoptheworld variables  (Russ Cox)

While we're here, update the documentation and delete variables with no effect.

Change-Id: I4df0d266dff880df61b488ed547c2870205862f0
Reviewed-on: https://go-review.googlesource.com/10790
Reviewed-by: Austin Clements <austin@google.com>
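For illustration, how comma-separated GODEBUG-style settings are typically consumed; this is a stdlib sketch, not the runtime's parser (strings.Cut needs Go 1.18+):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // parseGodebug extracts name=value pairs from a GODEBUG-style string,
    // e.g. GODEBUG=gcshrinkstackoff=1,gcstackbarrieroff=1.
    func parseGodebug(s string) map[string]string {
        m := make(map[string]string)
        for _, kv := range strings.Split(s, ",") {
            if name, val, ok := strings.Cut(kv, "="); ok {
                m[name] = val
            }
        }
        return m
    }

    func main() {
        dbg := parseGodebug(os.Getenv("GODEBUG"))
        fmt.Println(dbg["gcshrinkstackoff"] == "1")
    }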
2015-06-02  runtime: implement GC stack barriers  (Austin Clements)

This commit implements stack barriers to minimize the amount of stack re-scanning that must be done during mark termination.

Currently the GC scans stacks of active goroutines twice during every GC cycle: once at the beginning during root discovery and once at the end during mark termination. The second scan happens while the world is stopped and guarantees that we've seen all of the roots (since there are no write barriers on writes to local stack variables). However, this means pause time is proportional to stack size. In particularly recursive programs, this can drive pause time up past our 10ms goal (e.g., it takes about 150ms to scan a 50MB heap).

Re-scanning the entire stack is rarely necessary, especially for large stacks, because usually most of the frames on the stack were not active between the first and second scans and hence any changes to these frames (via non-escaping pointers passed down the stack) were tracked by write barriers.

To efficiently track how far a stack has been unwound since the first scan (and, hence, how much needs to be re-scanned), this commit introduces stack barriers. During the first scan, at exponentially spaced points in each stack, the scan overwrites return PCs with the PC of the stack barrier function. When "returned" to, the stack barrier function records how far the stack has unwound and jumps to the original return PC for that point in the stack. Then the second scan only needs to proceed as far as the lowest barrier that hasn't been hit.

For deeply recursive programs, this substantially reduces mark termination time (and hence pause time). For the goscheme example linked in issue #10898, prior to this change, mark termination times were typically between 100 and 500ms; with this change, mark termination times are typically between 10 and 20ms. As a result of the reduced stack scanning work, this reduces overall execution time of the goscheme example by 20%.

Fixes #10898.

The effect of this on programs that are not deeply recursive is minimal:

name                   old time/op  new time/op  delta
BinaryTree17           3.16s ± 2%   3.26s ± 1%   +3.31%  (p=0.000 n=19+19)
Fannkuch11             2.42s ± 1%   2.48s ± 1%   +2.24%  (p=0.000 n=17+19)
FmtFprintfEmpty        50.0ns ± 3%  49.8ns ± 1%  ~       (p=0.534 n=20+19)
FmtFprintfString       173ns ± 0%   175ns ± 0%   +1.49%  (p=0.000 n=16+19)
FmtFprintfInt          170ns ± 1%   175ns ± 1%   +2.97%  (p=0.000 n=20+19)
FmtFprintfIntInt       288ns ± 0%   295ns ± 0%   +2.73%  (p=0.000 n=16+19)
FmtFprintfPrefixedInt  242ns ± 1%   252ns ± 1%   +4.13%  (p=0.000 n=18+18)
FmtFprintfFloat        324ns ± 0%   323ns ± 0%   -0.36%  (p=0.000 n=20+19)
FmtManyArgs            1.14µs ± 0%  1.12µs ± 1%  -1.01%  (p=0.000 n=18+19)
GobDecode              8.88ms ± 1%  8.87ms ± 0%  ~       (p=0.480 n=19+18)
GobEncode              6.80ms ± 1%  6.85ms ± 0%  +0.82%  (p=0.000 n=20+18)
Gzip                   363ms ± 1%   363ms ± 1%   ~       (p=0.077 n=18+20)
Gunzip                 90.6ms ± 0%  90.0ms ± 1%  -0.71%  (p=0.000 n=17+18)
HTTPClientServer       51.5µs ± 1%  50.8µs ± 1%  -1.32%  (p=0.000 n=18+18)
JSONEncode             17.0ms ± 0%  17.1ms ± 0%  +0.40%  (p=0.000 n=18+17)
JSONDecode             61.8ms ± 0%  63.8ms ± 1%  +3.11%  (p=0.000 n=18+17)
Mandelbrot200          3.84ms ± 0%  3.84ms ± 1%  ~       (p=0.583 n=19+19)
GoParse                3.71ms ± 1%  3.72ms ± 1%  ~       (p=0.159 n=18+19)
RegexpMatchEasy0_32    100ns ± 0%   100ns ± 1%   -0.19%  (p=0.033 n=17+19)
RegexpMatchEasy0_1K    342ns ± 1%   331ns ± 0%   -3.41%  (p=0.000 n=19+19)
RegexpMatchEasy1_32    82.5ns ± 0%  81.7ns ± 0%  -0.98%  (p=0.000 n=18+18)
RegexpMatchEasy1_1K    505ns ± 0%   494ns ± 1%   -2.16%  (p=0.000 n=18+18)
RegexpMatchMedium_32   137ns ± 1%   137ns ± 1%   -0.24%  (p=0.048 n=20+18)
RegexpMatchMedium_1K   41.6µs ± 0%  41.3µs ± 1%  -0.57%  (p=0.004 n=18+20)
RegexpMatchHard_32     2.11µs ± 0%  2.11µs ± 1%  +0.20%  (p=0.037 n=17+19)
RegexpMatchHard_1K     63.9µs ± 2%  63.3µs ± 0%  -0.99%  (p=0.000 n=20+17)
Revcomp                560ms ± 1%   522ms ± 0%   -6.87%  (p=0.000 n=18+16)
Template               75.0ms ± 0%  75.1ms ± 1%  +0.18%  (p=0.013 n=18+19)
TimeParse              358ns ± 1%   364ns ± 0%   +1.74%  (p=0.000 n=20+15)
TimeFormat             360ns ± 0%   372ns ± 0%   +3.55%  (p=0.000 n=20+18)

Change-Id: If8a9bfae6c128d15a4f405e02bcfa50129df82a2
Reviewed-on: https://go-review.googlesource.com/10314
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
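A minimal sketch of just the exponential-placement idea; the real code patches return PCs in scanned frames, and the names here are mine:

    package main

    import "fmt"

    // barrierOffsets sketches only the placement rule: barriers sit at
    // exponentially spaced offsets, so a stack needs just O(log n) of them
    // and the second scan can stop at the lowest one not yet hit.
    func barrierOffsets(stackSize, first uintptr) []uintptr {
        var offs []uintptr
        for off := first; off < stackSize; off *= 2 {
            offs = append(offs, off)
        }
        return offs
    }

    func main() {
        fmt.Println(barrierOffsets(32*1024, 1024)) // [1024 2048 4096 8192 16384]
    }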
2015-06-02  runtime: avoid double-scanning of stacks  (Austin Clements)

Currently there's a race between stopg scanning another G's stack and the G reaching a preemption point and scanning its own stack. When this race occurs, the G's stack is scanned twice. Currently this is okay, so this race is benign.

However, we will shortly be adding stack barriers during the first stack scan, so scanning will no longer be idempotent. To prepare for this, this change ensures that each stack is scanned only once during each GC phase by checking the flag that indicates that the stack has been scanned in this phase before scanning the stack.

Change-Id: Id9f4d5e2e5b839bc3f200ec1723a4a12dd677ab4
Reviewed-on: https://go-review.googlesource.com/10458
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-06-02  runtime: steal space for stack barrier tracking from stack  (Austin Clements)

The stack barrier code will need a bookkeeping structure to keep track of the overwritten return PCs. This commit introduces and allocates this structure, but does not yet use the structure.

We don't want to allocate space for this structure during garbage collection, so this commit allocates it along with the allocation of the corresponding stack. However, we can't do a regular allocation in newstack because mallocgc may itself grow the stack (which would lead to a recursive allocation). Hence, this commit makes the bookkeeping structure part of the stack allocation itself by stealing the necessary space from the top of the stack allocation. Since the size of this bookkeeping structure is logarithmic in the size of the stack, this has minimal impact on stack behavior.

Change-Id: Ia14408be06aafa9ca4867f4e70bddb3fe0e96665
Reviewed-on: https://go-review.googlesource.com/10313
Reviewed-by: Russ Cox <rsc@golang.org>
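The arithmetic involved, sketched with made-up sizes (the real slot and reservation layout lives in the runtime's stack allocation code):

    package main

    import "fmt"

    func main() {
        const (
            allocSize = 32 * 1024 // bytes handed back by the stack allocator
            slotSize  = 16        // bytes per overwritten-return-PC record
            nSlots    = 5         // O(log stackSize) barrier positions
        )
        reserved := nSlots * slotSize
        // The bookkeeping array occupies the top of the allocation; the
        // usable stack is whatever remains below it.
        fmt.Printf("reserve %d bytes at the top; usable stack = %d bytes\n",
            reserved, allocSize-reserved)
    }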
2015-06-02  runtime: decouple stack bounds and stack allocation size  (Austin Clements)

Currently the runtime assumes that the allocation for the stack is exactly [stack.lo, stack.hi). We're about to steal a small part of this allocation for per-stack GC metadata. To prepare for this, this commit adds a field to the G for the allocated size of the stack. With this change, stack.lo and stack.hi continue to act as the true bounds on the stack, but are no longer also used as the bounds on the stack allocation.

(I also tried this the other way around, where stack.lo and stack.hi remained the allocation bounds and I introduced a new top of stack. However, there are far more places that assume stack.hi is the true top of the stack than there are places that assume it's the top of the allocation.)

Change-Id: Ifa9d956753be53d286d09cbc73d47fb34a18c0c6
Reviewed-on: https://go-review.googlesource.com/10312
Reviewed-by: Russ Cox <rsc@golang.org>
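A sketch of the decoupling with simplified field names (the real g has many more fields):

    package main

    import "fmt"

    type stack struct {
        lo, hi uintptr // true bounds of the usable stack
    }

    type gSketch struct {
        stack      stack
        stackAlloc uintptr // allocation is [stack.lo, stack.lo+stackAlloc)
    }

    func main() {
        g := gSketch{stack: stack{lo: 0x1000, hi: 0x8fb0}, stackAlloc: 0x8000}
        fmt.Printf("usable %d of %d allocated bytes\n",
            g.stack.hi-g.stack.lo, g.stackAlloc)
    }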
2015-05-21  runtime: eliminate write barrier from adjustpointers  (Austin Clements)

Currently adjustpointers invokes a write barrier for every stack slot it updates. This is safe---the write barrier always does nothing because the new value is never a heap pointer---but it's unnecessary overhead in performance and complexity.

Fix this by rewriting adjustpointers to work with *uintptrs instead of *unsafe.Pointers. As an added bonus, this makes the code cleaner.

name                   old mean              new mean              delta
BinaryTree17           3.35s × (0.98,1.01)   3.33s × (0.99,1.02)   ~       (p=0.095 n=20+19)
Fannkuch11             2.49s × (1.00,1.01)   2.52s × (0.99,1.01)   +1.23%  (p=0.000 n=19+20)
FmtFprintfEmpty        52.2ns × (0.99,1.02)  52.2ns × (0.99,1.02)  ~       (p=0.766 n=19+19)
FmtFprintfString       181ns × (0.99,1.02)   179ns × (0.99,1.01)   -1.06%  (p=0.000 n=20+19)
FmtFprintfInt          177ns × (0.99,1.01)   173ns × (0.99,1.02)   -2.26%  (p=0.000 n=17+20)
FmtFprintfIntInt       300ns × (0.99,1.01)   302ns × (0.99,1.01)   +0.76%  (p=0.000 n=19+20)
FmtFprintfPrefixedInt  253ns × (0.99,1.02)   256ns × (0.99,1.01)   +0.96%  (p=0.000 n=20+19)
FmtFprintfFloat        334ns × (0.99,1.02)   334ns × (1.00,1.01)   ~       (p=0.243 n=20+19)
FmtManyArgs            1.16µs × (0.99,1.01)  1.17µs × (0.99,1.02)  +0.88%  (p=0.000 n=20+20)
GobDecode              9.16ms × (0.99,1.02)  9.18ms × (1.00,1.00)  +0.21%  (p=0.048 n=20+17)
GobEncode              7.03ms × (0.99,1.01)  7.05ms × (0.99,1.01)  ~       (p=0.091 n=19+19)
Gzip                   374ms × (0.99,1.01)   372ms × (0.99,1.02)   -0.50%  (p=0.008 n=18+20)
Gunzip                 92.9ms × (0.99,1.01)  92.5ms × (1.00,1.01)  -0.47%  (p=0.002 n=19+19)
HTTPClientServer       53.1µs × (0.98,1.01)  52.5µs × (0.99,1.01)  -0.98%  (p=0.000 n=20+19)
JSONEncode             17.4ms × (0.99,1.02)  17.5ms × (0.99,1.01)  ~       (p=0.061 n=19+20)
JSONDecode             66.0ms × (0.99,1.02)  64.7ms × (0.99,1.01)  -1.87%  (p=0.000 n=20+20)
Mandelbrot200          3.94ms × (1.00,1.01)  3.95ms × (1.00,1.01)  ~       (p=0.799 n=18+19)
GoParse                3.89ms × (0.99,1.02)  3.86ms × (0.99,1.01)  -0.70%  (p=0.016 n=20+19)
RegexpMatchEasy0_32    102ns × (0.99,1.02)   102ns × (1.00,1.01)   ~       (p=0.557 n=20+18)
RegexpMatchEasy0_1K    353ns × (0.99,1.02)   341ns × (0.99,1.01)   -3.38%  (p=0.000 n=20+20)
RegexpMatchEasy1_32    85.0ns × (0.99,1.02)  85.0ns × (0.99,1.01)  ~       (p=0.851 n=19+20)
RegexpMatchEasy1_1K    521ns × (0.99,1.02)   506ns × (1.00,1.01)   -2.85%  (p=0.000 n=20+18)
RegexpMatchMedium_32   142ns × (0.99,1.02)   141ns × (1.00,1.01)   -1.17%  (p=0.000 n=20+19)
RegexpMatchMedium_1K   42.8µs × (0.99,1.01)  42.3µs × (0.99,1.01)  -1.07%  (p=0.000 n=20+19)
RegexpMatchHard_32     2.17µs × (0.99,1.01)  2.16µs × (1.00,1.01)  -0.51%  (p=0.042 n=20+18)
RegexpMatchHard_1K     65.6µs × (0.99,1.01)  64.8µs × (1.00,1.00)  -1.21%  (p=0.000 n=20+17)
Revcomp                581ms × (0.99,1.04)   536ms × (1.00,1.01)   -7.71%  (p=0.000 n=20+18)
Template               77.2ms × (0.99,1.01)  76.8ms × (0.99,1.01)  ~       (p=0.426 n=20+18)
TimeParse              369ns × (0.99,1.02)   371ns × (1.00,1.01)   ~       (p=0.117 n=20+19)
TimeFormat             371ns × (0.99,1.02)   391ns × (0.99,1.01)   +5.33%  (p=0.000 n=20+19)

Change-Id: I5b952ba577ac4365c8c87db837c5804a1e30b7be
Reviewed-on: https://go-review.googlesource.com/10293
Reviewed-by: Russ Cox <rsc@golang.org>
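The core of the rewrite, sketched for a single slot (the real adjustpointers walks a frame's pointer bitmap; names are illustrative):

    package main

    import "fmt"

    // adjustPointer is a one-slot sketch of the rewrite: because the slot
    // is typed *uintptr, the store is plain integer arithmetic and the
    // compiler emits no write barrier for it.
    func adjustPointer(slot *uintptr, oldLo, oldHi, delta uintptr) {
        if p := *slot; oldLo <= p && p < oldHi {
            *slot = p + delta // retarget the pointer into the moved stack
        }
    }

    func main() {
        slot := uintptr(0x2010) // points into the old stack [0x2000, 0x3000)
        adjustPointer(&slot, 0x2000, 0x3000, 0x5000)
        fmt.Printf("%#x\n", slot) // 0x7010: now points into the new stack
    }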
2015-05-11  runtime: use 2-bit heap bitmap (in place of 4-bit)  (Russ Cox)

Previous CLs changed the representation of the non-heap type bitmaps to be 1-bit bitmaps (pointer or not). Before this CL, the heap bitmap stored a 2-bit type for each word and a mark bit and checkmark bit for the first word of the object. (There used to be additional per-word bits.)

Reduce the heap bitmap to 2 bits per word, with 1 bit dedicated to pointer-or-not and the other used for mark, checkmark, and "keep scanning forward to find pointers in this object." See comments for details.

This CL replaces heapBitsSetType with very slow but obviously correct code. A followup CL will optimize it. (Spoiler: the new code is faster than Go 1.4 was.)

Change-Id: I999577a133f3cfecacebdec9cdc3573c235c7fb9
Reviewed-on: https://go-review.googlesource.com/9703
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
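For illustration only, a sketch of what a 2-bit-per-word encoding looks like; the actual constants and layout live in mbitmap.go and differ in detail:

    package main

    import "fmt"

    const (
        bitPointer = 1 << 0 // this word holds a pointer
        bitMarked  = 1 << 1 // mark / checkmark / "keep scanning forward" bit
    )

    func describe(bits uint8) string {
        return fmt.Sprintf("pointer=%v marked=%v",
            bits&bitPointer != 0, bits&bitMarked != 0)
    }

    func main() {
        fmt.Println(describe(bitPointer))             // pointer, unmarked
        fmt.Println(describe(bitPointer | bitMarked)) // pointer, marked
    }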
2015-05-02  runtime: fix stackDebug comment  (Alex Brainman)

Change-Id: Ia9191bd7ecdf7bd5ee7d69ae23aa71760f379aa8
Reviewed-on: https://go-review.googlesource.com/9590
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-05-01  cmd/internal/gc, runtime: use 1-bit bitmap for stack frames, data, bss  (Russ Cox)

The bitmaps were 2 bits per pointer because we needed to distinguish scalar, pointer, multiword, and we used the leftover value to distinguish uninitialized from scalar, even though the garbage collector (GC) didn't care. Now that there are no multiword structures from the GC's point of view, cut the bitmaps down to 1 bit per pointer, recording just live pointer vs not.

The GC assumes the same layout for stack frames and for the maps describing the global data and bss sections, so change them all in one CL.

The code still refers to 4-bit heap bitmaps and 2-bit "type bitmaps", since the 2-bit representation lives (at least for now) in some of the reflect data.

Because these stack frame bitmaps are stored directly in the rodata in the binary, this CL reduces the size of the 6g binary by about 1.1%.

Performance change is basically a wash, but using less memory, and smaller binaries, and enables other bitmap reductions.

name                             old mean               new mean               delta
BenchmarkBinaryTree17            13.2s × (0.97,1.03)    13.0s × (0.99,1.01)    -0.93%  (p=0.005)
BenchmarkBinaryTree17-2          9.69s × (0.96,1.05)    9.51s × (0.96,1.03)    -1.86%  (p=0.001)
BenchmarkBinaryTree17-4          10.1s × (0.97,1.05)    10.0s × (0.96,1.05)    ~       (p=0.141)
BenchmarkFannkuch11              4.35s × (0.99,1.01)    4.43s × (0.98,1.04)    +1.75%  (p=0.001)
BenchmarkFannkuch11-2            4.31s × (0.99,1.03)    4.32s × (1.00,1.00)    ~       (p=0.095)
BenchmarkFannkuch11-4            4.32s × (0.99,1.02)    4.38s × (0.98,1.04)    +1.38%  (p=0.008)
BenchmarkFmtFprintfEmpty         83.5ns × (0.97,1.10)   87.3ns × (0.92,1.11)   +4.55%  (p=0.014)
BenchmarkFmtFprintfEmpty-2       81.8ns × (0.98,1.04)   82.5ns × (0.97,1.08)   ~       (p=0.364)
BenchmarkFmtFprintfEmpty-4       80.9ns × (0.99,1.01)   82.6ns × (0.97,1.08)   +2.12%  (p=0.010)
BenchmarkFmtFprintfString        320ns × (0.95,1.04)    322ns × (0.97,1.05)    ~       (p=0.368)
BenchmarkFmtFprintfString-2      303ns × (0.97,1.04)    304ns × (0.97,1.04)    ~       (p=0.484)
BenchmarkFmtFprintfString-4      305ns × (0.97,1.05)    306ns × (0.98,1.05)    ~       (p=0.543)
BenchmarkFmtFprintfInt           311ns × (0.98,1.03)    319ns × (0.97,1.03)    +2.63%  (p=0.000)
BenchmarkFmtFprintfInt-2         297ns × (0.98,1.04)    301ns × (0.97,1.04)    +1.19%  (p=0.023)
BenchmarkFmtFprintfInt-4         302ns × (0.98,1.02)    304ns × (0.97,1.03)    ~       (p=0.126)
BenchmarkFmtFprintfIntInt        554ns × (0.96,1.05)    554ns × (0.97,1.03)    ~       (p=0.975)
BenchmarkFmtFprintfIntInt-2      520ns × (0.98,1.03)    517ns × (0.98,1.02)    ~       (p=0.153)
BenchmarkFmtFprintfIntInt-4      524ns × (0.98,1.02)    525ns × (0.98,1.03)    ~       (p=0.597)
BenchmarkFmtFprintfPrefixedInt   433ns × (0.97,1.06)    434ns × (0.97,1.06)    ~       (p=0.804)
BenchmarkFmtFprintfPrefixedInt-2 413ns × (0.98,1.04)    413ns × (0.98,1.03)    ~       (p=0.881)
BenchmarkFmtFprintfPrefixedInt-4 420ns × (0.97,1.03)    421ns × (0.97,1.03)    ~       (p=0.561)
BenchmarkFmtFprintfFloat         620ns × (0.99,1.03)    636ns × (0.97,1.03)    +2.57%  (p=0.000)
BenchmarkFmtFprintfFloat-2       601ns × (0.98,1.02)    617ns × (0.98,1.03)    +2.58%  (p=0.000)
BenchmarkFmtFprintfFloat-4       613ns × (0.98,1.03)    626ns × (0.98,1.02)    +2.15%  (p=0.000)
BenchmarkFmtManyArgs             2.19µs × (0.96,1.04)   2.23µs × (0.97,1.02)   +1.65%  (p=0.000)
BenchmarkFmtManyArgs-2           2.08µs × (0.98,1.03)   2.10µs × (0.99,1.02)   +0.79%  (p=0.019)
BenchmarkFmtManyArgs-4           2.10µs × (0.98,1.02)   2.13µs × (0.98,1.02)   +1.72%  (p=0.000)
BenchmarkGobDecode               21.3ms × (0.97,1.05)   21.1ms × (0.97,1.04)   -1.36%  (p=0.025)
BenchmarkGobDecode-2             20.0ms × (0.97,1.03)   19.2ms × (0.97,1.03)   -4.00%  (p=0.000)
BenchmarkGobDecode-4             19.5ms × (0.99,1.02)   19.0ms × (0.99,1.01)   -2.39%  (p=0.000)
BenchmarkGobEncode               18.3ms × (0.95,1.07)   18.1ms × (0.96,1.08)   ~       (p=0.305)
BenchmarkGobEncode-2             16.8ms × (0.97,1.02)   16.4ms × (0.98,1.02)   -2.79%  (p=0.000)
BenchmarkGobEncode-4             15.4ms × (0.98,1.02)   15.4ms × (0.98,1.02)   ~       (p=0.465)
BenchmarkGzip                    650ms × (0.98,1.03)    655ms × (0.97,1.04)    ~       (p=0.075)
BenchmarkGzip-2                  652ms × (0.98,1.03)    655ms × (0.98,1.02)    ~       (p=0.337)
BenchmarkGzip-4                  656ms × (0.98,1.04)    653ms × (0.98,1.03)    ~       (p=0.291)
BenchmarkGunzip                  143ms × (1.00,1.01)    143ms × (1.00,1.01)    ~       (p=0.507)
BenchmarkGunzip-2                143ms × (1.00,1.01)    143ms × (1.00,1.01)    ~       (p=0.313)
BenchmarkGunzip-4                143ms × (1.00,1.01)    143ms × (1.00,1.01)    ~       (p=0.312)
BenchmarkHTTPClientServer        110µs × (0.98,1.03)    109µs × (0.99,1.02)    -1.40%  (p=0.000)
BenchmarkHTTPClientServer-2      154µs × (0.90,1.08)    149µs × (0.90,1.08)    -3.43%  (p=0.007)
BenchmarkHTTPClientServer-4      138µs × (0.97,1.04)    138µs × (0.96,1.04)    ~       (p=0.670)
BenchmarkJSONEncode              40.2ms × (0.98,1.02)   40.2ms × (0.98,1.05)   ~       (p=0.828)
BenchmarkJSONEncode-2            35.1ms × (0.99,1.02)   35.2ms × (0.98,1.03)   ~       (p=0.392)
BenchmarkJSONEncode-4            35.3ms × (0.98,1.03)   35.3ms × (0.98,1.02)   ~       (p=0.813)
BenchmarkJSONDecode              119ms × (0.97,1.02)    117ms × (0.98,1.02)    -1.80%  (p=0.000)
BenchmarkJSONDecode-2            115ms × (0.99,1.02)    114ms × (0.98,1.02)    -1.18%  (p=0.000)
BenchmarkJSONDecode-4            116ms × (0.98,1.02)    114ms × (0.98,1.02)    -1.43%  (p=0.000)
BenchmarkMandelbrot200           6.03ms × (1.00,1.01)   6.03ms × (1.00,1.01)   ~       (p=0.985)
BenchmarkMandelbrot200-2         6.03ms × (1.00,1.01)   6.02ms × (1.00,1.01)   ~       (p=0.320)
BenchmarkMandelbrot200-4         6.03ms × (1.00,1.01)   6.03ms × (1.00,1.01)   ~       (p=0.799)
BenchmarkGoParse                 8.63ms × (0.89,1.10)   8.58ms × (0.93,1.09)   ~       (p=0.667)
BenchmarkGoParse-2               8.20ms × (0.97,1.04)   8.37ms × (0.97,1.04)   +1.96%  (p=0.001)
BenchmarkGoParse-4               8.00ms × (0.98,1.02)   8.14ms × (0.99,1.02)   +1.75%  (p=0.000)
BenchmarkRegexpMatchEasy0_32     162ns × (1.00,1.01)    164ns × (0.98,1.04)    +1.35%  (p=0.011)
BenchmarkRegexpMatchEasy0_32-2   161ns × (1.00,1.01)    161ns × (1.00,1.00)    ~       (p=0.185)
BenchmarkRegexpMatchEasy0_32-4   161ns × (1.00,1.00)    161ns × (1.00,1.00)    -0.19%  (p=0.001)
BenchmarkRegexpMatchEasy0_1K     540ns × (0.99,1.02)    566ns × (0.98,1.04)    +4.98%  (p=0.000)
BenchmarkRegexpMatchEasy0_1K-2   540ns × (0.99,1.01)    557ns × (0.99,1.01)    +3.21%  (p=0.000)
BenchmarkRegexpMatchEasy0_1K-4   541ns × (0.99,1.01)    559ns × (0.99,1.01)    +3.26%  (p=0.000)
BenchmarkRegexpMatchEasy1_32     139ns × (0.98,1.04)    139ns × (0.99,1.03)    ~       (p=0.979)
BenchmarkRegexpMatchEasy1_32-2   139ns × (0.99,1.04)    139ns × (0.99,1.02)    ~       (p=0.777)
BenchmarkRegexpMatchEasy1_32-4   139ns × (0.98,1.04)    139ns × (0.99,1.04)    ~       (p=0.771)
BenchmarkRegexpMatchEasy1_1K     890ns × (0.99,1.03)    885ns × (1.00,1.01)    -0.50%  (p=0.004)
BenchmarkRegexpMatchEasy1_1K-2   888ns × (0.99,1.01)    885ns × (0.99,1.01)    -0.37%  (p=0.004)
BenchmarkRegexpMatchEasy1_1K-4   890ns × (0.99,1.02)    884ns × (1.00,1.00)    -0.70%  (p=0.000)
BenchmarkRegexpMatchMedium_32    252ns × (0.99,1.01)    251ns × (0.99,1.01)    ~       (p=0.081)
BenchmarkRegexpMatchMedium_32-2  254ns × (0.99,1.04)    252ns × (0.99,1.01)    -0.78%  (p=0.027)
BenchmarkRegexpMatchMedium_32-4  253ns × (0.99,1.04)    252ns × (0.99,1.01)    -0.70%  (p=0.022)
BenchmarkRegexpMatchMedium_1K    72.9µs × (0.99,1.01)   72.7µs × (1.00,1.00)   ~       (p=0.064)
BenchmarkRegexpMatchMedium_1K-2  74.1µs × (0.98,1.05)   72.9µs × (1.00,1.01)   -1.61%  (p=0.001)
BenchmarkRegexpMatchMedium_1K-4  73.6µs × (0.99,1.05)   72.8µs × (1.00,1.00)   -1.13%  (p=0.007)
BenchmarkRegexpMatchHard_32      3.88µs × (0.99,1.03)   3.92µs × (0.98,1.05)   ~       (p=0.143)
BenchmarkRegexpMatchHard_32-2    3.89µs × (0.99,1.03)   3.93µs × (0.98,1.09)   ~       (p=0.278)
BenchmarkRegexpMatchHard_32-4    3.90µs × (0.99,1.05)   3.93µs × (0.98,1.05)   ~       (p=0.252)
BenchmarkRegexpMatchHard_1K      118µs × (0.99,1.01)    117µs × (0.99,1.02)    -0.54%  (p=0.003)
BenchmarkRegexpMatchHard_1K-2    118µs × (0.99,1.01)    118µs × (0.99,1.03)    ~       (p=0.581)
BenchmarkRegexpMatchHard_1K-4    118µs × (0.99,1.02)    117µs × (0.99,1.01)    -0.54%  (p=0.002)
BenchmarkRevcomp                 991ms × (0.95,1.10)    989ms × (0.94,1.08)    ~       (p=0.879)
BenchmarkRevcomp-2               978ms × (0.95,1.11)    962ms × (0.96,1.08)    ~       (p=0.257)
BenchmarkRevcomp-4               979ms × (0.96,1.07)    974ms × (0.96,1.11)    ~       (p=0.678)
BenchmarkTemplate                141ms × (0.99,1.02)    145ms × (0.99,1.02)    +2.75%  (p=0.000)
BenchmarkTemplate-2              135ms × (0.98,1.02)    138ms × (0.99,1.02)    +2.34%  (p=0.000)
BenchmarkTemplate-4              136ms × (0.98,1.02)    140ms × (0.99,1.02)    +2.71%  (p=0.000)
BenchmarkTimeParse               640ns × (0.99,1.01)    622ns × (0.99,1.01)    -2.88%  (p=0.000)
BenchmarkTimeParse-2             640ns × (0.99,1.01)    622ns × (1.00,1.00)    -2.81%  (p=0.000)
BenchmarkTimeParse-4             640ns × (1.00,1.01)    622ns × (0.99,1.01)    -2.82%  (p=0.000)
BenchmarkTimeFormat              730ns × (0.98,1.02)    731ns × (0.98,1.03)    ~       (p=0.767)
BenchmarkTimeFormat-2            709ns × (0.99,1.02)    707ns × (0.99,1.02)    ~       (p=0.347)
BenchmarkTimeFormat-4            717ns × (0.98,1.01)    718ns × (0.98,1.02)    ~       (p=0.793)

Change-Id: Ie779c47e912bf80eb918bafa13638bd8dfd6c2d9
Reviewed-on: https://go-review.googlesource.com/9406
Reviewed-by: Rick Hudson <rlh@golang.org>
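Consuming a 1-bit liveness bitmap is then a single bit test; a sketch (the runtime's exact indexing scheme differs):

    package main

    import "fmt"

    // isLivePointer tests bit i of a 1-bit-per-slot liveness bitmap:
    // bit set means the corresponding frame slot holds a live pointer.
    func isLivePointer(bitmap []byte, slot int) bool {
        return bitmap[slot/8]>>(uint(slot)%8)&1 != 0
    }

    func main() {
        bm := []byte{0b00000101} // frame slots 0 and 2 hold live pointers
        for i := 0; i < 4; i++ {
            fmt.Println(i, isLivePointer(bm, i))
        }
    }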
2015-04-20  runtime: replace func-based write barrier skipping with type-based  (Russ Cox)

This CL revises CL 7504 to use explicit uintptr types for the struct fields that are going to be updated sometimes without write barriers. The result is that the fields are now updated *always* without write barriers.

This approach has two important properties:

1) Now the GC never looks at the field, so if the missing reference could cause a problem, it will do so all the time, not just when the write barrier is missed at just the right moment.

2) Now a write barrier never happens for the field, avoiding the (correct) detection of inconsistent write barriers when GODEBUG=wbshadow=1.

Change-Id: Iebd3962c727c0046495cc08914a8dc0808460e0e
Reviewed-on: https://go-review.googlesource.com/9019
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
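The pattern, in the style of the runtime's guintptr type (simplified sketch; go vet flags such conversions for good reason outside the runtime):

    package main

    import (
        "fmt"
        "unsafe"
    )

    type g struct{ id int }

    // guintptr stores a *g as a plain integer, so stores to such a field
    // are integer writes: the GC never looks at the field and no write
    // barrier is ever emitted for it. This is sound only because the
    // runtime guarantees the referenced g stays reachable by other paths.
    type guintptr uintptr

    func (gp guintptr) ptr() *g   { return (*g)(unsafe.Pointer(gp)) }
    func (gp *guintptr) set(q *g) { *gp = guintptr(unsafe.Pointer(q)) }

    func main() {
        q := &g{id: 7}
        var gp guintptr
        gp.set(q)
        fmt.Println(gp.ptr().id) // 7
    }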
2015-03-16  runtime: add support for linux/arm64  (Aram Hăvărneanu)

Change-Id: Ibda6a5bedaff57fd161d63fc04ad260931d34413
Reviewed-on: https://go-review.googlesource.com/7142
Reviewed-by: Russ Cox <rsc@golang.org>
2015-02-03  runtime: use 2*regSize for saved frame pointer check  (Austin Clements)

Previously, we checked for a saved frame pointer by looking for a 2*ptrSize gap between the argument pointer and the locals pointer. The intent of this check was to look for a two stack slot gap (caller IP and saved frame pointer), but stack slots are regSize, not ptrSize. Correct this by checking instead for a 2*regSize gap.

On most platforms, this made no difference because ptrSize==regSize. However, on amd64p32 (nacl), the saved frame pointer check incorrectly fired when there was no saved frame pointer because the one stack slot for the caller IP left an 8 byte gap, which is 2*ptrSize (but not 2*regSize) on amd64p32.

Fixes #9760.

Change-Id: I6eedcf681fe5bf2bf924dde8a8f2d9860a4d758e
Reviewed-on: https://go-review.googlesource.com/3781
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
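A sketch of the corrected check with the amd64p32 numbers (field meanings simplified; names are mine):

    package main

    import "fmt"

    // hasSavedFramePointer measures the gap between the argument pointer
    // and the locals pointer in stack slots (regSize), not pointer words
    // (ptrSize): two slots means caller IP plus saved frame pointer.
    func hasSavedFramePointer(argp, varp, regSize uintptr) bool {
        return argp-varp == 2*regSize
    }

    func main() {
        const ptrSize, regSize = 4, 8 // amd64p32 sizes
        gap := uintptr(8)             // one stack slot: just the caller IP
        fmt.Println(gap == 2*ptrSize)                              // true: the old check misfired here
        fmt.Println(hasSavedFramePointer(0x1010, 0x1008, regSize)) // false: the fixed check does not
    }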
2015-02-03  runtime: add missing \n to error message  (Austin Clements)

Change-Id: Ife7d30f4191e6a8aaf3a442340d277989f7a062d
Reviewed-on: https://go-review.googlesource.com/3780
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-02-02  cmd/6g, liblink, runtime: support saving base pointers  (Austin Clements)

This adds a "framepointer" GOEXPERIMENT that makes the amd64 toolchain maintain base pointer chains in the same way that gcc -fno-omit-frame-pointer does. Go doesn't use these saved base pointers, but this does enable external tools like Linux perf and VTune to unwind Go stacks when collecting system-wide profiles.

This requires support in the compilers to not clobber BP, support in liblink for generating the BP-saving function prologue and unwinding epilogue, and support in the runtime to save BPs across preemption, to skip saved BPs during stack unwinding, and to adjust saved BPs during stack moving.

As with other GOEXPERIMENTs, everything from the toolchain to the runtime must be compiled with this experiment enabled. To do this, run make.bash (or all.bash) with GOEXPERIMENT=framepointer.

Change-Id: I4024853beefb9539949e5ca381adfdd9cfada544
Reviewed-on: https://go-review.googlesource.com/2992
Reviewed-by: Russ Cox <rsc@golang.org>
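What the saved base pointers buy an external profiler: a frame-pointer walk needs no unwind tables. A toy simulation of such a walk (addresses are fake map keys, not real memory):

    package main

    import "fmt"

    func main() {
        // Fake stack memory: at each base pointer bp, word 0 holds the
        // caller's saved BP and word 1 holds the return PC, which is the
        // chain layout the framepointer experiment maintains.
        stack := map[int][2]int{
            100: {200, 0x401000},
            200: {300, 0x402000},
            300: {0, 0x403000}, // chain ends at a zero BP
        }
        for bp := 100; bp != 0; {
            frame := stack[bp]
            fmt.Printf("pc=%#x\n", frame[1])
            bp = frame[0]
        }
    }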
2015-02-02  runtime: rename m.gcing to m.preemptoff and make it a string  (Austin Clements)

m.gcing has become overloaded to mean "don't preempt this g" in general. Once the garbage collector is preemptible, the one thing it *won't* mean is that we're in the garbage collector. So, rename gcing to "preemptoff" and make it a string giving a reason that preemption is disabled.

gcing was never set to anything but 0 or 1, so we don't have to worry about there being a stack of reasons.

Change-Id: I4337c29e8e942e7aa4f106fc29597e1b5de4ef46
Reviewed-on: https://go-review.googlesource.com/3660
Reviewed-by: Russ Cox <rsc@golang.org>
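The resulting pattern, sketched outside the runtime (the real field lives on the runtime's m and is consulted by the scheduler):

    package main

    import "fmt"

    type m struct {
        preemptoff string // if != "", keep this g running; the value says why
    }

    func main() {
        mp := &m{}
        mp.preemptoff = "gcing" // was: mp.gcing = 1
        if mp.preemptoff != "" {
            fmt.Println("preemption disabled:", mp.preemptoff)
        }
        mp.preemptoff = "" // was: mp.gcing = 0
    }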
2015-01-28  runtime: add tracing of runtime events  (Dmitry Vyukov)

Add actual tracing of interesting runtime events. Part of a larger tracing functionality: https://docs.google.com/document/u/1/d/1FP5apqzBgr7ahCCgFO-yoVhk4YZrNIDNf9RybngBc14/pub

Full change: https://codereview.appspot.com/146920043

Change-Id: Icccf54aea54e09350bb698ba6bf11532f9fbe6d3
Reviewed-on: https://go-review.googlesource.com/1451
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-19  runtime: factor out bitmap, finalizer code from malloc/mgc  (Russ Cox)

The code in mfinal.go is moved from malloc*.go and mgc*.go and substantially unchanged.

The code in mbitmap.go is also moved from those files, but cleaned up so that it can be called from those files (in most cases the code being moved was not already a standalone function). I also renamed the constants and wrote comments describing the format. The result is a significant cleanup and isolation of the bitmap code, but, roughly speaking, it should be treated and reviewed as new code.

The other files changed only as much as necessary to support this code movement.

This CL does NOT change the semantics of the heap or type bitmaps at all, although there are now some obvious opportunities to do so in followup CLs.

Change-Id: I41b8d5de87ad1d3cd322709931ab25e659dbb21d
Reviewed-on: https://go-review.googlesource.com/2991
Reviewed-by: Keith Randall <khr@golang.org>
2015-01-19  runtime: move write barrier code into mbarrier.go  (Russ Cox)

I also added new comments at the top of mbarrier.go, but the rest of the code is just copy-and-paste.

Change-Id: Iaeb2b12f8b1eaa33dbff5c2de676ca902bfddf2e
Reviewed-on: https://go-review.googlesource.com/2990
Reviewed-by: Austin Clements <austin@google.com>
2015-01-14  runtime: avoid race checking for preemption  (Russ Cox)

Moving the "don't really preempt" check up earlier in the function introduced a race where gp.stackguard0 might change between the early check and the later one. Since the later one is missing the "don't really preempt" logic, it could decide to preempt incorrectly. Pull the result of the check into a local variable and use an atomic to access stackguard0, to eliminate the race.

I believe this will fix the broken OS X and Solaris builders.

Change-Id: I238350dd76560282b0c15a3306549cbcf390dbff
Reviewed-on: https://go-review.googlesource.com/2823
Reviewed-by: Austin Clements <austin@google.com>
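The shape of the fix, as a sketch (the real stackPreempt value and g fields differ):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // stackPreempt is an illustrative sentinel, not the runtime's value.
    const stackPreempt = ^uintptr(0)

    // shouldPreempt reads stackguard0 exactly once, atomically, so every
    // later decision in the caller is based on the same snapshot and a
    // concurrent update cannot make two checks disagree.
    func shouldPreempt(stackguard0 *uintptr) bool {
        return atomic.LoadUintptr(stackguard0) == stackPreempt
    }

    func main() {
        guard := uintptr(0x1000)
        fmt.Println(shouldPreempt(&guard)) // false
        atomic.StoreUintptr(&guard, stackPreempt)
        fmt.Println(shouldPreempt(&guard)) // true
    }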
2015-01-14  runtime: fix a few GC-related bugs  (Russ Cox)

1) Move non-preemption check even earlier in newstack. This avoids a few priority inversion problems.

2) Always use atomic operations to update bitmap for 1-word objects. This avoids lost mark bits during concurrent GC.

3) Stop using work.nproc == 1 as a signal for being single-threaded. The concurrent GC runs with work.nproc == 1 but other procs are running mutator code. The use of work.nproc == 1 in getfull *is* safe, but remove it anyway, since it is saving only a single atomic operation per GC round.

Fixes #9225.

Change-Id: I24134f100ad592ea8cb59efb6a54f5a1311093dc
Reviewed-on: https://go-review.googlesource.com/2745
Reviewed-by: Rick Hudson <rlh@golang.org>
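For point 2, the bitmap update must be an atomic read-modify-write so that concurrent markers sharing a word cannot lose each other's bits; a portable sketch using a CAS loop:

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // atomicOr sets mask bits with a compare-and-swap loop. A plain
    // load-or-store of the same word from two goroutines can drop one
    // goroutine's bits; the CAS retry cannot.
    func atomicOr(addr *uint32, mask uint32) {
        for {
            old := atomic.LoadUint32(addr)
            if atomic.CompareAndSwapUint32(addr, old, old|mask) {
                return
            }
        }
    }

    func main() {
        var bitmapWord uint32
        atomicOr(&bitmapWord, 1<<3) // e.g. set a mark bit for a 1-word object
        fmt.Printf("%032b\n", bitmapWord)
    }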
2015-01-06  runtime: add GODEBUG wbshadow for finding missing write barriers  (Russ Cox)

This is the detection code. It works well enough that I know of a handful of missing write barriers. However, those are subtle enough that I'll address them in separate followup CLs.

GODEBUG=wbshadow=1 checks for a write that bypassed the write barrier at the next write barrier of the same word. If a bug can be detected in this mode it is typically easy to understand, since the crash says quite clearly what kind of word has missed a write barrier.

GODEBUG=wbshadow=2 adds a check of the write barrier shadow copy during garbage collection. Bugs detected at garbage collection can be difficult to understand, because there is no context for what the found word means. Typically you have to reproduce the problem with allocfreetrace=1 in order to understand the type of the badly updated word.

Change-Id: If863837308e7c50d96b5bdc7d65af4969bf53a6e
Reviewed-on: https://go-review.googlesource.com/2061
Reviewed-by: Austin Clements <austin@google.com>
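A conceptual toy of the wbshadow=1 idea, not the runtime implementation: every barriered write updates the word and its shadow copy together, so if a barrier ever sees them disagree, some earlier write skipped the barrier:

    package main

    import "fmt"

    type shadowHeap struct {
        main, shadow map[uintptr]uintptr
    }

    func (h *shadowHeap) writeWithBarrier(addr, val uintptr) {
        if h.main[addr] != h.shadow[addr] {
            panic(fmt.Sprintf("missed write barrier at %#x", addr))
        }
        h.main[addr], h.shadow[addr] = val, val
    }

    func (h *shadowHeap) writeWithoutBarrier(addr, val uintptr) {
        h.main[addr] = val // the kind of bug being hunted
    }

    func main() {
        h := &shadowHeap{main: map[uintptr]uintptr{}, shadow: map[uintptr]uintptr{}}
        h.writeWithBarrier(0x10, 1)
        h.writeWithoutBarrier(0x10, 2)
        defer func() { fmt.Println("recovered:", recover()) }()
        h.writeWithBarrier(0x10, 3) // detects the earlier barrier-less write
    }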
2015-01-05  Revert "liblink, cmd/ld, runtime: remove stackguard1"  (Russ Cox)

This reverts commit ab0535ae3fb45ba734d47542cc4845f27f708d1b.

I think it will remain useful to distinguish code that must run on a system stack from code that can run on either stack, even if that distinction is no longer based on the implementation language. That is, I expect to add a //go:systemstack comment that, in terms of the old implementation, tells the compiler to pretend this function was written in C.

Change-Id: I33d2ebb2f99ae12496484c6ec8ed07233d693275
Reviewed-on: https://go-review.googlesource.com/2275
Reviewed-by: Russ Cox <rsc@golang.org>
2014-12-29  runtime: remove go prefix from a few routines  (Keith Randall)

They are no longer needed now that C is gone.

goatoi -> atoi
gofuncname/funcname -> funcname/cfuncname
goroundupsize -> already existing roundupsize

Change-Id: I278bc33d279e1fdc5e8a2a04e961c4c1573b28c7
Reviewed-on: https://go-review.googlesource.com/2154
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
2014-12-29  liblink, cmd/ld, runtime: remove stackguard1  (Shenghou Ma)

Now that we've removed all the C code in runtime and the C compilers, there is no need to have a separate stackguard field to check for C code on Go stack. Remove field g.stackguard1 and rename g.stackguard0 to g.stackguard. Adjust liblink and cmd/ld as necessary.

Change-Id: I54e75db5a93d783e86af5ff1a6cd497d669d8d33
Reviewed-on: https://go-review.googlesource.com/2144
Reviewed-by: Keith Randall <khr@golang.org>
2014-12-28  runtime: rename gothrow to throw  (Keith Randall)

Rename "gothrow" to "throw" now that the C version of "throw" is no longer needed. This change is purely mechanical except in panic.go where the old version of "throw" has been deleted.

    sed -i "" 's/[[:<:]]gothrow[[:>:]]/throw/g' runtime/*.go

Change-Id: Icf0752299c35958b92870a97111c67bcd9159dc3
Reviewed-on: https://go-review.googlesource.com/2150
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
2014-12-23  runtime: make stack frames fixed size by modifying goproc/deferproc.  (Keith Randall)

Calls to goproc/deferproc used to push & pop two extra arguments, the argument size and the function to call. Now, we allocate space for those arguments in the outargs section so we don't have to modify the SP.

Defers now use the stack pointer (instead of the argument pointer) to identify which frame they are associated with.

A followon CL might simplify funcspdelta and some of the stack walking code.

Fixes issue #8641

Change-Id: I835ec2f42f0392c5dec7cb0fe6bba6f2aed1dad8
Reviewed-on: https://go-review.googlesource.com/1601
Reviewed-by: Russ Cox <rsc@golang.org>
2014-12-05  [dev.garbage] all: merge dev.cc (81884b89bd88) into dev.garbage  (Russ Cox)

TBR=rlh
CC=golang-codereviews
https://golang.org/cl/181100044
2014-12-05  [dev.cc] all: merge default (8d42099cdc23) into dev.cc  (Russ Cox)

TBR=austin
CC=golang-codereviews
https://golang.org/cl/178700044
2014-11-24  [dev.garbage] all: merge dev.cc (493ad916c3b1) into dev.garbage  (Russ Cox)

TBR=austin
CC=golang-codereviews
https://golang.org/cl/179290043
2014-11-21  [dev.garbage] runtime: Stop running gs during the GCscan phase.  (Rick Hudson)

Ensure that all gs are in a scan state when their stacks are being scanned.

LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/179160044
2014-11-20  [dev.garbage] runtime: Turn concurrent GC on by default. Avoid write barriers for GC internal structures such as free lists.  (Rick Hudson)

LGTM=rsc
R=rsc
CC=golang-codereviews, rsc
https://golang.org/cl/179000043
2014-11-18  [dev.cc] runtime: generate GOOS- and GOARCH-specific files with go generate  (Russ Cox)

Eventually I'd like almost everything cmd/dist generates to be done with 'go generate' and checked in, to simplify the bootstrap process. The only thing cmd/dist really needs to do is write things like the current experiment info and the current version.

This is a first step toward that. It replaces the _NaCl etc constants with generated ones goos_nacl, goos_darwin, goarch_386, and so on.

LGTM=dave, austin
R=austin, dave, bradfitz
CC=golang-codereviews, iant, r
https://golang.org/cl/174290043
2014-11-15  [dev.garbage] all: merge dev.cc into dev.garbage  (Russ Cox)

The garbage collector is now written in Go. There is plenty to clean up (just like on dev.cc). all.bash passes on darwin/amd64, darwin/386, linux/amd64, linux/386.

TBR=rlh
R=austin, rlh, bradfitz
CC=golang-codereviews
https://golang.org/cl/173250043
2014-11-12  [dev.cc] runtime: delete scalararg, ptrarg; rename onM to systemstack  (Russ Cox)

Scalararg and ptrarg are not "signal safe". Go code filling them out can be interrupted by a signal, and then the signal handler runs, and if it also ends up in Go code that uses scalararg or ptrarg, now the old values have been smashed. For the pieces of code that do need to run in a signal handler, we introduced onM_signalok, which is really just onM except that the _signalok is meant to convey that the caller asserts that scalararg and ptrarg will be restored to their old values after the call (instead of the usual behavior, zeroing them).

Scalararg and ptrarg are also untyped and therefore error-prone. Go code can always pass a closure instead of using scalararg and ptrarg; they were only really necessary for C code. And there's no more C code.

For all these reasons, delete scalararg and ptrarg, converting the few remaining references to use closures.

Once those are gone, there is no need for a distinction between onM and onM_signalok, so replace both with a single function equivalent to the current onM_signalok (that is, it can be called on any of the curg, g0, and gsignal stacks).

The name onM and the phrase 'm stack' are misnomers, because on most systems an M has two system stacks: the main thread stack and the signal handling stack. Correct the misnomer by naming the replacement function systemstack. Fix a few references to "M stack" in code.

The main motivation for this change is to eliminate scalararg/ptrarg. Rick and I have already seen them cause problems because the calling sequence m.ptrarg[0] = p is a heap pointer assignment, so it gets a write barrier. The write barrier also uses onM, so it has all the same problems as if it were being invoked by a signal handler. We worked around this by saving and restoring the old values and by calling onM_signalok, but there's no point in keeping this nice home for bugs around any longer.

This CL also changes funcline to return the file name as a result instead of filling in a passed-in *string. (The *string signature is left over from when the code was written in and called from C.) That's arguably an unrelated change, except that once I had done the ptrarg/scalararg/onM cleanup I started getting false positives about the *string argument escaping (not allowed in package runtime). The compiler is wrong, but the easiest fix is to write the code like Go code instead of like C code. I am a bit worried that the compiler is wrong because of some use of uninitialized memory in the escape analysis. If that's the reason, it will go away when we convert the compiler to Go. (And if not, we'll debug it the next time.)

LGTM=khr
R=r, khr
CC=austin, golang-codereviews, iant, rlh
https://golang.org/cl/174950043
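The scalararg/ptrarg replacement is simply "pass a closure": arguments captured by a closure cannot be smashed by a signal handler reusing a shared slot. A generic sketch of the before/after shape (names are mine; the real systemstack also switches to the g0 stack):

    package main

    import "fmt"

    // Before (sketch): arguments smuggled through shared slots, which a
    // signal handler could overwrite mid-call.
    var scalararg [4]uintptr

    func onMOld(fn func()) {
        fn() // fn reads scalararg; racy if a signal handler also writes it
    }

    // After (sketch): the closure carries its own arguments, so there is
    // no shared mutable slot to smash.
    func systemstackSketch(fn func()) {
        fn()
    }

    func main() {
        n := uintptr(42)
        systemstackSketch(func() { fmt.Println("arg captured by closure:", n) })
    }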
2014-11-11  [dev.cc] runtime: convert panic and stack code from C to Go  (Russ Cox)

The conversion was done with an automated tool and then modified only as necessary to make it compile and run.

[This CL is part of the removal of C code from package runtime. See golang.org/s/dev.cc for an overview.]

LGTM=r
R=r, dave
CC=austin, dvyukov, golang-codereviews, iant, khr
https://golang.org/cl/166520043