author     Austin Clements <austin@google.com>  2015-06-22 11:18:23 -0400
committer  Austin Clements <austin@google.com>  2015-06-22 18:54:38 +0000
commit     9a3112bcaeaa4ff81803c1e5dd9ca6932c14f2c1 (patch)
tree       2883c12d914cf072ff5fcf091280af0b667c3145 /src
parent     2a331ca8bbadfa20fc1790b04ff90eec23b156e8 (diff)
download   go-9a3112bcaeaa4ff81803c1e5dd9ca6932c14f2c1.tar.xz
runtime: one more Map{Bits,Spans} before arena_used update
In order to avoid a race with a concurrent write barrier or garbage collector thread, any update to arena_used must be preceded by mapping the corresponding heap bitmap and spans array memory. Otherwise, the concurrent access may observe that a pointer falls within the heap arena, but then attempt to access unmapped memory to look up its span or heap bits.

Commit d57c889 fixed all of the places where we updated arena_used immediately before mapping the heap bitmap and spans, but it missed the one place where we update arena_used and depend on later code to update it again and map the bitmap and spans. This creates a window where the original race can still happen. This commit fixes this by mapping the heap bitmap and spans before this arena_used update as well.

This code path is only taken when expanding the heap reservation on 32-bit over a hole in the address space, so these extra mmap calls should have negligible impact.

Fixes #10212, #11324.

Change-Id: Id67795e6c7563eb551873bc401e5cc997aaa2bd8
Reviewed-on: https://go-review.googlesource.com/11340
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
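The ordering requirement described above can be illustrated outside the runtime. Below is a minimal, self-contained Go sketch (not the runtime's code; heap, grow, and spanOf are invented stand-ins, and a sync.Map stands in for the mapped spans array): the writer must make the metadata for a new region visible before publishing the new arena_used boundary, because a concurrent reader checks the boundary first and only then dereferences the metadata.

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

const pageSize = 8192

// heap models the two pieces of state from the commit message: a
// published boundary (arenaUsed) and per-page metadata (spans) that
// must already exist for every address below that boundary.
type heap struct {
	arenaUsed atomic.Uintptr
	spans     sync.Map // page address -> metadata; stands in for mapped memory
}

// grow extends the heap to end. The metadata is stored BEFORE the
// boundary is published; reversing these two steps reintroduces the
// race this commit fixes.
func (h *heap) grow(start, end uintptr) {
	for a := start; a < end; a += pageSize {
		h.spans.Store(a, fmt.Sprintf("span@%#x", a))
	}
	h.arenaUsed.Store(end) // publish only after the metadata exists
}

// spanOf mirrors the reader side (a write barrier or GC thread):
// check the pointer against arenaUsed, then look up its metadata.
// With the correct ordering in grow, ok is always true for in-heap
// pointers; with the reversed ordering it can be false, the analogue
// of faulting on unmapped bitmap/spans memory.
func (h *heap) spanOf(p uintptr) (string, bool) {
	if p >= h.arenaUsed.Load() {
		return "", false // not in the heap arena
	}
	v, ok := h.spans.Load(p &^ (pageSize - 1))
	if !ok {
		return "", false
	}
	return v.(string), true
}

func main() {
	h := &heap{}
	h.grow(0, 4*pageSize)

	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // concurrent grower, like mHeap_SysAlloc
		defer wg.Done()
		h.grow(4*pageSize, 8*pageSize)
	}()
	if s, ok := h.spanOf(2*pageSize + 16); ok {
		fmt.Println("reader found", s)
	}
	wg.Wait()
}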
Diffstat (limited to 'src')
-rw-r--r--  src/runtime/malloc.go  5
-rw-r--r--  src/runtime/mheap.go   2
2 files changed, 5 insertions, 2 deletions
diff --git a/src/runtime/malloc.go b/src/runtime/malloc.go
index 37d3a1eea1..f6608309e8 100644
--- a/src/runtime/malloc.go
+++ b/src/runtime/malloc.go
@@ -405,7 +405,10 @@ func mHeap_SysAlloc(h *mheap, n uintptr) unsafe.Pointer {
// Keep everything page-aligned.
// Our pages are bigger than hardware pages.
h.arena_end = p + p_size
- h.arena_used = p + (-uintptr(p) & (_PageSize - 1))
+ used := p + (-uintptr(p) & (_PageSize - 1))
+ mHeap_MapBits(h, used)
+ mHeap_MapSpans(h, used)
+ h.arena_used = used
h.arena_reserved = reserved
} else {
var stat uint64
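A side note on the expression the hunk above leaves in place: p + (-uintptr(p) & (_PageSize - 1)) rounds p up to the next _PageSize boundary so that arena_used stays page-aligned. A quick standalone check of that identity (pageSize and roundUp are local stand-ins, assuming the runtime's 8 KB page):

package main

import "fmt"

const pageSize = 8192 // stand-in for the runtime's _PageSize

// roundUp reproduces the alignment expression from the hunk above:
// the two's-complement remainder -p & (pageSize-1) is exactly the
// distance to the next page boundary, and is zero if p is aligned.
func roundUp(p uintptr) uintptr {
	return p + (-p & (pageSize - 1))
}

func main() {
	for _, p := range []uintptr{0, 1, pageSize - 1, pageSize, pageSize + 1} {
		fmt.Printf("%#x -> %#x\n", p, roundUp(p))
	}
}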
diff --git a/src/runtime/mheap.go b/src/runtime/mheap.go
index fceee7d464..b73a155700 100644
--- a/src/runtime/mheap.go
+++ b/src/runtime/mheap.go
@@ -41,7 +41,7 @@ type mheap struct {
bitmap uintptr
bitmap_mapped uintptr
arena_start uintptr
- arena_used uintptr
+ arena_used uintptr // always mHeap_Map{Bits,Spans} before updating
arena_end uintptr
arena_reserved bool