author     Michael Anthony Knyszek <mknyszek@google.com>  2022-05-20 16:30:11 +0000
committer  Michael Knyszek <mknyszek@google.com>  2022-05-20 21:54:20 +0000
commit     b58067013eaa2f2bf0dc24f4d848e10bb758b6bd (patch)
tree       6870a458dedd52c3126c5047dec0bbd11fbb5fcd /src/internal/fuzz
parent     7ec6ef432a85a390365f2daed788f0d14c830c73 (diff)
download   go-b58067013eaa2f2bf0dc24f4d848e10bb758b6bd.tar.xz
runtime: allocate physical-page-aligned memory differently
Currently, physical-page-aligned allocations for stacks (where the physical page size is greater than the runtime page size) first overallocate some memory, then free the unaligned portions back to the heap.

However, because allocating via h.pages.alloc causes scavenged bits to get cleared, that memory needs to be accounted for correctly in heapFree and heapReleased. Currently it is not, leading to throws at run time. Getting that accounting right is complicated, because information about exactly which pages were scavenged would need to be plumbed up.

Instead, find the oversized region first, and then allocate only the aligned part. This avoids any accounting issues. It does come with some performance cost: searchAddr is not updated (which is safe; it just means the next allocation may have to search harder), and for simplicity the fast path in h.pages.alloc is skipped.

Fixes #52682.

Change-Id: Iefa68317584d73b187634979d730eb30db770bb6
Reviewed-on: https://go-review.googlesource.com/c/go/+/407502
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
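The alignment arithmetic behind this approach is simple. Below is a minimal, self-contained sketch, not the runtime's actual code: the base address, the page-size constants, and the alignUp helper are illustrative assumptions. It shows why oversizing the searched region by physPageSize-pageSize guarantees that a physical-page-aligned sub-region of the requested size fits inside it.

    package main

    import "fmt"

    const (
        pageSize     = 8 << 10  // runtime page size (8 KiB), an assumed value
        physPageSize = 64 << 10 // physical page size (64 KiB), an assumed value
    )

    // alignUp rounds n up to the next multiple of align
    // (align must be a power of two).
    func alignUp(n, align uintptr) uintptr {
        return (n + align - 1) &^ (align - 1)
    }

    func main() {
        // Hypothetical free region found by the search, runtime-page aligned.
        base := uintptr(0x7f0000002000)
        size := uintptr(256 << 10) // requested allocation size

        // Over-size the search by physPageSize-pageSize so that an aligned
        // sub-region of exactly `size` bytes is guaranteed to fit inside.
        searchSize := size + physPageSize - pageSize

        // Only this aligned sub-region is actually allocated; the slack on
        // either side is left alone, so its scavenged bits stay intact.
        aligned := alignUp(base, physPageSize)
        fmt.Printf("found [%#x, %#x), allocating aligned [%#x, %#x)\n",
            base, base+searchSize, aligned, aligned+size)
    }

Because the base returned by the search is already runtime-page aligned, its misalignment relative to a physical page boundary is at most physPageSize-pageSize, so the aligned sub-region always fits. Allocating only that sub-region, rather than allocating everything and freeing the ends, is what sidesteps the heapFree/heapReleased accounting problem.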
Diffstat (limited to 'src/internal/fuzz')
0 files changed, 0 insertions, 0 deletions