| author | Martin Möhrmann <moehrmann@google.com> | 2017-08-14 10:16:21 +0200 |
|---|---|---|
| committer | Martin Möhrmann <moehrmann@google.com> | 2017-08-22 20:28:21 +0000 |
| commit | cbc4e5d9c4f2444c5d40ae6333b4e1f4c9cfbd41 (patch) | |
| tree | ba4c93d8a8fef88cf61720332ab2426fd34c3d3f /src/runtime/hashmap_fast.go | |
| parent | 0fb0f575bcd2dc3e00a370b325e7e6d020f226b8 (diff) | |
| download | go-cbc4e5d9c4f2444c5d40ae6333b4e1f4c9cfbd41.tar.xz | |
cmd/compile: generate makemap calls with int arguments
Where possible, generate calls to runtime makemap with an int hint argument
at compile time instead of makemap with an int64 hint argument.
This avoids converting the hint argument to int64 for makemap calls on
platforms where an int64 value does not fit into an argument of type int.
A similar optimization for makeslice was introduced in CL
golang.org/cl/27851.
386:

```
name         old time/op  new time/op  delta
NewEmptyMap  53.5ns ± 5%  41.9ns ± 5%  -21.56%  (p=0.000 n=10+10)
NewSmallMap   182ns ± 1%   165ns ± 1%   -8.92%  (p=0.000 n=10+10)
```
Change-Id: Ibd2b4c57b36f171b173bf7a0602b3a59771e6e44
Reviewed-on: https://go-review.googlesource.com/55142
Reviewed-by: Keith Randall <khr@golang.org>
Diffstat (limited to 'src/runtime/hashmap_fast.go')
| -rw-r--r-- | src/runtime/hashmap_fast.go | 6 |
1 files changed, 3 insertions, 3 deletions
```diff
diff --git a/src/runtime/hashmap_fast.go b/src/runtime/hashmap_fast.go
index c3ce5ae150..e83c72d0f9 100644
--- a/src/runtime/hashmap_fast.go
+++ b/src/runtime/hashmap_fast.go
@@ -462,7 +462,7 @@ again:
 	// If we hit the max load factor or we have too many overflow buckets,
 	// and we're not already in the middle of growing, start growing.
-	if !h.growing() && (overLoadFactor(int64(h.count), h.B) || tooManyOverflowBuckets(h.noverflow, h.B)) {
+	if !h.growing() && (overLoadFactor(h.count, h.B) || tooManyOverflowBuckets(h.noverflow, h.B)) {
 		hashGrow(t, h)
 		goto again // Growing the table invalidates everything, so try again
 	}
@@ -547,7 +547,7 @@ again:
 	// If we hit the max load factor or we have too many overflow buckets,
 	// and we're not already in the middle of growing, start growing.
-	if !h.growing() && (overLoadFactor(int64(h.count), h.B) || tooManyOverflowBuckets(h.noverflow, h.B)) {
+	if !h.growing() && (overLoadFactor(h.count, h.B) || tooManyOverflowBuckets(h.noverflow, h.B)) {
 		hashGrow(t, h)
 		goto again // Growing the table invalidates everything, so try again
 	}
@@ -637,7 +637,7 @@ again:
 	// If we hit the max load factor or we have too many overflow buckets,
 	// and we're not already in the middle of growing, start growing.
-	if !h.growing() && (overLoadFactor(int64(h.count), h.B) || tooManyOverflowBuckets(h.noverflow, h.B)) {
+	if !h.growing() && (overLoadFactor(h.count, h.B) || tooManyOverflowBuckets(h.noverflow, h.B)) {
 		hashGrow(t, h)
 		goto again // Growing the table invalidates everything, so try again
 	}
```
