path: root/src/pkg/runtime/hashmap.c
2014-02-20  runtime: use goc2c as much as possible  (Russ Cox)

Package runtime's C functions written to be called from Go started out written in C using carefully constructed argument lists and the FLUSH macro to write a result back to memory.

For some functions, the appropriate parameter list ended up being architecture-dependent due to differences in alignment, so we added 'goc2c', which takes a .goc file containing Go func declarations but C bodies, rewrites the Go func declaration to equivalent C declarations for the target architecture, adds the needed FLUSH statements, and writes out an equivalent C file. That C file is compiled as part of package runtime.

Native Client's x86-64 support introduces the most complex alignment rules yet, breaking many functions that could until now be portably written in C. Using goc2c for those avoids the breakage.

Separately, Keith's work on emitting stack information from the C compiler would require the hand-written functions to add #pragmas specifying how many arguments are result parameters. Using goc2c for those avoids maintaining #pragmas.

For both reasons, use goc2c for as many Go-called C functions as possible.

This CL is a replay of the bulk of CL 15400047 and CL 15790043, both of which were reviewed as part of the NaCl port and are checked in to the NaCl branch. This CL is part of bringing the NaCl code into the main tree. No new code here, just reformatting and occasional movement into .h files.

LGTM=r
R=dave, alex.brainman, r
CC=golang-codereviews
https://golang.org/cl/65220044
2014-01-14  runtime: Change size of map iter offset so 32-bit version compiles cleanly.  (Keith Randall)

R=golang-codereviews, minux.ma
CC=golang-codereviews
https://golang.org/cl/52310043
2014-01-14  runtime: change map iteration randomization to use intra-bucket offset  (Josh Bleecher Snyder)

Map iteration previously started from a random bucket, but walked each bucket from the beginning. Now, iteration always starts from the first bucket and walks each bucket starting at a random offset. For performance, the random offset is selected at the start of iteration and reused for each bucket.

Iteration over a map with 8 or fewer elements--a single bucket--will now be non-deterministic. There will now be only 8 different possible map iterations.

Significant benchmark changes, on my OS X laptop (rough but consistent):

benchmark                old ns/op    new ns/op    delta
BenchmarkMapIter               128          121    -5.47%
BenchmarkMapIterEmpty         4.26         4.45    +4.46%
BenchmarkNewEmptyMap           114          111    -2.63%

Fixes #6719.

R=khr, bradfitz
CC=golang-codereviews
https://golang.org/cl/47370043
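The offset scheme described above can be sketched in Go (types and names here are illustrative only; the real code lives in hashmap.c's iterator, and real buckets hold hashed entries rather than a string array):

```go
package main

import (
	"fmt"
	"math/rand"
)

const bucketSize = 8 // entries per bucket, as in the runtime's hashmap

// iterate visits every slot of every bucket, starting each bucket's walk
// at the same random offset, which is chosen once at the start of
// iteration and reused for every bucket.
func iterate(buckets [][bucketSize]string, visit func(string)) {
	offset := rand.Intn(bucketSize) // picked once per iteration
	for _, b := range buckets {
		for i := 0; i < bucketSize; i++ {
			if s := b[(i+offset)%bucketSize]; s != "" { // skip empty slots
				visit(s)
			}
		}
	}
}

func main() {
	// A single bucket: repeated runs print a, b, c in rotated orders.
	iterate([][bucketSize]string{{"a", "b", "c"}}, func(s string) { fmt.Println(s) })
}
```

Because the offset rotates within each bucket rather than reordering buckets, a single-bucket map has only bucketSize (8) possible iteration orders, matching the commit's observation.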
2014-01-04  runtime: Fix race detector checks to ignore KindNoPointers bit  (Keith Randall)

when comparing kinds.

R=dvyukov, dave, khr
CC=golang-codereviews
https://golang.org/cl/41660045
2013-12-30  runtime: use readrange instead of read to check for races  (Keith Randall)

on map keys and values which are now passed by reference.

R=dvyukov, khr
CC=golang-codereviews
https://golang.org/cl/43490044
2013-12-19  reflect: rewrite Value to separate out pointer vs. nonpointer info.  (Keith Randall)

Needed for precise gc and copying stacks. reflect.Value now takes 4 words instead of 3.

Still to do:
- un-iword-ify channel ops.
- un-iword-ify method receivers.

R=golang-dev, iant, rsc, khr
CC=golang-dev
https://golang.org/cl/43040043
2013-12-17  runtime: don't store evacuate bit as low bit of hashtable overflow pointer.  (Keith Randall)

Hash tables currently store an evacuated bit in the low bit of the overflow pointer. That's probably not sustainable in the long term as GC wants correctly typed & aligned pointers. It is also a pain to move any of this code to Go in the current state.

This change moves the evacuated bit into the tophash entries.

Performance change is negligible.

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/14412043
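One way to encode markers in tophash entries is to reserve the smallest byte values as sentinels and bump real hash bits above them. The constants and names below are illustrative, not the runtime's actual values:

```go
package main

import "fmt"

// Each bucket keeps one tophash byte per key. Reserving the smallest
// values as markers (empty slot, evacuated entry) means the evacuation
// state no longer has to be smuggled into the overflow pointer's low bit.
const (
	emptySlot  = 0 // slot is unused
	evacuated  = 1 // entry has been moved to the grown table
	minTopHash = 2 // minimum tophash byte for a live, occupied slot
)

// tophash extracts the top byte of a hash value, bumping it past the
// reserved marker range so a real hash can never look like a marker.
func tophash(hash uint64) uint8 {
	top := uint8(hash >> 56)
	if top < minTopHash {
		top += minTopHash
	}
	return top
}

func main() {
	fmt.Println(tophash(0))                  // 2: bumped past the markers
	fmt.Println(tophash(0xab00000000000000)) // 171: top byte passes through
}
```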
2013-12-02  runtime: fix race detector when map keys/values are passed by pointer.  (Keith Randall)

Now that the map implementation is reading the keys and values from arbitrary memory (instead of from stack slots), it needs to tell the race detector when it does so.

Fixes #6875.

R=golang-dev, dave
CC=golang-dev
https://golang.org/cl/36360043
2013-12-02  runtime: pass key/value to map accessors by reference, not by value.  (Keith Randall)

This change is part of the plan to get rid of all vararg C calls which are a pain for getting exact stack scanning.

We allocate a chunk of zero memory to return a pointer to when a map access doesn't find the key. This is simpler than returning nil and fixing things up in the caller. Linker magic allocates a single zero memory area that is shared by all (non-reflect-generated) map types.

Passing things by reference gets rid of some copies, so it speeds up code with big keys/values.

benchmark              old ns/op    new ns/op    delta
BenchmarkBigKeyMap            34           31     -8.48%
BenchmarkBigValMap            37           30    -18.62%
BenchmarkSmallKeyMap          26           23    -11.28%

R=golang-dev, dvyukov, khr, rsc
CC=golang-dev
https://golang.org/cl/14794043
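The "pointer to shared zero memory" contract can be sketched as follows. In the runtime the zero area is set up by the linker; this sketch models it with a package variable, and `lookup` is a hypothetical name, not the runtime's accessor:

```go
package main

import (
	"fmt"
	"unsafe"
)

// zeroVal stands in for the linker-allocated zero memory shared by all
// map types. Using uint64 elements keeps it 8-byte aligned, so it can
// be read back as any value type that fits.
var zeroVal [128]uint64

// lookup returns a pointer to the value for key, or a pointer into the
// shared zero area if the key is absent -- never nil, so callers can
// always load through the result without a nil check.
func lookup(m map[string]int64, key string) unsafe.Pointer {
	if v, ok := m[key]; ok {
		p := new(int64)
		*p = v
		return unsafe.Pointer(p)
	}
	return unsafe.Pointer(&zeroVal[0]) // missing key: shared zero memory
}

func main() {
	m := map[string]int64{"a": 7}
	fmt.Println(*(*int64)(lookup(m, "a")))       // 7
	fmt.Println(*(*int64)(lookup(m, "missing"))) // 0
}
```

The payoff is exactly what the commit describes: the caller unconditionally dereferences the result and naturally gets the zero value for a missing key.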
2013-10-04  runtime: fix bug in maps at the intersection of iterators, growing, and NaN keys  (Keith Randall)

If an iterator is started while a map is in the middle of a grow, and the map has NaN keys, then those keys might get returned by the iterator more than once. This fix makes the evacuation decision deterministic and repeatable for NaN keys so each one gets returned only once.

R=golang-dev, r, khr, iant
CC=golang-dev
https://golang.org/cl/14367043
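NaN keys are special because NaN never compares equal to anything, including itself: every insert under a NaN key creates a fresh entry that can only ever be reached through iteration, which is why the iterator must return each such entry exactly once. A small demonstration:

```go
package main

import (
	"fmt"
	"math"
)

// nanKeyEntries inserts twice under a NaN key and reports the map size.
// Since NaN != NaN, the second insert cannot find the first entry, so it
// adds a new one instead of updating it.
func nanKeyEntries() int {
	m := make(map[float64]int)
	m[math.NaN()] = 1
	m[math.NaN()] = 2
	return len(m)
}

func main() {
	fmt.Println(nanKeyEntries()) // 2: two distinct entries, not an update

	// Iteration is the only way to reach NaN-keyed entries; the fix above
	// guarantees each appears exactly once even across a grow.
	m := make(map[float64]int)
	m[math.NaN()] = 1
	m[math.NaN()] = 2
	n := 0
	for range m {
		n++
	}
	fmt.Println(n) // 2
}
```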
2013-08-31  runtime: clean up map code. Remove hashmap.h.  (Keith Randall)

Use cnew/cnewarray instead of mallocgc.

R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/13396045
2013-08-31  runtime: record type information for hashtable internal structures.  (Keith Randall)

Remove all hashtable-specific GC code.

Fixes bug 6119.

R=cshapiro, dvyukov, khr
CC=golang-dev
https://golang.org/cl/13078044
2013-08-29  cmd/cc,runtime: change preprocessor to expand macros inside of #pragma textflag and #pragma dataflag directives  (Keith Randall)

Update dataflag directives to use symbols instead of integer constants.

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/13310043
2013-08-15  runtime: remove old preemption checks  (Dmitriy Vyukov)

runtime.gcwaiting checks are not needed anymore.

R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/12934043
2013-08-13  undo CL 12840043 / 3b9f54db72a1  (Keith Randall)

Breaks the build. Old bucket arrays kept by iterators still need to be scanned.

««« original CL description
runtime: tell GC not to scan internal hashmap structures.
We'll do it ourselves via hash_gciter, thanks.
Fixes bug 6119.

R=golang-dev, dvyukov, cookieo9, rsc
CC=golang-dev
https://golang.org/cl/12840043
»»»

R=golang-dev
CC=golang-dev
https://golang.org/cl/12884043
2013-08-13  runtime: tell GC not to scan internal hashmap structures.  (Keith Randall)

We'll do it ourselves via hash_gciter, thanks.

Fixes bug 6119.

R=golang-dev, dvyukov, cookieo9, rsc
CC=golang-dev
https://golang.org/cl/12840043
2013-08-12  runtime: change textflags from numbers to symbols  (Keith Randall)

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12798043
2013-07-31  runtime: rewrite map size test  (Russ Cox)

I don't know why the memstats code is flaky.

TBR=bradfitz
CC=golang-dev
https://golang.org/cl/12160043
2013-07-30  runtime: optimize some hash lookups.  (Keith Randall)

When comparing strings, check these (in order):
- length mismatch => not equal
- string pointer equal => equal
- if length is short:
  - memeq on body
- if length is long:
  - compare first & last few bytes; if different => not equal
  - save entry as a possible match
- after checking every entry, if there is only one possible match, use memeq on that entry. Otherwise, fall back to hash.

benchmark                 old ns/op    new ns/op    delta
BenchmarkSameLengthMap            43            4    -89.77%

Fixes #5194.
Update #3885.

R=golang-dev, bradfitz, khr, rsc
CC=golang-dev
https://golang.org/cl/12128044
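The staged filtering above can be sketched in Go. `lookupSameLength` is a hypothetical name; the sketch keeps the length and first/last-byte filters and the "single survivor => one memeq" rule, but substitutes full comparisons where the real code would fall back to hashing:

```go
package main

import "fmt"

// lookupSameLength finds k among candidate keys using cheap filters first:
// rule out entries by length, then by first/last byte; if exactly one
// candidate survives, a single full comparison decides the lookup.
func lookupSameLength(keys []string, k string) (int, bool) {
	if len(k) == 0 {
		return -1, false // sketch only handles non-empty keys
	}
	candidate, survivors := -1, 0
	for i, e := range keys {
		if len(e) != len(k) {
			continue // length mismatch => not equal
		}
		if e[0] != k[0] || e[len(e)-1] != k[len(k)-1] {
			continue // first/last byte mismatch => not equal
		}
		candidate = i
		survivors++
	}
	if survivors == 1 {
		if keys[candidate] == k { // full memeq on the sole survivor
			return candidate, true
		}
		return -1, false
	}
	// Several survivors: the real code falls back to hashing here; this
	// sketch just compares each entry fully.
	for i, e := range keys {
		if e == k {
			return i, true
		}
	}
	return -1, false
}

func main() {
	keys := []string{"apple", "bread", "crumb"}
	fmt.Println(lookupSameLength(keys, "bread")) // 1 true
}
```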
2013-07-30  runtime: cut struct Hmap back to 48-byte allocation  (Russ Cox)

struct Hmap is the header for a map value. CL 8377046 made flags a uint32 so that it could be updated atomically, but that bumped the struct to 56 bytes, which allocates as 64 bytes (on amd64).

hash0 is initialized from runtime.fastrand1, which returns a uint32, so the top 32 bits were always zero anyway. Declare it as a uint32 to reclaim 4 bytes and bring the Hmap size back down to a 48-byte allocation.

Fixes #5237.

R=golang-dev, khr, khr
CC=bradfitz, dvyukov, golang-dev
https://golang.org/cl/12034047
2013-07-26  runtime: refactor mallocgc  (Dmitriy Vyukov)

Make it accept type, combine flags. Several reasons for the change:
1. mallocgc and settype must be atomic wrt GC
2. settype is called from only one place now
3. it will help performance (eventually settype functionality must be combined with markallocated)
4. flags are easier to read now (no mallocgc(sz, 0, 1, 0) anymore)

R=golang-dev, iant, nightlyone, rsc, dave, khr, bradfitz, r
CC=golang-dev
https://golang.org/cl/10136043
2013-07-16  runtime: minor cleanup of hashmap code  (Dmitriy Vyukov)

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/11357043
2013-05-31  runtime: revert of CL 8852047: do hashmap grow work during reads.  (Keith Randall)

seems to break freebsd-386.

R=golang-dev, dave
CC=golang-dev
https://golang.org/cl/9915047
2013-05-31  runtime: do hashmap grow work during reads.  (Keith Randall)

Before this change, grow work was done only during map writes to ensure multithreaded safety. This can lead to maps remaining in a partially grown state for a long time, potentially forever. This change allows grow work to happen during reads, which will lead to grow work finishing sooner, making the resulting map smaller and faster.

Grow work is not done in parallel. Reads can happen in parallel while grow work is happening.

R=golang-dev, dvyukov, khr, iant
CC=golang-dev
https://golang.org/cl/8852047
2013-05-27  runtime: flag static variables as no-pointers  (Jan Ziak)

Variables in data sections of 32-bit executables interfere with the garbage collector's ability to free objects and/or unnecessarily slow down the garbage collector.

This changeset moves some static variables to .noptr sections. 'files' in symtab.c is now allocated dynamically.

R=golang-dev, dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/9786044
2013-05-23  runtime: faster range on empty map  (Frederick Kelly Mayle III)

benchmark                old ns/op    new ns/op    delta
BenchmarkMapIter               191          190     -0.52%
BenchmarkMapIterEmpty           22            4    -78.96%

R=golang-dev, minux.ma, dvyukov, iant, khr
CC=golang-dev
https://golang.org/cl/9637043
2013-04-08  runtime: fix integer overflow in hashmap  (Dmitriy Vyukov)

The test is problematic, because it requires 8GB+ of RAM.

Fixes #5239.

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/8550043
2013-04-07  runtime: fix race on hashmap flags field  (Dmitriy Vyukov)

Use atomic operations on flags field to make sure we aren't losing a flag update during parallel map operations.

R=golang-dev, dave, r
CC=golang-dev
https://golang.org/cl/8377046
2013-04-02  runtime: avoid hashing strings until needed in single-bucket maps  (Brad Fitzpatrick)

This changes the map lookup behavior for string maps with 2-8 keys. There was already previously a fastpath for 0 items and 1 item.

Now, if a string-keyed map has <= 8 items, first check all the keys for length. If only one has the right length, then just check it for equality and avoid hashing altogether. Once the map has more than 8 items, always hash like normal.

I don't know why some of the other non-string map benchmarks got faster. This was with benchtime=2s, multiple times. I haven't seen anything else getting slower, though.

benchmark                            old ns/op    new ns/op    delta
BenchmarkHashStringSpeed                    37           34     -8.20%
BenchmarkHashInt32Speed                     32           29    -10.67%
BenchmarkHashInt64Speed                     31           27    -12.82%
BenchmarkHashStringArraySpeed              105           99     -5.43%
BenchmarkMegMap                         274206       255153     -6.95%
BenchmarkMegOneMap                          27           23    -14.80%
BenchmarkMegEqMap                       148332       116089    -21.74%
BenchmarkMegEmptyMap                         4            3    -12.72%
BenchmarkSmallStrMap                        22           22     -0.89%
BenchmarkMapStringKeysEight_32              42           23    -43.71%
BenchmarkMapStringKeysEight_64              55           23    -56.96%
BenchmarkMapStringKeysEight_1M          279688           24    -99.99%
BenchmarkIntMap                             16           15    -10.18%
BenchmarkRepeatedLookupStrMapKey32          40           37     -8.15%
BenchmarkRepeatedLookupStrMapKey1M      287918       272980     -5.19%
BenchmarkNewEmptyMap                       156          130    -16.67%

R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/7641057
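The length-prefilter idea can be sketched as a standalone function (`lookupNoHash` and `smallMapLimit` are hypothetical names; the real code operates on the map's single bucket):

```go
package main

import "fmt"

const smallMapLimit = 8 // beyond this, hash as usual

// lookupNoHash tries to resolve a lookup in a tiny string-keyed map
// without hashing: scan the stored keys' lengths, and if exactly one key
// has the right length, a single equality check decides the lookup.
// A false result here means "not found or must fall back to hashing";
// the real code distinguishes the two.
func lookupNoHash(keys []string, vals []int, k string) (int, bool) {
	if len(keys) > smallMapLimit {
		return 0, false // too big: caller takes the hashed path
	}
	match := -1
	for i, e := range keys {
		if len(e) == len(k) {
			if match >= 0 {
				return 0, false // two same-length keys: fall back to hashing
			}
			match = i
		}
	}
	if match >= 0 && keys[match] == k {
		return vals[match], true
	}
	return 0, false
}

func main() {
	keys := []string{"go", "gopher", "runtime"}
	vals := []int{1, 2, 3}
	fmt.Println(lookupNoHash(keys, vals, "gopher")) // 2 true
}
```

This is why the BenchmarkMapStringKeysEight_1M case collapses from ~280000ns to 24ns: a megabyte-long lookup key is rejected by its length alone, without ever being hashed.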
2013-04-02  runtime: Implement faster equals for strings and bytes.  (Keith Randall)

(amd64)
benchmark           old ns/op    new ns/op    delta
BenchmarkEqual0            16            6    -63.15%
BenchmarkEqual9            22            7    -65.37%
BenchmarkEqual32           36            9    -74.91%
BenchmarkEqual4K         2187          120    -94.51%

benchmark           old MB/s     new MB/s     speedup
BenchmarkEqual9       392.22      1134.38        2.89x
BenchmarkEqual32      866.72      3457.39        3.99x
BenchmarkEqual4K     1872.73     33998.87       18.15x

(386)
benchmark           old ns/op    new ns/op    delta
BenchmarkEqual0            16            5    -63.85%
BenchmarkEqual9            22            7    -67.84%
BenchmarkEqual32           34           12    -64.94%
BenchmarkEqual4K         2196          113    -94.85%

benchmark           old MB/s     new MB/s     speedup
BenchmarkEqual9       405.81      1260.18        3.11x
BenchmarkEqual32      919.55      2631.21        2.86x
BenchmarkEqual4K     1864.85     36072.54       19.34x

Update #3751

R=bradfitz, r, khr, dave, remyoudompheng, fullung, minux.ma, ality
CC=golang-dev
https://golang.org/cl/8056043
2013-04-01  runtime: make map reads multithreaded safe.  (Keith Randall)

Doing grow work on reads is not multithreaded safe. Changed code to do grow work only on inserts & deletes. This is a short-term fix; eventually we'll want to do grow work in parallel to recover the space of the old table.

Fixes #5120.

R=bradfitz, khr
CC=golang-dev
https://golang.org/cl/8242043
2013-03-29  runtime: fix gdb printing of maps  (Keith Randall)

Fixes #5098.

R=minux.ma, bradfitz, khr, rsc
CC=golang-dev
https://golang.org/cl/7746045
2013-03-27  runtime: allocate maps' first bucket table lazily  (Brad Fitzpatrick)

Motivated by garbage profiling in HTTP benchmarks. This change means new empty maps are just one small allocation (the HMap) instead of the HMap + the relatively larger h->buckets allocation. This helps maps which remain empty throughout their life.

benchmark               old ns/op    new ns/op    delta
BenchmarkNewEmptyMap          196          107    -45.41%

benchmark               old allocs   new allocs   delta
BenchmarkNewEmptyMap             2            1    -50.00%

benchmark               old bytes    new bytes    delta
BenchmarkNewEmptyMap           195           50    -74.36%

R=khr, golang-dev, r
CC=golang-dev
https://golang.org/cl/7722046
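Lazy bucket allocation can be sketched with a toy single-bucket map (`tinyMap` is a hypothetical type; overflow chaining and hashing are omitted, and keys are assumed non-empty):

```go
package main

import "fmt"

const bucketCnt = 8

type bucket struct {
	keys [bucketCnt]string
	vals [bucketCnt]int
}

// tinyMap allocates its bucket array only on the first insert, so a map
// that stays empty for its whole life costs just the header allocation.
type tinyMap struct {
	buckets []bucket // nil until the first write
}

func (m *tinyMap) set(k string, v int) {
	if m.buckets == nil {
		m.buckets = make([]bucket, 1) // the lazy allocation
	}
	b := &m.buckets[0]
	for i := 0; i < bucketCnt; i++ {
		if b.keys[i] == "" || b.keys[i] == k {
			b.keys[i], b.vals[i] = k, v
			return
		}
	}
	// A real map would chain an overflow bucket or grow here.
}

func (m *tinyMap) get(k string) (int, bool) {
	if m.buckets == nil {
		return 0, false // empty map: nothing allocated, nothing found
	}
	b := &m.buckets[0]
	for i := 0; i < bucketCnt; i++ {
		if b.keys[i] == k {
			return b.vals[i], true
		}
	}
	return 0, false
}

func main() {
	var m tinyMap
	fmt.Println(m.get("a")) // 0 false -- no buckets ever allocated
	m.set("a", 1)
	fmt.Println(m.get("a")) // 1 true
}
```

The read path just has to tolerate a nil bucket pointer, which is the entire cost of the optimization.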
2013-03-26  reflect: add garbage collection info in ChanOf, MapOf, PtrTo, SliceOf  (Russ Cox)

ArrayOf will remain unexported (and therefore unused) for Go 1.1.

Fixes #4375.

R=0xe2.0x9a.0x9b, r, iant
CC=golang-dev
https://golang.org/cl/7716048
2013-03-25  runtime: optionally check all allocations in hashmap.c  (Jan Ziak)

Adds the new debugging constant 'checkgc'. If its value is non-zero all calls to mallocgc() from hashmap.c will start a garbage collection.

Fixes #5074.

R=golang-dev, khr
CC=golang-dev, rsc
https://golang.org/cl/7663051
2013-03-20  runtime: free map structures more aggressively  (Keith Randall)

R=rsc, bradfitz, khr
CC=golang-dev
https://golang.org/cl/7849047
2013-03-20  runtime: faster hashmap implementation.  (Keith Randall)

Hashtable is arranged as an array of 8-entry buckets with chained overflow. Each bucket has 8 extra hash bits per key to provide quick lookup within a bucket. Table is grown incrementally.

Update #3885

Go time drops from 0.51s to 0.34s.

R=r, rsc, m3b, dave, bradfitz, khr, ugorji, remyoudompheng
CC=golang-dev
https://golang.org/cl/7504044
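The bucket layout and the "8 extra hash bits per key" lookup can be sketched in Go (field and function names are illustrative; keys here are fixed-size uint64 for simplicity, and the real code also reserves some tophash values as slot markers):

```go
package main

import "fmt"

const bucketCnt = 8 // entries per bucket, as described in the commit

// bucket sketches the described layout: one tophash byte per key for
// quick in-bucket filtering, then the keys and values, then a pointer
// chaining to an overflow bucket.
type bucket struct {
	tophash  [bucketCnt]uint8
	keys     [bucketCnt]uint64
	vals     [bucketCnt]uint64
	overflow *bucket
}

// get walks a bucket chain, using the per-key tophash byte to skip
// non-matching slots without touching the full key.
func get(b *bucket, hash uint64, key uint64) (uint64, bool) {
	top := uint8(hash >> 56) // the 8 extra hash bits stored per key
	for ; b != nil; b = b.overflow {
		for i := 0; i < bucketCnt; i++ {
			if b.tophash[i] != top {
				continue // quick reject: no full key comparison needed
			}
			if b.keys[i] == key {
				return b.vals[i], true
			}
		}
	}
	return 0, false
}

func main() {
	const hash = uint64(0xaa11223344556677)
	b := &bucket{}
	b.tophash[3] = uint8(hash >> 56)
	b.keys[3], b.vals[3] = 42, 99
	fmt.Println(get(b, hash, 42)) // 99 true
}
```

The tophash filter is what makes 8-entry buckets cheap: most slots are rejected on a one-byte compare, so a full key comparison happens only on a likely match.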
2013-03-19  runtime: prevent garbage collection during hashmap insertion  (Jan Ziak)

Inserting a key-value pair into a hashmap storing keys or values indirectly can cause the garbage collector to find the hashmap in an inconsistent state.

Fixes #5074.

R=golang-dev, minux.ma, rsc
CC=golang-dev
https://golang.org/cl/7913043
2013-02-08  runtime: precise garbage collection of hashmaps  (Jan Ziak)

R=golang-dev, rsc
CC=dave, dvyukov, golang-dev, minux.ma, remyoudompheng
https://golang.org/cl/7252047
2012-12-13  runtime: deletion on nil maps is a no-op now  (Shenghou Ma)

Fixes #4535.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6942044
2012-11-30  runtime: better stack traces in race reports  (Dmitriy Vyukov)

When a race happens inside of runtime (chan, slice, etc), currently reports contain only the user file:line. If the line contains a complex expression, it's difficult to figure out where exactly the race is. This change adds one more top frame with the exact runtime function (e.g. runtime.chansend, runtime.mapaccess).

R=golang-dev
CC=golang-dev
https://golang.org/cl/6851125
2012-10-21  runtime: store types of allocated objects  (Jan Ziak)

R=rsc
CC=golang-dev
https://golang.org/cl/6569057
2012-10-07  race: runtime changes  (Dmitriy Vyukov)

This is a part of a bigger change that adds data race detection feature: https://golang.org/cl/6456044

R=rsc
CC=gobot, golang-dev
https://golang.org/cl/6535050
2012-09-24  runtime: prepare for 64-bit ints  (Russ Cox)

This CL makes the runtime understand that the type of the len or cap of a map, slice, or string is 'int', not 'int32', and it is also careful to distinguish between function arguments and results of type 'int' vs type 'int32'.

In the runtime, the new typedefs 'intgo' and 'uintgo' refer to Go int and uint. The C types int and uint continue to be unavailable (cause intentional compile errors).

This CL does not change the meaning of int, but it should make the eventual change of the meaning of int on amd64 a bit smoother.

Update #2188.

R=iant, r, dave, remyoudompheng
CC=golang-dev
https://golang.org/cl/6551067
2012-06-24  runtime: detect hash map collision problems  (Russ Cox)

This can only happen if the hash function we're using is getting far more than its fair share of collisions, but that has happened to us repeatedly as we've expanded the allowed use cases for hash tables (issue 1544, issue 2609, issue 2630, issue 2883, issue 3695). Maybe this will help the next time we try something new.

R=golang-dev, r
CC=golang-dev
https://golang.org/cl/6306083
2012-05-29  runtime: replace runtime·rnd function with ROUND macro  (Russ Cox)

It's sad to introduce a new macro, but rnd shows up consistently in profiles, and the function call overwhelms the two arithmetic instructions it performs.

R=r
CC=golang-dev
https://golang.org/cl/6260051
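The "two arithmetic instructions" are an add and a mask. The same round-up-to-alignment computation, written as a Go function (the runtime's C macro is equivalent; `round` is an illustrative name):

```go
package main

import "fmt"

// round rounds n up to a multiple of align, which must be a power of two.
// (n + align - 1) clears the shortfall; &^ (align - 1) masks off the low
// bits -- one add, one and-not, no function call.
func round(n, align uintptr) uintptr {
	return (n + align - 1) &^ (align - 1)
}

func main() {
	fmt.Println(round(13, 8)) // 16
	fmt.Println(round(16, 8)) // 16: already aligned values are unchanged
	fmt.Println(round(1, 16)) // 16
}
```

Inlining this as a macro matters precisely because it is so small: the call/return overhead of a runtime·rnd function dwarfs the add-and-mask itself.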
2012-05-24  runtime: handle and test large map values  (Russ Cox)

This is from CL 5451105 but was dropped from that CL. See also CL 6137051. The only change compared to 5451105 is to check for h != nil in reflect·mapiterinit; allowing use of nil maps must have happened after that original CL.

Fixes #3573.

R=golang-dev, dave, r
CC=golang-dev
https://golang.org/cl/6215078
2012-01-31  runtime: use per-map hash seeds  (Damian Gryski)

This patch adds a hash seed to the Hmap struct. Each seed is initialized by runtime.fastrand1(). This is the first step of a solution to issue 2630. Fastrand1 still needs to be updated to provide us with actually random bits.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/5599046
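The point of a per-map seed is that the same key hashes differently in different maps, so an attacker cannot precompute one set of colliding keys that degrades every map in the process (the hash-flooding concern behind issue 2630). The standard library's hash/maphash package exposes the same seeded-hash idea and can illustrate it:

```go
package main

import (
	"fmt"
	"hash/maphash"
)

// sameSeedSameSum shows the deterministic half of the contract: with an
// identical seed, identical input always produces an identical hash.
func sameSeedSameSum(s string) bool {
	var a, b maphash.Hash
	b.SetSeed(a.Seed()) // a.Seed() lazily picks a random seed; copy it to b
	a.WriteString(s)
	b.WriteString(s)
	return a.Sum64() == b.Sum64()
}

// freshSeedsDiffer shows the randomized half: two independently seeded
// hashers almost certainly disagree on the same input.
func freshSeedsDiffer(s string) bool {
	var a, b maphash.Hash // each zero Hash picks its own random seed
	a.WriteString(s)
	b.WriteString(s)
	return a.Sum64() != b.Sum64()
}

func main() {
	fmt.Println(sameSeedSameSum("hello"))  // true: same seed => same hash
	fmt.Println(freshSeedsDiffer("hello")) // almost certainly true
}
```

A per-Hmap seed gives each map its own bucket distribution while keeping lookups within that map deterministic, which is exactly what this patch wires into the Hmap struct.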
2011-12-05  runtime: prep for type-specific algorithms  (Russ Cox)

Equality on structs will require arbitrary code for type equality, so change algorithm in type data from uint8 to table pointer. In the process, trim top-level map structure from 104/80 bytes (64-bit/32-bit) to 24/12.

Equality on structs will require being able to call code generated by the Go compiler, and C code has no way to access Go return values, so change the hash and equal algorithm functions to take a pointer to a result instead of returning the result.

R=ken
CC=golang-dev
https://golang.org/cl/5453043
2011-11-11  gc: remove m[k] = x, false  (Russ Cox)

R=ken2
CC=golang-dev
https://golang.org/cl/5376076