A map[byte]byte{0:10} should use at least 2 bytes per entry, one for the key and one for the value. But as with any hashmap implementation, there is also a hidden cost per item. What is the memory overhead per map entry?
The overhead per map entry is not a constant value: it depends on the current ratio of entries to buckets in the map.
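One way to see this directly is to compare heap usage before and after filling maps of different sizes. A minimal sketch (bytesPerEntry is just an illustrative helper; heap measurements via runtime.ReadMemStats are approximate and can be skewed by other allocations):

package main

import (
	"fmt"
	"runtime"
)

// bytesPerEntry fills a map[int64]struct{} with n entries and reports the
// approximate heap bytes consumed per entry.
func bytesPerEntry(n int) float64 {
	var before, after runtime.MemStats
	runtime.GC()
	runtime.ReadMemStats(&before)

	m := make(map[int64]struct{})
	for i := 0; i < n; i++ {
		m[int64(i)] = struct{}{}
	}

	runtime.ReadMemStats(&after)
	perEntry := float64(after.HeapAlloc-before.HeapAlloc) / float64(n)
	runtime.KeepAlive(m) // keep the map alive until after the measurement
	return perEntry
}

func main() {
	for _, n := range []int{1000, 10000, 100000, 1000000} {
		fmt.Printf("%8d entries: ~%.1f B/entry\n", n, bytesPerEntry(n))
	}
}

The per-entry figure moves up and down as n changes, because at any given size the map may have just grown (many half-empty buckets) or be close to its load limit (densely packed buckets).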
There is a great article on the map internals: https://www.ardanlabs.com/blog/2013/12/macro-view-of-map-internals-in-go.html
The hash table for a Go map is structured as an array of buckets. The number of buckets is always equal to a power of 2.
...
How Maps Grow
As we continue to add or remove key/value pairs from the map, the efficiency of the map lookups begin to deteriorate. The load threshold values that determine when to grow the hash table are based on these four factors:
% overflow : Percentage of buckets which have an overflow bucket
bytes/entry : Number of overhead bytes used per key/value pair
hitprobe : Number of entries that need to be checked when looking up a key
missprobe : Number of entries that need to be checked when looking up an absent key
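The article describes each bucket as holding up to 8 key/value pairs: an array of hash "top" bytes for quick scanning, then the 8 keys packed together, then the 8 values, plus a pointer to an overflow bucket. As a rough illustration only (the real runtime generates the bucket layout dynamically and does not expose a type like this), a bucket for a map[int64]struct{} looks roughly like:

package main

import (
	"fmt"
	"unsafe"
)

// Simplified picture of one bucket for a map[int64]struct{}; illustration only.
type bucket struct {
	tophash  [8]uint8    // top byte of each stored key's hash, for quick scans
	keys     [8]int64    // up to 8 keys stored inline
	values   [8]struct{} // up to 8 values stored inline (zero-sized here)
	overflow *bucket     // chained overflow bucket once all 8 slots are full
}

func main() {
	// Everything except the keys and values themselves is shared overhead,
	// amortized over however many of the 8 slots are actually occupied.
	fmt.Printf("one bucket: %d bytes for up to 8 entries\n", unsafe.Sizeof(bucket{}))
}

How full those buckets are on average is exactly what drives the bytes/entry overhead above.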
For example, a very simple benchmark can show a dramatic increase in overhead per entry when the number of entries is increased by just one:
package maps_test

import "testing"

func Benchmark(b *testing.B) {
	m := make(map[int64]struct{})
	// ResetTimer also resets the allocation counters reported by -benchmem.
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		m[int64(i)] = struct{}{}
	}
}
Benching with 106496 entries:
go test -bench . -benchtime 106496x -benchmem
Benchmark-2 106495 65.7 ns/op 31 B/op 0 allocs/op
i.e. 31 bytes per entry
Now increase the number of entries by one:
go test -bench . -benchtime 106497x -benchmem
Benchmark-2 106497 65.7 ns/op 57 B/op 0 allocs/op
i.e. 57 bytes per entry
Increasing the number of entries by one caused the number of underlying buckets to double, which added extra overhead per entry. The overhead will shrink again as more entries are added, until the bucket count doubles once more.
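These particular numbers line up with the growth rule in the bucket-based map implementation: the bucket count is always a power of two, and the map grows once it holds more than an average of roughly 6.5 entries per bucket. 6.5 × 2^14 = 106496, so 16384 buckets are just enough for 106496 entries; one more entry pushes the average over the threshold and the map grows to 2^15 = 32768 buckets. A rough sketch of that calculation (bucketsFor is just an illustrative helper that approximates the growth rule, not a runtime API):

package main

import "fmt"

// bucketsFor returns the power-of-two bucket count needed to keep n entries
// at or below roughly 6.5 entries per bucket. The real runtime also uses
// overflow buckets and grows incrementally, so this is only an approximation.
func bucketsFor(n int) int {
	b := 1
	for float64(n) > 6.5*float64(b) {
		b *= 2
	}
	return b
}

func main() {
	fmt.Println(bucketsFor(106496)) // 16384 (2^14)
	fmt.Println(bucketsFor(106497)) // 32768 (2^15)
}

That newly doubled bucket array is paid for by essentially the same number of entries, which is why the reported bytes per entry jump from 31 to 57 and will drift back down as the extra bucket space fills up.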