Consider the following snippet:
use strict;
use warnings;
my %a = ( a => 1,
          b => 2,
          c => 'cucu',
          d => undef,
        );
The number of used buckets starts out approximately equal to the number of keys; the number of allocated buckets is consistently the smallest power of 2 greater than the number of keys. A hash with 5 keys reports 5/8. With larger numbers of keys the used count grows more slowly because of collisions: a hash %h built from the list (1..128), i.e. 64 key/value pairs, reports 50/128, since some of the 64 keys land in the same bucket. (This used/allocated string is what scalar %h returned before Perl 5.26; from 5.26 on, scalar %h returns the key count instead.)
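A minimal sketch of this, assuming a pre-5.26 perl (on 5.26 and later the same print gives the key count, and the used/allocated string is available from the core Hash::Util module instead):

```perl
use strict;
use warnings;

# Build a hash from 64 key/value pairs.
my %h = (1 .. 128);

# Pre-5.26: prints "used/allocated" buckets, e.g. something like 50/128.
# 5.26 and later: prints the key count, 64.
print scalar(%h), "\n";

# keys() in scalar context gives the key count on any Perl.
print scalar(keys %h), "\n";    # 64
```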
However, once the hash has allocated its buckets, they remain allocated even if you shrink the hash. I just made a hash %h with 9 pairs, giving a scalar value of 9/16; when I reassigned %h to hold just one pair, its scalar value was 1/16.
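The shrink behaviour can be sketched like this (again the used/allocated comments assume a pre-5.26 perl; the key-count assertions hold on any version):

```perl
use strict;
use warnings;

# 9 pairs; the hash allocates 16 buckets (smallest power of 2 > 9).
my %h = map { $_ => 1 } 1 .. 9;
print scalar(%h), "\n";   # pre-5.26: e.g. "9/16"; 5.26+: 9

# Reassignment empties the hash but does not release the buckets.
%h = (solo => 1);
print scalar(%h), "\n";   # pre-5.26: "1/16", not "1/8"; 5.26+: 1
```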
This actually makes sense in that it lets you test whether the hash is empty, much as a simple array in scalar context lets you test its size: an empty hash evaluates to 0 (false), and a non-empty one to something true, whether that is "1/16" or a key count.
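Because both the old-style "1/16" string and the newer key count are true exactly when the hash holds at least one pair, the emptiness test works the same way on any Perl version:

```perl
use strict;
use warnings;

my %h;
print %h ? "non-empty\n" : "empty\n";    # empty hash is false in boolean context

$h{key} = 'value';
print %h ? "non-empty\n" : "empty\n";    # any pair makes the hash true
```

This prints "empty" and then "non-empty" regardless of what scalar %h happens to stringify to.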