It depends on the hash table and how it handles collisions. For example, assume that each slot of our hash table points to a list of the elements that hash to it (chaining).
If the hashing distributes the elements sufficiently uniformly, the average cost of a lookup depends only on the average number of elements per list, i.e. the load factor. With n elements stored in a table of m slots, the average list length is n/m, as sketched below.
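As a rough illustration (the class, its slot count, and the sample keys are my own, not part of the exercise), here is a minimal chained hash table in Python: each slot holds a list, and a lookup only scans the one list its key hashes to, so the expected cost is proportional to the load factor n/m.

```python
class ChainedHashTable:
    """Minimal hash table with chaining: each slot holds a list of keys."""

    def __init__(self, num_slots=8):
        self.num_slots = num_slots                     # m
        self.num_items = 0                             # n
        self.slots = [[] for _ in range(num_slots)]

    def _slot_for(self, key):
        return hash(key) % self.num_slots

    def insert(self, key):
        chain = self.slots[self._slot_for(key)]
        if key not in chain:                           # avoid duplicates
            chain.append(key)
            self.num_items += 1

    def contains(self, key):
        # Only the one chain the key hashes to is scanned,
        # so the expected cost is about the load factor n/m.
        return key in self.slots[self._slot_for(key)]

    def load_factor(self):
        return self.num_items / self.num_slots


table = ChainedHashTable(num_slots=16)
for v in [3, 7, 19, 42]:
    table.insert(v)
print(table.contains(19), table.load_factor())         # True 0.25
```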
- The expected time to determine whether an edge is in the graph is O(1 + n/m), which is O(1) on average if the load factor n/m is kept constant.
- Disadvantages: it needs more space than a plain adjacency list and has a worse query time than an adjacency matrix. If the hash tables support dynamic resizing, we pay extra time to move elements from the old table to the new one whenever a table grows; if they do not, each vertex needs a table of O(n) slots to guarantee O(1) expected query time, which is O(n^2) space in total. Also, the O(1) bound above holds only in expectation: in the worst case all of u's neighbors collide and a query scans a whole list, taking O(degree(u)) time just like an adjacency list (the second sketch after this list demonstrates this). So if we are going to spend O(n^2) space anyway, an adjacency matrix gives a deterministic O(1) query time instead.
- See the disadvantages listed in the previous point.
- Yes. For example, if we know that every vertex of the graph has at most d adjacent vertices, with d smaller than n, then the hash-table representation needs only O(nd) space instead of the O(n^2) of an adjacency matrix, while keeping the expected O(1) query time (see the first sketch below).
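A rough sketch of that bounded-degree case, with Python sets standing in for the per-vertex hash tables (the vertex count and the edges are made up for illustration): the total storage is proportional to the number of stored neighbor entries, at most n*d, while an adjacency matrix would always allocate n*n cells, and edge queries stay expected O(1).

```python
def build_hash_adjacency(n, edges):
    """Adjacency structure: one hash set of neighbors per vertex."""
    adj = {u: set() for u in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj


def has_edge(adj, u, v):
    # Hash-set membership test: expected O(1) under uniform hashing.
    return v in adj[u]


n = 6
edges = [(0, 1), (0, 2), (1, 3), (2, 4), (4, 5)]   # every vertex has degree <= 2 here (d = 2)
adj = build_hash_adjacency(n, edges)

stored_entries = sum(len(neighbors) for neighbors in adj.values())
print(has_edge(adj, 0, 2))                          # True
print(has_edge(adj, 3, 5))                          # False
print(stored_entries, "entries vs", n * n, "matrix cells")   # 10 entries vs 36 matrix cells
```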
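And to make the worst-case point from the disadvantages concrete, a small made-up demonstration: Python's built-in set uses open addressing rather than the chaining described above, but the degradation is the same. If every key collides (here by giving them a constant hash), a membership test has to step past all d colliding entries, so the query time grows roughly linearly with d, i.e. O(degree(u)).

```python
import time


class BadKey:
    """Key with a constant hash: every key collides with every other."""

    def __init__(self, value):
        self.value = value

    def __hash__(self):
        return 0                                   # force total collision

    def __eq__(self, other):
        return isinstance(other, BadKey) and self.value == other.value


def lookup_time(d):
    neighbors = {BadKey(i) for i in range(d)}      # one vertex's hash table
    missing = BadKey(-1)
    start = time.perf_counter()
    for _ in range(1000):
        _ = missing in neighbors                   # steps past all d colliding keys
    return time.perf_counter() - start


for d in (100, 200, 400):
    print(d, round(lookup_time(d), 4))             # time grows roughly linearly with d
```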