Binary Search and Hashtable Search

谁说我不能喝 · Submitted 2019-12-05 17:53:21

(Pretty much as noted in comments.)

I suspect you're mostly seeing the effects of cache misses. When the collection is large, you'll get a lot of cache misses - particularly with the binary search, which potentially needs to touch a lot of points in the collection to find an element.

At the low sizes, I suspect you're seeing cache misses too, but this time on your targets list - and also the overhead of LINQ itself. LINQ is fast, but it can still be significant when all you're doing is performing a single search of a tiny collection in the middle.

I'd suggest rewriting your loops to something like:

{
    // Use the same seed each time for consistency. Doesn't have to be 0.
    Random random = new Random(0);
    watch.Start();
    int found = 0;
    for (int i = 0; i < 1000 * 1000; i++)
    {
        if (BinarySearch(t, random.Next(int.MaxValue)) != null)
        {
            found++;
        }
    }
    watch.Stop();
    Console.WriteLine(string.Format(
         "found {0} things out of {2} in {1} ms with binary search",
         found, watch.ElapsedMilliseconds, a.Length));
}

Of course you've then got the problem of including random number generation in the loop instead... you might want to look at using a random number generator which is faster than System.Random if you can find one. Or use some other way of determining which elements to look up.
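As one possible direction, here's a minimal xorshift generator. This is my own sketch, not part of the original answer: the class name and `Next()` shape are assumptions, shown only as one cheap, deterministic alternative to `System.Random` for benchmarking loops like the one above.

```csharp
using System;

// A minimal xorshift64 generator (Marsaglia's 13/7/17 variant).
// Hypothetical helper - not a drop-in replacement for System.Random,
// just fast enough that RNG cost stops dominating the benchmark.
class XorShiftRandom
{
    private ulong state;

    public XorShiftRandom(ulong seed)
    {
        // xorshift state must be non-zero; substitute a constant if it is.
        state = seed == 0 ? 0x9E3779B97F4A7C15UL : seed;
    }

    // Returns a non-negative int, roughly analogous to Random.Next(int.MaxValue).
    public int Next()
    {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        return (int)(state & 0x7FFFFFFF);
    }
}
```

Because it's seeded explicitly, repeated runs search for the same sequence of targets, which keeps timings comparable across the binary search and hashtable variants.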

Oh, and I'd personally rewrite the binary search to use iteration rather than recursion, but that's a different matter. I wouldn't expect it to have a significant effect.
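For what that iterative version might look like: the original `BinarySearch`'s exact signature isn't shown in the question, so this is a sketch of the iterative shape over a sorted `int[]`, not a drop-in replacement.

```csharp
using System;

static class Search
{
    // Iterative binary search over a sorted array; returns the index of
    // value, or -1 if it's absent. Same O(log n) comparisons as the
    // recursive form, but no call-stack overhead.
    public static int BinarySearch(int[] sorted, int value)
    {
        int low = 0, high = sorted.Length - 1;
        while (low <= high)
        {
            int mid = low + (high - low) / 2; // avoids overflow near int.MaxValue
            if (sorted[mid] == value) return mid;
            if (sorted[mid] < value) low = mid + 1;
            else high = mid - 1;
        }
        return -1;
    }
}
```

Note it still touches the same scattered memory locations as the recursive version, so the cache-miss behaviour discussed earlier is unchanged; only the function-call overhead goes away.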
