Binary search vs binary search tree

我寻月下人不归 2021-01-31 03:16

What is the benefit of a binary search tree over a sorted array with binary search? Just with mathematical analysis I do not see a difference, so I assume there must be a difference in the low-level implementation.

4 Answers
  •  刺人心 (OP)  2021-01-31 04:19

    Adding to @Blindy, I would say that inserting into a sorted array costs more in the O(n) memory operation (std::rotate()) than in the O(log n) CPU instructions needed to find the insertion point; compare insertion sort.

        #include <algorithm>
        #include <vector>

        std::vector<int> sorted_array;

        // ... ...

        // append x at the end
        sorted_array.push_back(x);

        // O(log n) CPU operation: find where x belongs
        // (exclude the element just appended from the search range)
        auto insertion_point = std::lower_bound(sorted_array.begin(),
                                                sorted_array.end() - 1, x);

        // O(n) memory operation: shift the tail right by one slot,
        // moving x into its sorted position
        std::rotate(insertion_point, sorted_array.end() - 1, sorted_array.end());
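
    For contrast, here is a minimal sketch (values and container choices are purely illustrative) of why a balanced binary search tree such as std::set avoids that O(n) shift: insertion only relinks a few pointers, at the cost of one heap allocation per node, which is less cache-friendly.

        #include <algorithm>
        #include <set>
        #include <vector>

        int main() {
            int x = 4;

            // Sorted vector: O(log n) search + O(n) element shifting per insert.
            std::vector<int> sorted_array{1, 3, 5, 7};
            sorted_array.insert(
                std::lower_bound(sorted_array.begin(), sorted_array.end(), x), x);

            // Balanced BST (std::set is typically a red-black tree):
            // O(log n) search plus a few pointer relinks, no mass shifting,
            // but every node is a separate allocation scattered across memory.
            std::set<int> tree{1, 3, 5, 7};
            tree.insert(x);
        }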

    I guess a left-child right-sibling tree combines the essence of a binary tree and a sorted array; the comparison table below sums it up, and a node sketch follows the table.

    | Data structure | Operation | CPU cost | Memory operation cost |
    | --- | --- | --- | --- |
    | Sorted array | insert | O(log n) (benefits from pipelining) | O(n) memory operation; refer to insertion sort using std::rotate() |
    | Sorted array | search | O(log n); benefits from an inlined implementation | |
    | Sorted array | delete | O(log n) (when pipelined with the memory operation) | O(n) memory operation; refer to std::vector::erase() |
    | Balanced binary tree | insert | O(log n) (branch mispredictions hurt pipelining; added cost of tree rotations) | Additional cost of per-node pointers that exhaust the cache |
    | Balanced binary tree | search | O(log n) | |
    | Balanced binary tree | delete | O(log n) (same as insert) | |
    | Left-child right-sibling tree (combines sorted array and binary tree) | insert | O(log n) on average | No std::rotate() needed when inserting into a left child, if kept unbalanced |
    | Left-child right-sibling tree | search | O(log n) (worst case O(n) when unbalanced) | Takes advantage of cache locality when scanning right siblings; refer to std::lower_bound() |
    | Left-child right-sibling tree | delete | O(log n) (when hyperthreading/pipelining) | O(n) memory operation; refer to std::vector::erase() |
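
    For reference, a minimal sketch of a generic left-child right-sibling node layout (an illustration only, not necessarily the exact variant assumed in the table): each node keeps one link to its first child and one to its next sibling, so a tree of arbitrary arity needs just two pointers per node; the cache-locality point above presumably relies on siblings being stored contiguously.

        // Generic left-child right-sibling node (illustrative).
        template <typename T>
        struct LcrsNode {
            T value{};
            LcrsNode* first_child  = nullptr;  // one level down
            LcrsNode* next_sibling = nullptr;  // along the same level
        };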
