Understanding time complexity calculation for Dijkstra's algorithm

Asked by 囚心锁ツ on 2020-12-12 11:17

As per my understanding, I have calculated the time complexity of Dijkstra's algorithm in big-O notation using the adjacency list given below. It didn't come out as it was supposed to.

5 Answers
  • 2020-12-12 11:49

    Adding a more detailed explanation as I understood it, just in case:

    • O(for each vertex using min heap: for each edge linearly: push the vertices that the edge points to onto the min heap)
    • V = number of vertices
    • O(V * (pop vertex from min heap + find unvisited vertices among edges * push them onto min heap))
    • E = maximum number of edges per vertex
    • O(V * (pop vertex from min heap + E * push unvisited vertices onto min heap)). Note that we can push the same node multiple times here before we get to "visit" it.
    • O(V * (log(heap size) + E * log(heap size)))
    • O(V * ((E + 1) * log(heap size)))
    • O(V * (E * log(heap size)))
    • E = V, because each vertex can reference all other vertices
    • O(V * (V * log(heap size)))
    • O(V^2 * log(heap size))
    • heap size is V^2, because we push every time we want to update a distance, and each vertex's distance can be improved up to V times. E.g. for the last vertex, the 1st vertex gives it distance 10, the 2nd gives 9, the 3rd gives 8, and so on, so we push each time to update
    • O(V^2 * log(V^2))
    • O(V^2 * 2 * log(V))
    • O(V^2 * log(V))
    • V^2 is also the total number of edges, so if we let E = V^2 (as in the official naming), we get O(E * log(V))
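    The derivation above matches the common "lazy deletion" implementation, where stale heap entries are skipped instead of decreased in place. A minimal sketch in Python (function and variable names are illustrative):

    ```python
    import heapq

    def dijkstra(adj, source):
        """Lazy-deletion Dijkstra. adj maps vertex -> list of (neighbor, weight).
        Each edge relaxation pushes at most one entry, so the heap holds
        O(E) = O(V^2) items and every push/pop costs O(log V^2) = O(log V)."""
        dist = {source: 0}
        heap = [(0, source)]                       # (distance, vertex)
        visited = set()
        while heap:
            d, u = heapq.heappop(heap)             # O(log heap size)
            if u in visited:                       # stale entry pushed earlier
                continue
            visited.add(u)
            for v, w in adj.get(u, []):            # runs E times in total
                nd = d + w
                if v not in visited and nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))  # O(log heap size)
        return dist
    ```

    The `if u in visited: continue` line is where the "push the same node multiple times" cost from the bullet list shows up: duplicates are popped and discarded rather than updated.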
  • 2020-12-12 11:52

    In a dense (or complete) graph, E log V > V^2, so an adjacency list with a binary heap is not always best.

    In that case, I prefer to use an adjacency matrix and keep track of the minimum distance per row: only O(V^2) time is needed.

    When E < V^2 / log V (equivalently, when the average degree is below V / log V), or when the maximum number of edges per vertex is less than some constant K, use the binary heap instead.
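    The matrix variant described here can be sketched as follows (a hypothetical adjacency-matrix setup, with `INF` marking absent edges):

    ```python
    INF = float("inf")

    def dijkstra_dense(matrix, source):
        """Array-based Dijkstra on an adjacency matrix.
        Each of the V rounds does an O(V) linear scan for the minimum
        instead of a heap delete-min, so the total cost is O(V^2)
        regardless of how many edges the graph has."""
        n = len(matrix)
        dist = [INF] * n
        dist[source] = 0
        visited = [False] * n
        for _ in range(n):
            # O(V) scan replaces the heap's delete-min
            u = min((v for v in range(n) if not visited[v]),
                    key=lambda v: dist[v], default=None)
            if u is None or dist[u] == INF:
                break
            visited[u] = True
            for v in range(n):
                if matrix[u][v] != INF and dist[u] + matrix[u][v] < dist[v]:
                    dist[v] = dist[u] + matrix[u][v]
        return dist
    ```

    On a complete graph the inner relaxation loop touches every vertex anyway, so the heap's extra log factor buys nothing.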

  • 2020-12-12 11:52

    Let's try to analyze the algorithm as given in the CLRS book.

    For the "for each" loop in line 7: for any vertex u, the number of times the loop runs equals the number of vertices adjacent to u. The number of adjacent vertices of a node is always less than or equal to the total number of edges in the graph.

    If we take V (because of the while loop in line 4) and E (because of the "for each" in line 7) and compute the complexity as VElog(V), that amounts to assuming every vertex has E edges incident on it; in reality a single vertex has at most E incident edges. (The case of E adjacent vertices for a single vertex occurs for the internal vertex of a star graph.)
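    The key point, that the line-7 loop's total work across the whole run is the sum of all adjacency-list lengths rather than V times E, can be checked directly (toy graphs are illustrative):

    ```python
    def inner_loop_iterations(adj):
        """Total iterations of the CLRS line-7 loop over a complete run:
        each vertex is extracted from the queue exactly once and its
        adjacency list is scanned exactly once, so the sum equals the
        number of edge entries (E for directed, 2E for undirected)."""
        return sum(len(neighbors) for neighbors in adj.values())

    # Directed star graph: center 0 points at leaves 1..4.
    star = {0: [1, 2, 3, 4], 1: [], 2: [], 3: [], 4: []}
    # The center alone accounts for all 4 edge scans; the leaves add none,
    # so the total is E = 4, not V * E = 20.
    ```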

  • 2020-12-12 11:59

    Dijkstra's shortest path algorithm is O(ElogV) where:

    • V is the number of vertices
    • E is the total number of edges

    Your analysis is correct, but your symbols have different meanings! You say the algorithm is O(VElogV) where:

    • V is the number of vertices
    • E is the maximum number of edges attached to a single node.

    Let's rename your E to N. So one analysis says O(ElogV) and the other says O(VNlogV). Both are correct, and in fact E = O(VN). The difference is that ElogV is the tighter estimate.

  • 2020-12-12 12:04

    Let n be the number of vertices and m be the number of edges.

    Since Dijkstra's algorithm performs O(n) delete-mins and O(m) decrease_keys, each costing O(log n), the total run time using binary heaps is O((m + n) log n). It is entirely possible to amortize the cost of decrease_key down to O(1) using Fibonacci heaps, giving a total run time of O(n log n + m), but in practice this is often not done: the constant-factor penalties of FHs are pretty big, and on random graphs the number of decrease_keys is far below its upper bound (more in the range of O(n log(m/n)), which is much better on sparse graphs where m = O(n)). So always be aware that the total run time depends both on your data structures and on the input class.
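    One way to see the gap between the O(m) decrease_key bound and actual behavior is to instrument a lazy binary-heap run, where each heap push stands in for one decrease_key (a hypothetical helper on a deterministic toy graph):

    ```python
    import heapq

    def dijkstra_count_pushes(adj, source):
        """Lazy-deletion Dijkstra that also reports how many heap pushes
        (a proxy for decrease_key operations) occurred, alongside the
        O(m) worst-case bound for comparison."""
        dist = {source: 0}
        heap = [(0, source)]
        visited = set()
        pushes = 0
        while heap:
            d, u = heapq.heappop(heap)
            if u in visited:
                continue
            visited.add(u)
            for v, w in adj.get(u, []):
                if v not in visited and d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
                    pushes += 1
        m = sum(len(edges) for edges in adj.values())
        return dist, pushes, m
    ```

    On real inputs `pushes` is usually well below `m`, which is why the binary heap often holds its own against the asymptotically better Fibonacci heap.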
