complexity-theory

Given a set of points, find if any three of the points are collinear

What is the best algorithm to find whether any three points in a set of n points are collinear? Please also explain the complexity if it is not trivial. Thanks, Bala

If you can come up with a better-than-O(N^2) algorithm, you can publish it! This problem is 3SUM-hard, and whether a sub-quadratic algorithm (i.e. better than O(N^2)) exists for it is an open problem. Many common computational geometry problems (including yours) have been shown to be 3SUM-hard, and this class of problems is growing. Like NP-hardness, the concept of 3SUM-hardness has proven useful in proving the 'toughness' of some…
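As a concrete illustration of the quadratic baseline the answer refers to (not code from the thread), here is a sketch in Python; the function name, the integer-coordinate assumption, and the tuple point format are illustrative choices:

    from math import gcd

    def has_collinear_triple(points):
        # For each anchor point, reduce the direction to every other point to a
        # canonical (dx, dy) pair; seeing the same direction twice means three
        # points share a line through the anchor.  Roughly O(N^2) expected time
        # with a hash set, which is exactly the 3SUM-hard barrier discussed above.
        n = len(points)
        for i in range(n):
            x0, y0 = points[i]
            seen = set()
            for j in range(n):
                if j == i:
                    continue
                dx, dy = points[j][0] - x0, points[j][1] - y0
                if dx == 0 and dy == 0:
                    continue                    # duplicate point; skip it
                g = gcd(dx, dy)                 # math.gcd ignores signs, never 0 here
                dx, dy = dx // g, dy // g
                if dx < 0 or (dx == 0 and dy < 0):
                    dx, dy = -dx, -dy           # opposite ray, same line -> same key
                if (dx, dy) in seen:
                    return True
                seen.add((dx, dy))
        return False

    print(has_collinear_triple([(0, 0), (1, 1), (2, 3), (4, 4)]))   # True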

time complexity of unshift() vs. push() in Javascript

I know what is the difference between unshift() and push() methods in Javascript, but I'm wondering what is the difference in time complexity? I suppose for push() method is O(1) because you're just adding an item to the end of array, but I'm not sure for unshift() method, because, I suppose you must "move" all the other existing elements forward and I suppose that is O(log n) or O(n)? The JavaScript language spec does not mandate the time complexity of these functions, as far as I know. It is certainly possible to implement an array-like data structure (O(1) random access) with O(1) push and
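To make the "it depends on the underlying data structure" point concrete, here is a rough sketch using Python rather than JavaScript (my substitution, since the thread is about JS engines): an array-backed list behaves like a typical JS array for push/unshift, while a deque shows that front insertion can also be O(1) with a different structure.

    from collections import deque
    from timeit import timeit

    n = 50_000

    # Array-backed list: appending at the end is amortized O(1);
    # inserting at the front shifts every element, so each call is O(n).
    push_like    = timeit('xs.append(0)',    setup='xs = []', number=n)
    unshift_like = timeit('xs.insert(0, 0)', setup='xs = []', number=n)

    # A deque allows O(1) insertion at both ends: the cost comes from the
    # data structure, not from the name of the operation.
    front = timeit('xs.appendleft(0)',
                   setup='from collections import deque; xs = deque()', number=n)

    print(f'append (push-like):       {push_like:.4f}s')
    print(f'insert(0) (unshift-like): {unshift_like:.4f}s')
    print(f'deque.appendleft:         {front:.4f}s')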

how to determine if the kth largest element of the heap is greater than x

Consider a binary heap containing n numbers (the root stores the greatest number). You are given a positive integer k < n and a number x. You have to determine whether the kth largest element of the heap is greater than x or not. Your algorithm must take O(k) time, and you may use O(k) extra storage.

A simple DFS can do the job. Keep a counter set to zero. Start from the root and at each step check the value of the current node; if it is greater than x, increase the counter and continue the search into both child nodes. The algorithm terminates when the counter reaches k or when there are no more nodes with value greater than x to visit; the answer is "yes" exactly when the counter reached k.
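A minimal sketch of that counting DFS, assuming the heap is stored in an array with the children of index i at 2i+1 and 2i+2 (the function name and the example heap are mine):

    def kth_largest_exceeds(heap, k, x):
        # Only descend into nodes whose value is greater than x and stop as soon
        # as k of them have been seen, so at most O(k) nodes are touched.
        count = 0

        def dfs(i):
            nonlocal count
            if count >= k or i >= len(heap) or heap[i] <= x:
                return
            count += 1
            dfs(2 * i + 1)
            dfs(2 * i + 2)

        dfs(0)
        return count >= k

    # The 3rd largest element of this max-heap is 7: it exceeds 6 but not 8.
    h = [9, 8, 7, 4, 5, 6, 3]
    print(kth_largest_exceeds(h, 3, 6))   # True
    print(kth_largest_exceeds(h, 3, 8))   # False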

Time/Space complexity of PHP Array

Question: Is there a way or a resource for finding the time and space complexity of the array implementation in PHP, other than calculating it by hand?

An array in PHP is actually an ordered map. A map is a type that associates values with keys. This type is optimized for several different uses; it can be treated as an array, list (vector), hash table (an implementation of a map), dictionary, collection, stack, queue, and probably more. As array values can be other arrays, trees and multidimensional arrays are also possible.
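As a rough analogy only (Python's insertion-ordered dict, not PHP's internal hashtable), this is the kind of behaviour an "ordered map" gives you: average O(1) access by key, mixed key types, and preserved insertion order, at the cost of extra memory per entry.

    # Rough analogy: an insertion-ordered hash map, similar in spirit to PHP's array type.
    a = {}
    a[0] = 'zero'          # used like a numerically indexed array
    a['name'] = 'alice'    # ...or like a dictionary, mixing key types
    a[1] = 'one'

    print(list(a.keys()))  # [0, 'name', 1] -- insertion order is preserved
    print(a['name'])       # average O(1) hash lookup

    last_key, last_value = a.popitem()   # stack-like: removes the most recent entry
    print(last_key, last_value)          # 1 one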

Computational complexity of base conversion

What is the complexity of converting a very large n-bit number to a decimal representation? My thought is that the elementary algorithm of repeated integer division, taking the remainder to get each digit, would have O(M(n) log n) complexity, where M(n) is the complexity of the multiplication algorithm; however, each division is not between two n-bit numbers but between one n-bit number and a small constant, so it seems to me the complexity could be smaller.

Naive base conversion as you described takes quadratic time; you do about n bigint-by-smallint divisions, most of which take time linear in the size of the bigint being divided…
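A sketch of the naive method being discussed (names are illustrative): each digit costs one bigint-by-smallint division, and since the bigint shrinks only slowly, the total work is quadratic in the bit length.

    def to_decimal_digits(n):
        # Repeated division by 10: about n * log10(2) passes over an integer
        # whose length is O(n) bits, so the loop as a whole is O(n^2).
        if n == 0:
            return '0'
        digits = []
        while n > 0:
            n, r = divmod(n, 10)       # one bigint-by-smallint division per digit
            digits.append(str(r))
        return ''.join(reversed(digits))

    print(to_decimal_digits(2**64))    # 18446744073709551616

Splitting the number in half recursively (divide and conquer) is how sub-quadratic conversion, roughly O(M(n) log n), is usually achieved.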

Python heapq vs. sorted complexity and performance

Question: I'm relatively new to Python (using v3.x syntax) and would appreciate notes regarding the complexity and performance of heapq vs. sorted. I've already implemented a heapq-based solution for a greedy 'find the best job schedule' algorithm, but then I learned about the possibility of using sorted together with operator.itemgetter() and reverse=True. Sadly, I could not find any explanation of the expected complexity and/or performance of sorted vs. heapq.

Answer 1: If you use a binary heap to pop all elements in order, you are essentially doing a heapsort…
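To make the trade-off concrete, a small sketch (the job data and k = 5 are made up): a full sort costs O(n log n), while a heap lets you take only the k best items in about O(n + k log n), which wins when k is much smaller than n.

    import heapq
    import random

    jobs = [(random.random(), i) for i in range(10_000)]        # (profit, job id)

    # Full sort: O(n log n), gives the entire ordering at once.
    best_by_sort = sorted(jobs, key=lambda j: j[0], reverse=True)[:5]

    # Heap: heapify is O(n) and each pop is O(log n), so the 5 best items cost
    # O(n + 5 log n).  Negate profits to use the min-heap as a max-heap.
    heap = [(-profit, i) for profit, i in jobs]
    heapq.heapify(heap)
    best_by_heap = [heapq.heappop(heap) for _ in range(5)]

    # heapq.nlargest wraps the same idea in a single call.
    best_builtin = heapq.nlargest(5, jobs, key=lambda j: j[0])

    print([job_id for _, job_id in best_by_sort])
    print([job_id for _, job_id in best_by_heap])
    print([job_id for _, job_id in best_builtin])               # same ids (ties aside)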

Time complexity of find() in std::map?

Question: How efficient is the find() function on the std::map class? Does it iterate through all the elements looking for the key, so that it's O(n), or is it a balanced tree, or does it use a hash function, or what?

Answer 1: O(log n). It is based on a red-black tree. Edit: n is of course the number of members in the map.

Answer 2: std::map and std::set are implemented by compiler vendors using highly balanced binary search trees (e.g. red-black tree, AVL tree). As correctly pointed out by David, find would take O(log n) time…
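A Python analogy rather than the C++ containers themselves: searching ordered data takes about log2(n) comparisons, which is what a balanced tree gives std::map::find, whereas a hash table averages O(1) per lookup but keeps no order.

    import bisect

    keys = sorted([17, 3, 42, 8, 23, 15])     # ordered keys, as in a std::map
    hashed = set(keys)                        # hash table, as in std::unordered_map

    def map_like_find(sorted_keys, key):
        # Binary search: at most ~log2(n) comparisons, mirroring tree search.
        i = bisect.bisect_left(sorted_keys, key)
        return i if i < len(sorted_keys) and sorted_keys[i] == key else None

    print(map_like_find(keys, 23))            # 4 -- position of 23 in sorted order
    print(23 in hashed)                       # True; average O(1), no ordering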

Big O, what is the complexity of summing a series of n numbers?

Question: I always thought the complexity of 1 + 2 + 3 + ... + n is O(n), and summing two n by n matrices would be O(n^2). But today I read in a textbook, "by the formula for the sum of the first n integers, this is n(n+1)/2", and thus (1/2)n^2 + (1/2)n, and thus O(n^2). What am I missing here?

Answer 1: Big O notation can be used to describe the growth rate of any function. In this case, it seems the book is not talking about the time complexity of computing the value, but about the value of the sum itself, which grows as O(n^2)…
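A tiny sketch of the distinction the answer is drawing: the cost of computing the sum with a loop is O(n) additions (or O(1) with the closed form), but the value of the sum itself grows like n^2 / 2.

    def sum_by_loop(n):
        # O(n) additions: this is the *cost of computing* the sum.
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_by_formula(n):
        # O(1) arithmetic: the same value, via n(n+1)/2.
        return n * (n + 1) // 2

    n = 1000
    assert sum_by_loop(n) == sum_by_formula(n) == 500_500
    print(sum_by_formula(n))   # 500500 -- the *value* grows roughly as n^2 / 2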

Why do we ignore co-efficients in Big O notation?

While searching for answers relating to "Big O" notation, I have seen many SO answers such as this, this, or this, but still I have not clearly understood some points. Why do we ignore the co-efficients? For example, this answer says that the final complexity of 2N + 2 is O(N); we remove the leading co-efficient 2 and the final constant 2 as well. Removing the final constant of 2 is perhaps understandable: after all, N may be very large and so "forgetting" the final 2 may only change the grand total by a small percentage. However, I cannot clearly understand how removing the leading co-efficient…
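A quick numerical illustration of why the leading co-efficient can be dropped (the sample values of N and the comparison against 1000*N are arbitrary choices): (2N + 2)/N stays bounded by a constant, while N^2 eventually dwarfs c*N for any fixed c.

    # f(N) = 2N + 2 never outgrows g(N) = N by more than a constant factor,
    # whereas N^2 overtakes 1000*N (or c*N for any fixed c) once N is large enough.
    for N in (10, 1_000, 100_000, 10_000_000):
        ratio_to_linear    = (2 * N + 2) / N          # approaches 2, stays bounded
        ratio_to_quadratic = (N * N) / (1000 * N)     # = N / 1000, grows without bound
        print(f'N={N:>10}   (2N+2)/N = {ratio_to_linear:.4f}   N^2/(1000N) = {ratio_to_quadratic:,.1f}')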

What is the runtime complexity of python list functions?

I was writing a Python function that looked something like this:

    def foo(some_list):
        for i in range(0, len(some_list)):
            bar(some_list[i], i)

so that it was called with

    x = [0, 1, 2, 3, ... ]
    foo(x)

I had assumed that index access of lists was O(1), but was surprised to find that for large lists this was significantly slower than I expected. My question, then, is how Python lists are implemented, and what is the runtime complexity of the following:

Indexing: list[x]
Popping from the end: list.pop()
Popping from the beginning: list.pop(0)
Extending the list: list.append(x)

For extra credit, …
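A rough timing sketch of the usual CPython behaviour (a list is an array of pointers under the hood; exact numbers will vary by machine): indexing and operations at the end are O(1), amortized in the case of append, while operations at the front shift every element and are O(n).

    from timeit import timeit

    reps = 10_000
    cases = [
        ('xs[5000]',                    'index xs[i]         O(1)'),
        ('xs.append(0); xs.pop()',      'push/pop at end     O(1) amortized'),
        ('xs.insert(0, 0); xs.pop(0)',  'push/pop at front   O(n)'),
    ]
    for stmt, label in cases:
        t = timeit(stmt, setup='xs = list(range(10_000))', number=reps)
        print(f'{label}: {t * 1e6 / reps:.2f} microseconds per call pair')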