The complexity of len() for both sets and lists is O(1). Why does it take more time to process sets?
~$ python -m timeit "a=[1,2,3,
Yes, you are right: the difference is mostly due to the different time Python needs to create the set and list objects in the first place. For a fairer benchmark, use the timeit module and build the objects in the setup argument, so only len() itself is timed:
from timeit import timeit

# Build each container once in setup; only len(a) is timed.
print('1st:', timeit(stmt="len(a)", number=1000000, setup="a=set([1,2,3]*1000)"))
print('2nd:', timeit(stmt="len(a)", number=1000000, setup="a=[1,2,3]*1000"))
Result:
1st: 0.04927110672
2nd: 0.0530669689178
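To see that len() itself really is O(1), here is a sketch (the container sizes are arbitrary choices for illustration) timing len() on collections of very different sizes; the timings should come out roughly equal:

```python
from timeit import timeit

# len() just reads a stored length field, so these timings should be
# roughly the same no matter how big the container is.
containers = {
    "small list": [0] * 10,
    "big list":   [0] * 10**6,
    "small set":  set(range(10)),
    "big set":    set(range(10**6)),
}

for name, obj in containers.items():
    t = timeit(lambda: len(obj), number=1_000_000)
    print(f"{name:10s} {t:.3f}s")
```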
And if you want to know why that is, let's look under the hood. A set object uses a hash table, and a hash table uses a hash function to compute a hash value for each item and map it to a slot. Calling that function, computing the hashes, and resolving collisions all take extra time when the set is built. To create a list, Python just builds a sequence of object references that you access by index.
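The construction cost described above can be measured directly. This is a rough sketch (iteration counts are arbitrary, and the exact numbers will vary by machine), timing how long it takes to build the list versus the set:

```python
from timeit import timeit

# Compare *construction* cost, which is where the hash-table work happens.
list_time = timeit("[1, 2, 3] * 1000", number=10_000)
set_time = timeit("set([1, 2, 3] * 1000)", number=10_000)

print(f"list construction: {list_time:.3f}s")
# Building the set is typically slower: every item must be hashed
# and inserted into the hash table.
print(f"set construction:  {set_time:.3f}s")
```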
You can find more details in the set_lookkey function in the CPython source code.
Also note that if two algorithms have the same complexity, that does not mean they have exactly the same run time or execution speed, because big-O notation describes the limiting behavior of a function and hides constant factors rather than giving an exact cost equation.
For example, the functions f(x) = 100000x + 1 and g(x) = 4x + 20 are both O(x): both are linear, but as you can see the first has a much larger slope, so for the same input they give very different results.
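A quick way to see this: evaluate both linear functions at the same inputs (the function names here are just illustrative labels, not anything from a library):

```python
# Two hypothetical cost functions: both grow linearly (same big-O),
# but the constant factors differ enormously.
def f(x):
    return 100000 * x + 1

def g(x):
    return 4 * x + 20

for x in (1, 10, 100):
    print(x, f(x), g(x))  # same growth rate, very different values
```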