python-internals

Why does a class definition always produce the same bytecode?

 ̄綄美尐妖づ Submitted on 2019-12-03 05:44:53
Say I do:

    #!/usr/bin/env python
    # encoding: utf-8

    class A(object):
        pass

Now I disassemble it:

    python -m dis test0.py
      4           0 LOAD_CONST               0 ('A')
                  3 LOAD_NAME                0 (object)
                  6 BUILD_TUPLE              1
                  9 LOAD_CONST               1 (<code object A at 0x1004ebb30, file "test0.py", line 4>)
                 12 MAKE_FUNCTION            0
                 15 CALL_FUNCTION            0
                 18 BUILD_CLASS
                 19 STORE_NAME               1 (A)
                 22 LOAD_CONST               2 (None)
                 25 RETURN_VALUE

Now I add some statements in the class definition:

    #!/usr/bin/env python
    # encoding: utf-8

    class A(object):
        print 'hello'
        1+1
        pass

And I disassemble again:

      4           0 LOAD_CONST               0 ('A')
                  3 LOAD_NAME                0 (object)
                  6 BUILD_TUPLE              1
                  9 LOAD_CONST               1 (…
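
The module-level bytecode stays the same because the statements inside the class body are compiled into a separate, nested code object (the <code object A ...> constant above). Here is a minimal sketch of how to look inside that nested code object; it assumes Python 3 syntax for the compile/dis calls even though the question itself uses Python 2, where the principle is the same:

    import dis

    source = """
    class A(object):
        print('hello')
        1 + 1
        pass
    """

    module_code = compile(source, "<example>", "exec")

    # The class body lives in a nested code object stored among the module's constants.
    for const in module_code.co_consts:
        if hasattr(const, "co_code"):          # found the code object for the class body
            print("--- bytecode of the class body ---")
            dis.dis(const)

The extra print/1+1 statements show up only inside this nested code object; the module-level bytecode that builds the class is identical in both versions.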

Very strange behavior of operator 'is' with methods

*爱你&永不变心* Submitted on 2019-12-03 05:23:58
Question: Why is the first result False? Should it not be True?

    >>> from collections import OrderedDict
    >>> OrderedDict.__repr__ is OrderedDict.__repr__
    False
    >>> dict.__repr__ is dict.__repr__
    True

Answer 1: For user-defined functions, in Python 2 unbound and bound methods are created on demand through the descriptor protocol; OrderedDict.__repr__ is such a method object, as the wrapped function is implemented as a pure-Python function. The descriptor protocol will call the __get__ method on objects…
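
A small sketch of the point being made, assuming Python 2 (where OrderedDict.__repr__ is a pure-Python function wrapped in a fresh method object on every attribute access): the identities differ, but both wrappers refer to the same underlying function, so equality still holds.

    from collections import OrderedDict

    a = OrderedDict.__repr__
    b = OrderedDict.__repr__

    print a is b                    # False: two distinct method wrappers
    print a == b                    # True: they compare equal
    print a.__func__ is b.__func__  # True: the same underlying function object

dict.__repr__, by contrast, is a C-level slot wrapper that is handed back as-is on unbound access rather than being re-created, which is why the dict case compares identical.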

Complexity of len() with regard to sets and lists

给你一囗甜甜゛ Submitted on 2019-12-03 05:12:57
The complexity of len() with regard to both sets and lists is O(1). How come it takes more time to process sets?

    ~$ python -m timeit "a=[1,2,3,4,5,6,7,8,9,10];len(a)"
    10000000 loops, best of 3: 0.168 usec per loop
    ~$ python -m timeit "a={1,2,3,4,5,6,7,8,9,10};len(a)"
    1000000 loops, best of 3: 0.375 usec per loop

Is it related to the particular benchmark, as in, it takes more time to build sets than lists and the benchmark takes that into account as well? If the creation of a set object takes more time compared to creating a list, what would be the underlying reason?

Firstly, you have not…
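
The benchmark above times construction plus len() together. A minimal sketch of how to isolate the len() call by building the container once in timeit's setup string (numbers are machine-dependent; the point is that the gap comes largely from construction):

    import timeit

    setup_list = "a = [1,2,3,4,5,6,7,8,9,10]"
    setup_set  = "a = {1,2,3,4,5,6,7,8,9,10}"

    # Only the len() call is timed; the container is built once per setup.
    print(timeit.timeit("len(a)", setup=setup_list, number=10000000))
    print(timeit.timeit("len(a)", setup=setup_set, number=10000000))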

Why is 'x' in ('x',) faster than 'x' == 'x'?

跟風遠走 Submitted on 2019-12-03 01:33:03
Question:

    >>> timeit.timeit("'x' in ('x',)")
    0.04869917374131205
    >>> timeit.timeit("'x' == 'x'")
    0.06144205736110564

This also works for tuples with multiple elements; both versions seem to grow linearly:

    >>> timeit.timeit("'x' in ('x', 'y')")
    0.04866674801541748
    >>> timeit.timeit("'x' == 'x' or 'x' == 'y'")
    0.06565782838087131
    >>> timeit.timeit("'x' in ('y', 'x')")
    0.08975995576448526
    >>> timeit.timeit("'x' == 'y' or 'x' == 'y'")
    0.12992391047427532

Based on this, I think I should totally start using in…
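
A rough probe of one plausible explanation (my own assumption, since the answer is not included above): the tuple ('x',) is folded into a single constant at compile time, and tuple containment can short-circuit on an identity check before falling back to ==, whereas the == expression always runs a full comparison. Disassembling both expressions shows the constant tuple:

    import dis

    dis.dis(compile("'x' in ('x',)", "<demo>", "eval"))   # loads 'x' and the constant tuple ('x',)
    dis.dis(compile("'x' == 'x'", "<demo>", "eval"))      # loads 'x' twice and compares with ==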

Fatal Python error and `BufferedWriter`

[亡魂溺海] Submitted on 2019-12-03 01:19:43
I've come across this paragraph in the documentation, which says:

    Binary buffered objects (instances of BufferedReader, BufferedWriter, BufferedRandom and BufferedRWPair) protect their internal structures using a lock; it is therefore safe to call them from multiple threads at once.

I'm not sure why they need to "protect" their internal structures given that the GIL is in action. Who cares? I didn't care much until I found out that this lock has some significance; consider this piece of code:

    from _thread import start_new_thread
    import time

    def start():
        for i in range(10):
            print("SPAM SPAM SPAM!…
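
As a side note, here is a minimal sketch (my own illustration, not the snippet from the question) of why the buffer needs its own lock even with the GIL: a single write() can involve several internal steps (copying into the buffer, flushing when it fills), the GIL can be released in between, and concurrent writers rely on the BufferedWriter's lock to keep those steps from interleaving.

    import io
    import threading

    # Tiny buffer so that writes frequently trigger an internal flush.
    buf = io.BufferedWriter(io.BytesIO(), buffer_size=16)

    def writer():
        for _ in range(10000):
            buf.write(b"SPAM SPAM SPAM!\n")   # safe: the lock serialises the copy-and-flush steps

    threads = [threading.Thread(target=writer) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    buf.flush()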

Imports behave differently when in __init__.py that is imported

杀马特。学长 韩版系。学妹 Submitted on 2019-12-03 00:48:53
Imports in an __init__.py seem to behave differently when the file is run directly compared to when it is imported. Say we have the following files:

run.py:

    import test

test/b.py:

    class B(object):
        pass

test/__init__.py:

    from b import B
    print B
    print b

If we run __init__.py directly, we get an error, as I expect:

    % python test/__init__.py
    <class 'b.B'>
    Traceback (most recent call last):
      File "test/__init__.py", line 6, in <module>
        print b
    NameError: name 'b' is not defined

But if we run run.py, we don't:

    % python run.py
    <class 'test.b.B'>
    <module 'test.b' from '~/temp/test/b.py'>

I would expect the behaviour to be the…
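
A sketch of what seems to be happening (an assumption on my part, since the answer is not included above; this is Python 2, where "from b import B" is an implicit relative import): when the package is imported, the import machinery binds the submodule test.b as an attribute b on the package object, and the package's namespace is exactly the namespace of __init__.py, so the name b appears there as a side effect. Run directly, __init__.py is just a script, not a package, and no such binding happens.

    # Assumes the test/ package from the question is on the path (Python 2).
    import test

    print test.B             # <class 'test.b.B'>    -- bound by "from b import B"
    print test.b             # <module 'test.b' ...> -- bound by the import machinery itself
    print 'b' in vars(test)  # True: the same namespace __init__.py saw as a bare name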

What exactly is __weakref__ in Python?

谁都会走 Submitted on 2019-12-03 00:47:44
Question: Surprisingly, there's no explicit documentation for __weakref__. Weak references are explained here, and __weakref__ is also briefly mentioned in the documentation of __slots__, but I could not find anything about __weakref__ itself. What exactly is __weakref__?
- Is it just a member acting as a flag: if present, the object may be weakly referenced?
- Or is it a function/variable that can be overridden or assigned to get a desired behavior? How?
Answer 1: __weakref__ is just an opaque object that…
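
A short sketch of the practical side (my own example, since the answer above is cut off): with __slots__, the __weakref__ slot is not created automatically, and without it instances cannot be weakly referenced.

    import weakref

    class NoWeak(object):
        __slots__ = ("x",)                   # no __weakref__ slot

    class WithWeak(object):
        __slots__ = ("x", "__weakref__")     # explicitly reserve the slot

    obj = WithWeak()
    r = weakref.ref(obj)                     # fine
    print(r() is obj)                        # True

    try:
        weakref.ref(NoWeak())
    except TypeError as exc:
        print(exc)                           # cannot create weak reference to 'NoWeak' object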

Why are some float < integer comparisons four times slower than others?

放肆的年华 Submitted on 2019-12-03 00:04:29
Question: When comparing floats to integers, some pairs of values take much longer to evaluate than other values of a similar magnitude. For example:

    >>> import timeit
    >>> timeit.timeit("562949953420000.7 < 562949953421000")  # run 1 million times
    0.5387085462592742

But if the float or integer is made smaller or larger by a certain amount, the comparison runs much more quickly:

    >>> timeit.timeit("562949953420000.7 < 562949953422000")  # integer increased by 1000
    0.1481498428446173
    >>> timeit.timeit(…
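
A small probe (my own, since the answer is not included above): the boundary that matters here appears to be 2**49. The "slow" integer sits on the same side of 2**49 as the float, while the "fast" one sits on the other side, which suggests the interpreter only needs an expensive comparison path when both operands fall in the same power-of-two bracket.

    print(2 ** 49)                           # 562949953421312
    print((562949953421000).bit_length())    # 49 -> same bracket as the float: slow
    print((562949953422000).bit_length())    # 50 -> different bracket: fast
    print(562949953420000.7 < 2 ** 49)       # True: the float is also below 2**49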

Why is str.strip() so much faster than str.strip(' ')?

时光怂恿深爱的人放手 Submitted on 2019-12-02 23:31:02
Stripping whitespace can be done in two ways with str.strip: you can either call it with no arguments, str.strip(), which defaults to removing whitespace, or explicitly supply the argument yourself with str.strip(' '). But why do these calls perform so differently when timed? Using a sample string with an intentional amount of whitespace:

    s = " " * 100 + 'a' + " " * 100

The timings for s.strip() and s.strip(' ') are respectively:

    %timeit s.strip()
    The slowest run took 32.74 times longer than the fastest. This could mean that an intermediate result is being…
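
A minimal re-run sketch (numbers will vary by machine; my understanding, not the truncated answer above, is that the no-argument form can use a dedicated "is this character whitespace?" check, whereas passing ' ' routes through the general strip-any-of-these-characters machinery):

    import timeit

    setup = "s = ' ' * 100 + 'a' + ' ' * 100"

    print(timeit.timeit("s.strip()", setup=setup))      # no-argument whitespace fast path
    print(timeit.timeit("s.strip(' ')", setup=setup))   # generic "strip these characters" path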

Why is repr(int) faster than str(int)?

人走茶凉 Submitted on 2019-12-02 23:25:16
I am wondering why repr(int) is faster than str(int). With the following code snippet:

    ROUNDS = 10000

    def concat_strings_str():
        return ''.join(map(str, range(ROUNDS)))

    def concat_strings_repr():
        return ''.join(map(repr, range(ROUNDS)))

    %timeit concat_strings_str()
    %timeit concat_strings_repr()

I get these timings (Python 3.5.2, but very similar results with 2.7.12):

    1.9 ms ± 17.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
    1.38 ms ± 9.07 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

If I'm on the right path, the same function long_to_decimal_string is getting called…
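
A small probe along the lines of the question's hunch (my own check, not an included answer): if the same digit-conversion routine runs in both cases, the gap should come from the call path, and it is suggestive that str is a type while repr is a plain built-in function, so str(i) has to go through the type-call machinery first.

    import timeit

    print(type(str))    # <class 'type'>
    print(type(repr))   # <class 'builtin_function_or_method'>

    setup = "r = range(10000)"
    print(timeit.timeit("list(map(str, r))", setup=setup, number=100))
    print(timeit.timeit("list(map(repr, r))", setup=setup, number=100))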