python-internals

PEP 424 __length_hint__() - Is there a way to do the same for generators or zips?

僤鯓⒐⒋嵵緔 · submitted on 2021-02-19 01:34:05
Question: Just came across this awesome __length_hint__() method for iterators from PEP 424 (https://www.python.org/dev/peps/pep-0424/). Wow! A way to get the iterator length without exhausting the iterator. My questions: Is there a simple explanation of how this magic works? I'm just curious. Are there limitations and cases where it wouldn't work? ("hint" just sounds a bit suspicious.) Is there a way to get the hint for zips and generators as well? Or is it something fundamental only to iterators?
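A quick illustration of the hint in CPython 3.x (a sketch, not authoritative; the hint is best-effort by design, and whether a given object provides one is an implementation detail):

```python
from operator import length_hint

it = iter([1, 2, 3, 4])
assert length_hint(it) == 4   # list iterators track how many items remain
next(it)
assert length_hint(it) == 3   # the hint shrinks as the iterator is consumed

def gen():
    yield from range(10)

# Generators implement neither __len__ nor __length_hint__, so
# length_hint() falls back to the supplied default instead of "peeking".
assert length_hint(gen(), -1) == -1
assert length_hint(zip([1], [2]), -1) == -1   # zip objects give no hint either
```

There is no magic: iterators that know their remaining length (list, tuple, range iterators, etc.) simply report it; anything that computes elements lazily from arbitrary code, like a generator, cannot know its length in advance, which is why the protocol is only a hint.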

Set Popping (Python)

别来无恙 · submitted on 2021-02-18 21:06:51
Question: Let's say you have a set: foo = {1, 2, 3, 4, 5} In the book I am currently reading, Pro Python, it says that using foo.pop() will pop an arbitrary number from that set. But when I try it out, it pops 1, then 2, then 3... Does it do it arbitrarily, or is this just a coincidence? Answer 1: The reason it says it is arbitrary is that there is no guarantee about the order in which elements will pop out. Since you just created the set, it may be storing the elements in a "nice" order, and thus .pop()
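A small check of why the observed order is a CPython coincidence rather than a promise (small integers hash to themselves, so a fresh set of small ints often lands in ascending hash-table slots):

```python
foo = {1, 2, 3, 4, 5}
popped = [foo.pop() for _ in range(5)]
# Often [1, 2, 3, 4, 5] on CPython because of how small ints hash,
# but the language makes NO guarantee about this order.
print(popped)

# String hashes are randomized per process (PYTHONHASHSEED), so the
# pop order for a set of strings can change between runs.
bar = {"a", "b", "c"}
print(bar.pop())
```

The only guarantees are that pop() removes and returns *some* element, and raises KeyError on an empty set; any apparent pattern is an implementation detail.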

Python 3.4 multiprocessing Queue faster than Pipe, unexpected

冷暖自知 · submitted on 2021-02-18 08:56:50
Question: I am writing an audio player that receives samples from a UDP socket, and everything was working fine. But when I implemented a Loss Concealment algorithm, the player failed to keep producing silence at the expected rate (every 10 ms it should send a list of 160-byte chunks). When playing audio with pyaudio, using the blocking write call to play some samples, I noticed it blocked on average for the duration of the sample. So I created a new dedicated process to play the samples. The main process processes

Difference between 'for a[-1] in a' and 'for a in a' in Python?

会有一股神秘感。 · submitted on 2021-02-16 13:36:06
Question: In this post, the following code snippet works: a = [0, 1, 2, 3] for a[-1] in a: print(a[-1]) Referring to this answer: while doing for a[-1] in a, you actually iterate through the list and temporarily store the value of the current element into a[-1]. Likewise, I think that doing for a in a should iterate through the list and temporarily store the value of the current element in a; the value of a would then be 0, which is not iterable, so a TypeError exception would be thrown on the next iteration.
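Tracing both loops shows where that intuition breaks: the loop's iterator is created once, from the original list object, before any assignment happens; only the assignment target differs between the two forms.

```python
a = [0, 1, 2, 3]
seen = []
for a[-1] in a:          # each yielded element is stored into a[-1]
    seen.append(a[-1])
# The last slot is overwritten every step, so the final iteration yields
# the previously stored 2, not the original 3.
assert seen == [0, 1, 2, 2]
assert a == [0, 1, 2, 2]

a = [0, 1, 2, 3]
for a in a:              # iter(a) is taken once, before `a` is rebound,
    pass                 # so rebinding the name never raises TypeError
assert a == 3            # after the loop, `a` is just the last element
```

No TypeError occurs in the second loop because the for statement never re-evaluates the expression after `in`; it keeps calling next() on the iterator it obtained at the start.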

Stacks / list in python - how does it append?

£可爱£侵袭症+ · submitted on 2021-02-08 10:46:59
Question: If I have a list: list_1 = ["apples", "apricots", "oranges"] and I append a new item to the list: "berries" list_1 = ["apples", "apricots", "oranges", "berries"] Under the hood (so to speak), I thought I remembered reading that Python creates another list (list_2) and points it to the original list (list_1) so that list_1 remains static... if this is true, would it look something like this (under the hood)? list_1 = ["apples", "apricots", ["oranges", "berries"]] So in this way, the original
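That recollection does not match CPython: a list is a dynamic array, and append() mutates it in place (amortized O(1)), so no second list is created and no nesting occurs. A quick check of this, using object identity:

```python
list_1 = ["apples", "apricots", "oranges"]
alias = list_1                   # a second reference to the SAME list object
object_id = id(list_1)

list_1.append("berries")         # grows the underlying array in place

assert id(list_1) == object_id   # still the same object: no copy was made
assert alias is list_1
assert alias == ["apples", "apricots", "oranges", "berries"]
```

When the preallocated array is full, CPython reallocates a larger one and copies the element pointers over, but that is invisible at the Python level: the list object and its identity stay the same.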

Numpy array neither C nor F contiguous: implications

限于喜欢 提交于 2021-02-08 08:30:31
Question: TL;DR Regarding numpy arrays that are neither C nor F contiguous (the array's c_contiguous and f_contiguous flags are False): Can an array really be neither C nor F contiguous? Or do falsy flags just mean numpy can't figure out the correct contiguity type? What are the performance implications of such arrays? Are there any optimizations we miss by staying in this state? An array for example: import numpy as np arr = np.random.randint(0, 255, (1000, 1000, 3), dtype='uint8') arr = arr[:, :,
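The slice in the question is truncated, but any strided slice of the last axis produces such an array; a plausible reconstruction (the ::2 step is an assumption) showing that both flags can genuinely be False:

```python
import numpy as np

arr = np.random.randint(0, 255, (1000, 1000, 3), dtype='uint8')
view = arr[:, :, ::2]    # strided view: shape (1000, 1000, 2)

# Neither layout matches: C order would need strides (2000, 2, 1) and
# F order (1, 1000, 1000000), but the view keeps the parent's strides
# (3000, 3, 2). The flags are accurate, not a failure to detect anything.
assert not view.flags['C_CONTIGUOUS']
assert not view.flags['F_CONTIGUOUS']

# A copy restores contiguity, at the cost of memory and one pass over the data.
fixed = np.ascontiguousarray(view)
assert fixed.flags['C_CONTIGUOUS']
```

Non-contiguous arrays force numpy to walk memory with gaps, which hurts cache locality and disables some fast paths that assume a flat buffer, so whether copying pays off depends on how often the array is reused.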

Which import mechanism is faster?

别来无恙 · submitted on 2021-02-07 20:59:04
Question: 1: %%timeit -n 1000000 import math math.sin(math.pi/2) 1000000 loops, best of 3: 284 ns per loop 2: %%timeit -n 1000000 from math import sin, pi sin(pi/2) 1000000 loops, best of 3: 1.01 µs per loop Answer 1: Here you are timing two different statements, and indeed, "common sense" has it that the attribute lookup in the first snippet should have a higher overhead than using locally (in this case, globally) scoped names. What is happening, though, is that when one does from math import sin, pi, those
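Both spellings end up calling the same underlying function object; only the name-lookup path differs. A small check of that (not a benchmark; note also that in %%timeit the whole cell body, including the import statement itself, is executed on every loop, which likely accounts for the slower second measurement):

```python
import math
from math import sin, pi

# `from math import sin` binds the very same function object into this
# module's namespace; there is no separate "imported copy" of sin.
assert sin is math.sin
assert pi == math.pi

# math.sin(...) does one module-attribute lookup per call, while sin(...)
# is a single global-name lookup; the arithmetic itself is identical.
assert sin(pi / 2) == math.sin(math.pi / 2) == 1.0
```

So the measured gap says more about what each timed cell contains than about which import style is inherently faster.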
