itertools

Python: itertools.islice not working in a loop

若如初见. Submitted on 2019-12-24 04:34:04
Question: I have code like this:

```python
# opened file f
goto_line = num_lines  # total number of lines
while not found:
    line_str = next(itertools.islice(f, goto_line - 1, goto_line))
    goto_line = goto_line / 2
    # checks for data, sets found to True if needed
```

`line_str` is correct on the first pass, but every pass after that reads a different line than it should. So for example, `goto_line` starts off as 1000. It reads line 1000 just fine. Then on the next loop, `goto_line` is 500, but it doesn't read line 500. It reads some …
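The cause is that `islice` never rewinds its source: it consumes the shared file iterator, so each call counts lines from wherever the previous call stopped, not from the top of the file. (Separately, `goto_line / 2` is float division in Python 3; `//` is needed to keep a valid slice index.) A minimal sketch of the rewind fix, using an in-memory file and a hypothetical `read_line` helper in place of the question's loop:

```python
import io
import itertools

def read_line(f, lineno):
    """Return 1-based line `lineno`, rewinding so islice counts from the top."""
    f.seek(0)  # the last islice consumed the iterator, so start over
    return next(itertools.islice(f, lineno - 1, lineno))

f = io.StringIO("".join(f"line {i}\n" for i in range(1, 11)))
print(read_line(f, 10).rstrip())  # line 10
print(read_line(f, 5).rstrip())   # line 5  (not relative to line 10)
```

For large files, `seek(0)` plus a scan costs O(n) per lookup; the stdlib `linecache` module, or an offset index built in one pass, are the usual alternatives.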

Is there a way of avoiding so many list(chain(*list_of_list))?

本秂侑毒. Submitted on 2019-12-24 03:52:31
Question: If I have a list of lists of lists of tuples of two strings, and I want to flatten it to a non-nested list of tuples, I could do this:

```python
>>> from itertools import chain
>>> lst_of_lst_of_lst_of_tuples = [
...     [[('ab', 'cd'), ('ef', 'gh')], [('ij', 'kl'), ('mn', 'op')]],
...     [[('qr', 'st'), ('uv', 'w')], [('x', 'y'), ('z', 'foobar')]]
... ]
>>> lllt = lst_of_lst_of_lst_of_tuples
>>> list(chain(*list(chain(*lllt))))
[('ab', 'cd'), ('ef', 'gh'), ('ij', 'kl'), ('mn', 'op'), ('qr', 'st'), ('uv', 'w'), ('x', 'y'), …
```
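`chain.from_iterable` removes the intermediate `list(...)` and the `*` unpacking at each level, and for arbitrary or unknown nesting depth a small recursive generator that stops at tuples does the same job. A sketch reusing the question's data (`flatten_to_tuples` is a name introduced here):

```python
from itertools import chain

def flatten_to_tuples(nested):
    """Recursively flatten arbitrarily deep lists, stopping at tuples."""
    for item in nested:
        if isinstance(item, tuple):
            yield item
        else:
            yield from flatten_to_tuples(item)

lllt = [
    [[('ab', 'cd'), ('ef', 'gh')], [('ij', 'kl'), ('mn', 'op')]],
    [[('qr', 'st'), ('uv', 'w')], [('x', 'y'), ('z', 'foobar')]],
]

# One chain.from_iterable per nesting level, no intermediate lists:
flat = list(chain.from_iterable(chain.from_iterable(lllt)))
print(flat == list(flatten_to_tuples(lllt)))  # True
```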

python - calculate orthographic similarity between words of a list

牧云@^-^@. Submitted on 2019-12-24 02:13:19
Question: I need to calculate orthographic similarity (edit/Levenshtein distance) among words in a given corpus. As Kirill suggested below, I tried to do the following:

```python
import csv, itertools, Levenshtein
import numpy as np

# import the list of words from csv file
path = '/Users/my path'
file = path + 'file.csv'
with open(file, 'rb') as f:
    reader = csv.reader(f)
    wordlist = list(reader)
wordlist = np.array(wordlist)  # make it a np array
wordlist2 = wordlist[:, 0]  # subset the first column of the imported …
```
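The pairwise part is `itertools.combinations` over the word list. The question uses the third-party `Levenshtein` package; to keep this sketch self-contained, a small pure-Python edit distance (a hypothetical `levenshtein` helper) stands in for it:

```python
from itertools import combinations

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

words = ['cat', 'bat', 'hat', 'cart']
distances = {(a, b): levenshtein(a, b) for a, b in combinations(words, 2)}
print(distances[('cat', 'bat')])   # 1
print(distances[('cat', 'cart')])  # 1
```

`Levenshtein.distance(a, b)` from the C extension is a drop-in, much faster replacement for the helper when the package is installed.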

Sequential function mapping in python

丶灬走出姿态. Submitted on 2019-12-24 01:57:19
Question: I have a bunch of functions in a list:

```python
funcs = [f1, f2, f3, f4, f5]
```

and all of the functions take and return a single argument, e.g.

```python
f1 = lambda x: x * 2
```

I'd like to map all these functions together:

```python
result = lambda x: f5(f4(f3(f2(f1(x)))))
```

or, iterating over funcs:

```python
def dispatch(x):
    for f in funcs:
        x = f(x)
    return x
```

dispatch works fine, but I couldn't figure out a clean way to do this using itertools. Is it possible? Does this sequential function mapping idiom have a name?

Answer 1: There is no point in …
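The idiom is function composition, and `dispatch` is a left fold over `funcs` — which makes `functools.reduce`, not itertools, the stdlib tool for it. A sketch with stand-in functions (the question's `f1`..`f5` are not given in full):

```python
from functools import reduce

funcs = [lambda x: x * 2, lambda x: x + 3, lambda x: x ** 2]

def compose(fs):
    """Left-to-right composition: compose([f, g])(x) == g(f(x))."""
    return lambda x: reduce(lambda acc, f: f(acc), fs, x)

pipeline = compose(funcs)
print(pipeline(5))  # ((5 * 2) + 3) ** 2 = 169
```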

keeping only unique instances of Lists whose only difference is order

痞子三分冷. Submitted on 2019-12-24 01:06:27
Question: Using this code:

```python
from itertools import product

list1 = ['Gabe', 'Taylor', 'Kyle', 'Jay']
list2 = ['Gabe', 'Taylor', 'Kyle', 'Jay', 'James', 'John', 'Tyde', 'Chris', 'Bruno', 'David']
list3 = ['Gabe', 'Taylor', 'Kyle', 'Jay', 'James', 'John', 'Tyde', 'Chris', 'Bruno', 'David']
list4 = ['Kyle', 'James', 'John', 'Tyde', 'Bruno', 'Drew', 'Chris']
list5 = ['James', 'John', 'Brendan', 'Tim', 'Drew']

FinalList = []
for x in product(list1, list2, list3, list4, list5):
    # check for duplicates
    if len(set(x …
```
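When two results that differ only in member order should count as one, a `frozenset` of the members makes an order-insensitive key for deduplication. A sketch with shortened stand-in lists (the question's `list1`..`list5` work the same way):

```python
from itertools import product

list1 = ['Gabe', 'Taylor']
list2 = ['Taylor', 'Kyle']
list3 = ['Kyle', 'Gabe']

seen = set()
unique = []
for combo in product(list1, list2, list3):
    if len(set(combo)) != len(combo):
        continue                 # drop combos with a repeated member
    key = frozenset(combo)       # order-insensitive signature
    if key not in seen:
        seen.add(key)
        unique.append(combo)

print(unique)  # [('Gabe', 'Taylor', 'Kyle')]
```

('Taylor', 'Kyle', 'Gabe') and the other orderings all map to the same `frozenset`, so only the first one encountered survives.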

Is it possible to pickle itertools.product in python?

僤鯓⒐⒋嵵緔. Submitted on 2019-12-23 22:20:30
Question: I would like to save the state of itertools.product() after my program quits. Is it possible to do this with pickling? What I am planning to do is to generate permutations and, if the process is interrupted (KeyboardInterrupt), resume it the next time I run the program.

```python
def trywith(itr):
    try:
        for word in itr:
            time.sleep(1)
            print("".join(word))
    except KeyboardInterrupt:
        f = open("/root/pickle.dat", "wb")
        pickle.dump(itr, f)
        f.close()

if os.path.exists("/root/pickle.dat"):
    f = open("…
```
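Whether the iterator itself pickles is version-dependent: CPython 3 supported pickling itertools iterators for a long time, but this was deprecated in 3.12 and slated for removal, and Python 2 never supported it. A portable alternative is to pickle plain state instead — the input pools plus a count of results already consumed — and fast-forward with `islice` on restart. A sketch (the `resumable_product` helper is a name introduced here):

```python
import itertools
import pickle

def resumable_product(pools, skip=0):
    """Recreate a product iterator, skipping the first `skip` results."""
    return itertools.islice(itertools.product(*pools), skip, None)

pools = ('ab', 'cd')
it = resumable_product(pools)
consumed = 0
for _ in range(2):               # simulate work done before the interrupt
    next(it)
    consumed += 1

# Pickle plain state instead of the iterator: portable across versions.
state = pickle.dumps((pools, consumed))

pools2, skip = pickle.loads(state)
resumed = list(resumable_product(pools2, skip))
print(resumed)  # [('b', 'c'), ('b', 'd')]
```

The `islice` fast-forward regenerates the skipped results, so resuming costs O(skip), but nothing inside the iterator needs to be serialized.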

summing all possible combinations of an arbitrary number of arrays and applying limits and returning indices

北城余情. Submitted on 2019-12-23 21:09:33
Question: This is a modification of this question, in which I would like to return the indices of the array elements in addition to the elements themselves. I've successfully modified arraysums() and arraysums_recursive(), but I'm struggling with arraysums_recursive_anyvals(). Here are the details. I modified arraysums():

```python
def arraysums(arrays, lower, upper):
    products = itertools.product(*arrays)
    result = list()
    indices = itertools.product(*[np.arange(len(arr)) for arr in arrays])
    index = list()
    for n, k …
```
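The truncated `arraysums()` runs two parallel `product` iterators, one over values and one over index ranges. An alternative that cannot fall out of sync is a single `product` over `enumerate(...)`, which carries each value together with its index. A minimal sketch of that idea, with the sum limits `lower`/`upper` as in the question (the `sums_with_indices` name is introduced here):

```python
import itertools

def sums_with_indices(arrays, lower, upper):
    """Yield (indices, values, total) for every combination of one element
    per array whose sum falls within [lower, upper]."""
    # product over enumerate() keeps each value paired with its index
    for combo in itertools.product(*(enumerate(a) for a in arrays)):
        idx, vals = zip(*combo)
        total = sum(vals)
        if lower <= total <= upper:
            yield idx, vals, total

arrays = [[1, 2], [10, 20]]
for idx, vals, total in sums_with_indices(arrays, 12, 21):
    print(idx, vals, total)
# (0, 1) (1, 20) 21
# (1, 0) (2, 10) 12
```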

memory efficient random number iterator without replacement

一曲冷凌霜. Submitted on 2019-12-23 17:36:00
Question: I feel like this one should be easy, but after numerous searches and attempts I can't figure out an answer. Basically, I have a very large number of items that I want to sample in a random order without replacement. In this case they are cells in a 2D array. The solution I would use for a smaller array doesn't translate, because it requires shuffling an in-memory array. If the number I had to sample were small, I could also just randomly sample items and keep a list of the values I'd tried.
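One O(1)-memory trick for visiting every cell exactly once in scrambled order is an affine bijection on flat indices, i ↦ (a·i + b) mod n with gcd(a, n) = 1. It is far weaker than a true shuffle (the visiting order is highly structured), but it guarantees no repeats with no index array in memory; plain `random.sample` remains the better choice when only a modest number of cells is needed. A sketch, with `lazy_permutation` as a name introduced here:

```python
import math
import random

def lazy_permutation(n, seed=None):
    """Yield 0..n-1 exactly once each, in scrambled order, using O(1) memory.

    The affine map i -> (a*i + b) mod n is a bijection whenever
    gcd(a, n) == 1. Assumes n >= 2.
    """
    rng = random.Random(seed)
    while True:
        a = rng.randrange(1, n)
        if math.gcd(a, n) == 1:
            break
    b = rng.randrange(n)
    for i in range(n):
        yield (a * i + b) % n

rows, cols = 1000, 1000
for flat in lazy_permutation(rows * cols, seed=42):
    r, c = divmod(flat, cols)   # recover 2D cell coordinates
    break                       # ... test cell (r, c), stop when found ...
```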

How does itertools.combinations scale in Python?

故事扮演. Submitted on 2019-12-23 03:51:28
Question: I'm taking a brute-force approach to finding the combination for an extension to a puzzle. I generate a large number of combinations and then test each one to see whether it fits certain criteria. I generate the combinations using Python's excellent itertools; essentially this gives me an iterator I can loop over, testing each one. This returns quickly and gives me 91390 combinations to check:

```python
itertools.combinations(range(1, 40), 4)
```

This takes a couple of minutes and gives me …
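Two scaling facts are worth separating: building the iterator is O(1) because `combinations` is lazy (the cost is paid while consuming it), and the number of results is C(n, k), which `math.comb` (Python 3.8+) computes without generating anything. Note that `range(1, 40)` holds 39 values, so it actually yields C(39, 4) = 82251 combinations; the 91390 quoted in the question matches C(40, 4), i.e. 40 values.

```python
from itertools import combinations
from math import comb

it = combinations(range(1, 40), 4)  # created instantly; work happens on consumption

print(comb(39, 4))  # 82251 results to consume from `it`
print(comb(40, 4))  # 91390, the count quoted in the question (40 values)
print(comb(39, 5))  # 575757: one step up in k is already ~7x more work
```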

Groups of pairwise combinations where each member appears only once

余生长醉. Submitted on 2019-12-23 01:44:14
Question: I have a list of unique tuples, each containing 2 elements from 1 to 10. The total number of tuples in the list is 45. I would like to divide them into 10 groups, each of them containing only the numbers from 1 to 10. I have tried to solve my problem using this answer: python get groups of combinations that each member appear only once:

```python
from itertools import combinations, chain

l = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']
c = list(combinations(l, 2))
[set(i) for i in list(combinations(c, 5)) if …
```
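Filtering `combinations(c, 5)` means scanning C(45, 5) ≈ 1.2 million candidate groups. This partitioning problem is exactly round-robin tournament scheduling, which the circle method solves directly: for 10 members it produces 9 rounds of 5 disjoint pairs that together cover all 45 pairs (so 9 groups, not 10, exhaust the pair list). A sketch, with `round_robin` as a name introduced here:

```python
def round_robin(players):
    """Circle method: for an even number n of players, produce n-1 rounds
    of n/2 disjoint pairs that together cover all C(n, 2) pairs."""
    players = list(players)
    n = len(players)
    rounds = []
    for _ in range(n - 1):
        rounds.append([(players[i], players[n - 1 - i]) for i in range(n // 2)])
        # fix the first player, rotate the rest by one position
        players = [players[0]] + [players[-1]] + players[1:-1]
    return rounds

rounds = round_robin('ABCDEFGHIJ')
all_pairs = {frozenset(p) for rnd in rounds for p in rnd}
print(len(rounds), len(all_pairs))  # 9 45
```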