I'm using NLTK to search for n-grams in a corpus but it's taking a very long time in some cases. I've noticed calculating n-grams isn't an uncommon feature in other packages.
You might find the following Pythonic, elegant, and fast n-gram generation function, built with zip and the splat (*) operator, useful:
    def find_ngrams(input_list, n):
        # build n shifted views of the list and zip them into n-gram tuples
        return zip(*[input_list[i:] for i in range(n)])
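For instance, with a small token list (the tokens list below is just an illustrative example; note that in Python 3 zip returns an iterator, so wrap the result in list() if you want the tuples materialized):

    tokens = ['the', 'quick', 'brown', 'fox']
    # bigrams: each tuple pairs a token with its successor
    print(list(find_ngrams(tokens, 2)))
    # [('the', 'quick'), ('quick', 'brown'), ('brown', 'fox')]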