I have a list which has repeating items and I want a list of the unique items with their frequency.
For example, I have ['a', 'a', 'b', 'b', 'b'] and I want to get [('a', 2), ('b', 3)].
Another way to do this would be
mylist = [1, 1, 2, 3, 3, 3, 4, 4, 4, 4]
mydict = {}
for i in mylist:
    if i in mydict:
        mydict[i] += 1
    else:
        mydict[i] = 1
Then, to get the list of tuples:
mytups = [(i, mydict[i]) for i in mydict]
This only goes over the list once, though it also has to traverse the dictionary once to build the tuples. However, since the list contains many duplicates, the dictionary will be much smaller than the list and therefore quick to traverse.
Nevertheless, not a very pretty or concise bit of code, I'll admit.
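A slightly more compact variant of the same single-pass idea uses dict.get with a default, which removes the if/else (just a sketch of the same approach):
mylist = [1, 1, 2, 3, 3, 3, 4, 4, 4, 4]
mydict = {}
for i in mylist:
    # get() returns 0 for keys we haven't seen yet
    mydict[i] = mydict.get(i, 0) + 1
mytups = list(mydict.items())   # [(1, 2), (2, 1), (3, 3), (4, 4)]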
Convert the data into a pandas Series s, then print each unique value with its count:
for i in sorted(s.unique()):
    print(i, (s == i).sum())
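Note that value_counts already returns the item-to-frequency mapping directly; a minimal sketch using the list from the question:
import pandas as pd

s = pd.Series(['a', 'a', 'b', 'b', 'b'])
print(list(s.value_counts().items()))   # [('b', 3), ('a', 2)]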
With Python 2.7+, you can use collections.Counter.
Otherwise, see this counter recipe.
Under Python 2.7+:
from collections import Counter
data = ['a', 'a', 'b', 'b', 'b']
c = Counter(data)
print(list(c.items()))
Output is:
[('a', 2), ('b', 3)]
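Counter also provides most_common(), which returns the same (item, count) pairs sorted by descending count:
print(c.most_common())    # [('b', 3), ('a', 2)]
print(c.most_common(1))   # [('b', 3)]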
I know this isn't a one-liner, but I like it because it makes clear that we pass over the initial list of values only once (instead of calling count on it):
>>> from collections import defaultdict
>>> l = ['a', 'a', 'b', 'b', 'b']
>>> d = defaultdict(int)
>>> for i in l:
...     d[i] += 1
...
>>> d
defaultdict(<class 'int'>, {'a': 2, 'b': 3})
>>> list(d.items())
[('a', 2), ('b', 3)]
>>>
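If you want the pairs ordered by frequency rather than insertion order, sorting on the count works directly on the result (a small sketch continuing the session above):
>>> sorted(d.items(), key=lambda kv: kv[1], reverse=True)
[('b', 3), ('a', 2)]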
With pandas, you can do:
import pandas as pd
dict(pd.Series(my_list).value_counts())
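Applied to the list from the question, this gives a dict mapping each unique item to its count (a quick sketch, assuming my_list holds those values):
my_list = ['a', 'a', 'b', 'b', 'b']
print(dict(pd.Series(my_list).value_counts()))   # {'b': 3, 'a': 2}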