Frequency table for a single variable
Maybe .value_counts()?
>>> import pandas
>>> my_series = pandas.Series([1, 2, 2, 3, 3, 3, "fred", 1.8, 1.8])
>>> my_series
0       1
1       2
2       2
3       3
4       3
5       3
6    fred
7     1.8
8     1.8
>>> counts = my_series.value_counts()
>>> counts
3       3
2       2
1.8     2
fred    1
1       1
>>> len(counts)
5
>>> sum(counts)
9
>>> counts["fred"]
1
>>> dict(counts)
{1.8: 2, 2: 2, 3: 3, 1: 1, 'fred': 1}
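If you want relative frequencies rather than raw counts, value_counts also accepts normalize=True, which divides each count by the total. A minimal sketch with a small numeric series:

```python
import pandas as pd

my_series = pd.Series([1, 2, 2, 3, 3, 3])

# normalize=True returns each count divided by the series length,
# i.e. relative frequencies that sum to 1
freqs = my_series.value_counts(normalize=True)
print(freqs)
```

Here the value 3 occurs 3 times out of 6, so its relative frequency is 0.5.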
You can use a list comprehension on a DataFrame to count the frequencies of every categorical column, like so (note that select_dtypes is a DataFrame method, not a Series method, so the variable here must be a DataFrame, say my_df):

[my_df[c].value_counts() for c in list(my_df.select_dtypes(include=['O']).columns)]

Breakdown:

my_df.select_dtypes(include=['O'])

selects just the categorical (object-dtype) columns;

list(my_df.select_dtypes(include=['O']).columns)

turns the columns from above into a list;

[my_df[c].value_counts() for c in list(my_df.select_dtypes(include=['O']).columns)]

iterates through that list and applies value_counts() to each of the columns.
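Putting the pieces together, here is a minimal runnable sketch of the approach above on a made-up DataFrame (the color/size columns are just illustrative):

```python
import pandas as pd

# hypothetical example frame: one object-dtype (categorical) column,
# one numeric column that select_dtypes(include=['O']) will skip
df = pd.DataFrame({
    "color": ["red", "blue", "red", "red"],
    "size": [1, 2, 2, 3],
})

# one value_counts() Series per object-dtype column
counts_per_col = [df[c].value_counts() for c in list(df.select_dtypes(include=["O"]).columns)]
print(counts_per_col)
```

Only the "color" column is object dtype, so the list holds a single Series with red counted 3 times and blue once.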
The answer provided by @DSM is simple and straightforward, but I thought I'd add my own input to this question. If you look at the code for pandas.value_counts, you'll see that there is a lot going on.

If you need to calculate the frequency of many series, this could take a while. A faster implementation is to use numpy.unique with return_counts=True.
Here is an example:
import pandas as pd
import numpy as np

my_series = pd.Series([1, 2, 2, 3, 3, 3])
print(my_series.value_counts())

3    3
2    2
1    1
dtype: int64
Notice here that the item returned is a pandas.Series. In comparison, numpy.unique returns a tuple with two items: the unique values and the counts.
vals, counts = np.unique(my_series, return_counts=True)
print(vals, counts)

[1 2 3] [1 2 3]
You can then combine these into a dictionary:
results = dict(zip(vals, counts))
print(results)

{1: 1, 2: 2, 3: 3}
And then into a pandas.Series
print(pd.Series(results))

1    1
2    2
3    3
dtype: int64
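You can also skip the intermediate dictionary and pass the two arrays from numpy.unique straight to the pandas.Series constructor, using the unique values as the index. A small sketch:

```python
import numpy as np
import pandas as pd

my_series = pd.Series([1, 2, 2, 3, 3, 3])

# unique values and their occurrence counts in one pass
vals, counts = np.unique(my_series, return_counts=True)

# counts become the data, unique values become the index
freq = pd.Series(counts, index=vals)
print(freq)
```

The result is the same frequency table as before, indexed by the unique values.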