
Use scikit-learn to classify into multiple categories


What you want is called multi-label classification. scikit-learn can do that. See here: http://scikit-learn.org/dev/modules/multiclass.html.

I'm not sure what's going wrong in your example; my version of sklearn apparently doesn't have WordNGramAnalyzer. Perhaps it's a question of using more training examples or trying a different classifier? Note, though, that the multi-label classifier expects the target to be a list of tuples/lists of labels.
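As a quick illustration of that target format (using the same 0 = New York, 1 = London encoding as the example below), the multi-label case wraps each sample's labels in its own list:

# multi-class target: exactly one label per sample
y_multiclass = [0, 0, 1, 1]

# multi-label target (old-style sklearn format): a list of labels per sample,
# so a sample can carry both classes at once
y_multilabel = [[0], [0], [1], [0, 1]]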

The following works for me:

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.multiclass import OneVsRestClassifier

X_train = np.array(["new york is a hell of a town",
                    "new york was originally dutch",
                    "the big apple is great",
                    "new york is also called the big apple",
                    "nyc is nice",
                    "people abbreviate new york city as nyc",
                    "the capital of great britain is london",
                    "london is in the uk",
                    "london is in england",
                    "london is in great britain",
                    "it rains a lot in london",
                    "london hosts the british museum",
                    "new york is great and so is london",
                    "i like london better than new york"])
y_train = [[0], [0], [0], [0], [0], [0], [1], [1], [1], [1], [1], [1], [0, 1], [0, 1]]
X_test = np.array(['nice day in nyc',
                   'welcome to london',
                   'hello welcome to new york. enjoy it here and london too'])
target_names = ['New York', 'London']

classifier = Pipeline([
    ('vectorizer', CountVectorizer(min_n=1, max_n=2)),
    ('tfidf', TfidfTransformer()),
    ('clf', OneVsRestClassifier(LinearSVC()))])
classifier.fit(X_train, y_train)
predicted = classifier.predict(X_test)
for item, labels in zip(X_test, predicted):
    print '%s => %s' % (item, ', '.join(target_names[x] for x in labels))

For me, this produces the output:

nice day in nyc => New York
welcome to london => London
hello welcome to new york. enjoy it here and london too => New York, London

Hope this helps.


EDIT: Updated for Python 3, scikit-learn 0.18.1 using MultiLabelBinarizer as suggested.

I've been working on this as well, and made a slight enhancement to mwv's excellent answer that may be useful. It takes text labels as the input rather than binary labels and encodes them using MultiLabelBinarizer.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

X_train = np.array(["new york is a hell of a town",
                    "new york was originally dutch",
                    "the big apple is great",
                    "new york is also called the big apple",
                    "nyc is nice",
                    "people abbreviate new york city as nyc",
                    "the capital of great britain is london",
                    "london is in the uk",
                    "london is in england",
                    "london is in great britain",
                    "it rains a lot in london",
                    "london hosts the british museum",
                    "new york is great and so is london",
                    "i like london better than new york"])
y_train_text = [["new york"], ["new york"], ["new york"], ["new york"], ["new york"],
                ["new york"], ["london"], ["london"], ["london"], ["london"],
                ["london"], ["london"], ["new york", "london"], ["new york", "london"]]
X_test = np.array(['nice day in nyc',
                   'welcome to london',
                   'london is rainy',
                   'it is raining in britian',
                   'it is raining in britian and the big apple',
                   'it is raining in britian and nyc',
                   'hello welcome to new york. enjoy it here and london too'])
target_names = ['New York', 'London']

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(y_train_text)

classifier = Pipeline([
    ('vectorizer', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', OneVsRestClassifier(LinearSVC()))])

classifier.fit(X_train, Y)
predicted = classifier.predict(X_test)
all_labels = mlb.inverse_transform(predicted)

for item, labels in zip(X_test, all_labels):
    print('{0} => {1}'.format(item, ', '.join(labels)))

This gives me the following output:

nice day in nyc => new york
welcome to london => london
london is rainy => london
it is raining in britian => london
it is raining in britian and the big apple => new york
it is raining in britian and nyc => london, new york
hello welcome to new york. enjoy it here and london too => london, new york
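If you want to see what the MultiLabelBinarizer step is actually doing, a minimal sketch like the following shows the learned classes and the binary indicator matrix it produces (classes_ come out in sorted order, so the columns here are london, new york):

from sklearn.preprocessing import MultiLabelBinarizer

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform([["new york"], ["london"], ["new york", "london"]])
print(mlb.classes_)  # ['london' 'new york']
print(Y)
# [[0 1]
#  [1 0]
#  [1 1]]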


I just ran into this as well, and the problem for me was that my y_train was a sequence of strings, rather than a sequence of sequences of strings. Apparently, OneVsRestClassifier will decide based on the input label format whether to use multi-class or multi-label. So change:

y_train = ('New York','London')

to

y_train = (['New York'],['London'])

Apparently this will disappear in the future, since it breaks if all the labels are the same: https://github.com/scikit-learn/scikit-learn/pull/1987
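In newer scikit-learn versions the implicit sequence-of-sequences target is gone entirely, so the robust way to express the same thing is to wrap each sample's labels in a list and binarize them explicitly, as in the MultiLabelBinarizer answer above (a minimal sketch, assuming the same New York/London labels):

from sklearn.preprocessing import MultiLabelBinarizer

# each sample gets its own list of labels, even if it only has one
y_train = [['New York'], ['London'], ['New York', 'London']]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(y_train)  # binary indicator matrix, one column per label
# pass Y (not the raw string labels) to OneVsRestClassifier via classifier.fit(X_train, Y)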