Determining the most contributing features for SVM classifier in sklearn


Yes, there is an attribute coef_ for the SVM classifier, but it only works for SVMs with a linear kernel. For other kernels it is not possible, because the data are transformed by the kernel method into another space that is not related to the input space; check the explanation.
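For non-linear kernels, a common model-agnostic alternative is permutation importance from sklearn.inspection. A minimal sketch, assuming a synthetic dataset just for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.svm import SVC

# Toy data (assumption for illustration): 4 features, binary target
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = SVC(kernel='rbf').fit(X, y)

# Shuffle each feature in turn and measure the drop in score;
# larger mean drop = more contributing feature
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)  # one score per feature
```

Unlike coef_, this works for any fitted estimator, at the cost of extra evaluations.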

from matplotlib import pyplot as plt
from sklearn import svm

def f_importances(coef, names):
    imp = coef
    imp, names = zip(*sorted(zip(imp, names)))
    plt.barh(range(len(names)), imp, align='center')
    plt.yticks(range(len(names)), names)
    plt.show()

features_names = ['input1', 'input2']
svm = svm.SVC(kernel='linear')
svm.fit(X, Y)  # X, Y: your training data

# coef_ is 2D; take the first row of weights
f_importances(svm.coef_[0], features_names)

And the output of the function looks like this:

[Figure: feature importances]


You can do it in only one line of code.

First, fit an SVM model:

from sklearn import svm
svm = svm.SVC(gamma=0.001, C=100., kernel='linear')

and implement the plot as follows:

import pandas as pd  # features: a DataFrame of your training features

pd.Series(abs(svm.coef_[0]), index=features.columns).nlargest(10).plot(kind='barh')

The result will be:

[Figure: the most contributing features of the SVM model in absolute values]
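The one-liner above can be run end to end; a sketch using the iris dataset (the dataset and variable names here are assumptions for illustration):

```python
import pandas as pd
from sklearn import svm
from sklearn.datasets import load_iris

# Load iris as a DataFrame so the features have named columns
iris = load_iris(as_frame=True)
features = iris.data

model = svm.SVC(gamma=0.001, C=100., kernel='linear')
model.fit(features, iris.target)

# coef_[0] holds the weights of the first pairwise classifier;
# absolute values rank the features by contribution
top = pd.Series(abs(model.coef_[0]), index=features.columns).nlargest(10)
top.plot(kind='barh')
```

Note that for a multiclass problem SVC fits one-vs-one classifiers, so coef_ has one row per class pair; coef_[0] shows only the first one.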


I created a solution which also works for Python 3 and is based on Jakub Macina's code snippet.

from matplotlib import pyplot as plt
from sklearn import svm

def f_importances(coef, names, top=-1):
    imp = coef
    imp, names = zip(*sorted(list(zip(imp, names))))

    # Show all features
    if top == -1:
        top = len(names)

    plt.barh(range(top), imp[::-1][0:top], align='center')
    plt.yticks(range(top), names[::-1][0:top])
    plt.show()

# whatever your features are called
features_names = ['input1', 'input2', ...]

svm = svm.SVC(kernel='linear')
svm.fit(X_train, y_train)

# Specify the top n features you want to visualize.
# You can also discard the abs() function
# if you are interested in negative contributions of features.
f_importances(abs(svm.coef_[0]), features_names, top=10)

[Figure: feature importance]