Is there an easy way to grid search without cross-validation in Python?
I would really advise against using OOB to evaluate a model, but it is useful to know how to run a grid search outside of GridSearchCV()
(I frequently do this so I can save the CV predictions from the best grid for easy model stacking). I think the easiest way is to create your grid of parameters via ParameterGrid()
and then loop through every set of params. For example, assuming you have a grid dict named "grid" and an RF model object named "rf", you can do something like this:
```python
from sklearn.model_selection import ParameterGrid

# note: rf must be built with oob_score=True for oob_score_ to exist
best_score = 0

for g in ParameterGrid(grid):
    rf.set_params(**g)
    rf.fit(X, y)
    # save if best
    if rf.oob_score_ > best_score:
        best_score = rf.oob_score_
        best_grid = g

print("OOB: %0.5f" % best_score)
print("Grid:", best_grid)
```
One method is to use ParameterGrid
to make an iterator of the parameter combinations you want and loop over it.
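For example, ParameterGrid expands a dict of parameter lists into every combination (the grid values here are made up for illustration):

```python
from sklearn.model_selection import ParameterGrid

# hypothetical grid: 2 x 2 = 4 parameter combinations
grid = {"n_estimators": [100, 200], "max_features": ["sqrt", None]}

for g in ParameterGrid(grid):
    # each g is a plain dict, e.g. {'max_features': 'sqrt', 'n_estimators': 100},
    # ready to be passed to estimator.set_params(**g)
    print(g)
```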
Another thing you could do is configure GridSearchCV itself to do what you want. I wouldn't recommend this because it's unnecessarily complicated.
What you would need to do is:

- Use the cv arg from the docs and give it a generator which yields a single tuple containing all indices (so that train and test are the same).
- Change the scoring arg to a callable that returns the OOB score from the random forest.
See this answer: https://stackoverflow.com/a/44682305/2202107
It uses cv=[(slice(None), slice(None))],
which is NOT recommended by sklearn's authors.
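For completeness, here is a sketch of that GridSearchCV trick using explicit index arrays instead of slice(None). The dataset, grid values, and the oob_scorer helper are made up for illustration; the key ideas are the single train==test "fold" passed to cv and a callable scorer that reads oob_score_ off the fitted estimator:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, random_state=0)

def oob_scorer(estimator, X, y):
    # ignore the (identical) "test" split and report the out-of-bag score
    return estimator.oob_score_

idx = np.arange(len(X))
search = GridSearchCV(
    RandomForestClassifier(oob_score=True, bootstrap=True, random_state=0),
    param_grid={"n_estimators": [50, 100]},  # hypothetical grid
    scoring=oob_scorer,
    cv=[(idx, idx)],  # one "fold" where train and test are the same indices
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Each candidate is then fit once on the full data, and the reported score is its OOB estimate.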