Linear Regression and Gradient Descent in scikit-learn?
scikit-learn provides two approaches to linear regression:
- `LinearRegression` uses an Ordinary Least Squares solver from scipy; linear regression is one of the few models with a closed-form solution, so despite what an ML course might suggest, you can fit it exactly by just inverting and multiplying some matrices.
- `SGDRegressor` is a generic implementation of stochastic gradient descent in which you can choose the loss and penalty terms. To obtain linear regression, set the loss to squared error and the penalty to none (plain linear regression) or to L2 (Ridge regression).
There is no "typical" (batch) gradient descent solver because it is rarely used in practice. If you can decompose your loss function into additive per-sample terms, the stochastic approach is known to behave better (hence SGD), and if you can spare enough memory, the OLS method is faster and easier (hence the first solution).
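The "inverting and multiplying some matrices" remark above refers to the normal equations, w = (XᵀX)⁻¹Xᵀy. A small numpy sketch (noise-free data chosen so the recovery is exact; in practice one uses `np.linalg.solve`/`lstsq` rather than an explicit inverse for numerical stability):

```python
import numpy as np

# Noise-free toy data with weights [1.5, -2.0] and intercept 0.5.
rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(100, 2))
y = X @ np.array([1.5, -2.0]) + 0.5

# Append a column of ones so the intercept is learned as an extra weight.
Xb = np.hstack([X, np.ones((len(X), 1))])

# Normal equations: solve (X^T X) w = X^T y instead of forming the inverse.
w = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)
print(w)  # ≈ [ 1.5, -2.0, 0.5 ]
```

This is essentially what the closed-form route does internally, which is why it is fast whenever XᵀX fits in memory.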