Which parameters should be used for early stopping?



Early stopping is basically stopping the training once your loss starts to increase (or, in other words, once your validation accuracy starts to decrease). According to the documentation, it is used as follows:

keras.callbacks.EarlyStopping(monitor='val_loss',
                              min_delta=0,
                              patience=0,
                              verbose=0, mode='auto')

Values depend on your implementation (problem, batch size, etc.), but generally, to prevent overfitting, I would use the following:

  1. Monitor the validation loss (you need to use cross-validation or at least train/test sets) by setting the monitor argument to 'val_loss'.
  2. min_delta is a threshold that decides whether to quantify the loss at some epoch as an improvement or not. If the difference in loss is below min_delta, it is quantified as no improvement. It is better to leave it at 0, since we're interested in when the loss becomes worse.
  3. The patience argument represents the number of epochs before stopping once your loss starts to increase (stops improving). This depends on your implementation; if you use very small batches or a large learning rate your loss will zig-zag (accuracy will be noisier), so it is better to set a large patience argument. If you use large batches and a small learning rate your loss will be smoother, so you can use a smaller patience argument. Either way, I'll leave it at 2 to give the model more of a chance.
  4. verbose decides what to print; leave it at the default (0).
  5. The mode argument depends on which direction your monitored quantity goes (is it supposed to be decreasing or increasing?). Since we monitor the loss, we can use min. But let's let Keras handle that for us and set it to auto.

So I would use something like this and experiment by plotting the validation loss with and without early stopping.

keras.callbacks.EarlyStopping(monitor='val_loss',
                              min_delta=0,
                              patience=2,
                              verbose=0, mode='auto')
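
To run that comparison, a minimal sketch could look like the following, assuming you already have a compiled Keras model called model and NumPy arrays x_train and y_train (those names are placeholders, not from the original question):

import matplotlib.pyplot as plt
import keras

es = keras.callbacks.EarlyStopping(monitor='val_loss',
                                   min_delta=0,
                                   patience=2,
                                   verbose=0, mode='auto')

# Train once with the callback and once without; in practice you should
# re-initialise (rebuild) the model between the two runs so the comparison is fair.
history_es = model.fit(x_train, y_train, validation_split=0.2, epochs=50, callbacks=[es])
history_plain = model.fit(x_train, y_train, validation_split=0.2, epochs=50)

# Compare the validation loss curves to see where early stopping cut training short
plt.plot(history_es.history['val_loss'], label='with early stopping')
plt.plot(history_plain.history['val_loss'], label='without early stopping')
plt.xlabel('epoch')
plt.ylabel('val_loss')
plt.legend()
plt.show()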

In case there is any ambiguity about how callbacks work, I'll try to explain more. Once you call fit(..., callbacks=[es]) on your model, Keras calls the given callback objects' predetermined functions. These functions can be on_train_begin, on_train_end, on_epoch_begin, on_epoch_end, on_batch_begin and on_batch_end. The early stopping callback is called at every epoch end; it compares the best monitored value with the current one and stops if the conditions are met (how many epochs have passed since the best monitored value was observed and whether that is more than the patience argument, whether the difference from the last value is bigger than min_delta, etc.).
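
To make those hook points concrete, here is a small illustrative callback (my own sketch, not part of Keras) that only logs when each hook fires; EarlyStopping implements its comparison logic inside exactly this kind of on_epoch_end hook:

import keras

class LoggingCallback(keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        print('training started')

    def on_epoch_end(self, epoch, logs=None):
        # EarlyStopping does its work here: compare logs['val_loss'] with the
        # best value seen so far and count how many epochs passed without improvement.
        print('epoch', epoch, 'val_loss:', logs.get('val_loss'))

    def on_train_end(self, logs=None):
        print('training finished')

# Usage: model.fit(x_train, y_train, validation_split=0.2, callbacks=[LoggingCallback()])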

As pointed out by @BrentFaust in the comments, the model's training will continue until either the early stopping conditions are met or the epochs parameter (default=10) in fit() is satisfied. Setting an early stopping callback will not make the model train beyond its epochs parameter, so calling fit() with a larger epochs value would benefit more from the EarlyStopping callback.
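
In other words, you can set epochs generously and let the callback cut training short when the monitored value stops improving; a sketch, using the same placeholder names as above:

es = keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=2, verbose=0, mode='auto')

# With epochs=5 the callback may never get a chance to trigger;
# with epochs=100, training simply stops as soon as the patience runs out.
model.fit(x_train, y_train, validation_split=0.2, epochs=100, callbacks=[es])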


Here's an example of EarlyStopping from another project, AutoKeras (https://autokeras.com/), an automated machine learning (AutoML) library. The library sets two EarlyStopping parameters: patience=10 and min_delta=1e-4:

https://github.com/keras-team/autokeras/blob/5e233956f32fddcf7a6f72a164048767a0021b9a/autokeras/engine/tuner.py#L170

The default quantity to monitor for both AutoKeras and Keras is val_loss:

https://github.com/keras-team/keras/blob/cb306b4cc446675271e5b15b4a7197efd3b60c34/keras/callbacks.py#L1748

https://autokeras.com/image_classifier/
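
If you want to mirror those AutoKeras settings in your own training code, the equivalent callback configuration would look roughly like this (a sketch with placeholder names, not AutoKeras' actual code):

es = keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=1e-4, patience=10)
model.fit(x_train, y_train, validation_split=0.2, epochs=200, callbacks=[es])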