
Early Stopping in Deep Learning

Early Stopping

  1. We stop the training process when we no longer see any improvement in the validation error at the end of an epoch.
  2. Key parameters:
     1. Patience – how many epochs of no improvement to wait before finally stopping training.
     2. Delta – the minimum change in the monitored metric that counts as a real improvement. For example, a 0.000001% reduction in validation error is so minor that it should not be treated as an improvement.
     3. Keep best weights – say the validation error keeps decreasing from epoch 1 to 10 and starts increasing after epoch 10. With a patience of 4, we wait until epoch 14 before stopping training. In this scenario, the best validation error was at the end of epoch 10, so we keep the weights from epoch 10.
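The logic above can be sketched as a small tracker object that is queried at the end of each epoch. This is a minimal illustration, not a specific framework's implementation; the class and method names are chosen for this sketch:

```python
class EarlyStopping:
    """Minimal early-stopping tracker (illustrative sketch)."""

    def __init__(self, patience=4, delta=0.0, keep_best_weights=True):
        self.patience = patience            # epochs of no improvement to tolerate
        self.delta = delta                  # minimum decrease that counts as improvement
        self.keep_best_weights = keep_best_weights
        self.best_error = float("inf")
        self.best_weights = None
        self.wait = 0                       # epochs since the last real improvement

    def update(self, val_error, weights):
        """Call at the end of each epoch; returns True when training should stop."""
        if self.best_error - val_error > self.delta:
            # Real improvement: reset the counter and remember the best weights.
            self.best_error = val_error
            self.wait = 0
            if self.keep_best_weights:
                self.best_weights = weights
        else:
            self.wait += 1
        return self.wait >= self.patience


# Reproduce the slide's scenario: error falls through epoch 10, then rises.
es = EarlyStopping(patience=4)
errors = [1.0 - 0.05 * i for i in range(10)] + [0.60, 0.65, 0.70, 0.75]
for epoch, err in enumerate(errors, start=1):
    if es.update(err, f"weights@{epoch}"):
        print(f"stopped at epoch {epoch}, best weights from {es.best_weights}")
        break
```

With patience 4, training stops at epoch 14 while the retained weights are those from epoch 10, matching the example in the slide. In Keras, the same behavior is available through the built-in `EarlyStopping` callback via its `patience`, `min_delta`, and `restore_best_weights` arguments.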
