
Commonly used regularization techniques include L1 (Lasso), L2 (Ridge), and Elastic Net, which combines the two. Each has a hyperparameter that controls how strong the regularization term is.
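As a sketch of how that strength hyperparameter enters each penalty term (the weight vector `w`, the value `alpha`, and the `l1_ratio` mixing convention below are illustrative assumptions, not taken from any particular library):

```python
import numpy as np

def l1_penalty(w, alpha):
    # Lasso: alpha times the sum of absolute weights
    return alpha * np.sum(np.abs(w))

def l2_penalty(w, alpha):
    # Ridge: alpha times the sum of squared weights
    return alpha * np.sum(w ** 2)

def elastic_net_penalty(w, alpha, l1_ratio=0.5):
    # Elastic Net: a convex mix of the L1 and L2 penalties,
    # following one common convention (e.g. scikit-learn's)
    return alpha * (l1_ratio * np.sum(np.abs(w))
                    + 0.5 * (1 - l1_ratio) * np.sum(w ** 2))

w = np.array([0.5, -2.0, 0.0, 1.5])
print(l1_penalty(w, alpha=0.1))  # larger alpha -> larger penalty
print(l2_penalty(w, alpha=0.1))
```

Increasing `alpha` makes the penalty dominate the loss, pushing weights toward zero; setting it to zero recovers the unregularized model.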

Regularization prevents the model from fitting the training data too closely by adding a penalty term to the loss function. This reduces the complexity of the model and improves its ability to generalize. The goal is to strike a balance between the model's complexity and its goodness of fit.
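A minimal sketch of how a penalty term changes the fit, using ridge (L2) regression in closed form; the synthetic data and the strength `lam=10.0` here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=50)

def ridge_fit(X, y, lam):
    # Minimizes ||y - Xw||^2 + lam * ||w||^2 via the normal equations:
    # (X^T X + lam * I) w = X^T y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

w_ols = ridge_fit(X, y, lam=0.0)     # no penalty: ordinary least squares
w_ridge = ridge_fit(X, y, lam=10.0)  # penalty shrinks the weights toward zero
print(np.linalg.norm(w_ols), np.linalg.norm(w_ridge))
```

The penalized solution always has a smaller (or equal) weight norm than the unpenalized one, which is exactly the complexity/goodness-of-fit trade-off described above.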

Regularization is a machine learning technique that aims to prevent overfitting by adding a penalty term to the loss function. An overfit model is too complicated: it fits the training data so closely that it captures noise or random fluctuations rather than the underlying patterns, which leads to poor generalization performance on fresh, unseen data.
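As a small illustration of overfitting (the synthetic sine data and the polynomial degrees below are assumptions for demonstration): a higher-capacity model drives training error down by fitting the noise, which is precisely what the penalty term is meant to discourage.

```python
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 15)
x_test = np.linspace(0.02, 0.98, 50)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=x_train.size)
y_test = np.sin(2 * np.pi * x_test)  # noise-free ground truth

def poly_mse(degree):
    # Fit a polynomial of the given degree and report train/test MSE
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

tr3, te3 = poly_mse(3)  # moderate capacity
tr9, te9 = poly_mse(9)  # high capacity: starts memorizing the noise
print(f"train MSE deg 3: {tr3:.4f}, deg 9: {tr9:.4f}")
```

Training error shrinks as the degree grows, while test error typically rises once the model begins tracking random fluctuations instead of the underlying sine curve.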
