Ensemble Methods: Bagging vs Boosting

Jagandeep Singh
3 min read · Feb 6, 2021

Ensemble Methods

Ensemble methods are techniques that create multiple models and then combine them to produce improved results. They usually produce more accurate solutions than a single model would.

The main causes of error in learning are noise, bias, and variance. Ensembling helps to minimize these factors. These methods are designed to improve both the stability and the accuracy of Machine Learning algorithms.

The two main types of Ensemble methods are Bagging and Boosting.

In this blog, I will explain the difference between Bagging and Boosting ensemble methods.

Bagging

Bagging (short for Bootstrap Aggregating) is a parallel ensemble method that decreases the variance of a prediction model by generating additional training sets during the training stage. These sets are produced by random sampling with replacement from the original data. Because the sampling is with replacement, some observations may be repeated within each new training set, and in Bagging every element has the same probability of appearing in a new dataset. Resampling in this way does not, by itself, improve the model’s predictive force; rather, combining the models trained on the bootstrap samples decreases the variance and tunes the aggregated prediction toward the expected outcome.
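The idea can be sketched in a few lines of plain Python: draw bootstrap samples with replacement, train one weak model per sample, and combine their predictions by majority vote. This is a minimal illustration with a toy 1-D dataset and a trivial threshold "stump" as the base model; the data, the stump, and all names here are assumptions for demonstration only, not a production implementation.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Toy 1-D dataset (assumption: illustrative data only):
# points x = 0..9, labeled 1 when x > 5, else 0
data = [(x, 1 if x > 5 else 0) for x in range(10)]

def bootstrap_sample(dataset):
    """Draw len(dataset) points with replacement (a bootstrap sample).
    Some observations may repeat; each has equal probability."""
    return random.choices(dataset, k=len(dataset))

def train_stump(sample):
    """Fit a trivial threshold 'model' (hypothetical base learner):
    predict 1 when x reaches the mean x of the positive examples
    seen in this particular bootstrap sample."""
    positives = [x for x, y in sample if y == 1]
    if not positives:  # degenerate sample with no positive labels
        return lambda x: 0
    threshold = sum(positives) / len(positives)
    return lambda x: 1 if x >= threshold else 0

# Bagging: train the stumps in parallel-style independence,
# one per bootstrap sample, then aggregate by majority vote
models = [train_stump(bootstrap_sample(data)) for _ in range(25)]

def bagged_predict(x):
    votes = sum(m(x) for m in models)
    return 1 if votes > len(models) / 2 else 0

print(bagged_predict(2))  # a point well below every learned threshold
```

Each stump on its own is a high-variance learner whose threshold jumps around with the sample it sees; averaging 25 of them smooths those fluctuations out, which is exactly the variance reduction the paragraph above describes.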
