XGBoost stands for eXtreme Gradient Boosting.
XGBoost is a decision-tree-based ensemble Machine Learning algorithm that uses a gradient boosting framework. It is an implementation of gradient boosted decision trees designed for speed and performance. XGBoost has been dominating Kaggle competitions and applied machine learning for structured or tabular data.
The XGBoost algorithm was developed as a research project at the University of Washington. Tianqi Chen and Carlos Guestrin presented their paper at the SIGKDD Conference in 2016, and it took the Machine Learning world by storm.
To understand XGBoost, we must first understand Gradient Descent and Gradient Boosting. Gradient Descent is an iterative optimization…
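Since gradient descent is the foundation here, a minimal sketch helps make it concrete. The function and learning rate below are made up for illustration: we minimize f(x) = (x - 3)² by repeatedly stepping against its derivative.

```python
# Minimal gradient descent sketch: minimize f(x) = (x - 3)^2.
# The derivative is f'(x) = 2 * (x - 3); we step against it repeatedly.

def gradient_descent(start, lr=0.1, steps=100):
    x = start
    for _ in range(steps):
        grad = 2 * (x - 3)   # derivative of (x - 3)^2 at the current x
        x -= lr * grad       # move a small step against the gradient
    return x

x_min = gradient_descent(start=0.0)
print(round(x_min, 4))  # converges toward the minimum at x = 3
```

Each iteration shrinks the distance to the minimum by a constant factor, which is exactly the "iterative optimization" idea gradient boosting builds on.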
Apache Spark is an open-source parallel processing framework for storing and processing Big Data across clustered computers. Spark can perform computations much faster than Hadoop; moreover, Hadoop and Spark can be used together efficiently. Spark is written in Scala, which executes inside a Java Virtual Machine (JVM) and is considered the primary language for interacting with the Spark Core engine, but Spark doesn’t require developers to know Scala. APIs for Java, Python, R, and Scala put Spark within reach of a wide audience of developers, and they have embraced the software.
Spark uses a master-slave…
Neural networks are among the most powerful and widely used algorithms in machine learning. A neural network works similarly to the human brain’s neural network.
Neural networks are sets of layers of highly interconnected processing elements (neurons) that perform a series of transformations on the data to generate their own understanding of it (what we commonly call features).
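The layered transformations described above can be sketched in a few lines. The weights here are made up purely for illustration: two hidden neurons each compute a weighted sum of the inputs followed by a sigmoid nonlinearity, and an output neuron does the same over the hidden activations.

```python
import math

# Toy forward pass: a hidden layer transforms the raw inputs into new
# features, then an output neuron combines those features.
# All weights are assumed values, not trained ones.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, w_out):
    # Each hidden neuron: weighted sum of inputs, then a nonlinearity.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
    # The output neuron repeats the same pattern over the hidden layer.
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

w_hidden = [[0.5, -0.6], [-0.3, 0.8]]  # 2 hidden neurons, 2 inputs each
w_out = [1.0, -1.0]
y = forward([1.0, 2.0], w_hidden, w_out)
print(0.0 < y < 1.0)  # sigmoid output always lies in (0, 1)
```

Training would adjust `w_hidden` and `w_out` via gradient descent; this sketch only shows how data flows through the layers.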
Ensemble methods are techniques that create multiple models and then combine them to produce improved results. Ensemble methods usually produce more accurate solutions than a single model would.
The main causes of error in learning are noise, bias, and variance. Ensembling helps to minimize these factors. These methods are designed to improve the stability and the accuracy of Machine Learning algorithms.
The two main types of Ensemble methods are Bagging and Boosting.
In this blog, I will explain the difference between Bagging and Boosting ensemble methods.
Bagging (short for Bootstrap Aggregating) is a parallel ensemble method; it is a…
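The two halves of bagging, bootstrap sampling and aggregating by vote, can be sketched with a deliberately trivial "model". The data and the majority-label learner below are made up to keep the focus on the ensemble mechanics, not the base model.

```python
import random
from collections import Counter

# Bagging sketch: draw bootstrap samples (sampling with replacement),
# fit a trivially simple "model" on each (here: the majority label of
# that sample), then aggregate the models by majority vote.

def bootstrap_sample(data, rng):
    # Same size as the original data, drawn with replacement.
    return [rng.choice(data) for _ in data]

def fit_majority(sample):
    # Stand-in for a real base learner such as a decision tree.
    return Counter(sample).most_common(1)[0][0]

def bagging_predict(data, n_models=25, seed=0):
    rng = random.Random(seed)
    votes = [fit_majority(bootstrap_sample(data, rng)) for _ in range(n_models)]
    return Counter(votes).most_common(1)[0][0]

labels = ["red"] * 7 + ["blue"] * 3
print(bagging_predict(labels))  # the ensemble's majority vote: red
```

In a real bagging ensemble (e.g. a random forest) each bootstrap sample trains a full decision tree, but the sample-then-vote structure is exactly this.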
MapReduce is a programming model that allows you to process your data across an entire cluster. It provides access to high-level applications using scripts in tools such as Hive and Pig, and programming languages such as Scala and Python.
MapReduce consists of Mappers and Reducers: different scripts you might write, or different functions you might use, when writing a MapReduce program. MapReduce makes use of two functions.
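The two functions are easiest to see in the classic word-count example. This is a plain-Python simulation of the three phases (it is not Hadoop code): the mapper emits (key, value) pairs, the framework groups pairs by key (the "shuffle"), and the reducer combines each key's values.

```python
from collections import defaultdict

# Word count, the canonical MapReduce example, simulated in one process.

def mapper(line):
    # Emit a (word, 1) pair for every word in the input line.
    for word in line.split():
        yield word.lower(), 1

def shuffle(pairs):
    # The framework's job: group all values belonging to the same key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reducer(key, values):
    # Combine the grouped values for one key into a single result.
    return key, sum(values)

lines = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = [pair for line in lines for pair in mapper(line)]
counts = dict(reducer(k, v) for k, v in shuffle(pairs).items())
print(counts["the"], counts["fox"])  # 3 2
```

On a real cluster the mappers and reducers run on different machines and the shuffle moves data over the network, but the logical contract is exactly these two functions.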
Reinforcement learning is an approach to machine learning that is inspired by behaviorist psychology. Reinforcement learning contrasts with other machine learning approaches in that the algorithm is not explicitly told how to perform a task, but works through the problem on its own.
Reinforcement learning differs from supervised learning: in supervised learning, the training data comes with an answer key, so the model is trained on the correct answers; in reinforcement learning, there is no answer key, and the reinforcement agent decides what to do to perform the given task. …
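A tiny tabular Q-learning sketch makes the "no answer key" point concrete. The environment below is made up: a five-state corridor where the agent only ever receives a reward of +1 for reaching the rightmost state, yet it discovers the go-right policy from that reward signal alone.

```python
import random

# Tabular Q-learning on a toy corridor: states 0..4, actions left/right,
# reward +1 only on reaching state 4. Nobody tells the agent which
# action is correct; it learns from trial, error, and reward.

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # left, right

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit, sometimes explore.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[state][i])
            nxt, reward, done = step(state, ACTIONS[a])
            # Q-learning update: nudge toward reward + discounted best next value.
            q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
            state = nxt
    return q

q = q_learning()
policy = [max((0, 1), key=lambda i: q[s][i]) for s in range(N_STATES - 1)]
print(policy)  # every non-goal state learns to prefer "right" (index 1)
```

Note how this differs from supervised learning: the training signal is a delayed scalar reward, not a labeled correct action for each state.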
Support Vector Machine (SVM) is a supervised learning algorithm that can be used for classification and regression problems. A Support Vector Machine used for classification is called Support Vector Classification (SVC), and one used for regression is called Support Vector Regression (SVR).
SVM works on the idea of finding a hyperplane that best separates features into different domains. Let's work with an example to fully understand how SVM works. Let’s imagine we have two tags: red and blue, and our data has two features: x and y.
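Using those same red and blue tags, the hyperplane idea can be sketched with made-up numbers. In 2-D the hyperplane is just a line w·x + b = 0; a point's tag is decided by which side it falls on, and its distance to the line, |w·x + b| / ‖w‖, is the margin that SVM training tries to maximize. The weights below are assumed, not fitted by an actual SVM.

```python
import math

# A fixed separating line y = x (i.e. w = (1, -1), b = 0), with points
# classified by the sign of w . x + b. Weights are illustrative only.

w, b = (1.0, -1.0), 0.0

def classify(point):
    score = w[0] * point[0] + w[1] * point[1] + b
    return "red" if score > 0 else "blue"

def distance_to_hyperplane(point):
    # Signed score divided by the norm of w gives geometric distance.
    score = w[0] * point[0] + w[1] * point[1] + b
    return abs(score) / math.hypot(*w)

print(classify((3.0, 1.0)))   # red  (below the line y = x)
print(classify((1.0, 3.0)))   # blue (above it)
print(round(distance_to_hyperplane((3.0, 1.0)), 3))  # 1.414
```

A real SVM would choose `w` and `b` so that the nearest points of each tag (the support vectors) sit as far from the line as possible.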
Before talking about Hadoop, I think we should talk about Big Data. Big data is a term used for incredibly large datasets that cannot be stored or processed efficiently with traditional methods.
Hadoop is an open-source framework used to solve big data problems efficiently. Hadoop allows distributed processing of large data sets across clusters of computers using simple programming models.
Hadoop Ecosystem is a platform or a suite that provides various services to solve big data problems. It includes Apache projects and various commercial tools and solutions.
In this blog, I will introduce all the components of the Hadoop ecosystem.
Decision Tree is a supervised machine learning algorithm. Decision trees can be used for regression and classification tasks. A decision tree is a tree-structured classifier in which the data is repeatedly split according to a certain parameter.
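The core splitting operation can be sketched in a few lines. The data below is made up: we pick the threshold on one feature that best separates the labels, scored by Gini impurity; a full tree just applies this search recursively to each resulting subset.

```python
from collections import Counter

# Find the best single split on one feature, scored by Gini impurity.
# Data is illustrative only.

def gini(labels):
    # 1 - sum of squared class proportions: 0.0 means a pure node.
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(values, labels):
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        if not left or not right:
            continue
        # Impurity of the children, weighted by how many points each gets.
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (t, score)
    return best

heights = [1.2, 1.4, 1.5, 1.8, 1.9, 2.0]
labels = ["no", "no", "no", "yes", "yes", "yes"]
threshold, impurity = best_split(heights, labels)
print(threshold, impurity)  # 1.5 0.0  (a perfect split)
```

Real implementations also consider every feature, stop on depth or sample-count limits, and prune, but each internal node is chosen by exactly this kind of search.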
In Data Science, Clustering is the most common form of unsupervised learning. Clustering is a Machine Learning technique that involves the grouping of data points. Unlike Regression and Classification, we don’t have a target variable in Clustering. Since Clustering is unsupervised, we cannot calculate error or accuracy the way we would for supervised models. In this blog, I will talk about different metrics to evaluate Clustering algorithms.
Clustering is evaluated based on some similarity or dissimilarity measures such as distance between cluster points. …
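One widely used distance-based metric is the silhouette coefficient, which can be sketched directly. The two-cluster toy data below is made up: for each point, a is the mean distance to its own cluster and b is the mean distance to the nearest other cluster, and the score (b - a) / max(a, b) approaches +1 for tight, well-separated clusters.

```python
import math

# Silhouette coefficient sketch over a pre-computed clustering.
# Toy data: two compact 2-D clusters placed far apart.

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mean_dist(p, points):
    return sum(dist(p, q) for q in points) / len(points)

def silhouette(clusters):
    scores = []
    for i, cluster in enumerate(clusters):
        for p in cluster:
            others = [q for q in cluster if q is not p]
            # a: cohesion (distance within own cluster),
            # b: separation (distance to the nearest other cluster).
            a = mean_dist(p, others) if others else 0.0
            b = min(mean_dist(p, c) for j, c in enumerate(clusters) if j != i)
            scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

clusters = [[(0.0, 0.0), (0.0, 1.0)], [(10.0, 10.0), (10.0, 11.0)]]
print(silhouette(clusters) > 0.9)  # True: tight, well-separated clusters
```

Scores near 0 mean overlapping clusters and negative scores mean points likely sit in the wrong cluster, which is what makes silhouette useful without any ground-truth labels.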