# Types of Probability Distributions

## What is a Probability Distribution?

A probability distribution is a statistical function that describes all the possible values a random variable can take within a given range, along with the likelihood of each. This range is bounded by the minimum and maximum possible values, but where a particular value is likely to fall within it depends on a number of factors. These factors include the distribution's mean (average), standard deviation, skewness, and kurtosis.
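The four factors above can be estimated directly from a sample. The sketch below computes them with only the standard library; the function name and the use of population (rather than sample) formulas are illustrative choices, not from any particular library.

```python
import math

def moments(xs):
    """Estimate the mean, standard deviation, skewness, and excess kurtosis
    of a sample -- the four factors that shape a distribution's plot."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    std = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in xs) / (n * std ** 3)
    # Subtracting 3 gives *excess* kurtosis, so the normal distribution scores 0.
    kurt = sum((x - mean) ** 4 for x in xs) / (n * var ** 2) - 3
    return mean, std, skew, kurt
```

For a symmetric sample such as `[1, 2, 3, 4, 5]`, the skewness comes out as 0, as expected.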

The most commonly used types of distributions are the following:

## Bernoulli Distribution

The Bernoulli distribution is one of the easiest distributions to understand and can be used as a starting point to derive more complex distributions. It is the discrete probability distribution of a random variable that takes a binary, boolean output: 1 with probability p, and 0 with probability (1-p). The idea is that, whenever you run an experiment that can lead either to a success or to a failure, you associate with the success (labeled 1) a probability p, while the failure (labeled 0) has probability (1-p).
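The probability mass function described above is simple enough to write out directly. This is a minimal sketch; the function name is illustrative.

```python
def bernoulli_pmf(k, p):
    """P(X = k) for a Bernoulli random variable:
    p for success (k = 1), 1 - p for failure (k = 0)."""
    if k not in (0, 1):
        raise ValueError("Bernoulli outcomes are 0 or 1")
    return p if k == 1 else 1 - p
```

For example, a biased coin with p = 0.3 lands heads (1) with probability 0.3 and tails (0) with probability 0.7.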

## Uniform Distribution

Uniform distribution is a form of probability distribution where every possible outcome has an equal likelihood of happening. The probability is constant since each variable has equal chances of being the outcome.

### Types of Uniform Distribution

**Discrete Uniform Distribution:** A discrete uniform distribution is a statistical distribution in which a finite number of outcomes are all equally likely. A good example of a discrete uniform distribution would be the possible outcomes of rolling a 6-sided die.

**Continuous Uniform Distribution:** A continuous uniform distribution is a statistical distribution with an infinite number of equally likely measurable values. Unlike discrete random variables, a continuous random variable can take any real value within a specified range. A good example of a continuous uniform distribution is an idealized random number generator.
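Both variants above can be sketched with a few lines of code. The discrete case spreads probability 1/n over n integer outcomes (the die example); the continuous case spreads a constant density 1/(b-a) over the interval [a, b]. Function names here are illustrative.

```python
def discrete_uniform_pmf(k, a, b):
    """P(X = k) when X is uniform on the integers a..b (a die: a=1, b=6)."""
    n = b - a + 1  # number of equally likely outcomes
    return 1 / n if a <= k <= b else 0.0

def continuous_uniform_pdf(x, a, b):
    """Density of a uniform random variable on [a, b]: constant 1/(b-a) inside,
    0 outside -- every sub-interval of equal length is equally likely."""
    return 1 / (b - a) if a <= x <= b else 0.0
```

Note the difference in interpretation: the discrete PMF is a probability, while the continuous PDF is a density, so any single exact value of a continuous variable has probability zero.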

## Binomial Distribution

The binomial distribution is a probability distribution that summarizes the likelihood of obtaining a given number of successes in a fixed number of trials, where each trial has exactly two possible outcomes. The underlying assumptions of the binomial distribution are that there is only one outcome for each trial, that each trial has the same probability of success, and that the trials are independent of one another.
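Under those assumptions, the probability of exactly k successes in n trials is the number of ways to place the k successes, times the probability of each such arrangement. A minimal sketch using the standard library:

```python
import math

def binomial_pmf(k, n, p):
    """P(exactly k successes in n independent trials,
    each with success probability p)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)
```

As a sanity check, the probabilities over all possible k from 0 to n sum to 1.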

## Poisson Distribution

A Poisson distribution is a probability distribution that can be used to show how many times an event is likely to occur within a specified period of time. In other words, it is a count distribution. Poisson distributions are often used to understand independent events that occur at a constant rate within a given interval of time. The distribution was named after the French mathematician Siméon Denis Poisson. Much economic and financial data comes as count variables, such as how many times a person becomes unemployed in a given year, lending itself to analysis with a Poisson distribution.
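For a Poisson distribution, the probability of observing exactly k events in an interval depends only on the average rate (often written λ). A minimal sketch, with `lam` standing in for λ:

```python
import math

def poisson_pmf(k, lam):
    """P(exactly k events in an interval when events
    arrive independently at an average rate of lam per interval)."""
    return lam ** k * math.exp(-lam) / math.factorial(k)
```

For instance, with an average of 2 events per interval, the chance of seeing no events at all is e^(-2), about 13.5%.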

## Exponential Distribution

The exponential distribution (also called the negative exponential distribution) is a probability distribution that describes the time between events in a Poisson process.
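The waiting time between events in such a process has density rate·e^(-rate·x). A minimal sketch of the density and cumulative distribution functions (names are illustrative):

```python
import math

def exponential_pdf(x, rate):
    """Density of the waiting time between events
    in a Poisson process with the given event rate."""
    return rate * math.exp(-rate * x) if x >= 0 else 0.0

def exponential_cdf(x, rate):
    """P(waiting time <= x): the probability that
    the next event arrives within x time units."""
    return 1 - math.exp(-rate * x) if x >= 0 else 0.0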

## Normal Distribution

The Normal Distribution is one of the most widely used distributions in Data Science. Many common phenomena in daily life approximately follow Normal Distributions, such as income distribution in an economy, students' average grades, and average heights in populations. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.

Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors, often have distributions that are nearly normal.
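The familiar bell curve is given by the normal density function, which depends only on the mean μ and standard deviation σ. A minimal sketch using the standard library:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution with mean mu
    and standard deviation sigma (the bell curve)."""
    z = (x - mu) / sigma  # distance from the mean in standard deviations
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))
```

The curve is symmetric about the mean, so points one standard deviation above and below the mean have equal density.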