In probability theory and statistics, the binomial distribution is the discrete probability distribution of the number of successes in a sequence of n independent yes/no experiments, each of which yields success with probability p. The binomial distribution is the basis for the popular binomial test of statistical significance.
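As a quick illustration of the binomial test (the figures below, 18 successes in 25 trials against a hypothesized p = 0.5, are invented for this sketch and do not come from the post), one common way to define the exact two-sided p-value is the total probability of all outcomes that are no more likely than the observed count:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials, each succeeding with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_test_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: add up the probabilities of every outcome
    that is no more likely than the observed count k (a small tolerance guards
    against floating-point ties)."""
    observed = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if binom_pmf(i, n, p) <= observed + 1e-12)

# Hypothetical data: 18 successes in 25 trials, tested against p = 0.5.
print(binom_test_two_sided(18, 25, 0.5))   # ≈ 0.043, modest evidence against p = 0.5
```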

Binomial Probability Distribution

This is a graphic representation of a binomial probability distribution.
The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution is a good approximation and is widely used.
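A small simulation can make the with/without-replacement distinction concrete. The population size, success proportion, and sample size below are arbitrary choices for this sketch:

```python
import random

N, n = 10_000, 20                       # population size and sample size (N >> n)
successes_in_population = 3_000         # 30% of the population are "successes"
population = [1] * successes_in_population + [0] * (N - successes_in_population)

def sample_with_replacement():
    # independent draws: the number of successes is binomial
    return sum(random.choice(population) for _ in range(n))

def sample_without_replacement():
    # dependent draws: the number of successes is hypergeometric
    return sum(random.sample(population, n))

trials = 20_000
for draw in (sample_with_replacement, sample_without_replacement):
    estimate = sum(draw() == 6 for _ in range(trials)) / trials
    print(draw.__name__, estimate)
# Both estimates of P(exactly 6 successes) come out around 0.19,
# because N is much larger than n.
```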
In general, if the random variable X follows the binomial distribution with parameters n and p, we write X ~ B(n, p). The probability of getting exactly k successes in n trials is given by the probability mass function

\Pr(X = k) = \binom{n}{k} p^k (1 - p)^{n - k},  for k = 0, 1, 2, ..., n,

where

\binom{n}{k} = \frac{n!}{k!\,(n - k)!}

is the binomial coefficient (hence the name of the distribution), read "n choose k" and also denoted C(n, k) or nCk. The formula can be understood as follows: we want k successes (probability p^k) and n − k failures (probability (1 − p)^(n − k)); however, the k successes can occur anywhere among the n trials, and there are C(n, k) different ways of distributing k successes in a sequence of n trials.
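The counting argument can be checked by brute force for a small case. The values n = 5, p = 0.3, k = 2 below are arbitrary; the sketch enumerates every success/failure sequence and compares the total probability with the closed-form expression:

```python
from itertools import product
from math import comb

n, p, k = 5, 0.3, 2

# Add up the probability of every length-n sequence containing exactly k successes;
# each such sequence has probability p^k * (1 - p)^(n - k).
brute_force = sum(
    p**k * (1 - p)**(n - k)
    for seq in product([0, 1], repeat=n)
    if sum(seq) == k
)

formula = comb(n, k) * p**k * (1 - p)**(n - k)
print(brute_force, formula)   # both ≈ 0.3087
```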
One straightforward way to simulate a binomial random variable X is to compute the sum of n independent 0–1 random variables, each of which takes on the value 1 with probability p. This method requires n calls to a random number generator to obtain one value of the random variable. When n is relatively large (say at least 30), the Central Limit Theorem implies that the binomial distribution is well approximated by the corresponding normal density function with mean np and variance np(1 − p).
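A minimal sketch of this simulation approach, together with the normal approximation mentioned above (the choice of n = 100, p = 0.5, the threshold k = 55, and the number of replications are arbitrary):

```python
import random
from math import comb, sqrt, erf

def simulate_binomial(n, p):
    """Sum of n independent 0-1 (Bernoulli) variables, each equal to 1 with probability p."""
    return sum(random.random() < p for _ in range(n))

n, p, k = 100, 0.5, 55

# Exact probability P(X <= k) from the probability mass function.
exact = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Monte Carlo estimate of the same probability.
reps = 50_000
simulated = sum(simulate_binomial(n, p) <= k for _ in range(reps)) / reps

# Normal approximation with mean np and variance np(1 - p), using a continuity correction.
mu, sigma = n * p, sqrt(n * p * (1 - p))
normal_approx = 0.5 * (1 + erf((k + 0.5 - mu) / (sigma * sqrt(2))))

print(exact, simulated, normal_approx)   # all ≈ 0.86
```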

Figures from the Example

Table 1

These are the four possible outcomes from flipping a coin twice: HH, HT, TH, and TT.

Table 2

These are the probabilities of the 2 coin flips, counted by the number of heads: P(0 heads) = 1/4, P(1 head) = 1/2, P(2 heads) = 1/4.
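The two tables can be reproduced with a few lines of code; a fair coin (p = 0.5) is assumed, as in the example:

```python
from itertools import product
from collections import Counter

# Table 1: the four equally likely outcomes of two flips of a fair coin.
outcomes = ["".join(flips) for flips in product("HT", repeat=2)]
print(outcomes)                          # ['HH', 'HT', 'TH', 'TT']

# Table 2: probability of each possible number of heads.
heads_counts = Counter(outcome.count("H") for outcome in outcomes)
for heads, ways in sorted(heads_counts.items()):
    print(heads, "heads:", ways / len(outcomes))
# 0 heads: 0.25, 1 heads: 0.5, 2 heads: 0.25
```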


Related Posts:

  • Terminology of Statistics F-distribution The ratio of two independent chi-square variables divided by their respective degrees of freedom. If the population variances are equal, this simplifies to be the ratio of the sample variances. Analysis o… Read More
  • Tukey Test When the decision from the One-Way Analysis of Variance is to reject the null hypothesis, it means that at least one of the means isn't the same as the other means. What we need is a way to figure out where the differences … Read More
  • Scheffé Test When the decision from the One-Way Analysis of Variance is to reject the null hypothesis, it means that at least one of the means isn't the same as the other means. What we need is a way to figure out where the differences … Read More
  • F-Test F-Test The F-distribution is formed by the ratio of two independent chi-square variables divided by their respective degrees of freedom. Since F is formed by chi-square, many of the chi-square properties carry over to… Read More
  • One-Way Analysis of Variance A One-Way Analysis of Variance is a way to test the equality of three or more means at one time by using variances. Assumptions The populations from which the samples were obtained must be normally or approximately norm… Read More

0 Comments:

Powered by Blogger.

Visitors

197863
Print Friendly Version of this pagePrint Get a PDF version of this webpagePDF


 download University Notes apps for android

Popular Posts

Flag Counter