
In statistics, a confidence interval (CI) is a kind of interval estimate of a population parameter. Repeatedly measuring (sampling) the parameter yields similar, but not identical, values, and the confidence interval is an interval likely to contain the parameter. How likely the interval is to contain the parameter is determined by the confidence level (confidence coefficient); increasing the desired confidence level widens the confidence interval. Confidence intervals are used to indicate how reliable an estimate of the measured value is.

A confidence interval is always qualified by a particular confidence level, usually expressed as a percentage. In science the 95 % confidence interval is the most common, but in medicine much higher confidence levels may be required.

The calculation of a confidence interval generally depends on assumptions about the distribution of the population (most often that it is normal) and may depend on further assumptions as well. Hence confidence intervals are often not robust statistics, though modifications can be made to add robustness.

Confidence intervals are highly important when publishing the results of scientific research, as they indicate that the observed differences between measurements are not just due to chance. To show that an observed difference (say, between the average weights of the African and Indian elephant) is statistically significant, the researcher must both repeat the experiment a sufficient number of times (measure enough elephants) and observe sufficiently different results. Fewer repetitions are required to show that an elephant and a mouse differ in weight, as large observed differences are likely to be significant after a minimal number of repetitions. This also points to one of the drawbacks of confidence intervals: it is possible to demonstrate a "statistical difference" between the means of two data sets that are actually not very different, if the sets are large enough. Such tiny differences in large data sets may in the end simply reflect the fact that no two objects in the Universe are absolutely identical. Hence a researcher must also look at *how much* the data differ, not just whether they differ statistically.

The same reasoning makes confidence intervals important when processing survey results, and in other cases where it matters that a result is not obtained "just by chance".

Statistics is a wide field, and the methods used differ depending on the application area. The method described below is typically used by biologists. It shows how to compute the 95 % confidence interval for n measurements [[Math:c|x_1 .. x_n]], assuming that the data follow a normal distribution.

1. Compute the mean:

- [[Math:c|\mu = \frac{1}{n}\cdot \sum_{i=1}^n{x_i}]]
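Step 1 can be sketched in a few lines of Python (the sample values are made up for illustration):

```python
# Minimal sketch of step 1: the arithmetic mean of n measurements.
# The data values below are hypothetical.
def mean(xs):
    """Arithmetic mean of a non-empty sequence of numbers."""
    return sum(xs) / len(xs)

data = [4.9, 5.1, 5.0, 5.2, 4.8]
print(mean(data))
```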

2. Compute the standard deviation of the sample:

- [[Math:c|\sigma = \sqrt{\frac{1}{n - 1} \sum_{i=1}^n (x_i - \mu)^2}]]

Using n − 1 (instead of n) is known as Bessel's correction: dividing by n underestimates the population variance, while dividing by n − 1 gives an unbiased estimate of the variance (the square root still slightly underestimates the standard deviation).
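A sketch of step 2 in Python; it follows the formula above and applies Bessel's correction, just as the standard-library routine `statistics.stdev` does:

```python
import math

# Sketch of step 2: sample standard deviation with Bessel's correction
# (division by n - 1 rather than n).
def sample_std(xs):
    n = len(xs)
    mu = sum(xs) / n
    return math.sqrt(sum((x - mu) ** 2 for x in xs) / (n - 1))

print(sample_std([2, 4, 4, 4, 5, 5, 7, 9]))
```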

The standard deviation of the sample shows how much the individual results vary, but it does not decrease with more measurements: the results do not vary less just because we measure more often.

3. Compute the standard deviation of the mean, which does decrease when we have more measurements:

- [[Math:c|\sigma_{\mu} = \frac \sigma {\sqrt n}]]

4. Look up the Student coefficient for the 95 % confidence level. It depends on the confidence level and the number of measurements but not on the observed values. It does not vary much: for six or more measurements it lies between about 2.57 and its limiting value of 1.96. See the Wikipedia t-table under the "95 % two-sided" column for exact values.
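For reference, a few two-sided 95 % Student coefficients, copied from standard t tables (SciPy users can compute them for any number of degrees of freedom with `scipy.stats.t.ppf(0.975, df)`):

```python
# Two-sided 95 % Student coefficients t_{df, 95} for small degrees of
# freedom df = n - 1, taken from a standard t table.
T95 = {
    1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
    6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262, 10: 2.228,
}
# As df grows, the coefficient approaches the normal-distribution value 1.960.
print(T95[4])
```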

5. Then the observed mean together with its confidence interval can be written as

- [[Math:c|\mu \pm \sigma_{\mu} \cdot t_{{n-1}, 95}]]

where t is the Student coefficient for n measurements at the 95 % confidence level. The number of degrees of freedom (the first parameter of t) is one less than the sample size.
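Putting steps 1–5 together, a minimal Python sketch (the five sample values are hypothetical; t = 2.776 is the table value for df = 4):

```python
import math

# End-to-end sketch of steps 1-5: mean, sample standard deviation,
# standard deviation of the mean, and the 95 % half-width.
def confidence_interval(xs, t):
    n = len(xs)
    mu = sum(xs) / n                                             # step 1
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / (n - 1))  # step 2
    sigma_mu = sigma / math.sqrt(n)                              # step 3
    return mu, t * sigma_mu                                      # steps 4-5

mu, half = confidence_interval([4.9, 5.1, 5.0, 5.2, 4.8], t=2.776)
print(f"{mu:.3f} +/- {half:.3f}")
```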

The sample standard deviation can also be computed using the formula

[[Math:c|\sigma = \sqrt{\frac{n \sum_{k=1}^{n} x_k^2 - \left( \sum_{k=1}^{n} x_k \right)^2}{n \left( n-1 \right)}}]]

This formula has been recommended for performance. It also allows computing a *partial* confidence interval, over all values entered so far, without large recalculations when more data arrive (the sum and the sum of squares are easy to update). This is unimportant for small samples but may matter when many thousands of values are collected by some automatic device. Mathematically it is equivalent to the formula above, although in floating-point arithmetic it can lose precision through cancellation when the variance is small compared with the mean.
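The running-sums variant can be sketched as a small accumulator class (the names are illustrative); numerically sensitive applications often prefer Welford's online algorithm instead, which avoids the cancellation problem:

```python
import math

# Sketch of the running-sums formula: keep n, the sum of x, and the sum of
# x squared, and recompute sigma on demand without revisiting old data.
class RunningStd:
    def __init__(self):
        self.n = 0
        self.sum_x = 0.0
        self.sum_x2 = 0.0

    def add(self, x):
        self.n += 1
        self.sum_x += x
        self.sum_x2 += x * x

    def std(self):
        n = self.n
        return math.sqrt((n * self.sum_x2 - self.sum_x ** 2) / (n * (n - 1)))

acc = RunningStd()
for x in [2, 4, 4, 4, 5, 5, 7, 9]:
    acc.add(x)
print(acc.std())
```

Because only two running totals are stored, each new value is absorbed in constant time, and the standard deviation is available at any moment.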