Confidence Interval

A confidence interval is a range of values that is likely to contain the true effect of a change in an A/B test, giving you a measure of uncertainty around your result.

More formally, it is a statistical range that estimates where the true value of a metric (like conversion rate lift) is likely to fall. In A/B testing, it tells you not just whether a variant won, but by how much, and how certain you can be about that estimate.

For example, if your test shows a conversion rate lift of 12% with a 95% confidence interval of 5%–19%, the data are consistent with a true improvement anywhere between 5% and 19%. (Strictly speaking, 95% confidence means that if you repeated the experiment many times, about 95% of the intervals computed this way would contain the true lift.)
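As an illustration, here is a minimal sketch of how such an interval can be computed for the absolute difference in conversion rates, using the standard Wald (normal-approximation) formula. The function name and traffic numbers are hypothetical, and real experimentation platforms may use different methods (e.g. relative lift or Bayesian intervals):

```python
from math import sqrt

def diff_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """95% Wald confidence interval for the absolute difference in
    conversion rates (variant B minus control A). z=1.96 is the
    critical value for 95% confidence."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    diff = p_b - p_a
    # Standard error of the difference between two proportions
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff - z * se, diff + z * se

# Hypothetical data: 500/10,000 control conversions vs 600/10,000 variant
lo, hi = diff_ci(500, 10_000, 600, 10_000)
```

With this made-up data the interval works out to roughly +0.4 to +1.6 percentage points, i.e. entirely above zero.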

Why Confidence Intervals Matter in CRO

A single point estimate ("variant B improved conversions by 12%") can be misleading. The confidence interval adds critical context:

  • Wide interval (e.g., -2% to +26%) — The result is uncertain. You might see a big win, break even, or even lose. You probably need more data.
  • Narrow interval (e.g., +9% to +15%) — The result is precise. You can confidently ship the variant.

How to Read a Confidence Interval

  • If the entire interval is above zero, the variant is likely a winner.
  • If the interval spans zero (e.g., -3% to +8%), the result is inconclusive.
  • If the entire interval is below zero, the variant is likely hurting performance.
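The three rules above amount to a simple sign check on the interval's endpoints. A minimal sketch (the function name is hypothetical):

```python
def read_interval(lo, hi):
    """Classify a lift confidence interval using the three rules above.
    lo and hi are the interval bounds for the lift (e.g. -0.03 and 0.08)."""
    if lo > 0:
        return "likely winner"   # entire interval above zero
    if hi < 0:
        return "likely hurting"  # entire interval below zero
    return "inconclusive"        # interval spans zero

print(read_interval(0.05, 0.19))   # → likely winner
print(read_interval(-0.03, 0.08))  # → inconclusive
```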

Confidence Interval vs. P-Value

Both measure uncertainty, but differently:

  • P-value answers: "Is there a real difference?" (yes/no).
  • Confidence interval answers: "How big is the difference, and how sure are we?" (a range).

Most experimentation platforms report both. The confidence interval is generally more useful for making business decisions because it tells you the magnitude of the expected impact, not just whether it exists.
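A short sketch can make the contrast concrete: from the same hypothetical test data, a two-proportion z-test yields the yes/no p-value while a Wald interval yields the magnitude range. The function and numbers are illustrative, not any particular platform's method:

```python
from math import erf, sqrt

def p_value_and_ci(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Return (p_value, confidence_interval) for the difference in
    conversion rates between control A and variant B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    diff = p_b - p_a

    # p-value: two-sided two-proportion z-test with pooled variance
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se_pool = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = diff / se_pool
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * P(Z > |z|)

    # Confidence interval: Wald interval with unpooled standard error
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return p_value, (diff - z_crit * se, diff + z_crit * se)

# Same hypothetical traffic as before: the p-value says a real difference
# likely exists (p < 0.05); the interval says how big it likely is.
p, (lo, hi) = p_value_and_ci(500, 10_000, 600, 10_000)
```

Notice that both numbers come from the same underlying standard-error arithmetic, which is why a 95% interval that excludes zero generally corresponds to p < 0.05.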