A/B testing (also called split testing) is a controlled experiment in which traffic is split between two versions of a page or element — a control (A) and a variant (B) — to determine which performs better on a target metric such as click-through rate, sign-ups, or purchases.
How A/B Testing Works
- Identify a target — Choose a page and a metric to improve (e.g., landing page sign-up rate)
- Form a hypothesis — "Changing the CTA from 'Learn More' to 'Start Free Trial' will increase clicks"
- Create the variant — Build the B version with your proposed change
- Split traffic — Route visitors randomly to either A or B
- Wait for significance — Collect enough data to reach statistical significance
- Declare a winner — The version with the higher conversion rate wins
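The traffic-splitting and tracking steps above can be sketched as a minimal in-memory experiment tracker. This is an illustrative sketch, not the API of any particular testing tool; the class and method names (`ABTest`, `assign`, `record_conversion`) are invented for the example:

```python
import random

class ABTest:
    """Toy two-variant experiment: random assignment plus conversion counts."""

    def __init__(self, variants=("A", "B"), seed=None):
        self.rng = random.Random(seed)
        self.visitors = {v: 0 for v in variants}
        self.conversions = {v: 0 for v in variants}

    def assign(self):
        # Route each visitor to a variant with equal probability.
        variant = self.rng.choice(list(self.visitors))
        self.visitors[variant] += 1
        return variant

    def record_conversion(self, variant):
        # Call when a visitor completes the target action (click, sign-up, purchase).
        self.conversions[variant] += 1

    def conversion_rate(self, variant):
        n = self.visitors[variant]
        return self.conversions[variant] / n if n else 0.0
```

In a real deployment, assignment would also be made sticky (the same visitor always sees the same variant), typically by hashing a user ID rather than drawing a fresh random number per request.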
Statistical Significance in A/B Tests
The result of an A/B test is only meaningful if it reaches statistical significance — typically a 95% confidence level. In practice, this means that if your change actually had no effect, a difference as large as the one you observed would arise from random chance no more than 5% of the time.
Running a test for too short a period, or on too small a sample, leads to false positives — a major source of wasted effort in CRO programs.
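A common way to check significance for conversion rates is a two-proportion z-test. The sketch below uses only the Python standard library; the 95% confidence threshold corresponds to requiring a p-value below 0.05:

```python
from math import sqrt, erfc

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test.

    conv_a / conv_b: conversion counts; n_a / n_b: visitor counts.
    Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal tail: 2 * (1 - Phi(|z|)).
    p_value = erfc(abs(z) / sqrt(2))
    return z, p_value

z, p = z_test(conv_a=200, n_a=4000, conv_b=250, n_b=4000)
significant = p < 0.05  # only declare a winner when this is True
```

The visitor counts here are made-up illustration numbers. The same calculation also shows why small samples produce false positives: with few visitors, the standard error `se` is large, so even sizable observed differences fail to clear the threshold reliably.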
Limitations of Traditional A/B Testing
Standard A/B testing has a notable drawback: you can only test two variants at a time. If you want to test five different headlines, you need five sequential tests — each taking weeks to reach significance.
This is why many modern growth teams are adopting multivariate bandit testing, which can evaluate multiple variants simultaneously and automatically allocate more traffic to better-performing variants during the test itself.
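The adaptive allocation idea can be illustrated with a simple epsilon-greedy bandit, one of the most basic bandit strategies (production systems often use more sophisticated methods such as Thompson sampling). The conversion rates below are hypothetical and, in a real test, unknown to the algorithm:

```python
import random

def epsilon_greedy_bandit(true_rates, steps=10000, epsilon=0.1, seed=42):
    """Allocate traffic across variants, favoring the best observed performer.

    With probability epsilon, explore a random variant; otherwise exploit
    the variant with the highest observed conversion rate so far.
    Returns the number of visitors sent to each variant.
    """
    rng = random.Random(seed)
    n = len(true_rates)
    pulls = [0] * n  # visitors routed to each variant
    wins = [0] * n   # conversions observed per variant
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore
        else:
            # Exploit: pick the best observed rate (0.0 for untried variants).
            arm = max(range(n),
                      key=lambda i: wins[i] / pulls[i] if pulls[i] else 0.0)
        pulls[arm] += 1
        if rng.random() < true_rates[arm]:  # simulate the visitor converting
            wins[arm] += 1
    return pulls

# Three hypothetical variants converting at 2%, 5%, and 10%.
traffic = epsilon_greedy_bandit([0.02, 0.05, 0.10])
```

Unlike a fixed 50/50 split, the bandit shifts most traffic toward the strongest variant while the test is still running, which reduces the conversions lost to underperforming variants.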
A/B Testing vs. Multivariate Testing
| | A/B Testing | Multivariate Testing |
|---|---|---|
| Variants | 2 | 3+ |
| Elements tested | 1 at a time | Multiple simultaneously |
| Traffic needed | Moderate | Higher |
| Speed | Slower (sequential) | Faster (parallel) |
| Best for | Simple, focused changes | Complex page optimization |