Position bias is the tendency for users to click, engage with, or convert on elements based on where they appear on a page — not because of the element's content or quality, but simply because of its location. The first item in a list gets more clicks than the second. The left pricing plan gets more attention than the right. The CTA above the fold outperforms the identical CTA below it.
Position drives behavior independent of relevance or value.
Where It Shows Up
Position bias appears across almost every element type and context:
- Search results and product listings: Users disproportionately click the first result even when lower-ranked items are more relevant
- Navigation menus: Items at the beginning (and sometimes the end) of a nav list receive more clicks than items in the middle
- Pricing plan order: The plan displayed first or in the center often gets higher consideration, regardless of which plan is objectively best suited to the user
- CTA placement: A button above the fold typically outperforms an identical button below it, even if the user hasn't yet read enough to make an informed decision
- Feature lists and testimonials: The first item in any list receives more attention than items further down
Why It Matters for CRO
Position bias is a testing confound. If you run an A/B test where variant B moves a CTA from below the fold to above the fold, and variant B converts better, you've learned that position matters — not that the CTA's copy or design is better.
The same logic applies in reverse. If you're testing two different CTAs but their position varies between variants (intentionally or accidentally), you can't cleanly attribute the result to the copy change. Position is confounding your test.
This confound is more common than it might seem. Layout changes often shift element positions as a side effect, and teams don't always account for that shift when interpreting results.
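To make the confound concrete, here is a minimal simulation with made-up conversion rates and effect sizes: variant B changes the CTA copy and moves it above the fold, the copy itself does nothing, and position alone produces the "win":

```python
import random

random.seed(42)

# Illustrative numbers, not real data: position alone lifts conversion,
# while the new copy has zero true effect.
BASE_RATE = 0.040        # conversion rate with the CTA below the fold
POSITION_LIFT = 0.015    # extra conversion from being above the fold
COPY_LIFT = 0.0          # the copy change does nothing

def simulate(rate, n=20_000):
    """Simulate n visitors converting independently at the given rate."""
    return sum(random.random() < rate for _ in range(n)) / n

rate_a = simulate(BASE_RATE)                              # old copy, below fold
rate_b = simulate(BASE_RATE + POSITION_LIFT + COPY_LIFT)  # new copy, above fold

print(f"A: {rate_a:.3%}  B: {rate_b:.3%}")
```

Variant B comes out ahead even though its copy changed nothing; an analyst who attributes the lift to the copy is reading the position effect as a content effect.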
How to Control for It
In A/B tests: Keep all element positions identical between variants when testing content changes. If you want to test both copy and position, use a factorial design with separate variants for each.
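A factorial assignment can be sketched as follows; the factor names, variant labels, and hash-based bucketing are illustrative, not any particular tool's API:

```python
import hashlib
from itertools import product

# Hypothetical 2x2 factorial design: copy and placement vary independently,
# so their main effects and interaction can each be estimated.
COPY = ["control_copy", "new_copy"]
PLACEMENT = ["below_fold", "above_fold"]
VARIANTS = list(product(COPY, PLACEMENT))  # 4 cells

def assign(user_id: str) -> tuple:
    """Deterministically bucket a user into one of the four cells."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % len(VARIANTS)
    return VARIANTS[bucket]

print(assign("user-123"))
```

Hashing the user ID keeps assignment sticky across sessions, and comparing the two above-fold cells against the two below-fold cells isolates the placement effect from the copy effect.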
In ranked list tests: Use interleaving — a technique where results from two ranking algorithms are merged and the system infers which ranking users prefer from their click behavior. This gives a position-balanced comparison.
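One common variant is team-draft interleaving, sketched roughly below with made-up document IDs. The two rankers take turns drafting their top remaining result into the merged list, each slot records which ranker supplied it, and clicks are credited to the supplying ranker:

```python
import random

def _next_unseen(ranking, idx, seen):
    """Advance idx past documents already placed in the merged list."""
    while idx < len(ranking) and ranking[idx] in seen:
        idx += 1
    return idx

def team_draft_interleave(ranking_a, ranking_b, seed=None):
    """Merge two rankings team-draft style; teams[i] records which
    ranker ('A' or 'B') supplied slot i of the merged list."""
    rng = random.Random(seed)
    merged, teams, seen = [], [], set()
    ia = _next_unseen(ranking_a, 0, seen)
    ib = _next_unseen(ranking_b, 0, seen)
    while ia < len(ranking_a) or ib < len(ranking_b):
        count_a, count_b = teams.count("A"), teams.count("B")
        # The ranker holding fewer slots drafts next; a coin flip breaks ties.
        prefer_a = count_a < count_b or (count_a == count_b and rng.random() < 0.5)
        if ia < len(ranking_a) and (prefer_a or ib >= len(ranking_b)):
            doc, team, ia = ranking_a[ia], "A", ia + 1
        else:
            doc, team, ib = ranking_b[ib], "B", ib + 1
        merged.append(doc)
        teams.append(team)
        seen.add(doc)
        ia = _next_unseen(ranking_a, ia, seen)
        ib = _next_unseen(ranking_b, ib, seen)
    return merged, teams

def credit_clicks(teams, clicked_slots):
    """Credit each clicked slot to the ranker that supplied it."""
    wins = {"A": 0, "B": 0}
    for slot in clicked_slots:
        wins[teams[slot]] += 1
    return wins

ranking_a = ["d1", "d2", "d3", "d4"]  # hypothetical output of ranker A
ranking_b = ["d3", "d1", "d5", "d2"]  # hypothetical output of ranker B
merged, teams = team_draft_interleave(ranking_a, ranking_b, seed=0)
print(merged, teams)
```

Because both rankers contribute to every region of the merged list, a click at any position is informative about ranker preference rather than about the position itself.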
In CTA placement tests: Isolate placement as the explicit variable. Test 'CTA above fold' vs. 'CTA below fold' with identical copy, rather than bundling placement with a copy change.
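Once a placement-only test has run, a standard two-proportion z-test can check whether the difference exceeds sampling noise. The traffic and conversion numbers below are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Made-up results: below-fold CTA converts 4.0%, above-fold 5.2%,
# with identical copy in both variants.
z, p = two_proportion_z(400, 10_000, 520, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With placement as the only difference between variants, a significant result here can be attributed to position alone.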
Position Bias and Heatmap Analysis
Heatmaps are one of the clearest ways to observe position bias directly. Two patterns appear consistently:
- F-pattern: Users scan horizontally across the top of a page, then down the left side, and occasionally across a second horizontal band. Content in the upper-left receives the most attention; content in the lower-right receives almost none
- Z-pattern: On pages with less text, attention moves across the top, diagonally down to the opposite lower corner, then across the bottom, tracing a Z shape
Both patterns explain why position matters: attention is not distributed evenly, and the distribution is predictable. When a heatmap shows low engagement with a CTA, the cause is often that it sits outside the natural attention path, not that its copy or design is ineffective.
Understanding position bias changes how you prioritize CRO work. Before testing what an element says, confirm that users are actually seeing it.