The Optimism Anchor

When humans work with AI, they must decide when to delegate, that is, when the AI's judgment is likely better than their own. These decisions depend on efficacy beliefs: confidence in one's own abilities and perceptions of AI competence. A controlled experiment with 240 participants making repeated delegation decisions found that efficacy beliefs act as persistent cognitive anchors, producing a systematic "AI optimism": a baseline tendency to overestimate AI competence.

The anchoring is asymmetric. Providing information about AI performance selectively eliminates the optimism bias — seeing the AI fail recalibrates expectations. But providing information about the data or the AI’s reasoning amplifies how existing efficacy discrepancies drive delegation. Context that should help calibration instead reinforces prior beliefs.

The structural finding: efficacy discrepancies significantly influence delegation behavior but show weaker effects on actual human-AI team performance. People are making different choices based on their beliefs, but those different choices don’t reliably produce different outcomes. The beliefs change the delegation pattern without proportionally changing the result.
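This decoupling can be made concrete with a toy simulation. Every parameter below is illustrative, not from the study: assume the human and the AI are equally accurate, and that delegation follows a simple rule of choosing whichever side the person believes is stronger. Beliefs then fully determine the delegation pattern while leaving team accuracy unchanged.

```python
import random

# Toy model (assumed parameters, not from the study): human and AI are
# equally accurate, but two people hold opposite beliefs about the AI.
HUMAN_ACC = 0.70    # assumed true human accuracy
AI_ACC = 0.70       # assumed true AI accuracy (deliberately equal)
SELF_BELIEF = 0.70  # both participants judge their own skill correctly

def run_trials(believed_ai_acc, n=2000, seed=0):
    """Delegate whenever the AI is *believed* to be better than oneself.
    Returns (delegation_rate, team_accuracy)."""
    rng = random.Random(seed)
    delegations = correct = 0
    for _ in range(n):
        delegate = believed_ai_acc > SELF_BELIEF   # belief drives the choice
        delegations += delegate
        acc = AI_ACC if delegate else HUMAN_ACC    # outcome uses true accuracy
        correct += rng.random() < acc
    return delegations / n, correct / n

optimist = run_trials(believed_ai_acc=0.85)   # anchored AI optimism
pessimist = run_trials(believed_ai_acc=0.55)  # anchored AI pessimism

print(f"optimist:  delegates {optimist[0]:.0%}, team accuracy {optimist[1]:.2f}")
print(f"pessimist: delegates {pessimist[0]:.0%}, team accuracy {pessimist[1]:.2f}")
```

The optimist delegates on every trial and the pessimist never does, yet because the true accuracies are equal, team performance is identical: the beliefs rewire the delegation pattern without moving the outcome.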

The through-claim: transparency about AI capabilities — the standard prescription for responsible AI deployment — can backfire by amplifying rather than correcting cognitive anchors. Showing people how the AI works doesn’t update beliefs from a neutral baseline; it filters through existing optimism or pessimism. The same information that eliminates bias in one form (showing AI failures corrects optimism) amplifies it in another (showing AI reasoning reinforces whatever discrepancy the person brought to the interaction). Calibration requires not just more information but information that disrupts the anchoring mechanism itself.

