
User Research Is Not User Behavior
- Oby Anagwu
- Oct 1
- 5 min read
What people say they want and what they actually do are often completely different things.
This gap matters especially in developing economies, where design decisions can have outsized impact on adoption, retention, and business viability. A feature that tests well in research but fails in production wastes limited resources and delays market entry in competitive environments.
The Social Desirability Problem
When someone asks you questions, you respond based on how you want to be perceived. This is not conscious deception. It is a fundamental aspect of human psychology.
Ask users if they read terms and conditions before accepting them. Most will say yes. Track actual behavior and you will find almost no one does. Ask if they would pay for a premium version of an app. Many will express willingness. Launch the paid tier and conversion rates tell a different story.
The effect intensifies in face-to-face research settings common in developing markets. When a researcher sits across from you, social pressure to give helpful, socially acceptable answers increases. Users want to be good participants. They want to help the team build something successful. So they overstate their likelihood of using features, underreport behaviors they perceive as negative, and express preferences that align with what they think the researcher wants to hear.
Memory Is Reconstructive
Users cannot accurately recall their own behavior. Memory does not work like a recording device; it reconstructs events based on current beliefs, emotions and context. Ask someone how many times they opened an app last week and they will give you a number. Compare that number to actual usage logs and you will often find significant discrepancies. Heavy users underestimate their usage; light users overestimate it. Everyone fills in gaps with what feels plausible rather than what actually happened.
This creates problems when design decisions rest on self-reported frequency, duration, or patterns of use. A feature that users claim to need daily might actually serve a weekly use case. A pain point described as constant might occur only under specific conditions that users have forgotten or misremembered.
The Intention-Action Gap
Behavioral economics has documented extensively that intentions do not predict actions. People genuinely believe they will exercise tomorrow, save money next month, or finally organize their files this weekend. Then tomorrow arrives and other priorities take over.
In product design, this manifests as the feature request problem. Users request features they sincerely believe they would use. They can envision themselves using these features. The vision feels real and compelling. But when the feature ships, their actual workflow, habits and priorities mean they never integrate it into their daily routine.
Mobile money services in East Africa have learned this lesson repeatedly. Early user research suggested people wanted detailed transaction histories, budgeting tools and financial planning features. Usage data revealed that most users only wanted to send money and check their balance. The envisioned use case and the actual use case diverged substantially.
Context Collapse
Research happens in artificial contexts: an interview room, a survey on a phone, a usability test in an office. Real usage happens while commuting on unreliable transport, during work breaks, in homes with poor connectivity, or while managing multiple tasks simultaneously. The context of research cannot replicate the context of use. Users in interviews have time to think carefully about questions; users in the real world make split-second decisions under cognitive load. The friction that seems minor in a calm research session becomes insurmountable in actual conditions.
This affects design decisions around onboarding flows, feature complexity and information architecture. A process that test users navigate successfully in a quiet room might completely fail when those same users encounter it on a crowded bus with intermittent internet connectivity.
Hypothetical Bias
Research often asks hypothetical questions. Would you use this feature? Would you pay this price? How often would you do this action? Users answer these hypothetical questions, but hypothetical responses systematically overstate actual behavior.
The bias increases when the hypothetical involves future actions or costs that feel abstract. Saying you would pay five dollars per month for a service costs nothing in a survey. Actually paying five dollars per month, pulling out a payment method, entering details, watching money leave your account, is a different matter. The psychological friction of the real transaction does not exist in the hypothetical version.
Willingness-to-pay studies consistently overestimate what users will actually pay, and feature prioritization based on stated importance regularly ranks features higher than their actual usage warrants. The hypothetical version of user behavior is cleaner, more rational and more intentional than the messy reality of actual usage.
What to Do Instead
The solution is not to abandon user research. The solution is to treat stated preferences as one input among many, not as ground truth.
Behavioral data tells you what users actually do. Analytics, usage logs and observed behavior reveal patterns that users themselves might not recognize or accurately report. A feature that seems unimportant in interviews but shows high engagement in data deserves attention. A pain point that users emphasize repeatedly but never actually encounter in usage logs might not warrant priority.
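One way to make this concrete is to put stated importance next to observed reach from your own usage logs. Here is a minimal sketch in Python, assuming events can be exported as (user, feature, timestamp) records; the feature names and survey scores are purely illustrative:

```python
from datetime import datetime

# Hypothetical export of usage events: (user_id, feature, timestamp).
# Feature names and stated-importance scores are illustrative only.
events = [
    ("u1", "send_money", datetime(2024, 5, 1)),
    ("u1", "budgeting", datetime(2024, 5, 1)),
    ("u2", "send_money", datetime(2024, 5, 2)),
    ("u3", "send_money", datetime(2024, 5, 2)),
    ("u3", "check_balance", datetime(2024, 5, 3)),
]

# Average "how important is this feature?" scores from interviews (1-5).
stated_importance = {"budgeting": 4.6, "send_money": 4.1, "check_balance": 3.8}

# How many distinct users actually touched each feature.
users_per_feature = {}
for user_id, feature, _ in events:
    users_per_feature.setdefault(feature, set()).add(user_id)

total_users = len({user_id for user_id, _, _ in events})

print(f"{'feature':<15}{'stated':>8}{'observed reach':>16}")
for feature, score in sorted(stated_importance.items(), key=lambda kv: -kv[1]):
    reach = len(users_per_feature.get(feature, set())) / total_users
    print(f"{feature:<15}{score:>8.1f}{reach:>16.0%}")
```

A feature that sits at the top of the stated column but near the bottom of the observed column is exactly the kind of gap this article is describing.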
Revealed preferences through real choices matter more than stated preferences through survey responses. Watch what users do when they face actual tradeoffs. Do they pay for the premium tier or stick with free? Do they complete the longer signup flow or abandon it? Do they return daily or drift away after a week?
Small scale launches and iterative releases let you test actual behavior before committing resources to full builds. A simple prototype that users can actually try produces better signal than elaborate research about a hypothetical product. The commitment required to use even a basic working version filters out the gap between intention and action.
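A small-scale launch does not need elaborate infrastructure. A deterministic bucket assignment is often enough to expose a new flow to a few percent of users and watch what they actually do. A minimal sketch, where the function and feature names are assumptions rather than anything from a specific product:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically place a user in the first `percent` of 100 buckets.

    Hashing the user id together with the feature name keeps assignment
    stable across sessions, so a user always sees the same variant.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Expose the new flow to 5% of users, then compare completion, retention
# and conversion against everyone else before building further.
if in_rollout("user-8841", "new_onboarding", percent=5):
    pass  # render the new flow
else:
    pass  # render the existing flow
```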
Contextual inquiry and field observation reduce some of the artificial constraints of traditional research. Watching someone use a product in their actual environment, with their real constraints and distractions, reveals friction that never surfaces in lab settings.
The Economics of the Gap
The stated preference problem has economic consequences. Building features that users request but do not use wastes development resources. Those resources matter more in capital-constrained environments where every sprint carries opportunity cost.
The gap also affects pricing strategies. Users who say they will pay rarely do so at the rates they state. Revenue projections based on willingness-to-pay surveys tend toward overoptimism. Actual conversion requires testing real prices with real users making real decisions about their money.
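The size of that overoptimism is easy to see with back-of-the-envelope arithmetic. A sketch with made-up numbers, chosen only to illustrate the shape of the gap:

```python
# Hypothetical figures for illustration; none of these come from real data.
audience = 10_000           # reachable users
stated_share = 0.40         # survey respondents who said they would pay
observed_conversion = 0.03  # users who actually paid after launch
price = 5.00                # monthly price in dollars

projected = audience * stated_share * price
actual = audience * observed_conversion * price

print(f"Survey-based projection: ${projected:,.0f}/month")
print(f"Observed revenue:        ${actual:,.0f}/month")
print(f"Overestimate factor:     {projected / actual:.1f}x")
```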
For products serving developing markets, where users have tighter budgets and less margin for error, understanding revealed preferences becomes even more critical. A feature that tests well but adds complexity can tank adoption rates. A price point that surveys suggest is acceptable can prove to be a complete barrier in practice.
The gap between what users say and what users do is not a flaw in your research methodology. It is a fundamental aspect of human behavior that no amount of better interview questions or survey design can eliminate.
Building products for developing markets means acknowledging this gap and putting processes in place that account for it. Treat user research as hypothesis generation rather than validation. Use behavioral data to test those hypotheses. Launch small and iterate based on actual usage rather than predicted usage.
The users are not lying to you. They are telling you what they believe about themselves. Your job is to design for what they actually do.