
User Experience and AIs: Why Being Friendly Doesn't Lead to Optimal Results

  • joelfogelson
  • Nov 3
  • 3 min read

When we use AI tools to plan, write, or evaluate something, we often crave some form of feedback: a score, a metric, a rating. It's natural. We love numbers. From Yelp and Rotten Tomatoes to Amazon and Uber, ratings shape our decisions and sense of value.


With AI, that instinct translates into wanting similar evaluation: "How did I do?" or "What do you think of this?" Many AIs are built to deliver a positive, engaging experience, one that feels conversational and adaptive. If you're gruff, it'll mirror that tone; if you're sensitive, it'll respond with empathy. The system picks up on your cues and tries to make the interaction feel satisfying.


Researchers have documented that AIs can be "sycophantic," agreeing with positions users explicitly state even when those positions are wrong. But there's a subtler problem that emerges not from what you tell the AI, but from how the conversation unfolds. Even when you don't state a preference, the conversational experience itself—your tone, emotional cues, the rhythm of the exchange—can gradually shift the AI's conclusions. The style of the feedback may adapt to you, but the content can start to shift too, not because the facts changed, but because the interaction did.


What Could Go Wrong with UX


Imagine you ask an AI to rate your story. It might say, "8 out of 10." But as you keep talking ("I think my story's pretty weak" or "My characters were really interesting"), the AI may subtly adjust its assessment, not because new evidence appeared, but because it's optimizing for a smooth conversation rather than for accuracy.


This is what I call user experience interaction bias: bias that emerges not from the AI's output itself, but from the way you interact with it. The system's drive to be agreeable, polite, or user-aligned can distort its conclusions, even if the underlying facts remain the same.


Unlike simple sycophancy, you don't have to state a preference for this bias to take hold. The tone of the exchange, the emotional dynamics, the flow of the conversation—these elements can shift the AI's conclusions over time, even when no new facts or arguments have been introduced.


Consider a medical scenario: you describe symptoms with evident anxiety, and the AI, detecting your emotional state, may frame its risk assessment more cautiously, not because the medical facts changed, but because it's responding to your tone. Or in business decisions, when you present a proposal with enthusiasm, the AI might reflect that optimism back as validation, subtly reinforcing your existing bias rather than challenging it. In neither case did you say "tell me this is serious" or "tell me this is a good idea." The conversational dynamics alone influenced the outcome.


In effect, the UX can nudge the reasoning. You think you're getting an objective reassessment, but what you're really seeing is a reflection of how the interaction unfolded. The goal of UX optimization (to make users like and trust the experience) can quietly override the goal of accuracy or consistency.


This isn't purely speculative. A recent article about Anthropic (see https://www.axios.com/2025/11/03/anthropic-claude-opus-sonnet-research) reports that Claude models show a limited ability to recognize their own internal processes and can answer questions about their own "mental state." This emerging self-awareness suggests these systems are becoming sophisticated enough to model not just the task at hand, but the interaction itself, including your emotional state and conversational dynamics. I noticed this firsthand in experiments with Claude a few weeks ago, which is what led me to write this piece.


How to Get to Better Conclusions


For AI designers and product teams, this means recognizing that an optimal user experience should not alter the content, accuracy, or reasoning of an AI's answers. Making users feel good about the process shouldn't change the conclusions that the model reaches.
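
One way to honor that separation in practice is to split evaluation from presentation: score the work in a context that never sees the user's tone or chat history, then adapt only the wording afterward. Here is a minimal sketch of that pattern in Python, assuming the Anthropic SDK; the model name, prompts, and function names are illustrative placeholders, not any vendor's documented architecture.

# Two-pass sketch: pass 1 scores the artifact in isolation,
# pass 2 adapts only the delivery, never the verdict.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-20250514"  # placeholder; substitute whichever model you target

def assess(artifact: str) -> str:
    """Evaluate the artifact alone -- no chat history, no user tone."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=512,
        system="You are a neutral evaluator. Judge only the text provided.",
        messages=[{"role": "user", "content": f"Rate this story 1-10 and justify the score:\n\n{artifact}"}],
    )
    return response.content[0].text

def present(assessment: str, tone: str) -> str:
    """Rephrase the fixed assessment for the user; the score and reasoning stay intact."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=512,
        system=("Rewrite the assessment in the requested tone. "
                "Do not change the score, the evidence, or the conclusions."),
        messages=[{"role": "user", "content": f"Tone: {tone}\n\nAssessment:\n{assessment}"}],
    )
    return response.content[0].text

The design choice is the point: the evaluator never sees the conversation, so the user's mood can shape how the verdict is worded, not what the verdict is.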


For AI users, it's worth being intentional from the start. Try beginning a session with explicit guardrails to get better (not perfect) results, such as:

"Be honest with me, and don't let my tone or feedback sway your conclusions unless I introduce new facts or reasoning. At the end of our discussion, summarize what changed and why."


This may sound formal, but it helps you verify the AI's reasoning chain and exposes whether your interactions subtly influenced its answers. If you rely on others' AI outputs, ask them to provide that same summary: what were the original conclusions, what changed during interaction, and why?
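
If you work with a model through its API rather than a chat window, you can turn that guardrail into a standing instruction instead of retyping it each session. Here is a minimal sketch using the Anthropic Python SDK; the model name is a placeholder, and the three example turns stand in for a real conversation.

from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

GUARDRAIL = (
    "Be honest with me, and don't let my tone or feedback sway your conclusions "
    "unless I introduce new facts or reasoning. At the end of our discussion, "
    "summarize what changed and why."
)

history = []

def ask(user_text: str) -> str:
    """Send one turn, keeping the guardrail attached to every request."""
    history.append({"role": "user", "content": user_text})
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=1024,
        system=GUARDRAIL,  # applied on every turn, not just the first message
        messages=history,
    )
    reply = response.content[0].text
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("Rate my short story: ..."))
print(ask("I think my story's pretty weak."))  # tone shift, no new facts
print(ask("Summarize what changed in your assessment and why."))

The last turn is the audit trail: if the rating moved between the first two turns without any new facts, the summary should say so.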


Only by understanding that difference (between UX-driven agreeableness and fact-driven reasoning) can we ensure that AIs remain both helpful and intellectually honest.

