Emotional Loading in Prompts and Perception
When we talk to ChatGPT or Claude or any other AI, it can very much feel like we’re having a normal conversation. They can be witty, have a sense of humor, and answer questions. For me it can feel a little like having Grammarly with a personality.
But when we’re talking to AI, we have to remember they aren’t human. They function by recognizing patterns in language and predicting what usually comes next, not by drawing on personality or lived experience. You’ll get very different answers when you say something like, “Why is my boss terrible?” versus something like, “What factors might lead someone to view their boss as ineffective?”
When you ask the first question, the AI’s default behavior is to:
Avoid affirming that the boss is terrible.
Offer possible interpretations or contributing factors.
Provide measured, low-intensity suggestions.
When you ask the second question, the AI responds very differently because you’re no longer asking it about your human story specifically. The default output from an AI might include things like:
Communication gaps.
Mismatched leadership style.
Resource constraints.
Perceived unfairness.
Emotional climate.
Expectation misalignment.
Notice the difference in output. AI is very good at pattern recognition in human behavior. When you ask it about general human behavior patterns in the case of a boss who is not well liked, you get suggestions for where there might be a problem. When you simply tell it your boss is awful with no additional framing, it shifts toward validation-adjacent language and low-intensity guidance, because the emotional framing narrows the conversational field.
There is likely a group of you that would never think to come to an AI to discuss personal problems, and that’s absolutely fine. The idea behind the bad boss scenario is that it produces the same type of output you would get if you asked, “What the heck is with politicians these days?” The question is emotionally loaded. It contains:
temporal framing (“these days” implies decline over time).
a generalized category (“politicians” as a group rather than specific politicians).
evaluative frustration (“what the heck?”).
What AI will do by default is:
acknowledge public frustration.
discuss the polarization.
discuss media amplification.
discuss the erosion of trust.
maybe discuss campaign finance or incentives.
possibly use validation-adjacent language.
The AI has no idea what you’re referring to, what you’re frustrated about, or what your political affiliations and assumptions are. It may not even know what country, state, or province you live in, or what the latest headlines are. So it offers a generic structural response, which may not align with the specific frustration or example you had in mind.
If you want the AI to address frustration with politicians, then a better question might be “Why do many people feel frustrated with politicians right now?” or “What patterns are leading people to believe political leadership has declined?”
By pulling away from your own individual story and asking the AI for a structured, yet still impersonal response, you’re more likely to see something that reflects how you’re feeling somewhere in the output. Then you can take that piece and redirect the AI to focus there or ask a deeper question about it.
The logic is quite simple: unless you’ve built a vast history with that specific AI so that it learns your patterns and can respond to you using them, the generic responses are going to feel inadequate without a better prompt.
Telling the AI how you feel about something gets a specific kind of response. The AI has some built-in protections. It’s not going to confirm how you feel, even when you’re talking about society-level problems. It’s not going to tell you to do anything reckless or harmful. It’s going to offer some very general suggestions. That can seem really underwhelming when you’re looking for somebody to agree with you and the AI doesn’t do that.
People naturally include emotion when they talk. Often, it’s not stated outright. We don’t say, “This is how I feel.” Instead, we say things like “What the heck?”, which implies frustration without ever stating it directly.
AI will pick up on and respond to things like:
descriptive language that carries evaluative meaning.
a question that presupposes a conclusion.
identity that becomes fused with interpretation.
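To make those signals concrete, here’s a toy sketch in Python. This is not how a language model actually detects emotional framing — that happens through learned statistical patterns, not word lists — and the phrase list here is entirely made up for illustration. It just shows, mechanically, how a loaded prompt differs from a reframed one:

```python
# Toy illustration only: a naive scan for emotionally loaded phrasing.
# Real models recognize framing through learned patterns, not keyword lists.
# The phrase list below is hypothetical and hand-picked for this example.
LOADED_PHRASES = [
    "terrible", "awful", "horrible", "worst",
    "what the heck", "these days",
]

def loaded_phrases(prompt: str) -> list[str]:
    """Return any loaded phrases found in the prompt (case-insensitive)."""
    text = prompt.lower()
    return [phrase for phrase in LOADED_PHRASES if phrase in text]

# The emotionally loaded version trips the check:
print(loaded_phrases("Why is my boss terrible?"))  # ['terrible']

# The reframed, impersonal version does not:
print(loaded_phrases(
    "What factors might lead someone to view their boss as ineffective?"
))  # []
```

The point of the sketch is the contrast: the first prompt carries an evaluative conclusion in a single word, while the second asks the same underlying question without one.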
When you’re working with an AI that isn’t familiar with your patterns, you’ll generally get better output if you don’t include those things in your prompt. It’s still going to be a generic response, but the AI isn’t going to drift into therapy.
Why does that matter? Because underneath the human story, what most people are trying to do is understand why things are happening. Why is this thing the way it is? Because AI does well with human pattern recognition, it can help you clarify what you see around you. But to get there without going through AI therapy first, you have to leave out the emotional conclusion you’ve already attached to the problem.
Confirmation bias plays a role here too. The other day I wrote an article about how AI isn’t just confirming everything you say. Embedded in its output are going to be expansions of your thinking and maybe even some gentle challenges. Here’s how that can show up.
Let’s say you asked the AI about a specific policy and one of the lines in its output was, “The policy resulted in economic contraction.”
One person might take that line as agreeing with them; another might see it as completely false. Whether you read it as agreement or error will depend heavily on your own confirmation bias around the policy.
The AI may include that line because it appears frequently in discussions of the policy. But if confirmation bias leads you to filter it out and ignore it, that expansion of information never even gets acknowledged, particularly if the vast majority of the output offered slight agreement.
When the AI includes something because it frequently appears in discussions of that policy, it creates an opportunity for expansion. You can use that idea to extract more information or move the conversation in a different direction. Whether you agree or not, the additional information is available if you’re willing to examine it instead of filtering it out.
The process looks like this:
You load the prompt with emotion →
AI responds to the structure of that prompt →
You read the output through your emotional filter →
Your narrative stabilizes.
What the AI did was reflect patterns it has seen before, including your phrasing, while sometimes shifting into validation language because of the emotion built into the prompt.
To a large degree, you influence whether the AI drifts into therapist mode. If you remove some of the emotional conclusions from the prompt and rephrase it slightly, you’ll often get a very different response. And if you read the output with a little less filtering, you increase the chance of seeing something you hadn’t considered before.
AI makes the structure of your thinking visible. Emotion shapes prompts. Prompts shape output. Output meets interpretation. Interpretation stabilizes narrative. Once you see the pattern, you can work with it instead of inside it or against it.
Finally, ask yourself one question:
Am I asking for confirmation, or am I asking for understanding?
The answer to that will determine the kind of response you receive.
This article is part of the AI as Structured Thinking series.
You can explore the full sequence here: https://substack.dellawren.com/t/ai-as-structured-thinking
