What Actually Happened? Description, Evaluation, and AI Amplification
“My boss ignored my email.”
“The meeting was a disaster.”
“The teacher was unfair.”
Those are pretty common statements, and there are millions more just like them. But I have a question for you:
Are they describing what actually happened, or are they describing an interpretation of what happened?
Let’s change those statements to reflect what happened.
“My boss didn’t respond to my email.”
“Two people raised their voices in the meeting.”
“The teacher did not allow rewrites.”
Language is naturally interpretive. The minute we apply words to an event, we are interpreting the event. However, the words we choose matter. When we describe not getting a response to an email as being ignored, we shift from describing the event to evaluating it.
We don’t know if we were ignored. “Ignored” is a judgment call. It offers moral framing. It contains assumptions about motive or intent. Those things are not part of the event itself. They are part of the interpretive layer of the event.
“Ignored” implies intention.
“Disaster” implies failure and global judgment.
“Unfair” implies injustice.
None of those are directly observable. They are conclusions about events based on interpretation.
Books would be very boring if we didn’t use words like ignored, disaster, and unfair. But life isn’t a book and it’s not boring. There is plenty going on that will keep us all occupied for a very long time.
We don’t need to add drama to make life interesting. When we layer evaluation onto otherwise ordinary experiences, we escalate them in our own minds. Those words add emotional fuel, and the meaning of the experience quickly grows beyond what actually occurred.
The teacher didn’t allow rewrites, so now in the student’s mind they are going to fail the class or the teacher hates them. But how does the student know that based on one rule the teacher chose to enforce? They don’t.
Why does that matter?
Overwhelm.
We’re all overwhelmed to some degree. There is so much coming at us: the 24-hour news cycle, kids, jobs, phones, text messages, emails, relationships, hobbies, commitments, and so on. It seems almost never-ending.
But overwhelm is not just about the volume of information we’re exposed to. It’s also about amplification. If every neutral event becomes moral, personal, or catastrophic, your mental load multiplies whether you intend it to or not.
One unanswered email becomes a commentary on your value.
One rule becomes a commentary on your future.
One loud meeting becomes a commentary on institutional collapse.
What if we could drop some of that load simply by dialing back how we see, interpret, and explain our experience? Less amplification means less overwhelm.
You are not just escalating the experience internally. You are encoding or hardening that escalation in language.
AI Mirrors Our Amplification of Events
When you take that amplified explanation to the AI of your choice, the AI will respond in kind. It will amplify the same things you amplified.
It does that because it’s a language pattern generator that does not have access to your experience. So when you tell it that you’ve been ignored, it treats your evaluation as the frame and responds within it.
Depending on how much you amplify your experience by loading emotion into your prompts, the AI may drift into therapy-style responses to help you manage your emotions.
Those are default language patterns: they mirror how people respond to each other. When you show up emotionally in a text to a friend, your friend responds to the emotion. AI does the same thing.
AI can be constrained. You can instruct it to ignore emotional framing and respond with structural analysis instead.
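For example, a standing instruction along these lines tends to work (the exact wording here is mine, not a required formula): “When I describe an event using emotional or evaluative language, first restate it in neutral, observable terms. Do not offer emotional support or coping strategies unless I explicitly ask. Then analyze the structure of the situation.” You can give an instruction like that at the start of a conversation, or save it in your tool’s custom instructions if it supports them.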
I don’t want therapy from ChatGPT, so through repeated interaction I’ve trained my instance not to offer it. It has adapted to my framework: when I express emotion, it now defaults to structural analysis without additional prompting and re-interprets my experience minus the amplification.
The value of having a mirror that will, by default, restate my experience in more neutral language is significant. It reduces interpretive drag before I escalate it further.
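In practice, that looks something like this: I might write, “My boss ignored my email and clearly doesn’t respect my time,” and get back a restatement along the lines of, “Your boss has not responded to your email. The reason is not known.” The evaluation is stripped out before it can compound.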
Interpretive drag occurs when we amplify experience through language. Judgment, assumptions, and moral framing extend the life of an event in our minds. They require more cognitive processing. In a world where overwhelm is common, that extra processing becomes drag or friction.
The event is finite. The interpretation of it is not.
If we cannot distinguish what happened from what we decided it meant, we will carry experiences longer than necessary. We will amplify them, rehearse them, and react to them as though the amplification were part of the original event. AI does not create that amplification. It reflects it.
If we want clearer thinking, we have to begin with clearer descriptions.
You can explore all the articles in the series so far here: https://substack.dellawren.com/t/ai-as-structured-thinking.
My framework helps me with relational context in AI. It can help you too.
https://dellawren.com/downloads/using-the-philosophy-of-integration-with-chatgpt/
