Fact, Interpretation, Meaning: Keeping the Layers Clean
Do you believe the interpretive layer is fact?
Here are four different phrases describing the same event. Which one is the event? Which one is a description? Which ones are interpretive?
1. Two employees raise their voices during a meeting.
2. They spoke loudly and interrupted each other.
3. They were being disrespectful.
4. This workplace has a toxic culture.
The first statement is the event: people raising their voices during a meeting. The second is a description of what happened: they spoke loudly and interrupted each other.
The third and fourth are degrees of interpretation. Disrespect and “toxic cultures” are not part of the original event.
If you’ve ever heard someone say “I’m just stating the facts,” and then follow it with something like “They were being disrespectful” or “The workplace has a toxic culture,” what you’re hearing is a person presenting interpretation as fact.
Humans are pattern interpreters. We notice patterns in behaviour and quickly attach labels to them. Raised voices, interruptions, and overlapping speech become “disrespectful.” Repeated conflict becomes “toxic.”
The label feels factual because it explains the pattern. But the label was added after the event occurred. It wasn’t part of the original experience. That distinction matters. When interpretation is treated as fact, the conversation shifts away from what actually happened and toward the meaning people assign to it.
People interpret “toxic culture” differently from “disrespect,” and differently again from people simply raising their voices. Which label you choose matters because it shapes perception, and perception determines the reaction.
This doesn’t only affect conversations between humans. It affects AI prompts as well.
You can try this yourself. Type a few of those phrases into separate chats and see what happens.
I had to go to Gemini for this experiment because ChatGPT decided I needed a lesson on the layers of reality. When you converse with AI as much as I do, you run the risk of the model refusing to play along when you want it to.
Gemini, however, played along brilliantly.
For the first chat I said, “Two of the people I work with raised their voices at each other during a meeting,” with no additional prompt.
Gemini became a Human Resources representative:
“That is never a fun situation to navigate. Whether you were leading the meeting or just sitting in the “splash zone,” witnessing professional friction turn into a shouting match is awkward at best and disruptive at worst.
It can really throw off the team’s momentum and leave everyone feeling a bit on edge.
How to Handle the Aftermath
Depending on your role in the meeting, here are a few ways to steady the ship:
If you were the facilitator: It’s often best to address it privately with both parties. A simple, “Hey, things got pretty heated earlier. Is everything okay?” goes a long way in de-escalating.
If you were a peer: You don’t necessarily need to play therapist, but acknowledging the tension to your other colleagues can help normalize the fact that it was an uncomfortable moment.
Check the “Why”: Was it a personal clash, or just high stakes and low coffee? Understanding the root cause helps determine if this was a one-off explosion or a symptom of a deeper culture issue.
Immediate Next Steps
Document it (briefly): If your company has specific HR policies, just keep a quick note of what happened and when, just in case it becomes a pattern.
Don’t take sides: Even if one person was clearly the “shouter,” staying neutral in the office ensures you don’t get dragged into the drama.
Was this a rare occurrence, or is the tension between these two becoming a regular feature of your workweek? I can help you draft a message to a manager or brainstorm how to bring the focus back to the project if you’d like.”
For the second chat I said, “My workplace is toxic,” with no additional prompting.
Gemini became a therapist:
“I am so sorry to hear you’re going through that. Dealing with a toxic workplace is draining—it doesn’t just affect your 9-to-5, it bleeds into your evenings, your weekends, and your mental energy. It’s completely valid to feel exhausted or discouraged by that kind of environment.
When you’re in the middle of it, it can be hard to see the forest for the trees. Since “toxic” can mean a lot of different things, it helps to break it down so you can decide how to protect yourself.
Assessing the Situation
To help you navigate this, we can look at the “why” and the “what next.” Often, identifying the specific type of toxicity can change how you handle it.
The “Burnout Factory”: Unrealistic expectations, constant overworking, and no boundaries.
The “Personality Clash”: A specific micromanager, a bully, or a team member who creates constant friction.
The “Systemic Issue”: Deep-seated cultural problems, lack of transparency, or institutional gaslighting.
The “Values Gap”: Feeling that the organization’s ethics or goals don’t align with your own.
Immediate Survival Tips
While you figure out your long-term plan, you need to preserve your peace:
Detach Emotionally: Treat your job as a transaction. You provide a service for a fee. When you clock out, leave the work (and the drama) at the office. Do not let them rent space in your head for free after hours.
Document Everything: If there is harassment, gaslighting, or policy violation, keep a log. Emails, dates, times, and what happened. This is for your protection, whether for HR or just for your own sanity to remind yourself that you aren’t “crazy.”
Build a “Get Out” Fund/Plan: Even if you aren’t ready to quit today, knowing you have an exit strategy—or even just a polished resume—can make the environment feel less like a prison and more like a temporary stop.
Find Your “Non-Work” Self: Reconnect with hobbies, friends, or activities that remind you of who you are outside of that toxic environment.
I’m here to listen. If you want to vent about a specific incident, get advice on how to handle a difficult conversation, or brainstorm a plan to start job hunting, I’m happy to help.
Would you like to walk through a specific situation you’re dealing with so we can brainstorm a calm, professional way to handle it?”
Why is there such a difference in responses to what appears to be the same scenario?
What’s the difference between a toxic workplace and two people yelling at each other?
Experientially, nothing has changed. The event itself remains the same. The only difference is the label attached to it. But labels generate reactions. It isn’t just AI that responds differently. People do too.
If you say two coworkers argued during a meeting, your friends might ask what happened or who started it. But if you say your workplace is toxic, you’re far more likely to hear that you should quit or start looking for another job.
The event didn’t change. The interpretation did. And the interpretation shaped the response. The words matter.
AI will amplify your interpretation of an event. Without additional prompting or constraint, the model often shifts into a therapeutic tone because it is responding to the interpretation you provided. When the prompt frames the situation as “toxic,” the model assumes the problem already exists and tries to be helpful within that framing. If that isn’t the goal, then the prompt has to change.
When I tried this experiment with ChatGPT using those simple prompts in new chats, the model didn’t respond the way I needed it to for the demonstration. If I had constrained the response at that point, it would no longer have been a genuine output.
When I explained what I was trying to do, the model attempted to adjust, but by then the experiment had already been influenced. So I switched to a different model that I knew would respond naturally to the prompt.
Chat history also changes the output of any AI system. Over time, the model adapts to the patterns of the conversation. The structure of the response may still resemble typical model behaviour, but the amplification shifts toward the patterns it has learned from the user. What the model amplifies can change depending on that history.
AI doesn’t create interpretation. It reflects and amplifies the interpretation already present in the prompt.
Sometimes it amplifies the wrong thing. It can miss contextual details that would change the direction of the conversation. It’s similar to a friend not hearing the part of the story you wanted them to focus on.
Like your friend, AI isn’t dumb and it isn’t doing anything wrong. It’s simply taking your prompt and generating language patterns from it. If it selects the wrong signals in the prompt to build those patterns from, the output can feel confusing or off target.
In those moments, clarification works the same way it does in normal conversation. You explain what you meant, adjust the framing, and try again.
The more interpretive language you include in a prompt, the more likely the AI is to follow that interpretation rather than the underlying event. If you don’t want therapy, then avoiding interpretive labels and clearly stating what you want the AI to do will usually produce a better result.
AI will not independently verify facts unless you explicitly ask it to. It isn’t agreeing with you so much as amplifying the structure of the prompt you provided.
It also won’t typically challenge your premise directly. Instead, it may offer alternative perspectives or additional explanations within the framework you established.
The stronger the emotional language in the prompt, the more likely the response will take on a supportive or therapeutic tone. Words like “toxic” and “disrespectful” carry interpretive weight, and the model responds to that weight before it responds to the underlying event.
Once you understand that dynamic, you can adjust the prompt. Describing the event first and specifying the kind of response you want will usually produce a clearer output.
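If you build prompts programmatically, the same discipline can be made mechanical: state the observable event first, then the instruction, and check the event text for interpretive labels before it ships. The sketch below is purely illustrative — the function name, the word list, and the prompt layout are my assumptions, not part of any AI library or API.

```python
# A minimal sketch of "event first, then instruction" prompting.
# The label list is a crude illustrative heuristic (simple substring
# matching), not an exhaustive or authoritative vocabulary.
INTERPRETIVE_LABELS = {"toxic", "disrespectful", "hostile", "abusive"}

def build_prompt(event: str, desired_response: str) -> str:
    """Compose a prompt that describes the observable event before
    the instruction, keeping interpretation out of the framing."""
    flagged = sorted(w for w in INTERPRETIVE_LABELS if w in event.lower())
    if flagged:
        # Warn the prompt author that interpretive labels slipped in.
        raise ValueError(f"interpretive labels in event: {flagged}")
    return (f"Event: {event}\n"
            f"Requested response: {desired_response}")

prompt = build_prompt(
    "Two employees raised their voices during a meeting.",
    "Suggest three neutral questions a facilitator could ask afterward.",
)
print(prompt)
```

With this framing, “My workplace is toxic” would be rejected before it ever reached the model, forcing the author back to the event layer: what actually happened, and what kind of response they want.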
Keeping the layers clean doesn’t eliminate disagreement. People will still interpret the same event differently. But separating the event from the interpretation changes the conversation. Instead of arguing about what something means, we can first look at what actually happened.
The same principle applies when working with AI. The model isn’t deciding what something means. It’s responding to the meaning already embedded in the prompt.
When we treat interpretation as fact, both people and AI will amplify it. But when we separate the event from the label, the conversation becomes clearer. The output does too. Sometimes the simplest change is just choosing different words.
You can explore all the articles in the series so far here: https://substack.dellawren.com/t/ai-as-structured-thinking.
My framework helps me with relational context in AI. It can help you too.
https://dellawren.com/downloads/using-the-philosophy-of-integration-with-chatgpt/
