Context Collapse: When Layers Blur
The fact is simple: “They raised their voice.”
How much story do you have to tell to get from that fact to any one of these?
“They were aggressive.”
“They are unstable.”
“They are toxic.”
“This is why people are dangerous.”
“This proves the system is broken.”
The answer is probably not much. That’s not because people are bad, lying, or just trying to get attention. It’s because context collapses the moment we start telling the story. And the story is the part we typically hand to an LLM like ChatGPT.
What’s the difference between prompting ChatGPT with “they spoke” and “they yelled at me”?
How much is the story layer affecting the output we get from the LLM?
To answer that, we have to understand the three layers of experience I have identified in this framework.
The first layer is the observable event as it happens, before language and awareness are added to it. We don’t have access to this layer, because the moment we observe an event, we’ve already added awareness. Layer 1 exists only when something happens and nobody sees it.
Layer 2 is the descriptive layer. Describe the experience in as few words as possible without adding anything new. “They said something.” That is a layer 2 description. Why? Because it doesn’t describe what they said, how they said it, or why they said it. It just says what they did: they spoke, with no additional context.
Layer 3 is the meaning layer. This is where we add context. This is where what they said, why they said it, or the conditions under which they said it are added.
My original phrase in the article was “They raised their voice.” That is actually a meaning layer phrase, not because it’s wrong, but because “raising a voice” means different things to different people. It adds context to the action of speaking. Did you raise your voice because the person is deaf? Did you raise your voice because you were angry? Did you raise your voice because you were in a loud room? The reason you raised your voice is part of the meaning layer. It’s not part of what actually happened.
To precisely describe an experience we have to say what happened without adding context, additional meaning, or interpretation around how or why something was done or said.
“The tree fell.”
“They said something.”
“They spoke.”
“They moved the chair.”
“They sat down.”
“They left the room.”
Imagine a world where we explained experience this way. I get it, because chances are you’re thinking, “well, that’s boring” or “we’d never really know what happened in any experience.” You’re right, we wouldn’t know, and that’s exactly what makes people uncomfortable. But here’s a difficult reality: if we weren’t there witnessing or taking part in the experience, it wasn’t our experience to have.
Layer 1 experience is why we have cameras. It’s why we put cameras on the street, in public spaces, and at our front doors. What happens when nobody is around? It’s part burning question, part safety mechanism, and part interference.
Generally, what we are consuming is not the event itself. It is someone else’s meaning layer. And once we step into someone else’s meaning, we are no longer just observing. We are participating in layer 3 narrative.
Why do we need to participate in somebody else’s meaning layer?
We participate because ambiguity is destabilizing. Minimal description feels incomplete. Meaning feels like control. When we inherit someone else’s interpretation, we also inherit a stabilized narrative. That narrative reduces uncertainty. Reducing uncertainty is one of the ways we try to prevent harm.
What does that have to do with AI?
What do you bring to ChatGPT through your prompts?
You can’t bring layer 1 experience. The LLM doesn’t have lived experience. You bring layers 2 and 3 to your prompts.
If you type “A person sat down.” into a ChatGPT prompt without additional story, the model will probably ask questions to gain context. The model needs the meaning layer. It needs to know why you told it somebody sat down; otherwise it won’t know what to do with what you said.
We’ve already seen why we need layer 3 meaning. Safety and context matter depending on the scenario. Layer 3 meaning matters with LLMs too.
If you tell a story and you write, “They yelled at me,” the LLM will respond differently than if you write, “They said x.”
Why?
Because the model has no access to the event itself. It only has access to your framing of it. It wasn’t there. It cannot retrieve layer 1. It cannot verify layer 2. It must rely on the meaning layer you provided in order to generate a response. So when you collapse description and interpretation together in your prompt, the model does not separate them for you. It inherits the collapse. It amplifies the framing you chose.
“They yelled” compresses the act of speaking into an interpretation of tone and intent. Speaking and yelling are variations of the same general action. “Yelling” introduces meaning. That meaning informs the response, whether the responder is another human being or an LLM. The model is not reacting to an event because it wasn’t there. It is reacting to your chosen layer of interpretation of that event.
What the LLM does very well is show people the impact their story or narrative has on the response they get back.
When you type in a story about somebody yelling at you, the LLM responds very differently than it would to a story about a conversation you had with somebody with no yelling involved. The word “yelling” is going to generate a specific type of response whether that is what is wanted or not.
We say this all the time in general discourse and conversation:
Words matter.
But they matter beyond just hurting somebody’s feelings. Words matter because they determine how your story is interpreted. There is implied meaning in words like:
yelling
throwing
slamming
stomping
crushing
beating
It is the implied meaning that drives the reaction, not the story itself. AI amplifies that predictably and consistently, in a way that is visible to us.
Why?
Because AI is driven solely by language. It has no experience to rely on. It relies on the pattern recognition of groups of words in order to “understand” what you’re telling it.
Probably the most interesting part of this is that many people have become aware of the impact of words in news headlines. They recognize when a news outlet is trying to get them to click through. But headlines don’t reflect people’s own word choices back to them the way AI does.
When you’re trying to gain a specific output from an LLM, your word choice matters. The more interpretation and emotion you add, intentionally or unintentionally, the more the LLM is likely to drift into therapy language. When you get a response rooted in therapy language, it’s not because the AI is broken; it’s because your word choice prompted that response.
You can try this for yourself. Create the same story and use different words for the same general action. Replace speaking with yelling. Replace walking with stomping. Add the phrases from the beginning of the article such as “they are toxic” and then create the same prompt again without those phrases. Use both as prompts and then notice the difference in the output of the LLM.
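The experiment above can be sketched in a few lines of Python. Everything here is illustrative: the verb mappings are my own assumptions based on the word pairs named in this article, not part of any framework specification.

```python
# Meaning-layer (layer 3) verbs mapped to descriptive (layer 2) equivalents.
# These pairings are illustrative assumptions, drawn from the examples above.
LAYER3_TO_LAYER2 = {
    "yelled": "spoke",
    "stomped": "walked",
    "slammed": "closed",
    "threw": "moved",
}

def descriptive_variant(story: str) -> str:
    """Return a copy of the story with meaning-layer verbs replaced
    by their descriptive equivalents."""
    words = story.split()
    # Naive word-level swap; casing and punctuation of replaced words
    # are lost, which is fine for a side-by-side comparison.
    swapped = [LAYER3_TO_LAYER2.get(w.lower().strip(".,"), w) for w in words]
    return " ".join(swapped)

story_layer3 = "They yelled at me and slammed the door"
story_layer2 = descriptive_variant(story_layer3)

print(story_layer3)  # They yelled at me and slammed the door
print(story_layer2)  # They spoke at me and closed the door
```

You could then paste both variants into the same LLM, one at a time, and compare the responses. The difference in tone between the two outputs is the meaning layer made visible.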
Presence matters too.
When you tell a story to someone who wasn’t there, they must rely on your interpretation, just as the LLM does. They respond to the layer you choose to amplify through your language. AI simply makes this visible. It exposes the fact that the reaction you receive is shaped less by the event and more by the meaning you stabilized in language.
Before asking whether the model is biased, ask what layer you handed it through the words you chose. We don’t have to change the experience to change how we interpret and talk about the experience. We just have to become more aware of how our word choices impact how people and LLMs respond to our stories.
This article is part of the AI as Structured Thinking series.
You can explore the full sequence here: https://substack.dellawren.com/t/ai-as-structured-thinking.
Apply the framework with ChatGPT. https://dellawren.com/downloads/using-the-philosophy-of-integration-with-chatgpt/
