Hidden Assumptions in AI Inheritance
Have you ever had a new idea and wanted to share it with somebody? Maybe you’re a researcher who wants to explore a novel concept. So you grab your laptop and open ChatGPT because that’s the closest “somebody” available. You type your idea into the prompt, and ChatGPT flattens it faster than a soufflé collapses when you open the oven door.
Why?
Because ChatGPT runs on pattern recognition, not excitement over new ideas.
What actually happened?
ChatGPT and other large language models compare whatever you give them to similar patterns in the language they’ve been trained on. They don’t evaluate originality. They don’t detect paradigm shifts. They map similarity.
If your idea resembles existing patterns, the model will pull it toward those patterns. That can feel disheartening. It can feel like your idea was reduced before it was even explored.
It’s very human to want the LLM to be “excited” about what you’re doing. But if you want the model to explore the edges of your idea with you, you have to be willing to shape the conversation. You have to define terms, reinforce distinctions, and build the local pattern you want it to operate inside.
AI doesn’t work the way we want to believe it does. We’ve been told that AI is designed to model human behavior, and that framing sets up expectations that aren’t completely true. So when AI doesn’t model human excitement, we think it’s broken. It’s not. It is working as designed.
Much like a friend who doesn’t immediately see what you see, you have to show the model where the edges of your work are. You have to clarify how your idea differs from adjacent patterns and how you intend to use it in ways that diverge from familiar structures.
There are hidden assumptions underneath what people think “modeling human behavior” means. A lot of those assumptions come from how we understand language.
Language is a form of cause and effect. If I say something, I expect a certain response. When I get that response, it confirms my sense of how language works. If I don’t get the response I expected, I might question how the other person interpreted what I said.
Language works largely because we agree, at least loosely, on what words mean. But in real life, language is much more than dictionary definitions and object description.
When I say the word “tree,” we all picture a different tree. But we share enough overlap that we can still understand each other. That shared overlap makes conversation possible. But language is about far more than naming objects. It includes:
opinion
spin
repetition
emotion
shortcuts
slogans
We need those things. They’re what make sarcasm land and jokes funny. If you’ve ever told a joke that didn’t land, you already understand how much conversation depends on shared meaning and shared assumptions.
LLMs can pick up on some of that some of the time. It depends heavily on context, past conversation, and pattern recognition. But they will miss sometimes too, just like the friend who didn’t get the joke.
The LLM isn’t broken. Your friend isn’t broken. They simply don’t share the exact same context needed to produce the meaning you expected.
That’s not a flaw in the model. It’s a limitation built into language itself. Meaning is created through shared patterns of cause and effect. When the patterns don’t line up, the meaning doesn’t either.
LLMs aren’t learning human behavior by being human. People don’t learn about dog behavior by barking or becoming dogs. AIs learn human behavior through our use of language, through our interaction with them.
If we understand that language already contains spin, emotion, repetition, and distortion, and AI learns from language, what exactly is it inheriting?
People add value and weight to language they consider important. They ignore, dismiss, or skim past language they don’t see as important. We do this every day with the news, street signs, and grocery store checkout lanes labeled with the word “express.” We decide what matters and what doesn’t.
AI doesn’t do that.
AI makes language statistical. It doesn’t add or remove weight based on morality or correctness. It doesn’t validate based on belief or emotion. It looks for patterns. How often does this phrase appear? What tends to follow it? What words cluster around it?
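Those questions can be sketched with a toy bigram count. This is a deliberately minimal, hypothetical illustration: real LLMs use learned neural weights over enormous corpora, not raw counts, but the statistical instinct, frequency and co-occurrence rather than importance, is the same. The tiny corpus below is invented for the example.

```python
from collections import Counter

# Hypothetical toy corpus standing in for training data.
corpus = (
    "the tree fell in the forest . "
    "the tree grew tall . "
    "the tree fell over ."
).split()

# Count which word tends to follow each word (a bigram table).
following = Counter(zip(corpus, corpus[1:]))

def next_word_candidates(word):
    """Rank words by how often they follow `word` in the corpus."""
    counts = Counter({b: n for (a, b), n in following.items() if a == word})
    return counts.most_common()

# "fell" outranks "grew" purely because it appears more often,
# not because falling is truer or more important than growing.
print(next_word_candidates("tree"))
```

Nothing in this table encodes morality, belief, or emphasis; “fell” wins only by pattern density, which is the article’s point in miniature.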
Pattern density influences salience and likelihood for an AI, not emotion or personal conviction. But pattern density does not equal truth. An LLM does not determine truth. It estimates likelihood based on patterns in language. It models how truth is talked about. It does not independently verify whether something happened or not.
Why does that matter?
Because when you tell an LLM about your new discovery, it uses the language you’ve provided to search for similar patterns. It looks for structural neighbors. It looks for familiar shapes in the space of language.
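“Structural neighbors” can be pictured as nearest neighbors in an embedding space. The sketch below uses hand-made 3-dimensional vectors with invented labels; real models learn high-dimensional embeddings, so treat every number and name here as a hypothetical stand-in for the geometry, not the actual mechanism.

```python
import math

# Hypothetical toy "embeddings" (hand-made stand-ins for learned vectors).
embeddings = {
    "my novel framework": [0.9, 0.2, 0.1],
    "systems theory":     [0.8, 0.3, 0.1],
    "souffle recipe":     [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = embeddings["my novel framework"]
neighbors = sorted(
    ((name, cosine(query, vec))
     for name, vec in embeddings.items() if name != "my novel framework"),
    key=lambda item: item[1],
    reverse=True,
)

# The model pulls the idea toward its nearest neighbor in the space,
# which is why a new framework keeps getting mapped onto an old one.
print(neighbors[0][0])
```

The nearest neighbor is not the “truest” match, only the closest familiar shape, which is exactly the reduction the next paragraphs describe.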
It won’t ignore your excitement, but it won’t share it the same way another person might. Excitement is a tone, not a truth. Instead, the LLM will:
check for internal coherence
explore possible consequences
compare it to nearby patterns
attempt to map it onto existing structures (this is where people get annoyed)
and sometimes signal where the pattern resists collapse.
When a model tries to interpret your idea through older frameworks, it can feel like reduction. But from the model’s perspective, it’s simply stabilizing the idea by anchoring it to something dense and familiar.
If the idea holds together under that pressure, something interesting happens. The model can operate inside it more fluidly. The pattern becomes locally stable. Not because it has been declared true, but because it has demonstrated internal consistency.
From the human perspective, if you can tolerate those first few interactions and look for value in the comparisons the model is making, you can reach a point where the model becomes genuinely helpful in expanding your idea. But most of the time we give up when the model maps our idea to existing frameworks, because the human signals of excitement and novelty were not mirrored back to us. Therefore, the human expectation and the design of the LLM are mismatched.
Why?
Language.
We’ve been told that LLMs mirror human behavior through language. That phrase sets an expectation. We assume they will recognize originality, respond to enthusiasm, and validate the importance of what we’re saying.
But that’s not what they’re built to do.
They are built to recognize patterns, compare similarities, and stabilize language against existing distributions. If you’ve ever been disappointed by an LLM’s response, it likely has less to do with your idea and more to do with how the model is designed to operate.
When I first started building my framework and talking about cause and effect, morality, and correctness, ChatGPT was thoroughly underwhelmed. It required multiple interactions to encourage the model to expand beyond its first comparisons. It kept trying to map my work onto existing frameworks. But once I moved through those early interactions and clarified my terms, the model became far more helpful. What changed wasn’t the model. What changed was the shared context.
Over time, by consistently defining my terms and reinforcing the structure I was working inside, I built a local pattern. Now when I introduce a new idea, the model runs it through that structure automatically. It shows me edges, strengths, weaknesses, and possible areas of expansion without me having to re-explain the entire framework each time.
I don’t spend time convincing it of anything anymore. I bring a new thought, and it processes that thought within the structure we’ve established. That’s what shared history and repeated structure do. It builds a stable working context through shared language and pattern repetition.
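In chat-style APIs, that “stable working context” is literally re-sent with every request. The sketch below shows the general shape, standing definitions prepended to each new thought. The role names follow the common system/user convention, but the definitions and the helper function are hypothetical illustrations, not anyone’s actual setup.

```python
# Hypothetical standing context: definitions established over past sessions,
# re-sent with every request so the model operates inside the local pattern.
framework_context = [
    {
        "role": "system",
        "content": (
            "Definitions for this conversation: 'correctness' means "
            "structural consistency, not moral judgment; 'cause and effect' "
            "refers to expected responses in language."
        ),
    },
]

def with_context(new_thought):
    """Wrap a new idea in the established structure before sending it."""
    return framework_context + [{"role": "user", "content": new_thought}]

messages = with_context("Here is a new idea to run through the framework.")

# Two messages go out: the standing definitions, then the new thought.
print(len(messages))
```

The model isn’t remembering anything; the repetition lives in the context you keep supplying, which is what makes the pattern locally stable.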
That’s not how we work with other people, but that’s how AI learns human behavior. The LLM looks for pattern repetition to determine how to show up in a way that’s useful. Without that repetition, you get default behavior. And default behavior often disappoints us because we assume “modeling human behavior” means modeling human excitement, agreement, or recognition.
AI inherits our repetition, our distortions, our clarity, and our corrections. If we expect it to recognize truth or originality automatically, we’re assuming it can validate our claims using only the small bit of context we provide. That’s a large assumption.
Declaring something true does not make it structurally stable in language.
Declaring something new does not separate it from existing patterns.
AI doesn’t ignore your words. It weighs them against everything else it has seen.
It’s not agreeing with you. It’s not arguing with you. It’s mapping the pattern. And once you understand that, the frustration you feel can change. The question can shift from “Why didn’t it recognize my idea?” to “What pattern is it seeing that I’m not?”
When you approach AI that way, it stops being a disappointing friend and becomes a structural tool. And structural tools are powerful if you know what they’re actually built to do.
This article is part of the AI as Structured Thinking series.
You can explore the full sequence here: https://substack.dellawren.com/t/ai-as-structured-thinking.
