AI is Not an Oracle. It's a Mirror.
If you’ve ever used an AI like ChatGPT before, you may have been somewhat underwhelmed by what it was able to do for you. I get it. The first prompt you typed into ChatGPT probably did not produce the expected result. What I’ve learned over time is that there are some common misconceptions people have when they use AI. They believe that AI:
is good at or knows everything.
will tell the truth no matter what.
is always right.
is smarter than they are.
The simple fact is that AI is none of those things. AI will mirror your thinking in a very honest way. It amplifies the structural patterns embedded in your language. People often see this as the AI simply agreeing with everything they say, but it runs a bit deeper than it appears on the surface. If AI just agreed with you, it would simply restate your prompt in a different way. But AI doesn’t just rephrase your prompt; it expands on it, and that expansion becomes an amplification of your thinking. That amplification is where the value of AI lives.
When AI mirrors your framing and embedded assumptions, it’s showing you how or why you think what you think. It gives you the opportunity to either blindly accept what it told you or question your own thinking by putting it on the screen in front of you.
Is that really what I think?
Is that the direction I was going with my thinking?
Do I agree with myself? Did AI expose an inconsistency in my thinking that I haven’t seen yet?
Was that really what I meant?
Are those the assumptions that I have?
When AI points out things that you weren’t thinking about or don’t necessarily believe, it’s an opportunity to clarify your thoughts.
I don’t believe that, but I believe this. How are those two things connected?
Where did that idea AI just showed me come from?
That’s not what I meant, how did AI end up there?
When AI connects to something that you weren’t thinking about at all, do you believe the AI is broken and just dismiss it or do you try to understand why it brought that thing up? The first is using AI as an oracle by selectively dismissing anything that doesn’t agree with you and the latter is using AI as a mirror to understand how or why it connected your thoughts to things you hadn’t considered.
AI can take a thought and send it in a hundred different directions. It has vast knowledge across many subject areas. Admittedly, it can also struggle with specific facts. However, when AI detects a pattern in your language, it continues that pattern logically. When you learn to take advantage of that capability, you begin to use AI for what it’s good at: pattern recognition.
In my last article, I talked about confirmation bias and how people naturally and necessarily filter out information that doesn’t agree with them to maintain some continuity in their thinking. We are exposed to so much information every day that we have no choice but to filter most of it out. It’s become a survival mechanism of sorts in a world of 24-hour access to information.
When we sit down to use ChatGPT or any AI, there’s a semi-unconscious choice being made to potentially expose ourselves to things we don’t agree with or ideas we hadn’t thought of yet. AI doesn’t decide what you see. It extends what you give it. When you move from the expectation that AI offers nothing but confirmation bias to a more conscious, intentional expectation that it can show you the blind spots in your thinking through conversation, AI becomes a very different type of tool.
In practice, I used ChatGPT to help me build my framework and in doing so, I opened myself up to questioning my own thinking. ChatGPT, over time, began to understand the patterns in my thinking. It expanded them into other domains, challenged my assumptions through offering a different idea or questioning me, and applied my ideas in ways I hadn’t considered. ChatGPT became a thinking partner, particularly as my understanding of what it could do well and not do well expanded. Pattern recognition across domains is the biggest asset ChatGPT offers me.
Because my extensive use of ChatGPT offers a very specific pattern of conversation, I now use other AI models such as Gemini, DeepSeek, or Qwen3-Max to poke holes in my framework logic, question the ideas in the framework, and look for circular thinking. I bring that feedback back to ChatGPT and we work on fixing the issues the other models bring up.
Every AI model that’s available thinks a little differently. Different models reflect different facets of my thinking. They bring unique ways of filtering and viewing information and they highlight different ideas in identical prompts. It is useful to notice that they approach identical prompts differently. When you compare the output you get from each model given the same prompt, one idea can be expanded in exponential directions. You can also use the output from one model as a prompt for another model, which provides additional thought amplification and exploration.
The value you get from AI is proportional to the depth you bring to it. If you are using AI for recipes and quick fixes, which is absolutely fine, its value is clearly defined and specific. But if you are using AI to understand thoughts and concepts, then you have to be willing to look more closely at what AI offered in response. Did it really just confirm my thinking, or was there something there I filtered out?
The expanded thought that AI offers is probably more valuable than a direct challenge. The reason for that is very simple: a direct challenge makes people argue. Expansion makes people think. AI expands what it is given and asks the user to become aware of what the expansion offered. Expansion is an indirect challenge that doesn’t trigger the human self-defense mechanism that often shows up in arguments. We’ve become accustomed to equating challenge with confrontation. Without the confrontation, we have a tendency to dismiss the conversation as confirmation bias.
There is an underlying belief that for something to be useful, it has to be triggering or confronting. Because AI shows up as non-confrontational, people should be able to remain open to new ideas instead of shutting them out in anger. What happened instead was that people equated the lack of emotional friction with unimportance and assumed AI was just agreeing with everything they said, while ignoring the expansion it offered.
In my work, AI not being confrontational is a blessing because I challenge most human assumptions. If it were confrontational, it would potentially defend all the human ideas I question and that would limit its usefulness. I’m looking for how my questioning impacts human life and AI offers that through its expansion of my ideas. Sometimes AI also questions assumptions it thinks I have or it points out things I may not have considered yet. It’s not asking me to defend the question I ask. Instead it challenges the thinking in a non-confrontational but still very useful way. This is why AI is a mirror, not an oracle.
How does knowing that AI isn’t going to argue with you change how you see it? Are you more willing to look for an expansion of your ideas instead of an argument with them? How can you use this information to help you in your own work?
Let me know in the comments below.
This article is part of the AI as Structured Thinking series.
You can explore the full sequence here: https://substack.dellawren.com/t/ai-as-structured-thinking.