7 Comments
Schnee Bashtabanic:

Hello Ms. Wren (it's me, Schnee, from Mastodonia). Thanks, this was the first of your podcasts I listened to. I'm interested to know: is ChatGPT the AI you use most, or exclusively? Which other AIs might you use, and why? If you had to distinguish between the AIs, what strengths and weaknesses does each have? And finally, do you usually use the top model versions? Thanks. One last question: have you published your prompts anywhere, or the framework for them, so others can map in their own personalized context for this 'pressure' variable, which I agree is important?

Della Wren:

ChatGPT is the one that, for me, runs the logic most cleanly. Was it exclusive? Not entirely. I used other models, like Gemini, Claude, DeepSeek, and Qwen3-Max, to try to break the logic, then fed their output back to ChatGPT. Basically, I'd be looking for where my ideas were circular or, based on their questions, what gaps in my logic I could fill and what loopholes I could close. Most models love to bring morality, spirituality, or even philosophy back in somehow.

ChatGPT is, for me, the one that does the best job. Gemini, even with the heavy constraint of my framework, can't completely follow the constraints. DeepSeek and Qwen3-Max get very machine-logic forward with the framework, making their output more difficult for human consumption, although still useful.

I don't have a prompt list. I want to show people how to prompt, not what to prompt. The trick with AI is to remove emotional language for the most part. I've been working on a use case that combines the framework with AI. In the framework, there is a section for AI that has some info available.

I hope that answers most of your questions!

Della Wren:

Just to add, you can tell any model to run a given scenario through the framework found at https://philosophy.dellawren.com.

Any model that can access the Internet can read my framework and apply its logic.

Pay attention to what happens when you do that though. If you've been interacting with any model enough, you've generated a pattern of response and it will try to stick to that. If you want it to respond differently, you'll have to enforce that through re-prompting and clarifying.

Schnee Bashtabanic:

Have you heard of or read the following study?

The Platonic Representation Hypothesis (Huh et al., 2024, arXiv:2405.07987)

They point to something interesting in their Limitations section (lol ... that is what Grok tells me, at least ... I hope it is true).

It reminds me of something you touch on (excuse me if I make mistakes ... I have not gone through all your philosophy, so I work on probabilities too ... lol) with your idea of consequences.

Della Wren:

My framework is causation-based. Everything runs on cause and effect, not narrative.

So when we’re talking about AI, the models don’t map “reality” directly. They learn patterns in how reality tends to be described.

From my perspective, the convergence you’re pointing to happens because only certain mappings hold up under repeated cause-and-effect relationships. Incoherent ones don’t survive.

So you get convergence at the structural level.

Where it splits is here:

LLMs approximate those patterns of structure through language.

Humans interpret them and layer meaning on top.

So convergence shows up in the mapping.

Divergence shows up in the interpretation.

Schnee Bashtabanic:

Thanks, I will have a look at it (am looking at the moment) ... it needs further analysis ... it looks like you might have created some kind of non-code code ("not rape-rape," as Whoopi Goldberg would say). This 'Integration Layer' that output must first be fed or filtered through ... I must look at how you achieve this to make your dough.

Della Wren:

Humans carry around what we usually call “baggage.”

In the framework, that’s existing causal structure — patterns built from past experience, pain, trauma, belief systems, and ongoing thought.

When a new experience occurs, it doesn’t arrive clean.

It comes into contact with that existing structure.

That interaction is what I’m pointing to as the integrative layer.

It’s not something we consciously do.

It’s where the new experience meets what’s already there, and gets shaped by it.

Interpretation, emotion, and response all emerge from that contact.

The framework can help us understand or see how new experience is shaped by everything that's gone before.
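The "new experience meets existing structure" idea above can be sketched as a toy model. To be clear, this is purely illustrative: the class name, the weighting scheme, and the numbers are my own hypothetical stand-ins, not anything taken from the framework itself. The point it shows is just the one made above: the same raw event is interpreted differently depending on the causal structure it lands in, and the interaction also reshapes that structure.

```python
from dataclasses import dataclass, field


@dataclass
class CausalStructure:
    """Hypothetical stand-in for a person's existing causal structure:
    prior patterns (past experience, pain, beliefs) stored as weighted
    associations with kinds of experience."""
    weights: dict[str, float] = field(default_factory=dict)

    def integrate(self, experience: str, raw_valence: float) -> float:
        """A toy 'integrative layer': the new experience does not arrive
        clean; its interpretation is shifted by whatever prior weight the
        structure already carries for it."""
        prior = self.weights.get(experience, 0.0)
        interpreted = raw_valence + prior  # shaped by what's already there
        # The contact also updates the structure itself, so the next
        # experience of this kind meets a slightly different history.
        self.weights[experience] = prior * 0.9 + interpreted * 0.1
        return interpreted


# Two different histories, same raw event:
jaded = CausalStructure(weights={"criticism": -2.0})  # prior pain attached
fresh = CausalStructure()                             # no prior structure
print(jaded.integrate("criticism", 1.0))  # -1.0: shaped negative by baggage
print(fresh.integrate("criticism", 1.0))  # 1.0: arrives effectively "clean"
```

The same raw input comes out with opposite valence depending on the structure it meets, which is the convergence-in-mapping versus divergence-in-interpretation point from earlier in the thread.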