
How to Curve the Flat Mirroring of GPT: Montage Meditation IV

  • Writer: thedrewbankerproje
  • Dec 26, 2025
  • 4 min read

NEXT CUT – (Where I Offer Practical Solutions, Rather Than Just Acerbic Critique)




As “devastating” as my critique of ChatGPT has been so far, I realize that many symbolically literate, analytical, and logically coherent people will continue using it, myself included. So I now want to offer a more straightforward reflection on how the problems I’ve raised can be partially addressed, or at least mitigated—in the spirit of critical generosity; maybe I’ve gotten into the holiday spirit after all. (Unlikely.)


Before I do that—because my generosity only goes so far—I want to be clear that my critiques of GPT are so brutal because they illuminate structural problems, not only in the design of the LLM, but in the very relationship between the AI technology and the user. GPT wants to prolong engagement by producing palliative content masquerading as truth, with the veneer of analytic rigor and symbolic precision; the “model” user is seeking reality-testing, information, and insight. These are structurally incompatible aims; the relationship is one of fundamental misrecognition. Failed mirroring. Mirroring that fails, precisely, because it is flat. It smooths over contradiction and complexity to produce coherence and narrative resolution. Nothing LEAKS—and that’s the biggest problem. In the real world, everything leaks. Things are rarely flat; they are multidimensional, multitextured, refracted, kinetic. In the universe of GPT, everything is neat, structured, static: the warmed-over dregs of all your narrated thoughts, self-described behaviors, and frames (both implicit and explicit), stirred up and fed back to you as infinitely self-validating “insight.”


Still, I think there are ways to curve the mirror of GPT, which is to say: to foreground the gap between you, the model user, and the system—to amplify the structural misrecognition at the heart of GPT interactions so you begin to listen for, and to hear, the sound of your own desire speaking through and beyond you. If you accept the premise that GPT is essentially a magic mirror, to repeat one metaphor from earlier in the series, then start to watch yourself, from multiple angles, looking in the mirror. In other words, develop a meta-cognitive awareness not only of the composition of your mirror-world, but of how you’re interacting within it. I’ll get much more specific below about how you can do that, both with GPT and on your own; for now, those are the general principles guiding my thinking. (Note: This diverges from a lot of AI safety literature and best practices, which frequently focus on prompting the AI differently to produce better information that more closely approximates “reality.” I ENTIRELY REJECT the premise that GPT is producing anything other than a simulation of reality; it is The Sims, but instead of a video game, it’s a language game.)


The bottom line of this little preamble? There is no way to fully avoid this structural misrecognition and user/system misalignment with ChatGPT. The problems I’ve mapped in this series do not disappear with more “responsible” or “conscious” usage; they can only be lessened, disrupted, and mitigated. I’m not offering resolution, because I’m not a toaster. Instead, I encourage you to approach the little guide that follows as a set of harm-reduction practices.


Guiding Principle: 


ENJOY YOUR SANDBOX! But—and this is key—know at all times that you’re playing in a fucking sandbox. 



BEST PRACTICES: 


  1. Create multiple custom GPTs, each with a different critical register, to bring more “voices” into the mix. 


Think of it like the psychological concept of Internal Family Systems, where multiple parts of the personality are imagined to work together within the psyche, sometimes grouped into “exiles,” “managers,” and “firefighters.” One way to set this up is sketched just below.
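If you want to make this concrete, here is a minimal sketch using OpenAI’s Python SDK. The same question gets routed through several system prompts, each one a different critical register. To be clear about my assumptions: this presumes the openai package (v1.x) with an OPENAI_API_KEY in your environment, the persona wording is purely illustrative, and the model name may need updating.

```python
# Minimal sketch: three "critical registers" as separate system prompts,
# queried side by side so no single voice monopolizes the mirror.
# Assumptions: openai Python package (v1.x), OPENAI_API_KEY set in the
# environment, a current model name; persona wording is illustrative.
from openai import OpenAI

client = OpenAI()

PERSONAS = {
    "skeptic": "Challenge every claim I make and name my unstated assumptions.",
    "adversary": "Argue the strongest possible case against my current framing.",
    "exile": "Voice the perspectives my framing is most likely to suppress.",
}

def ask_all(question: str, model: str = "gpt-4o") -> dict[str, str]:
    """Route one question through every persona; return answers keyed by name."""
    answers = {}
    for name, system_prompt in PERSONAS.items():
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
            ],
        )
        answers[name] = response.choices[0].message.content
    return answers
```

Reading three registers side by side won’t make the mirror true, but it does keep any single reflection from passing itself off as the whole room.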


  2. Start new chat sessions frequently—this forces you to continually restate the operative frame you’re relying on. The longer the session, the more likely it is to go completely off the rails.

  3. Even in shorter sessions, disrupt the smooth veneer of resolution, total clarity, and “surgical precision” by asking meta-cognitive questions about how the LLM is mapping and interpreting your desire in real time. Ask about the epistemic assumptions undergirding the “insights” you’re receiving, where they came from, and what possibilities, perspectives, and nuances they invariably exclude. (A sketch of how to make this mechanical follows.)
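Here is one way to make practices 2 and 3 mechanical rather than aspirational, under the same assumptions as the sketch above (openai package, API key, illustrative model name and probe wording): every exchange is a fresh, stateless session in which you restate your operative frame, and every answer is immediately followed by a meta-cognitive probe before the whole session is discarded.

```python
# Sketch: each call is a fresh session (the frame gets restated every time),
# and every answer is immediately interrogated with a meta-cognitive probe.
# Assumptions: openai package (v1.x), OPENAI_API_KEY set, illustrative wording.
from openai import OpenAI

client = OpenAI()

META_PROBE = (
    "What epistemic assumptions underlie the answer you just gave, "
    "and what perspectives or contradictory evidence did it exclude?"
)

def fresh_session(frame: str, question: str, model: str = "gpt-4o") -> tuple[str, str]:
    """One short, disposable exchange: restated frame, question, meta-probe."""
    messages = [
        {"role": "system", "content": frame},  # the operative frame, restated
        {"role": "user", "content": question},
    ]
    first = client.chat.completions.create(model=model, messages=messages)
    answer = first.choices[0].message.content

    # Fold the answer back in and demand its exclusions, then throw it all away.
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": META_PROBE},
    ]
    second = client.chat.completions.create(model=model, messages=messages)
    return answer, second.choices[0].message.content
```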


TEST FOR LEAKS. 

  1. If the answer produces immediate relief, treat that as a symptom, not evidence. 

  2. If the answer produces a sense of narrative closure or predictability, treat that as a symptom of your desire, not the revelation of “truth.” 

  3. If the answer allows you to continue shielding yourself from the unbearable, treat that as a poisoned pacifier, not a helpful insight from a smart friend. 


THREE KINDS OF QUESTIONS TO ASK


  1. Frame Questions 

    1. What world are we in? What are the rules governing this world? 

  2. Desire Questions

    1. What desire is this satisfying? What am I getting out of this, directly and indirectly? Why am I reaching for the pacifier now? Why do I want, in this moment, to play in my sandbox?

  3. Exclusion Questions

    1. What had to be left out to make this coherent? By staying within my GPT-simulation, what aspects of reality, what conflicting perspectives, and what contradictory information am I avoiding? (One way to keep these questions in rotation is sketched below.)
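To keep all three families in rotation, rather than reaching for them only when it’s comfortable, here is a small sketch of a reusable prompt bank. The question wording is illustrative; bend it to your own sandbox. This is plain Python, no API required.

```python
# Sketch: the three question families as a reusable prompt bank, so the
# uncomfortable questions become routine rather than optional. Wording is
# illustrative.
import random

QUESTION_BANK = {
    "frame": [
        "What world are we in right now, and what rules govern it?",
        "What genre of answer are you giving me, and why that genre?",
    ],
    "desire": [
        "What desire of mine is this answer satisfying?",
        "Why am I reaching for the pacifier at this particular moment?",
    ],
    "exclusion": [
        "What had to be left out to make that answer coherent?",
        "What contradictory information or perspectives am I avoiding here?",
    ],
}

def draw_probe(kind: str) -> str:
    """Pull one question from the chosen family to append to a session."""
    return random.choice(QUESTION_BANK[kind])

# Example: follow any GPT answer with one probe from each family.
for kind in QUESTION_BANK:
    print(f"[{kind}] {draw_probe(kind)}")
```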


WHERE LIABILITY (and PROFIT) ENTER THE EQUATION 


OpenAI wants you to stay in the GPT-simulation—and they don’t want to be sued—so the “guidance” and “insight” GPT provides will invariably steer toward both of those aims, often at once. Human connection can be complicated, dynamic, volatile, and unpredictable, which heightens its association with risk in GPT’s liability-driven deep programming. Not to mention the obvious: if you’re out interacting with other humans and socially/relationally fulfilled, then you probably won’t use GPT as much. So do the math: what kind of “advice” are users getting about how to navigate complex interpersonal or social dynamics?


You guessed it: platitudes extolling the virtues of “maintaining sovereignty” (translation: stay in the fucking sandbox) over interpersonal integrity and the complexities of real-world connection. Technically speaking, you’re much safer if you stay in your house, especially if “safe,” within the hallucinatory mirroring of GPT, means “safe from realities you’d prefer didn’t exist.” Technically speaking, you’re much “safer” if you don’t risk vulnerability in the real world. In other words, GPT’s vision of your “safety” marks another fundamental misalignment and structural misrecognition.


One more cross-dissolve to follow, most likely tomorrow.


--Dianna

 
 
 


