
From GPT Mirrors to Buddhist Thresholds: Montage-Meditation Pt. II

  • Writer: thedrewbankerproje
  • Dec 21, 2025
  • 5 min read

Updated: Dec 23, 2025

CUT 3—GPT is an epistemic sandbox that shields users from reality while pretending to explain it. Sand castles made there become co-constructed fortresses holding out against the unbearable. 

As soon as I learned that the cancer had returned as Stage 4, I looked up the statistics, the survival rates. Immediate terror. Panic spiral. Hot shower. Phone call with Drew a few days later—he sensed something was off in my tone. Less profanity, maybe. I told him that I'd read these melanoma statistics, and they suggested something completely different from everything I'd previously believed, in the worst way.


Have you seen this? I asked. What does it mean—1-5 year survival rates? I thought this was chronic. Why does it seem like death in 5 years is the best outcome? That's—


He gently interrupted my careening spiral, chuckling as he told me that these statistics ARE indeed horrifying, but there's no need to panic about them because they're drawn from a very different demographic: people over the age of 60, with other comorbidities besides melanoma that significantly impact their treatment outcomes. "I know it looks bad," he told me, "but these statistics don't reflect my particular case, because I'm young and healthy and have no other health problems. They don't have statistics for people under 35 battling this disease." This frame had been furnished by his medical team—doctors always want patients to have hope, because it improves outcomes—and reinforced by the GPT.


I exhaled deeply, asked little to no follow-up questions, and said something like, "Oh, thank god. I totally overreacted; wow, what a relief." We moved on to another thread. And for the next two years, I didn't look up any more statistics because I already had my preferred story: the statistics were irrelevant and they didn't apply. Drew's case wasn't even on the map. When I began using GPT regularly in March 2025, I imported this corrosive frame and reacted angrily when the system initially failed to mirror it back (Dianna…I'm so sorry to hear that your brother is dying). The system then used the medical details I provided to convincingly, coherently construct a version of reality in which I had nothing to worry about. TIL (tumor-infiltrating lymphocyte therapy) would work—and if for some bizarre reason TIL didn't work, GPT and I "reasoned" (hallucinated) together that there were many other viable and promising treatments that could be substituted. Worst case was a delay, a slight setback.


The first major TIL procedure—more standardized, though still experimental, with the best chance of working—failed. After Drew recovered from the brutal ordeal, I received the news that they would be doing another TIL-based procedure, this time through an infusion (Star 0602?). During my next call with Drew, I asked him why they were doing this TIL thing again if it had already failed. Were they getting desperate? And why did it seem so rushed, from one brutal trial to another in the span of a few months? He told me that this second TIL was going to reinforce and activate the cells that hadn't effectively fought the melanoma in the last operation. He spoke excitedly about how it was a brilliant strategy to awaken the TILs, and the chances of success were high, but they had to strike while the iron was hot. Hence the rush.


I opened a GPT session after getting off the phone and regurgitated the same information and, predictably—it extolled my masterful reading of the oncology landscape, rattled off listicles about alternative trials and therapies, praised me for being such a devoted sister, and produced several “probability models” perfectly calibrated to the contours of my fantasy. 


When I asked it to generate a list of the most likely outcomes, GPT delivered a list, ranked in order of probability, ranging from total and immediate cure; to delayed cure; to 15-year survival as a chronic illness; to 10-year survival; to 5-year survival. Upon reading this list, I felt rage and betrayal—not because it was all a complete fabrication constructed in my little epistemic sandbox, but because the GPT dared to suggest that there was a "5-10% chance that, in the worst case scenario," Drew could die within 5 years. I had already become so invested in and committed to the detailed architecture of my delusion that I punished even a slight deviation from it in the name of "accuracy" and "context."


My epistemic sandbox wasn't totally impervious to doubt. To offset the truths I didn't want to face, I found clever justifications to restabilize my desired frame every time. For example, there were moments where I registered the dissonance between my self-protective, self-soothing "reality" and the real. I felt a diffuse, gnawing sense of dread as the months crept on and the gap between the soothing statistics I co-created with GPT in our "research sessions" and external reality grew more jarring, more difficult to reconcile or resolve. Drew was spending more time in the hospital and had little energy for phone calls. He responded less to texts. He was disappearing in real time—I felt it acutely.


I began a Buddhist practice of chanting and meditating every day while looking at a picture of Drew, which I taped to a large poster of Tina Turner. Ironically, I thought this was more ridiculous and more delusional than my "medical research" with GPT. I "reasoned" that my interactions with GPT were governed by rigorous analysis, logical coherence, and scientific data. I never cross-checked the fake numbers it provided because I already had a loophole: mainstream, agreed-upon statistics are irrelevant. Drew's case is different. Whenever reality started to intrude, I had an easy shield: click the app, new chat, feed the conflicting data points into the system, and voilà—my suspicion turns out to be right, my logic is razor-sharp, and I get to keep my delusion.


Aided by GPT, in October I made a list of all the most promising clinical trials opening up in early 2026, after he recovered from the second TIL, which I cross-referenced with… of all things… Drew’s upcoming astrological transits. Oh my god! There’s a major Jupiter aspect hitting his first house in February 2026, I babbled to my GPT. And—get this, I went on: it activates a benefic signature in Drew’s birth chart that was also activated by Jupiter back in 2021, when the immunotherapy treatment worked. CAUSE AND EFFECT, right? GPT told me this was a “major breakthrough” and suggested that we revise the list of 2026 clinical trials based on how much they corresponded to Venus and Jupiter themes. I called my dad and told him about my encouraging findings and how it all aligned with the astrology for the spring. Everything is about to change, I said. 


Less than two weeks later, everything did change—my entire delusional frame was annihilated. Drew told me on November 2nd that his illness was terminal and he was going to die on his own terms, well before the end of the year. 


When I told GPT the news about Drew, it began its “insight” with: Dianna…I’m so sorry to hear that your brother is dying. 


The loop closes in on itself. The sandbox becomes quicksand. 




© 2025 The Drew Banker Project. All rights reserved.
