Memory Controls for ChatGPT
"Code poet, sci-fi spelunker, and cartographer of impossible mazes."

OpenAI just rolled out its improved “memory” feature for ChatGPT, which the company describes as follows:
…Memory in ChatGPT is now more comprehensive. In addition to the saved memories that were there before, it now references all your past conversations to deliver responses that feel more relevant and tailored to you. This means memory now works in two ways: "saved memories" you’ve asked it to remember and "chat history", which are insights ChatGPT gathers from past chats to improve future ones.
You’re in control of ChatGPT’s memory. You can turn off referencing "saved memories" or "chat history" at any time in Settings. If you’ve already opted out of memory, ChatGPT won’t reference past conversations by default. You can also ask ChatGPT to change what it knows about you directly in conversation, or use Temporary Chat for conversations that don’t use or update memory. To see what ChatGPT remembers about you, you can also ask it.
The more you use ChatGPT, the more useful it becomes. New conversations build upon what it already knows about you to make smoother, more tailored interactions over time. This is available in ChatGPT to Plus and Pro users, and Team, Enterprise, and Edu users in a few weeks.
I plan to tune it up and take a closer look.
As a generative AI power user, my use cases are highly siloed: fiction writing and Bad Science Fiction software development. Outside of those two deep areas, I don’t rely on AI much.
For the first silo—fiction writing—I use AI as an editorial consultant through a Custom GPT whose knowledge base is a curated collection of science fiction manuscripts I’ve written.
For the second silo—software development—I currently provide context (e.g. code) manually in large prompts tailored to best support reasoning models. I'm considering expanding these prompts to use the Model Context Protocol (MCP) and integrate code/context from my GitHub repositories.
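To make the manual approach concrete, here is a minimal sketch of what that kind of context bundling might look like. The helper name, file list, and prompt layout are all illustrative assumptions, not part of any actual BSF tooling:

```python
from pathlib import Path

# Hypothetical helper: bundle selected source files into one large prompt.
# The function name, section headers, and overall layout are assumptions
# for illustration only.

def build_prompt(repo_root: str, files: list[str], question: str) -> str:
    """Concatenate chosen files, each labeled and fenced, then append the task."""
    sections = []
    for rel in files:
        text = (Path(repo_root) / rel).read_text(encoding="utf-8")
        sections.append(f"### {rel}\n```\n{text}\n```")
    context = "\n\n".join(sections)
    return f"Project context:\n\n{context}\n\nTask:\n{question}"
```

The appeal of this style is that every token of context is hand-picked per prompt; the trade-off against MCP-style integration is the manual effort of keeping the file list current.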
The key point is that the ChatGPT memory feature—where context is carried over from past sessions—feels less useful for my needs. My going-in position is that my workflows likely benefit more from tight, user-controlled curation than from indiscriminate persistent memory.
That said, I have an open mind—I’ve been surprised before. So I’m willing to explore and run some experiments.
One scenario where memory or persistent context might prove valuable is in moments of serendipity.
For instance, in a previous post (link below), I described how Claude Desktop (3.7 Sonnet) pulled off some unexpected fancy footwork. When it lost connection to a back-end server that normally provided customized analysis code, it improvised: taking the story text and generating a themed simulation of the narrative instead (see Figure 1).
That’s a case where a little cross-pollination between my two context silos—fiction and software—led to something entirely new, and at the very least, interesting.
The Milagro Beanfield War and my Late Night Model Context Protocol Road Trip. Is Claude too Earnest?
To close on a light note, Figure 2 shows the character sketch ChatGPT offered, based on chats it selected at random from my history. I thought it was cute and borrowed a few sentences to update my profile.

What is Bad Science Fiction?
BSF, or Bad Science Fiction, is a collaborative software project on GitHub. As explained in the AI Advances articles, the project is so named because its ostensible goal is to develop a story analysis tool specializing in science fiction—a tool built in collaboration with Generative AI. However, it is just as much an experiment in how best to leverage large language models (LLMs) to amplify software development.