OpenAI is introducing a significant enhancement to ChatGPT’s memory capabilities, moving beyond the previous approach in which users manually saved certain details. The latest update allows ChatGPT to automatically draw on the entirety of a user’s past conversations to inform future interactions, creating a more personalised experience. The feature, known as long-term or persistent memory, is currently being rolled out to ChatGPT Plus and Pro subscribers, although it is not yet available in the UK, EU, Iceland, Liechtenstein, Norway, or Switzerland due to regional regulatory restrictions.
Previously, ChatGPT’s memory function was straightforward: users would decide what information to save, ranging from tone and style preferences to personal interests and ongoing projects. Now, alongside these user-added “saved memories”, the AI will independently generate insights from the “chat history” to better understand and anticipate a user’s needs. The ambition behind this development is to make ChatGPT increasingly useful and tailored over time.
Rohan Sarin, Product Manager at AI speech technology firm Speechmatics, underscores the value of this type of memory in fostering a deeper relationship between user and machine. Speaking to TechRadar, Sarin said, “Personalization has always been about memory. Knowing someone for longer means you don’t need to explain everything to them anymore.” He illustrated this with an example: ChatGPT might, when asked to recommend a pizza place, factor in a user’s known fitness goals and suggest options accordingly, demonstrating a subtle form of contextual understanding that goes beyond direct instructions. Sarin added, “That’s how we get close to someone. It’s also how we trust them.”
OpenAI’s CEO, Sam Altman, has expressed similar sentiments on social media, describing memory as a way to develop AI systems that “get to know you over your life, and become extremely useful and personalised.” The potential for enhanced assistance is clear; however, it also raises questions about the nature of users’ growing dependence on such systems.
Despite its advances, AI memory differs from human memory in crucial ways. Humans instinctively filter and compartmentalise information depending on context—distinguishing between private, professional, important, or ephemeral details—whereas AI systems lack non-verbal cues and nuanced understanding. Sarin notes that without this capability, ChatGPT might continue to reference outdated or irrelevant information, leading to awkward or inappropriate suggestions. He warned, “Our ability to forget is part of how we grow. If AI only reflects who we were, it might limit who we become.”
In professional settings, persistent memory could considerably improve productivity. Julian Wiffen, Chief of AI and Data Science at data integration platform Matillion, told TechRadar, “It could improve continuity for long-term projects, reduce repeated prompts, and offer a more tailored assistant experience.” However, Wiffen also highlighted important concerns regarding privacy, control, and data security. “I often experiment or think out loud in prompts. I wouldn’t want that retained – or worse, surfaced again in another context,” he said. The risk is especially pronounced in technical environments where sensitive code or proprietary information might be inadvertently remembered by the AI, potentially breaching intellectual property rules or compliance requirements, particularly in regulated industries.
OpenAI has emphasised that users will continue to have some control over memory, with options to delete individual memories, disable the memory feature completely, or use a new “Temporary Chat” mode that prevents memory retention altogether. Yet, Wiffen expressed reservations about the sufficiency of these controls, citing a lack of fine-grained transparency and the challenges of ensuring compliance with data protection laws such as the EU’s GDPR. He remarked, “Even well-meaning memory features could accidentally retain sensitive personal data or internal information from projects. And from a security standpoint, persistent memory expands the attack surface.”
Different AI platforms have taken varying approaches to memory. For example, the AI assistant Claude avoids persistent memory outside the current session, trading personalisation for increased privacy and control. AI tools like Perplexity focus solely on retrieving real-time information rather than remembering past interactions. Meanwhile, Replika, designed for emotional companionship, embraces long-term emotional memory to foster deeper connections. These distinctions reflect the diverse goals and use cases across AI applications.
As AI conversations and interactions become increasingly intertwined with personal and professional lives, the evolving nature of memory in AI is poised to transform user experience substantially. The ability of ChatGPT to remember extensive histories could lead to greater efficiency and relevance, but it also introduces complex questions about data management and the implications of relying on AI as a persistent digital companion. The TechRadar report highlights both the capabilities and challenges of this cutting-edge development, noting that the benefits are closely tied to the need for clear safeguards and transparent control mechanisms.
Source: Noah Wire Services