A recent court order in the New York Times' copyright infringement lawsuit against OpenAI has ignited significant privacy concerns for millions of ChatGPT users worldwide. The order mandates OpenAI to indefinitely preserve user chat data, including conversations that would typically be deleted, a directive that OpenAI argues fundamentally conflicts with its user privacy commitments. The legal battle highlights the growing tension between intellectual property rights, AI development, and individual data privacy.
The New York Times filed its lawsuit in December 2023, alleging that OpenAI and Microsoft unlawfully used millions of its copyrighted articles to train their large language models, including those behind ChatGPT. As part of the ongoing litigation, a U.S. Magistrate Judge issued an order on May 13, 2025, compelling OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis." The ruling overrides OpenAI's standard 30-day deletion policy for consumer chats and API content.
OpenAI has appealed the order, with CEO Sam Altman stating on social media, "We will fight any demand that compromises our users' privacy; this is a core principle." The company contends that compliance would force it to "disregard legal, contractual, regulatory, and ethical commitments to hundreds of millions of people" and would impose significant technical and financial burdens. The preservation order affects users of ChatGPT Free, Plus, Pro, and Team subscriptions, as well as API customers without a Zero Data Retention agreement.
Cybersecurity expert Bob Gourley, CTO and co-founder of OODA LLC, emphasized the inherent risks of connecting sensitive data to large language models. In a recent tweet, Gourley stated, "Connecting ChatGPT or other models to your data comes with some serious privacy implications. From a legal risk perspective, cases like the NYT vs OpenAI lawsuit is already putting all user queries at greater risk of exposure." He further warned against including personal or business financial information in such chats.
Gourley, whose own firm relies heavily on AI, stressed its cautious approach: "although we use AI extensively at OODA, we are careful with how we do it and the use cases we leverage AI for and do not use it with sensitive data. And don't use connectors." He advocates running local models for inference over personal data to maintain control, a perspective that underscores a broader industry debate on balancing AI innovation with robust data security and privacy protocols.
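To make the local-model approach Gourley describes concrete, here is a minimal sketch using the Hugging Face transformers library. The model name and prompt are illustrative assumptions, not details from the report; the point is that once the weights are downloaded, inference runs entirely on local hardware, so prompts never reach a hosted API and are not subject to a provider's retention policies.

```python
# Minimal sketch: local inference with an open-weight model.
# Model name and prompt are illustrative assumptions; any locally
# hosted open-weight model would serve the same purpose.
from transformers import pipeline

# Weights are fetched once, then all inference runs on this machine.
generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # assumed example model
)

# The prompt stays in local memory; nothing is sent to a remote service,
# so no third party can be ordered to preserve these logs.
prompt = "Summarize the key risks of retaining user chat logs, in three bullet points."
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```

The trade-off is operational: local models require the user to provision compute and manage model updates, but data retention and logging remain entirely under the user's control.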
The court's decision and OpenAI's appeal mark a critical juncture for AI governance, forcing a re-evaluation of data retention policies and user consent in the age of generative AI. The outcome of the case is expected to set significant precedents for how AI companies manage user data, potentially influencing future regulations and industry practices globally.