OpenAI CEO Sam Altman has warned ChatGPT users that conversations with the artificial intelligence chatbot carry no legal privilege and could be used as evidence in court proceedings. The warning, initially reported by Cointelegraph and reiterated by Altman in a recent podcast appearance, underscores a critical gap in current digital privacy frameworks.
Altman's concerns stem from the growing trend of individuals, particularly younger users, relying on ChatGPT for highly personal advice, including relationship issues and life coaching. Many engage with the AI as they would a therapist or a trusted advisor, sharing sensitive and confidential information.
Conversations with licensed professionals such as doctors, lawyers, or therapists are protected by established legal privileges, including doctor-patient confidentiality and attorney-client privilege; conversations with AI chatbots currently have no such safeguards. "And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it," Altman stated, emphasizing that no comparable concept exists for AI.
The OpenAI chief described the current situation as "very screwed up," advocating for the urgent development of a legal and policy framework that extends privacy concepts to AI interactions. He warned that if a user discusses "your most sensitive stuff" with ChatGPT and subsequently faces a lawsuit, OpenAI "could be required to produce that" information.
This disclosure comes amid broader discussions and legal challenges concerning AI data privacy. OpenAI, for instance, has faced court orders to preserve user chat data in ongoing legal disputes, including a copyright lawsuit brought by The New York Times. Absent clear legal precedent for AI conversations, companies may be compelled to disclose user data in response to legal demands.
Altman stressed that this issue, largely unforeseen even a year ago, has rapidly become a "huge issue" requiring immediate attention from policymakers and the tech industry. Users are advised to exercise discretion when sharing sensitive information with AI platforms until clearer legal protections are established.