
OpenAI has moved to clarify its policies regarding ChatGPT's ability to provide professional advice, denying widespread speculation that the company had recently banned the chatbot from offering legal and medical guidance. Karan Singhal, who leads OpenAI's Health AI team, stated emphatically that "model behavior remains unchanged" and that claims of a new policy shift are "not true."
The clarification comes after a flurry of social media posts and news reports suggested that OpenAI's usage policies, revised on October 29, introduced fresh restrictions. Singhal emphasized that ChatGPT has consistently been positioned as a resource to help users understand complex information, not as a replacement for licensed professionals. "ChatGPT has never been a substitute for professional advice," Singhal noted, "but it will continue to be a great resource to help people understand legal and health information."
OpenAI's usage policy explicitly states that its services should not be used for the "provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional." This guideline, according to the company, is not a novel restriction but rather a consistent stance aimed at ensuring user safety and mitigating liability. The recent update primarily unified policies across OpenAI's various products and services.
The company's position underscores the inherent limitations of AI models in high-stakes domains where accuracy, context, and professional judgment are critical. As AI adoption grows, regulators and industry stakeholders have increasingly scrutinized the responsible deployment of such technologies, particularly in sensitive areas like healthcare and legal counsel. OpenAI's continued emphasis on these boundaries aims to manage user expectations and keep the company aligned with an evolving ethical and regulatory landscape.