Election Commission of India Mandates AI Labels for Campaign Videos by October 2025


Journalist Carla Marinucci has underscored the urgent need for legislation mandating clear labeling of all artificial intelligence (AI) generated videos, emphasizing its critical role in consumer and voter protection. "Legislation is needed — for consumer/voter protection — requiring all AI videos to be clearly labeled," Marinucci stated on social media. Her call aligns with a growing global movement to regulate AI-generated content, particularly in the context of elections and public trust.

In a significant move addressing these concerns, the Election Commission of India (ECI) issued an advisory on October 24, 2025, requiring political parties to clearly label all AI-generated or AI-altered campaign videos. The mandate, effective immediately, aims to curb the spread of hyper-realistic synthetic content that could mislead voters and undermine electoral integrity. The ECI stressed that such content, especially videos depicting political leaders making sensitive statements, poses a "deep threat" to fair elections.

Similar legislative efforts are underway in the United States, where bipartisan bills have been introduced to require the identification and labeling of AI-generated online content. The "Protecting Consumers from Deceptive AI Act" and the "AI Labeling Act of 2023" seek to implement digital watermarks or metadata for AI-created images, videos, and audio. These proposals aim to protect consumers, children, and national security from misinformation and deepfakes, which have been used to mimic public figures and spread false narratives.

California has also enacted several laws, effective January 1, 2025, to safeguard voters from deceptive AI-generated media, including provisions requiring large online platforms to block or label "materially deceptive" election-related content. Furthermore, the U.S. Federal Communications Commission (FCC) has ruled that AI-generated voices in robocalls fall under the Telephone Consumer Protection Act, following incidents such as deepfake robocalls imitating President Biden's voice to suppress voter turnout.

The concerted global push for AI content labeling reflects a broad consensus that transparency is necessary to maintain public trust and combat misinformation. From national election bodies to federal and state legislatures, authorities increasingly recognize the imperative of distinguishing authentic content from AI-generated fabrications. This regulatory trend is poised to set new standards for digital media, ensuring that consumers and voters are adequately informed about the origin of the content they encounter.