Meta Removes AI Video Falsely Announcing Irish Presidential Candidate's Withdrawal and Election Cancellation


Meta has removed an AI-generated video designed to mimic an RTÉ news bulletin, which falsely depicted Irish presidential candidate Catherine Connolly withdrawing from the upcoming election and claimed the vote had been cancelled. The video, which circulated on Facebook and Instagram, was taken down for violating Meta's policies on manipulated media and election interference. The incident highlights the escalating challenge of artificial intelligence being used to spread political misinformation, particularly as elections approach.

The deceptive video claimed that Catherine Connolly, an independent Teachta Dála (TD) for Galway West, had announced her withdrawal from the 2025 Irish presidential race, and further stated that the election itself would be cancelled. As reported by Techmeme, citing The Irish Times, Meta acted after the content was flagged by users. RTÉ, Ireland's national public service broadcaster, confirmed the video was not authentic and expressed concern over the misuse of its branding.

Catherine Connolly's campaign team swiftly condemned the AI-generated content, describing it as a "dangerous form of misinformation" and a serious attempt to mislead voters. Connolly, a prominent figure known for her advocacy on human rights and social justice, officially declared her candidacy for the presidency with a platform focused on strengthening democratic institutions. Her campaign emphasized that she remains fully committed to her presidential bid.

Meta confirmed the removal, underscoring its commitment to protecting election integrity by combating deceptive AI content. The company has recently updated its policies to impose stricter measures against deepfakes and manipulated media that could mislead voters or interfere with democratic processes. These measures include proactive detection, user reporting mechanisms, and rapid removal of content that falsely depicts individuals in political contexts.

The incident is a stark reminder of the growing threat posed by sophisticated AI-generated content in political campaigns worldwide. As election cycles intensify, technology companies face mounting pressure to develop robust strategies for identifying and removing deepfakes that aim to deceive the public and erode trust in information. It also underscores the importance of verifying content, especially during sensitive electoral periods.