AI Imposter Contacts Five Officials Posing as Marco Rubio, State Dept. Warns


An unidentified imposter has used artificial intelligence to mimic the voice and writing style of Secretary of State Marco Rubio, contacting at least five high-level officials, including three foreign ministers, a U.S. governor, and a U.S. member of Congress. The incident, which began in mid-June, prompted the State Department to issue a warning to U.S. diplomats, according to a cable dated July 3. The impersonation appears aimed at manipulating officials to gain access to sensitive information or accounts.

The imposter created a Signal account using the display name "Marco.Rubio@state.gov" and sent both text messages and AI-generated voicemails to the targeted individuals. While the hoaxes were reportedly unsuccessful and "not very sophisticated," the State Department is investigating the matter and has advised all employees and foreign governments to be vigilant. The FBI has also issued broader warnings about malicious actors using AI-generated voice and text messages to impersonate senior U.S. government officials.

This incident follows a similar attempt in May, when an individual impersonated White House Chief of Staff Susie Wiles, contacting lawmakers and executives. Experts in digital forensics, such as Hany Farid of UC Berkeley, note that AI voice cloning requires minimal audio and can be easily deployed, making such impersonations accessible even to less sophisticated actors. The ease of creating convincing deepfakes poses a growing challenge to information security.

The State Department emphasized its commitment to safeguarding information and continuously improving cybersecurity measures to prevent future incidents. U.S. diplomats are urged to report any impersonation attempts to the Bureau of Diplomatic Security, while non-State Department officials should alert the FBI's Internet Crime Complaint Center.
This event underscores the escalating "arms race" between those developing AI for malicious purposes and those working to detect and prevent such deceptions.