Guard Against Deceptive AI-Generated Information

Generative AI tools have become much more popular and easy to use in recent years, and we have already seen numerous examples of these tools being weaponized to create false and misleading election information. You can help limit the spread of misinformation by being aware of deceptions and the ease with which generative AI can spread incorrect information.

Thanks to Mekela Panditharatne and Shanze Hasan at Brennan Center for Justice for these suggestions.

  • Verify information from multiple sources, including authoritative outlets and credible independent fact-checkers, especially when dealing with unfamiliar websites. Consult a site like PolitiFact, AP Fact Check, or another organization verified by the International Fact-Checking Network, or the Artificial Intelligence Incident Database operated by the Responsible AI Collaborative, to verify the authenticity of content. You can also submit content to such sites or databases for verification.
  • Be especially careful with emotionally charged content, since strong emotions can cloud critical judgment.
  • Sometimes AI-generated content, like deepfakes, is marked with a disclaimer. Look for markings like “this image has been manipulated.”
  • Be cautious with search engines that present generative AI results. Some search engines, including Microsoft Copilot and Perplexity, provide generative AI “chatbot” responses to searches, and Google’s “AI Overview” feature works the same way. The information in these responses often contains factual errors. Go directly to the original source instead.
  • Check information before sharing it. You can limit the spread of misinformation by double-checking that what you share or boost is accurate.

Some of this information originally appeared in the article, “How to Detect and Guard Against Deceptive AI-Generated Election Information” from the Brennan Center for Justice.
