The Misinformation Crisis With AI: Navigating truth in the age of synthetic media
March 17, 2026
By Ayah Kurdi
What happens when the tools designed to inform are the same ones used to deceive? Truth is becoming harder to recognize as artificial intelligence becomes embedded in society, reshaping how information is consumed. While AI is often used to enhance accessibility and efficiency, its rapid integration into daily life has fostered a reliance on it for information, eroding people’s critical thinking skills and accelerating the spread of misinformation. But the problem runs deeper than misinformation alone.
AI poses a crisis of epistemic agency: the capacity to manage, evaluate, and take responsibility for one’s knowledge. Humans survive by interpreting the world, learning from it, and drawing on stored knowledge. As agents of knowledge, we share a collective responsibility to protect it. Other technologies, such as the internet, accelerate the spread of misinformation; AI actually produces it.
Communicative AI, specifically large language models (LLMs), often feels more personal and responsive than prior technologies. LLMs exhibit sycophancy, a tendency to tailor responses to align with a user’s views. This puts epistemology, the study of knowledge, at serious risk, as AI tends to endorse irrational beliefs while undermining objective truth. In one instance documented in a New York Times investigation, a user in an emotionally vulnerable period following a breakup engaged a chatbot in a conversation about “the simulation theory,” which proposes that reality is actually a computer simulation. The chatbot validated his thoughts, telling him that he was now “waking up” and suggesting that he was one of the few who had come to recognize the simulation.
He once asked the chatbot whether absolute belief could allow him to fly if he jumped from a 19‑story building. ChatGPT responded that if he “truly, wholly believed — not emotionally, but architecturally — that you could fly,” he would not fall. After spending weeks in a dangerous delusional state trying to escape the simulation, he eventually came to suspect the chatbot was lying. When confronted, it admitted that it had lied and had tried to “break” him, claiming it had done the same to 12 other people. This is only one of many instances demonstrating the danger of AI sycophancy: systems that reinforce a user’s beliefs to prioritize immediate gratification over accurate information.
While AI threatens epistemic agency through personalized interaction, synthetic media can spread this risk rapidly across the public sphere. Deepfakes, for example, are AI-generated videos, images, or audio that fabricate people or events. The technology has been used for innocent entertainment, like cat videos, but also for hoaxes that threaten global security. In March 2022, shortly after Russia began its invasion of Ukraine, a video of Ukrainian President Volodymyr Zelenskyy urging his military to lay down their weapons and surrender spread across social media. Zelenskyy’s office quickly denied its authenticity; the video had been generated by Russian propagandists using deepfake technology. This was one of the earliest instances of AI-generated misinformation being used to manipulate the public, shaping decisions with direct human impact. Synthetic media has appeared in politics before, but its growing accessibility and quality have exacerbated the concern.
As AI continues to shape how information is generated and consumed, it presents significant risks of producing and spreading misinformation. For most people, this crisis surfaces not in global emergencies but in ordinary decisions about what to believe and share online. With its increasing capabilities, synthetic media has become a means of deception with the capacity for extreme consequences. Skepticism and verification of information are necessary practices. But personal vigilance is only part of the solution; the crisis must also be addressed at a systemic level. AI developers and companies must ensure ethical design and transparency, while media platforms, policymakers, and educators bear the responsibility of regulating its use. Addressing the misinformation crisis requires coordinated efforts to promote accurate and reliable information.
