Pentagon Plans AI Propaganda Machine to Control Public Narratives
- ural49
- Sep 9
 

The latest revelations about U.S. Special Operations Command (SOCOM) expose a dangerous push to weaponize artificial intelligence for global propaganda. According to documents obtained by The Intercept, SOCOM is seeking machine-learning systems capable of “suppress[ing] dissenting arguments” and running influence operations with minimal human oversight. This plan is marketed as part of its “Advanced Technology Augmentations to Military Information Support Operations,” but in practice it reads like a blueprint for mass psychological manipulation.
The military argues it cannot keep pace with the speed of online discourse. As the SOCOM document bluntly admits, “The information environment moves too fast for military members to adequately engage and influence an audience.” Instead of acknowledging the limits of such manipulation, SOCOM proposes building systems that “control narratives and influence audiences in real time.” These systems would scrape online conversations, generate messages tailored to individuals or groups, and even target those who attempt to expose the propaganda. “This program should also be able to access profiles, networks, and systems of individuals… attempting to counter or discredit our messages,” the document states.
The hypocrisy is staggering. Officials justify this agenda by pointing to Chinese or Russian information campaigns, claiming America must respond in kind. Yet the record shows the U.S. has long run covert propaganda—whether by smearing China’s Covid vaccines as “fake” or secretly operating anti-Russian social media accounts that quickly collapsed into embarrassment. As Heidy Khlaaf of the AI Now Institute warns, framing these systems as merely “defensive” is misleading: offensive and defensive uses are “two sides of the same coin.”
Experts also highlight how unreliable these tools are. Large language models frequently invent falsehoods and reinforce biases, raising the risk of campaigns backfiring. As Emerson Brooking of the Atlantic Council put it, “AI tends to make these campaigns stupider, not more effective.” The pursuit of deepfakes and automated simulations of entire online populations for propaganda testing only expands the danger.
Link: The Intercept


