The following is a guest post and opinion from Manouk Termaaten, founder and CEO of Vertical Studio AI.
Governments are using AI to spread disinformation, an unethical and authoritarian practice with serious consequences for freedom and truth around the world.
Nation-states' use of AI for disinformation threatens national freedom, free elections, and sovereignty against foreign adversaries. Governments around the world use this technology to keep populations in a constant state of confusion.
China, for example, has been accused by Taiwan of employing generative AI to scatter disinformation across the island nation. According to Taiwan's National Security Bureau, the goal is to sow division among the Taiwanese public. Taiwan sees this as another attempt to undermine its sovereignty.
In a report submitted to Taiwan's parliament, the bureau said it had already identified more than half a million "controversial messages" in 2025, most of them on Facebook and TikTok.
“As the application of AI technology becomes wider and mature, we know that China’s Communist Party is using AI tools to help generate and disseminate controversial messages,” the report says.
There are many other examples of nation-states using AI for disinformation.
In 2024, the Russian state-backed Pravda network flooded the web with 3.6 million fake articles. The goal? To manipulate AI models into adopting pro-Kremlin narratives and, in turn, sway public opinion. Clearly, Russia sees AI as a vehicle for its propaganda, including propaganda aimed at US citizens.
China also exerted influence through AI in Taiwan's 2024 election. A China-based operation created AI-generated deepfake videos to spread disinformation, including fabricated content about outgoing President Tsai Ing-wen. Microsoft called it the first known instance of a nation-state using AI-generated material to try to influence a foreign election.
During the 2020 US presidential election, Iran used AI-driven trolls in a targeted disinformation campaign intended to destabilize the US democratic process. Those efforts led to two Iranian nationals being charged over a cyber-enabled campaign of disinformation and threats.
In 2023, Venezuelan state media used AI-generated news anchors to spread pro-government propaganda and burnish the regime's image both at home and abroad.
Ahead of Slovakia's 2023 election, an AI-generated audio clip circulated that faked a phone call between a journalist and the leader of the Progressive Slovakia party in which vote rigging was discussed. The deepfake is believed to have spread in the days before the vote and contributed to the party's defeat.
China, for its part, claims the US engages in similar practices, pointing to public investment in "cognitive warfare." Beijing has argued that AI bot networks spread false claims and disinformation that damage China's reputation.
There is clearly great cause for concern about nation-states using AI to interfere with political processes in other countries. By manipulating people's thoughts and decisions, it deeply undermines individual freedom. Nation-states can wield AI as a propaganda tool to curtail individual rights and expand their own power.
AI used in this way undermines the free marketplace of ideas. It props up government narratives, distorts open debate, corrupts political processes, and prevents honest election outcomes.
Government use of AI for disinformation is just another example of those in power abusing it. Except today, AI raises the stakes, allowing governments to make their own populations, as well as populations abroad, believe untruths. It is a slippery slope to totalitarianism.
Spreading disinformation is a form of coercion, and one can only hope that populations recognize the tactic and refuse to fall for the deceptive practices governments deploy against their own citizens.