One of the chief security threats facing Canada today is the erosion of information integrity. Authoritarian states, namely Russia and China, are now deploying AI as their weapon of choice in a cognitive contest against Western democracies, using disruptive narratives, influence operations, and disinformation to weaken democratic societies, including Canada, from within.

Renowned journalist Maria Ressa, recipient of the 2021 Nobel Peace Prize, underscored the dangers posed by this “mother of all battles” in a speech to world leaders gathered at the United Nations General Assembly in mid-September, warning that “without facts, you can’t have truth. Without truth, you can’t have trust. Without all three, we have no shared reality, and democracy as we know it is dead.”

Fighting AI-enabled disinformation requires a two-pronged defence, one at the individual level and the other state-led. As chair of the G7 for 2025, Canada has an opportunity to lead a coordinated international response, promoting shared standards for information transparency and accountability. It should also move to empower Canadian citizens to respond critically to disinformation through a national media literacy campaign tied to the federal government’s renewed AI Strategy.

The threat of large language model grooming

Large language models (LLMs), the technology behind generative AI, are being deliberately manipulated by authoritarian states through a tactic known as LLM grooming. This form of information manipulation involves producing vast quantities of AI-generated text containing false or skewed narratives that contaminate the data that informs AI systems.

Over time, these poisoned datasets subtly shift how AI models represent reality, resulting in AI systems that begin to echo an authoritarian regime’s preferred framing on important issues while appearing neutral and objective.

A 2025 Washington Post investigation found that Russian-linked content farms, collectively known as the Pravda network, produce tens of thousands of engagement-driven articles every week, masquerading as independent media outlets. In 2024 alone they published more than 3.6 million pro-Russia pieces designed to pollute the internet with disinformation that AI systems later absorb.

Earlier this year, a NewsGuard study revealed that 10 major AI chatbots, including ChatGPT-4, Microsoft Copilot, Gemini, and Meta AI, reproduced falsehoods from Pravda network sources roughly one-third of the time, and sometimes cited the network’s articles directly.

Eroding public trust

When Canadians turn to AI tools for reliable information, these subtle manipulations can erode public trust and distort democratic debate.

The recent Polish presidential election demonstrated how AI-enabled disinformation operations can effectively target democracies. In the lead-up to the vote, Poland’s national cybersecurity agency reported a spike in coordinated disinformation campaigns linked to the Pravda network. Thousands of fabricated articles portrayed NATO as deliberately escalating the conflict in Ukraine and depicted Polish leaders as corrupt or incompetent. When Polish citizens later asked chatbots about NATO or the war in Ukraine, several AI systems, including ChatGPT-4 and Microsoft Copilot, echoed these false narratives.

Information manipulation is no longer hypothetical. AI-driven authoritarian disinformation is actively shaping what millions of people see, read, and believe every day.

Canada a prime target for information manipulation

For Ottawa, these lessons are urgent. Canada’s open media environment, bilingual information ecosystem, and NATO ties make it a prime target for such AI-driven disinformation operations. Domestic debates over natural resources, Indigenous rights, and foreign policy offer fertile ground for manipulation that aims to deepen societal divisions.

Allies have shown that resilience requires both technological regulation and public education. Poland’s FakeHunter initiative combines AI detection with human fact-checkers to debunk falsehoods in real time, while Estonia has integrated deepfake detection, rapid-response cyber units, and mandatory media literacy into its defence strategy.

To protect its information integrity, Canada must act now to mandate transparency and traceability for AI-generated content. As the 2025 G7 chair, Ottawa can use its leadership position to drive a coordinated international response to AI-enabled disinformation and promote shared standards for transparency and accountability.

Canada and its G7 allies should engage directly with US-based private-sector AI firms to implement policies requiring AI chatbots to disclose the sourcing of their outputs. For example, the Google Search summary feature, which appears at the top of a results page, does not clearly list the sources of the summarized information. This restricts the user’s ability to judge the reliability of the information provided, leaving them susceptible to intentionally misleading content such as that produced by the Pravda network.

Fight disinformation to strengthen our democracy

The recent announcement of a new AI strategy task force by Canada’s minister of artificial intelligence and digital innovation is an opportunity to align domestic regulation with this global agenda. A national media literacy campaign tied to this AI strategy can equip Canadians with tools for better understanding and responding to this AI-powered disinformation threat.

This campaign should highlight the threats Canadians face online every day from foreign information manipulation and interference campaigns, reinforce the importance of information integrity as a core democratic principle, and encourage every Canadian to question what they read online, verify sources before sharing, and approach AI-generated content with healthy skepticism.

An educated understanding of AI-related threats, backed by critical thinking, must now be seen as a form of national defence.

Maria Ressa warned that “if we lose information integrity, we lose everything,” and the clock is ticking. By combining national leadership with civic vigilance, Canada can help ensure that truth, trust, and shared reality remain the foundation of its democracy.

Jack Rath

Jack Rath is the global security officer at the Montreal Institute for Global Security (MIGS). He works to understand threats posed by emerging technologies, including AI. He earned his undergraduate degree from the University of Sydney and is completing a master’s in international security at Sciences Po, Paris.