Each generation has learned to figure out the dominant media of its time. Boomers learned to decode TV advertising. Gen X questioned the news. Millennials fact-checked viral posts. Gen Z learned how to spot inauthentic influencer branding. 

Gen Alpha – individuals born after 2010 – is facing something unprecedented. Artificial intelligence (AI) is reshaping how content is created and shared, and young people must learn to distinguish the real from the synthetic. Children today are surrounded by content that looks and sounds authentic, yet is entirely generated by AI.

A new literacy challenge: When fake looks too real 

AI-generated content is becoming more realistic, and the tools that produce it are improving quickly. Google Veo 3, for example, can generate high-resolution, photorealistic video from a single text prompt. The results can resemble anything from casual street interviews to reimagined historical events. The lighting is natural, the gestures are eerily lifelike and the pacing is believable. Earlier digital fakes were easier to identify through obvious signs such as visual glitches or awkward animation. Those visual giveaways are becoming harder to spot. Members of Gen Alpha, at an age when they are least equipped to assess what’s on their screens, are growing up with content realistic enough to fool experts.

This isn’t the same as watching a live-action Disney remake filled with computer-generated imagery (CGI) or playing a hyper-realistic video game. It’s true that children can sometimes confuse fantasy with reality. But by the time they are five or six, they typically understand that content defying basic logic — like talking mammals or magic spells — is imaginary. These cues help their developing minds separate fiction from fact.

Children’s reasoning becomes more refined between the ages of seven and eight. They start applying a mix of logic, context, personal experience and trusted input from others to what they see, although it is still inconsistent. But just as that ability sharpens, AI-generated content removes the very cues they rely on.  

It mimics the look and feel of real footage, can imitate the voices or appearances of trusted people and blends seamlessly into their feed in between YouTube videos and TikTok clips. Since children’s ability to evaluate media is still developing, this level of realism makes it harder for them to tell if they are watching a person or a program pretending to be one.  

And it’s not just children. Adults, too, often struggle to tell the difference, especially when content looks credible. Even when content is labelled as AI-generated, the small on-screen warnings are frequently missed, misunderstood or ignored.

The effects become harder to ignore as Gen Alpha continues using this content to form an understanding of the world. This past June, Alberta police issued a provincewide warning after Cybertip.ca reported nearly 4,000 sexually explicit AI-generated deepfake images and videos of youth between 2023 and 2024. This has raised concerns about how AI is being used to exploit and harass young people.  

The same advances making video generation more accessible are also driving its misuse in exploitative and deceptive ways. Children are encountering misinformation as well as faulty AI-generated “educational” science, history and current events videos. Research shows that when teenagers lack the tools to evaluate digital information, it limits how they participate, learn and make informed decisions online.  

These gaps in digital competence are tied to educational and civic outcomes, such as school performance, access to online opportunities, and political and societal participation. These disparities may persist without digital literacy in schools, parental guidance at home and clearer safeguards from platforms.

Building AI literacy where kids learn and live 

Addressing these challenges requires action across multiple fronts. Provinces and school boards in Alberta, British Columbia and Ontario have begun piloting AI education initiatives. However, there is no consistency across jurisdictions, nor is there a unified framework to support teachers, guide parents and ensure that students develop the ability to understand, evaluate and use AI responsibly throughout Grades K-12.

In most classrooms, AI literacy remains optional, fragmented or absent altogether. School boards offer professional development, but teachers note that concerns about AI can’t be meaningfully addressed in the limited time provided. A national survey commissioned by the organization Actua showed that less than half (48 per cent) of educators surveyed felt equipped to use AI tools in the classroom.

Some 46 per cent felt confident teaching responsible AI use and 42 per cent felt ready to teach students how to use artificial intelligence effectively. 

School librarians have raised similar concerns. They point out that many students lack the foundational skills to critically assess AI-generated content, even as smart tools become more integrated into learning environments. 

Globally, a 2023 review of AI literacy efforts found that most programs neither assess what students actually understand nor give much attention to the broader socioeconomic consequences of poorly applied machine learning. Without structured support and dedicated training, the responsibility falls unevenly across schools and classrooms. This leads to inconsistent learning conditions and widens existing gaps in AI literacy. 


The burden on parents is just as heavy. They are expected to manage children’s exposure to increasingly advanced AI tools that generate voices, images and videos. At the same time, they must evaluate and consent to a growing number of apps and devices that collect their children’s data. Yet many parents lack the knowledge, tools or guidance needed to make informed choices. Before expecting parents to help children use AI wisely, we need to give adults the resources and confidence to understand it first. 

Towards a more equitable AI future

Co-ordinated national efforts are needed to ensure all schools have access to trained educators, inclusive AI curriculums and the digital infrastructure for equal learning opportunities in classrooms and at home. AI tools like writing assistants or text-to-speech programs can support learning and improve accessibility for students with different needs. But those benefits only matter if children understand how the tools work and can judge the reliability of the information they produce.  

The groundwork for a stronger, more cohesive countrywide approach to AI literacy for youth should include: 

  • A national K-12 AI strategy that aligns provincial efforts and ensures consistent instruction across provinces. 
  • Required AI training for teachers entering the profession and as part of ongoing professional development to give educators the skills needed to use AI in the classroom confidently and responsibly. 
  • Lessons on deepfakes, evaluation of AI-generated media and principles of data rights and consent as part of AI literacy education taught at age-appropriate levels throughout Grades K-12. 
  • Expanded access for families to bilingual AI literacy resources offering clear, plain-language guidance that helps parents support their children’s use of AI at home and complements what children are learning in school. 
  • Clearer and consistent labels on AI-generated content — including deepfakes — across digital platforms to support transparency and young users’ awareness.   

The digital world is changing quickly. If Canada wants the next generation to grow up informed, capable and confident in what it sees, AI literacy must become a priority. The longer we wait, the harder it becomes to teach what should have been learned from the start.



Elruma Dixon

Elruma Dixon is a bilingual juris doctor graduate from the University of Ottawa and a youth advocate. Her work and interests bridge law, artificial intelligence, and public policy, with a focus on AI literacy and equitable access to education.

