Artificial intelligence tools, including ambient listening devices and AI scribes, are transforming the health-care sector. However, they have also opened a new area of clinical risk involving privacy, accuracy and potential bias.

To address this, the Office of the Information & Privacy Commissioner for British Columbia released guidelines in January for health-care organizations that have adopted or plan to adopt these tools. At the same time, the Information and Privacy Commissioner of Ontario released guidelines for all provincial entities, including the health-care sector, on privacy and AI tools in general.

Since Canada lacks a comprehensive AI regulatory framework after Bill C-27 died in the previous Parliament, health-care providers and their institutions must ensure that their use of these devices does not violate the Personal Information Protection and Electronic Documents Act, health privacy laws and various other frameworks and guidelines.

What is an AI scribe?

The Ontario Medical Association defines an AI scribe as “a digital tool that’s designed to automate time-consuming tasks, like data entry or note-taking. AI scribe technology uses artificial intelligence to summarize or capture spoken conversations and compile them into electronic and clinically relevant medical notes.”

Physicians then review the AI-generated information before it is added to the patient’s record.
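The workflow the OMA describes (capture a conversation, draft a note, then have a physician review it before anything enters the chart) can be sketched in a few lines of hypothetical Python. This is only an illustration of the review-before-record principle, not any vendor's actual system; the transcribe and summarize placeholders stand in for whatever speech-to-text and summarization models a real product would use.

```python
# Hypothetical sketch of the AI-scribe workflow described above: the tool
# drafts a note from a recorded conversation, but nothing reaches the
# patient record until a physician has reviewed and approved it.
from dataclasses import dataclass

@dataclass
class DraftNote:
    encounter_id: str
    text: str
    physician_approved: bool = False

def transcribe(audio: bytes) -> str:
    """Placeholder for a speech-to-text step (vendor-specific in practice)."""
    raise NotImplementedError

def summarize(transcript: str) -> str:
    """Placeholder for turning a raw transcript into a draft clinical note."""
    raise NotImplementedError

def draft_note(encounter_id: str, audio: bytes) -> DraftNote:
    transcript = transcribe(audio)
    return DraftNote(encounter_id, summarize(transcript))

def commit_to_record(note: DraftNote, chart: list) -> None:
    # The key safeguard: an unreviewed AI draft never enters the chart.
    if not note.physician_approved:
        raise PermissionError("Note must be reviewed and approved by a physician first")
    chart.append(note)
```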

The OMA says a provincial study has shown an average decrease in documentation time of 70 to 90 per cent with AI scribe tools when compared to physicians completing the paperwork on their own.

However, scholars and practitioners such as Rahul Mehta warn that AI scribes often prioritize conversation flow over clinical nuance, which could lead to the omission of critical details that were mentioned only briefly.

There may also be errors in which the device adds incorrect information because of background noise, accents, multiple speakers or misheard words. Contextual errors may occur during transcription because AI scribes lack an understanding of clinical reasoning, so a physician's prescription or other details may be transcribed incorrectly.

As well, there’s always the danger of AI hallucination, which happens when the software generates an output that appears to be correct or plausible but is false or misleading.

More needs to be done

The Ontario privacy commissioner and the Ontario Human Rights Commission recommend proceeding with caution because the risks and benefits remain unclear. That's a step in the right direction, but broader AI governance measures remain lacking and are badly needed.

Last May, Prime Minister Mark Carney established a new cabinet position, the minister of artificial intelligence and digital innovation, and appointed Evan Solomon to that role.

However, one month later, Solomon said his goal was to put less emphasis on AI regulation and more on finding ways to harness the technology’s economic benefits. In addition, the 2025 federal AI Strategy Task Force was dominated by industry and lacked equal participation from independent researchers, equity-seeking groups and civil society.

A growing trend

The speed of innovation in the health-care sector, in the absence of clear regulations governing the use of these tools, is concerning.

Many third-party vendors are also offering AI tools beyond scribes, with features such as recommendations for diagnosis, medications, treatment plans and more. As a result, new risks may emerge because even small errors can have significant consequences in terms of transparency, privacy, safety, equity, access, accountability and informed consent.

Transparency

Last July, Sunnybrook Health Sciences Centre in Toronto announced its emergency department had started a trial of DAX Copilot, an AI scribe. The Ottawa Hospital, the Royal Victoria Regional Health Centre in Barrie, Ont., and Hamilton Health Sciences, among others, are doing the same.

The trials are still underway, but according to Sunnybrook and Microsoft's FAQs, DAX Copilot is expected to reduce the time spent on documentation by about seven minutes per encounter. They say this should allow doctors more time to interact with patients.

The hospital is looking to both technology and private donors to address its long wait times, though it remains unclear whether the AI pilot specifically received third-party funding or whether patients and staff were consulted before adoption.

Risks to privacy and safety

Using ambient listening devices and AI in clinical settings poses real privacy risks.

At Sunnybrook, recorded clinician-patient conversations are retained for 30 days before secure destruction. Since these tools capture confidential health information, explicit patient consent is required before use. When patients do consent, they agree that their data can be used to train the AI model. Those who wish to opt out are informed of this option at registration and may do so at any time.
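The consent and retention rules described above can be expressed as a rough sketch in hypothetical Python. This is an illustration of the stated policy (explicit opt-in consent before recording, secure destruction after 30 days), not anything Sunnybrook actually runs.

```python
# Illustrative sketch of the consent and retention rules described above;
# not Sunnybrook's actual system.
from datetime import datetime, timedelta, timezone

RETENTION_PERIOD = timedelta(days=30)  # audio retained 30 days before secure destruction

class Recording:
    def __init__(self, patient_id: str, consented: bool):
        # Explicit consent is required before any audio is captured;
        # patients can also withdraw consent at any time (not modelled here).
        if not consented:
            raise PermissionError("Explicit patient consent is required before recording")
        self.patient_id = patient_id
        self.created_at = datetime.now(timezone.utc)

    def due_for_destruction(self, now: datetime | None = None) -> bool:
        """True once the 30-day retention period has lapsed."""
        now = now or datetime.now(timezone.utc)
        return now - self.created_at >= RETENTION_PERIOD
```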

Many issues also remain unaddressed nationwide, including data protection, informed-consent challenges in the emergency department and the accuracy of the documentation these tools create.

For example, the B.C. privacy commissioner warns that health-care practitioners should not rely on implicit consent and that express consent should be obtained in writing.

Patients are also encouraged to review their medical records for accuracy, which raises the question of whether the onus for ensuring accurate records is being partially shifted onto patients.

Equity and accessibility

Research on AI scribes has found a documentation gap between what is said during a consultation and what is transcribed. In the United States, for example, error rates are higher for African American patients than for their white counterparts.

Patients from different cultural backgrounds, disadvantaged groups or individuals with low literacy levels may face challenges when advocating for themselves. Moreover, because of the power asymmetry in clinical encounters, they may be more susceptible to errors in their recorded conversations and more likely to opt in.

What does this mean for Canadians?

In Ontario, health information custodians who plan to develop, procure or use AI systems must follow the province's Personal Health Information Protection Act when disclosing health information via AI scribes and similar tools. Third-party vendors may also be subject to the federal Personal Information Protection and Electronic Documents Act.

The provincial privacy commissioner recommends that data custodians stay updated on technological developments and risks.

As AI evolves to include treatment, diagnosis, prescription and lab test recommendations, there needs to be more action to ensure privacy, data security, human oversight and consent.

Helen Beny

Helen Beny is a postdoctoral research fellow for the Open Air project at York University, where she examines key challenges in the governance of artificial intelligence and intellectual property law in Africa. She holds a PhD in political science, specializing in comparative public policy from McMaster University.
