As AI scribes flood healthcare, experts stress need for responsible adoption

Ontario’s information and privacy commissioner Patricia Kosseim
Data privacy event unpacks how AI could save doctors time yet erode patient trust.

On Wednesday morning, to mark Data Privacy Day, Ontario’s privacy watchdog hosted an event at Toronto’s MaRS Discovery District focused on the use of AI in healthcare.

On stage, healthcare, privacy, and legal experts unpacked some of the promise and peril of increasing AI adoption across Ontario’s healthcare system. Speakers also outlined some of the steps that healthcare providers ought to take to address risks, ensure that sensitive personal health information is protected, and maintain the trust of patients while using AI tools.

The importance of establishing and maintaining patient trust when deploying AI was a key theme across panels. As Wellesley Institute CEO Dr. Kwame McKenzie noted on stage, “outcomes are better if people trust their healthcare system and their doctor.” But AI, if misused, risks eroding that trust at a time when healthcare systems already face significant capacity challenges.

McKenzie acknowledged that trust and confidence in public services are a problem at the moment. “Increasingly, people are reluctant to give their data because of trust and confidence issues,” he said. “Including [patients] in the conversation increases the chance that we’re going to get good, comprehensive data.”

“Things are moving quickly, and so we probably have to get our house in order really quickly” when it comes to tackling AI governance and privacy in healthcare, he added.

In an interview with BetaKit at the event, Ontario’s information and privacy commissioner (IPC) Patricia Kosseim noted that AI scribes are becoming “more and more prevalent” across the province. In healthcare settings, AI scribes are typically used to capture and automatically transcribe patient-clinician conversations and generate summaries suitable for inclusion in electronic medical records.

These tools, which some physicians have hailed as a game-changer, can be beneficial: Kosseim noted that AI scribes can save doctors time by reducing their administrative burden and help them focus more on “patients rather than paperwork.” They can also help those patients and their third-party caregivers stay informed.

RELATED: Canada Health Infoway’s AI Scribe Program hooks up primary care clinicians with Canadian healthtech

But, as Kosseim and other speakers said, the use of AI scribes can also introduce new risks around data privacy, accountability, and bias. In recognition of this, the IPC published a detailed, 35-page guide on how to adopt this sort of tech safely and legally. It calls for healthcare providers to be transparent with patients about when and how they are using these tools, what their limitations are, and what safeguards they have put in place.

“The reason why we put out this guidance is to try to support innovations like these, but make sure that they’re done in a responsible, ethical, and privacy-preserving way,” Kosseim said.

Kosseim’s office has already investigated a privacy breach resulting from the unauthorized use of US-based Otter’s transcription software by a physician at an Ontario hospital. It found that Otter had recorded a doctors’ meeting and sent information about the patients discussed to dozens of current and former hospital staff, some of whom should not have had access to that data.

Reflecting on that incident, Kosseim said it demonstrates the importance of hospitals and doctors implementing clear policies to govern the appropriate use of tech by healthcare professionals working at their facilities and practices.

RELATED: Ontario bets on made-in-province healthtech with streamlined innovation pathway

“One of the big risks that we’re seeing more and more of, and this case is an illustration, is this kind of ‘shadow AI,’” Kosseim said. Shadow AI refers to AI tools that employees download and use on their own, often without their employer’s knowledge.

Colleen Flood, dean of the faculty of law at Queen’s University, said on stage that accountability is also a concern, adding, “I’m the dean of a law school, and all my little lawyers will run off and work for vendors and write horrible contracts to try to download the responsibility onto the clinician.”

Flood emphasized that healthcare organizations’ contracts with AI software providers will need to be “carefully reviewed and considered.” While large hospitals possess the in-house legal “firepower” to do this, individual family doctors will likely require outside help, she said.

“It’s really important for [healthcare] providers to remember that, despite the pressures, it’s important not to jump at the newest bell and whistle out there in the hope that it’s going to transform their practice or make their lives tremendously easier, without thinking through the risks that they’re onboarding,” Kosseim said.

Flood sees a lot of opportunity and risk when it comes to AI in health, but argued that in order to capture that value, Canada will need to finally modernize its outdated privacy laws, which Flood said even she sometimes struggles to understand and which the federal government intends to address.

“We need big-bang reform here … It’s not working for what we need right now, and we need to fix this,” Flood said.

Feature image courtesy Bard Azima of Livingface Photography.