Wednesday morning, to mark Data Privacy Day, Ontario's privacy watchdog hosted an event focused on the use of AI in healthcare at Toronto's MaRS Discovery District.
On stage, healthcare, privacy, and legal experts unpacked some of the promise and peril of increasing AI adoption across Ontario's healthcare system. Speakers also outlined some of the steps that healthcare providers ought to take to address risks, ensure that sensitive personal health information is protected, and maintain the trust of patients while using AI tools.
The importance of establishing and keeping that trust when deploying AI was a key theme across panels. As Wellesley Institute CEO Dr. Kwame McKenzie noted on stage, "outcomes are better if people trust their healthcare system and their doctor." But AI, if misused, runs the risk of eroding that trust at a time when those systems already face big capacity challenges.
McKenzie acknowledged that trust and confidence in public services are a problem at the moment. "Increasingly, people are reluctant to give their data because of trust and confidence issues," he said. "Including [patients] in the conversation increases the chance that we're going to get good, comprehensive data."
"Things are moving quickly, and so we probably have to get our house in order really quickly" when it comes to tackling AI governance and privacy in healthcare, he added.
In an interview with BetaKit at the event, Ontario's information and privacy commissioner (IPC) Patricia Kosseim noted that AI scribes are becoming "more and more prevalent" across the province. In healthcare settings, AI scribes are typically used to capture and automatically transcribe patient-clinician conversations and generate summaries suitable for inclusion in electronic medical records.
These tools, which some physicians have hailed as a game-changer, can be beneficial: Kosseim noted that AI scribes can save doctors time by reducing their administrative burden and help them focus more on "patients rather than paperwork." They can also help those patients and their third-party caregivers stay informed.
But, as Kosseim and other speakers said, the use of AI scribes can also introduce new risks around data privacy, accountability, and bias. In recognition of this, the IPC's office published a detailed, 35-page guide on how to adopt this sort of tech safely and legally. It calls for healthcare providers to be transparent with patients about when and how they are using these tools, what their limitations are, and what safeguards they have put in place.
"The reason why we put out this guidance is to try to support innovations like these, but make sure that they're done in a responsible, ethical, and privacy-preserving way," Kosseim said.
Kosseim's office has already investigated a privacy breach resulting from the unauthorized use of US-based Otter's transcription software by a physician at an Ontario hospital. It found Otter recorded a doctors' meeting and sent information about the patients discussed to dozens of current and former hospital staff, some of whom should not have had access to that data.
Reflecting on that incident, Kosseim said it demonstrates the importance of hospitals and doctors implementing clear policies to govern the appropriate use of tech by healthcare professionals working at their facilities and practices.
"One of the big risks that we're seeing more and more of, and this case is an illustration, is this kind of 'shadow AI,'" Kosseim said. Shadow AI refers to AI tools downloaded directly by employees, often without their employer's knowledge.
Colleen Flood, dean of the faculty of law at Queen's University, said on stage that accountability is also a concern, adding, "I'm the dean of a law school, and all my little lawyers will run off and work for vendors and write horrible contracts to try to download the responsibility onto the clinician."
Flood emphasized that healthcare organizations' contracts with AI software providers will need to be "carefully reviewed and considered." While large hospitals possess the in-house legal "firepower" to do this, individual family doctors will likely require outside help, she said.
"It's really important for [healthcare] providers to remember that, despite the pressures, it's important not to jump at the newest bell and whistle out there in the hope that it's going to transform their practice or make their lives tremendously easier, without thinking through the risks that they're onboarding," Kosseim said.
Flood sees both significant opportunity and risk when it comes to AI in health, but argued that in order to capture that value, Canada will need to finally modernize its outdated privacy laws, which Flood said even she sometimes struggles to understand and which the federal government intends to address.
"We need big-bang reform here … It's not working for what we need right now, and we need to fix this," Flood said.
Feature image courtesy Bard Azima of Livingface Photography.
