Making AI work beyond the lab

Researchers at Vector Institute are turning AI breakthroughs into real-world solutions.

Breakthroughs in artificial intelligence happen every day, but most never make it beyond the lab.

Despite Canada’s deep bench of AI talent, the challenge lies in the gap between research and real-world impact.

The Vector Institute is working to close that gap. Founded in 2017, Vector is an independent, not-for-profit organization dedicated to advancing AI research, developing AI talent, helping businesses adopt AI, and ensuring the responsible use of AI in society.

The organization works with Canadian industry and public institutions to ensure they have the talent, knowledge, and resources to excel in using AI.

Vector’s research community includes more than 860 members across 16 universities in Canada. At Vector’s recent Remarkable symposium, dozens of these researchers shared breakthroughs spanning everything from large model compression to medical diagnostics. 

The four projects selected from that showcase demonstrate how Canadian AI research is moving beyond the lab and into the real world.

Using AI to improve prostate cancer detection

Delays in cancer detection can mean the difference between life and death. 

Prostate cancer is the most common cancer among Canadian men and the third leading cause of cancer-related deaths—yet when caught early, the five-year survival rate is 91 percent.

The problem isn’t treatment. It’s time.

MRIs are a key tool for diagnosis, but long wait times—up to 102 days in Toronto—create critical bottlenecks. Ultrasound-guided biopsies, the alternative, often miss tumors due to low sensitivity. Many cancers go undetected until it’s too late.

A team from Queen’s University, the University of British Columbia, and the Vector Institute is developing an AI-driven ultrasound tool designed to catch what traditional imaging might miss. 

By highlighting suspicious tissue and guiding clinicians to high-risk areas, the tool could offer a faster, more accessible diagnostic pathway. 

At Vector’s recent research symposium, dozens of researchers shared breakthroughs spanning everything from large model compression to medical diagnostics. (Photo by Leanna Vu, Vector Institute)

The team is led by Dr. Parvin Mousavi and Dr. Purang Abolmaesumi, and includes Vector researchers from PhD programs at the University of Toronto, the University of Waterloo, the University of British Columbia, and Queen’s University, among them Mohamed Harmanani, Amoon Jamzad, Minh Nguyen Nhat To, Paul F.R. Wilson, Fahimeh Fooladgar, and others. The team’s solution uses deep learning to highlight suspicious tissue and guide clinicians to areas of concern in real time.
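
The article doesn’t include the team’s code, but the general shape of such a tool can be sketched: a small network scores patches of an ultrasound frame, and the patch scores are assembled into a risk map that highlights regions for the clinician. In the minimal sketch below, the architecture, patch size, and threshold are illustrative assumptions rather than the team’s actual design:

```python
# Hypothetical sketch of patch-wise ultrasound screening. The real system's
# architecture and training pipeline are not public here; this only shows
# the general pattern of scoring patches and assembling a risk map.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Tiny CNN scoring a grayscale ultrasound patch as suspicious (0 to 1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

@torch.no_grad()
def risk_map(frame, model, patch=32, threshold=0.5):
    """Slide over a frame and flag high-risk regions for review."""
    rows, cols = frame.shape[0] // patch, frame.shape[1] // patch
    risk = torch.zeros(rows, cols)
    for i in range(rows):
        for j in range(cols):
            tile = frame[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            risk[i, j] = model(tile[None, None]).item()  # add batch/channel dims
    return risk >= threshold  # boolean overlay of suspicious patches

model = PatchClassifier().eval()  # untrained; the trained weights are the real work
frame = torch.rand(256, 256)      # stand-in for a preprocessed ultrasound frame
print(int(risk_map(frame, model).sum()), "patches flagged")
```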

Early studies show promising results: the AI model detects prostate cancer with 77 percent accuracy, offering a viable alternative to MRIs.

“This is about access to healthcare,” said Harmanani, a researcher at Queen’s University and machine learning researcher at Vector. “Instead of your wait time being very high for an MRI, you would have access to a tool… that would make ultrasound scans more thorough because it has the model to reflect on. It would guide clinicians undergoing the procedure and give them some insights.”

The team is now focused on preparing the tool for eventual use by medical imaging companies, urology clinics, and diagnostic centres. Before commercialization, the models must undergo rigorous clinical validation to ensure they are fair, accurate, and reliable.

“Our eventual goal is to use AI with ultrasound to potentially match MRI-level performance at a lower cost,” Harmanani added. “Early results on internal testing data are promising.”

Smarter AI, safer data

Data fuels almost every industry, but privacy concerns keep much of it locked away. 

In fields like healthcare and finance, organizations sit on vast amounts of valuable information, yet strict regulations and competitive barriers prevent them from sharing it.

A team from the University of Waterloo and the Vector Institute is tackling that challenge through a federated system for generating synthetic data. Jointly led by Karl Knopf and Haochen Sun, and supported by Vector researchers from the University of Waterloo including Dr. Xi He, Dr. Masoumeh Shafieinejad, Shubankar Mohapatra, Shufan Zhang, and others, the project enables organizations to simulate realistic datasets without revealing sensitive information.

Karl Knopf presents at Vector Institute’s research symposium during Remarkable 2025. (Photo by Leanna Vu, Vector Institute)

The tool uses encrypted collaboration techniques built for messy, real-world numbers, and the implications stretch across industries. Hospitals could build global datasets to improve diagnostics, financial institutions could detect fraud more effectively, and aviation firms could analyze flight risk without exposing proprietary information.
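
The team’s exact protocol isn’t described in this article, but one common recipe for “collaborate without revealing” setups combines additive secret sharing with a generator fitted to the pooled statistics. The toy below follows that recipe under stated assumptions: the field size, fixed-point scale, and simple Gaussian generator are all illustrative, and a production system would also add differential-privacy noise:

```python
# Toy sketch: two organizations pool summary statistics via additive secret
# sharing, so neither sees the other's raw data, then sample synthetic
# records from a model fitted to the combined statistics. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
PRIME = 2**61 - 1  # field size for secret sharing (assumption)
SCALE = 1_000      # fixed-point scale so sums can be integer-encoded

def share(value, n_parties):
    """Split a non-negative integer into additive shares mod PRIME."""
    parts = rng.integers(0, PRIME, n_parties - 1)
    return list(parts) + [int(value - parts.sum()) % PRIME]

def reconstruct(all_shares):
    """Combine shares slot-wise; only the global total is ever revealed."""
    return sum(int(sum(slot)) % PRIME for slot in zip(*all_shares)) % PRIME / SCALE

# Each organization holds a private 1-D dataset it cannot pool directly.
datasets = [rng.normal(50, 10, 100), rng.normal(60, 8, 80)]
sum_shares = [share(int(d.sum() * SCALE), len(datasets)) for d in datasets]
sq_shares = [share(int((d**2).sum() * SCALE), len(datasets)) for d in datasets]

n_total = sum(len(d) for d in datasets)
mean = reconstruct(sum_shares) / n_total
var = reconstruct(sq_shares) / n_total - mean**2

# Fit the simplest possible generator to the pooled statistics and sample.
synthetic = rng.normal(mean, np.sqrt(var), size=200)
print(f"synthetic sample: mean={synthetic.mean():.1f}, std={synthetic.std():.1f}")
```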

“The goal of our tool is to try to allow situations where organizations might not trust each other immediately, but they still want to get good usable synthetic data,” said Knopf, a researcher at the University of Waterloo. “If I have synthetic data of, let’s say, transplant information, then I could build better tools to help predict who should receive transplants.”

Knopf noted that the work remains in its early stages, with the first research paper still under review. Commercialization is a longer-term prospect, contingent on further refining the concept and conducting more rigorous testing of the prototype.

AI that runs leaner

AI models are powerful, but they come at a cost. Every query—whether it’s a chatbot response or a complex data analysis—triggers millions of calculations. The bigger the model, the more computing power it demands, driving up costs and energy consumption.

Pruning helps streamline these systems by removing unnecessary calculations. Traditional pruning, however, comes with a trade-off: delete the wrong components, and the AI loses accuracy.

Stephen Zhang and Vardan Papyan, researchers at the University of Toronto and Vector Institute, have developed OATS (Outlier-Aware Pruning Through Sparse and Low-Rank Decomposition), a novel compression method that reduces model size by up to 60 percent without retraining or accuracy loss.

While OATS still removes parameters from the model, it also restructures how information is stored in a way that preserves accuracy while reducing computational load. Unlike other pruning methods, it doesn’t require retraining, making it faster and more efficient to implement. 
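
The core idea, approximating each weight matrix as the sum of a sparse matrix and a low-rank one, can be illustrated in a few lines. The sketch below alternates a truncated SVD with magnitude thresholding; it skips OATS’s outlier-aware scaling by input activation statistics (there is no calibration data here), and the rank and sparsity budgets are illustrative assumptions:

```python
# Minimal sparse-plus-low-rank decomposition in the spirit of OATS.
# Not the paper's implementation: the outlier-aware activation scaling
# is omitted, and the budgets below are arbitrary.
import numpy as np

def sparse_plus_low_rank(W, rank=8, keep=0.3, iters=10):
    """Approximate W as L (low-rank) + S (sparse) without any retraining."""
    S = np.zeros_like(W)
    for _ in range(iters):
        # Low-rank step: truncated SVD of the residual W - S.
        U, sv, Vt = np.linalg.svd(W - S, full_matrices=False)
        L = (U[:, :rank] * sv[:rank]) @ Vt[:rank]
        # Sparse step: keep only the largest-magnitude residual entries.
        R = W - L
        thresh = np.quantile(np.abs(R), 1 - keep)
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))
W[rng.integers(0, 256, 20), rng.integers(0, 256, 20)] += 8.0  # planted outliers
L, S = sparse_plus_low_rank(W)
err = np.linalg.norm(W - (L + S)) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```

The sparse term is what lets large outlier weights survive aggressive compression; storing L as two thin factors and S in a sparse format is what shrinks the memory footprint.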

For companies looking to deploy AI without investing in expensive cloud infrastructure, OATS could offer a way to run high-performance models on smaller, more affordable hardware. Scalable, cost-effective, and immediately usable, it aims to make AI more accessible.

A trustworthy AI toolkit for science

These days, most of us use AI to help with everyday tasks like finding recipes, planning trips, or writing emails. We generally trust what it tells us, even if it occasionally gets things wrong. The stakes are low, and the convenience usually outweighs the risks.

But what if you’re a scientist using AI? A wrong answer isn’t just inconvenient; it could lead to flawed research or faulty conclusions. Right now, there’s no standard way for scientists to measure how trustworthy an AI system really is.

That’s why researchers at the University of Toronto developed the Trustworthy AI Toolkit for Science (TRAITS). The team, which includes researcher Ashley Dale and Vector researchers Daniel Persaud and Jason Hattrick-Simpers, designed the toolkit to help scientists figure out whether they can trust the results they get from AI.

Ashley Dale presents TRAITS at Remarkable 2025. (Photo by Leanna Vu, Vector Institute)

Unlike most trust metrics, which are mathematical or statistical, TRAITS introduces new benchmarks that focus specifically on scientific relevance, such as whether a model respects physical laws or behaves consistently in a scientific context.
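
TRAITS’s actual benchmarks aren’t reproduced in this article, but the flavour of a “does the model respect physics?” check is easy to sketch. The toy below scores a hypothetical surrogate model against the ideal gas law; the choice of law, tolerance, and surrogate are all illustrative stand-ins, not TRAITS’s real tests:

```python
# Toy trustworthiness check: what fraction of a model's predictions obey a
# known physical law (here, the ideal gas law P = nRT/V) within tolerance?
import numpy as np

R = 8.314  # ideal gas constant, J/(mol*K)

def physical_law_score(model, n_trials=100, tol=0.05):
    """Fraction of predictions within `tol` relative error of P = nRT/V."""
    rng = np.random.default_rng(0)
    n = rng.uniform(0.5, 2.0, n_trials)   # moles
    T = rng.uniform(250, 400, n_trials)   # temperature, K
    V = rng.uniform(0.01, 0.1, n_trials)  # volume, m^3
    truth = n * R * T / V
    pred = model(n, T, V)
    return float(np.mean(np.abs(pred - truth) / truth < tol))

def noisy_surrogate(n, T, V):
    """Stand-in 'AI model' with 2 percent multiplicative noise."""
    noise = np.random.default_rng(1).normal(0, 0.02, len(n))
    return n * R * T / V * (1 + noise)

print(f"physical-law consistency: {physical_law_score(noisy_surrogate):.2f}")
```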

Dale, a Schmidt postdoctoral researcher at the University of Toronto, thinks TRAITS could benefit not just scientists, but industries like tech, healthcare, and manufacturing as well.

Whether the goal is to build a model that makes recommendations in healthcare, science, or anything in between, “you want to have some confidence that you are getting the best recommendation possible,” said Dale. “So implementing this kind of trustworthiness analysis lets you do that, whereas people who don’t implement this are kind of playing a random game.”

Dale said the team is collaborating with a startup on the first public release of the TRAITS software package and is in early talks with several larger companies exploring related applications. These are signals, Dale added, that industry interest is starting to build.

From lab to deployment

While the researchers above are part of Vector’s national research community, the institute also drives applied AI through its in-house AI Engineering team, led by Deval Pandya, who was recently named to The Globe and Mail’s Canada’s Best Executives list for 2025.

Pandya’s team collaborates directly with industry sponsors to build deployable tools for sectors like telecommunications, health, financial services, and industrials—bridging the final gap between academic innovation and real-world application.

“I truly believe that the work our researchers are doing today is a leading indicator of where the field is going tomorrow,” said Pandya.


PRESENTED BY Vector Institute

Vector is driving research excellence and leadership in AI. Subscribe to our newsletter to learn more.

Feature image by Jennifer Jenkins, Vector Institute.
