Canada’s AI regulation will be “airtight” on bias, racism, and hate, Solomon says

At Queertech breakfast, AI minister says feds will regulate lightly on innovation and tightly where harms emerge.

Canada’s AI Minister, Evan Solomon, says that the country’s forthcoming AI regulation will be “airtight” when it comes to bias, racism, and hate.

“If AI is built around narrow teams and narrow use cases, [by] people with narrow experiences, they will give narrow results.”

AI Minister Evan Solomon

Solomon made the remarks at a breakfast hosted by Queertech and the Canadian Queer Chamber of Commerce in downtown Ottawa’s Rideau Club on Tuesday morning. Solomon is in charge of delivering Canada’s refreshed AI strategy, which was initially promised to drop before the end of 2025 but has been repeatedly delayed.

At the breakfast, Solomon said inclusivity can be Canada’s “competitive advantage” and revealed some of the government’s intentions for the upcoming AI strategy. Solomon recalled getting pushback early in his ministerial tenure for saying Canada’s AI regulation would be “light, right, and tight.”

“The first thing people seize on is ‘light,’ as if we’re not going to regulate and not going to protect,” Solomon said, adding that the government will regulate “light” where innovation is needed. “We’re going to be tight when there’s bias, racism, hate; we are going to be airtight, and we’re going to get it right so we can balance that innovation and our values and our protection.”

The minister said inclusion is “not a word that we toss around in the culture war,” but rather that building with diverse perspectives around the table results in trusted and “positive” technology. 

“If AI is built around narrow teams and narrow use cases, [by] people with narrow experiences, they will give narrow results,” Solomon said. 

RELATED: Queer tech founders explore move to Canada as US abandons DEI

Alongside defending the government’s commitment to diversity both generally and in tech regulation, Solomon revealed some considerations going into the AI strategy, including that the federal government “is looking closely” at the right to deletion and algorithmic transparency. Solomon said algorithmic transparency will help determine if there is a built-in bias against marginalized groups, so that systems can be confidently adopted. Studies have shown that algorithmic biases, where systems are not well-trained on datasets representing marginalized groups such as people of colour or members of the LGBTQ+ community, can cause harms like reduced access to health care or employment. 

“We will not get this [AI] right unless we trust it, and … if it’s not inclusive, we won’t trust it,” Solomon said. 

In an “off-script” moment, Solomon also recounted his run-ins with “tech bros,” noting how he’s often prioritized in meetings despite being accompanied by more experienced female colleagues. He said this attitude is “consequential BS” that drives young women away from STEM careers and discourages members of other underrepresented communities from becoming entrepreneurs.

“The data of inclusion and the economic contribution is so positive that to ignore it is not just narrow-minded and destructive, it’s just economically stupid,” Solomon said to applause. “Countries that don’t get that are going to lose; our values of inclusion are our competitive advantage.” 

Feature image courtesy Alex Riehl for BetaKit. 
