Nearly 20 percent of Canadian AI developers say they have never encountered a single safety consideration for LGBTQ+ people in their work. Some even seem hostile to the suggestion that tech might not be inclusive.
The findings come from QueerTech’s Inclusive AI Development Research Report, released this week in partnership with Abacus Data. The report found that fewer than half of AI developers believe their AI products meet the needs of LGBTQ+ users, compared to 65 percent who believe their products meet the needs of the general population.
The data was collected through an online survey of 100 AI product developers in Canada in December 2025. While the sample size was small, QueerTech said the respondents worked across leadership, engineering, product, operations, and responsible AI roles at multiple stages of the AI product lifecycle.
The report highlights the disconnect between how developers prioritize inclusive AI and how it’s implemented in reality. It also revealed a streak of bigotry among Canadian AI developers. QueerTech said that 11 percent of submitted responses were homophobic, transphobic, or generally hateful toward the LGBTQ+ community in tone.
A sample of responses to a question about how developers manage non-binary representation in AI systems ranged from the dismissive (“why would we?”) to the hostile (“not high on our list, sounds like a DEI woke nightmare company”).
The lack of attention to gender diversity in AI systems is already having repercussions. At a November QueerTech event, BetaKit heard from HR and recruitment professionals who were seeing candidates discriminated against for roles in tech because of the pronouns or gender presentation on their resumés.
At a panel discussion on the report in Ottawa last week, QueerTech co-founder and CEO Naoufel Testaouni said about half of the responses could be attributed to ignorance, where there’s an opportunity to educate, but the other half were malicious. Testaouni said the latter are where the danger lies.
“Those are the people who are building these technologies,” Testaouni said. “If this is what they are thinking, and saying it publicly on a survey, then imagine all the other people also around them who are building this that we don’t even hear from.”

In keynote remarks before the panel, AI Minister Evan Solomon said inclusion is “not a word that we toss around in the culture war.” He argued that building with diverse perspectives around the table results in trusted and “positive” technology.
“If AI is built around narrow teams and narrow use cases, [by] people with narrow experiences, they will give narrow results,” Solomon said.
Solomon also backed up Testaouni’s point about toxicity in tech circles.
“They throw around the term ‘tech bro’ for a reason,” Solomon said, recounting how he’s often prioritized in meetings despite being accompanied by more experienced female colleagues. He said that this attitude is “consequential BS” that drives young women away from STEM careers and other communities away from being entrepreneurs.
RELATED: Canada’s AI regulation will be “airtight” on bias, racism, and hate, Solomon says
While the hostile responses struck a sour note, the report also showed a significant disparity between those who said inclusive AI was a moderate to high priority for their products (97 percent) and the actual support to make that happen. Forty-three percent of respondents reported limited to no formal processes to address bias in their work, and 71 percent reported limited to no organizational support for equitable representation.
Respondents said the top barriers they face to more inclusive development include insufficient resources or budget, competing priorities, and difficulty measuring return on investment.
For Microsoft director of cloud and AI sales David Beauchemin, who was also on the panel, inclusive AI’s return on investment comes in the form of consumer trust.
“You can’t build a tool that only serves a portion of people or … you’ll lose the trust,” Beauchemin said. “If we lose the trust in AI systems, we’ll lose the trust in Canada leading the way.”
Feature image courtesy QueerTech.
