Blair Attard-Frost is an assistant professor of political science at the University of Alberta and a fellow at the Alberta Machine Intelligence Institute (Amii).
If Canada is trying to build public trust in AI, why is the country’s recent AI strategy consultation an untrustworthy mystery box?
In October 2025, Canada’s AI and Digital Innovation Minister Evan Solomon launched a whirlwind 30-day public consultation on a renewed national AI strategy. As part of this consultation, Solomon sought public input through an online survey, while also establishing an AI task force of 28 experts who were asked to prepare reports on a variety of AI policy issues.
Noting the short timeline and restrictive format of the consultation, legal scholar Teresa Scassa called the process a “mad rush to a largely predetermined conclusion.” The consultation format was widely criticized for prioritizing business interests, neglecting human rights and labour concerns related to AI, and disregarding deliberative public dialogue: the 30-day window and the specialized knowledge required to meaningfully answer many of the survey questions posed barriers to public participation.
In response, over 150 individuals and organizations signed an open letter urging Solomon to extend the consultation timeline, reconstitute the AI task force into a more equitable structure, and rewrite the survey to better represent the concerns of a broader range of stakeholders. Several signatories have since organized a grassroots alternative, “The People’s Consultation on AI,” grounded in stronger public engagement methods.
Last week, findings from the government’s consultation were published as a high-level summary of over 11,000 public submissions and 32 reports from AI task force members. According to the summary report, Innovation, Science and Economic Development Canada (ISED) integrated four large language models (LLMs) into what it calls an “internal classification pipeline”: Cohere Command A, OpenAI GPT-5 nano, Anthropic Claude Haiku, and Google Gemini Flash. The pipeline was used to automate analysis of the topics and sentiments expressed in the public submissions and the task force reports.
ISED notes that human reviewers “validated and refined the AI-generated analyses, ensuring accuracy and comprehensive representation of all perspectives.” Additionally, some of the summary report’s text is AI-generated: ISED states that “outputs of that pipeline were used in drafting this report, with elements paraphrased or taken directly, because of the capability of the pipeline to provide high-level, public language summaries of the inputs.”
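ISED has not disclosed how the pipeline actually works, so any reconstruction is guesswork. But to make the critique concrete: a multi-model classification pipeline of this kind might, in its simplest form, look something like the sketch below, where the prompt, the label taxonomy, and the majority-vote aggregation are all my own illustrative assumptions, not ISED’s disclosed method.

```python
# A hypothetical sketch of multi-model topic/sentiment classification.
# Nothing here reflects ISED's actual pipeline, which is undisclosed.
from collections import Counter

PROMPT = (
    "Classify the following consultation response.\n"
    "Reply with one topic (adoption, regulation, privacy, labour, sovereignty)\n"
    "and one sentiment (positive, negative, mixed).\n\n"
    "Response: {text}"
)

def classify(text: str, models: list) -> dict:
    """Ask each model for a label, then aggregate by majority vote."""
    # `model.complete` is a stand-in for whatever client each vendor provides.
    votes = [model.complete(PROMPT.format(text=text)) for model in models]
    label, count = Counter(votes).most_common(1)[0]
    return {
        "label": label,
        "agreement": count / len(votes),  # share of models that concurred
        "votes": votes,                   # retained for human review
    }
```

Every one of those choices, from the label taxonomy to the voting rule to whatever disagreement threshold triggers human review, materially shapes the findings. That is exactly why leaving them undisclosed matters.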
If it were subjected to a scientific peer review process, ISED’s methodology would face severe scrutiny.
Canadians already have low levels of trust in AI, and one of the pillars of Solomon’s new AI strategy is to build public trust. The methodology used to analyze the consultation data and produce the summary report works against that goal. It is unclear what prompts, data flows, classification criteria, and thresholds were involved in the analysis. It is unclear what human-review procedures and validation methods were used to ensure the accuracy, reliability, and integrity of the analytics pipeline and its outputs. And it is unclear why four models were necessary for this analysis instead of one or two.
Many data protection features of the pipeline are also unclear. In the summary report, ISED states that over 64,400 responses were captured across the 26 questions in the consultation survey, but it does not specify whether any measures were taken to screen responses for personally identifiable information or sensitive business information. Respondent names have been removed from the dataset of public responses, but the dataset still contains respondent demographic data, descriptions of respondents’ jobs and employers, and personal anecdotes that could be used to infer personal identities or business information.
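For comparison, even a rudimentary screening pass, the kind of measure ISED does not say it took, is straightforward to implement before responses reach any third-party model. The sketch below is illustrative only: the patterns are my own assumptions, and a production system would use a vetted PII-detection library rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only: a real screening step would use a vetted
# PII-detection library or service, not three regular expressions.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "sin": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),  # Canadian SIN format
}

def flag_pii(response: str) -> list[str]:
    """Return the categories of likely personal identifiers in a response."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(response)]

# Any response that trips a pattern would be held back for human
# redaction before being sent to an external LLM.
```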
The government’s summary contains conflicting recommendations that reflect sharply different visions for the future of AI and the future of Canada.
Respondents were not given an opportunity to opt out of having their data processed by LLMs, and ISED does not specify whether any respondent data was used to train or fine-tune any of the models. Despite ISED’s claim that its pipeline demonstrates “responsible, Canadian-centred AI adoption,” three of the four LLMs in the pipeline were developed by US tech companies: Google, OpenAI, and Anthropic. Was any personal or business data transferred to foreign jurisdictions at any point during data collection and analysis? Was Canadian privacy, security, or sovereignty put at risk as a result? ISED states only that its analytics pipeline is “internal,” and offers no further clarity on its data architecture.
Beyond the methodology, the content of the summary is also limited in its transparency and reliability. The summary is packed with hundreds of vague action items surfaced from across the survey responses and task force reports, but lacks any clear discussion of implementation priorities. At times, it highlights the importance of upholding privacy, safety, democracy, environmental sustainability, and Indigenous sovereignty through robust regulation and thoughtful governance frameworks; at other times, it highlights the importance of speedy AI adoption, rapid economic growth, and “streamlining regulatory frameworks to accelerate infrastructure development.” These are conflicting recommendations that reflect sharply different visions for the future of AI and the future of Canada.
It is not clear from the summary how frequently these kinds of conflicting sentiments were expressed relative to one another. ISED offers no indication of how it might weigh or act upon these kinds of divergent stakeholder preferences and policy tradeoffs. Instead, the summary flattens nuance and political complexity into a false consensus.
The use of LLMs in authoring the summary is cause for further concern. ISED notes that text outputs from its analytics pipeline were used to draft the summary, but does not specify how much of the summary, or which parts, are direct LLM output. Nor does ISED specify how human reviewers verified those outputs. These gaps undermine the summary’s informational integrity and trustworthiness.
Large-scale public consultations with thousands of respondents certainly face time and resource constraints, and there is a role for automation here if it is used rigorously and responsibly. Automated analysis of sentiments and topics is common in qualitative research, and the best practices for achieving a trustworthy analysis are well established: methodological transparency, independent verifiability, and traceability of data are key drivers of trustworthiness. Sacrificing transparency on the altar of agility is not a winning strategy for building trust.
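Traceability, in particular, requires no exotic tooling. Something as simple as an append-only log recording, for every classification, the exact prompt, model, and output would let independent reviewers replay and verify the analysis. A minimal sketch follows, with field names and file format that are my own assumptions, not anything ISED has described:

```python
import hashlib
import json
import time

def log_classification(response_id: str, prompt: str, model: str,
                       model_version: str, output: str) -> dict:
    """Write one verifiable audit record per classification decision."""
    record = {
        "response_id": response_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "model": model,
        "model_version": model_version,
        "output": output,
        "timestamp": time.time(),
    }
    # Append-only JSONL: reviewers can re-run the same prompt against the
    # same model version and compare outputs against the logged record.
    with open("classification_audit.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return record
```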
Between the flawed consultation format and the opaque LLM daisy chain used to generate a report on it, Canada’s new AI strategy is off to a bad start. Minister Solomon may find it difficult to court public trust with a strategy founded on black-box decision making. Trust will not be built from a mystery box of made-in-USA Big Tech LLMs; trust-building must begin with government transparency and meaningful public engagement.
The opinions and analysis expressed in the above article are those of its author, and do not necessarily reflect the position of BetaKit or its editorial staff. It has been edited for clarity, length, and style.
Feature image courtesy ALL IN.

