Canada launches voluntary code of conduct for the responsible use of AI

François-Philippe Champagne, Canada’s minister of innovation, science, and industry.
Cohere, OpenText, Coveo, Ada among first signatories.

Cohere, a competitor of ChatGPT creator OpenAI, announced it is signing on to Canada’s new voluntary artificial intelligence (AI) code of conduct.

François-Philippe Champagne, the country’s minister of innovation, science and industry, announced Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, which is effective immediately. Champagne shared the news at the All In Conference on AI in Montréal.

Software company OpenText also announced its partnership with the federal government on the code. “Canada’s AI Code of Conduct will help accelerate innovation and citizen adoption by setting the standard on how to do it best,” said Mark J. Barrenechea, CEO and CTO of OpenText.

Others who signed on included Coveo and Ada. UPDATE (12/07/2023): Eight more organizations committed to the code, including IBM, AltaML, CGI, and Scale AI.

The code outlines key measures organizations can adopt to mitigate limitations of AI, and foster principles such as transparency, fairness and equity, and accountability.

Anne Thériault, Coveo’s Vice President, Legal, CISO, DPO, and Assistant Secretary, told BetaKit that Coveo has been thorough and responsible in developing its platform for the past decade. “We work with large global enterprises who care about using AI to better serve their customers, in a way that is ethical and protects their data,” she said.

“We now welcome the introduction of a Code of conduct that supports the development and management of responsible and robust AI systems to maintain trust towards customers. We are supportive of other brands being held to the same standards.”

The code identifies measures that organizations are encouraged to apply to their operations when they are developing and managing general-purpose generative AI systems.

Not everyone is onside with the new code. In a statement on X, Tobi Lutke, Shopify’s CEO, called the code of conduct “another case of EFRAID,” and wrote: “I won’t support it. We don’t need more referees in Canada. We need more builders. Let other countries regulate while we take the more courageous path and say ‘come build here.’”

The Government of Canada has already taken steps toward ensuring that AI technology evolves responsibly and safely through the proposed Artificial Intelligence and Data Act (AIDA), which was introduced as part of Bill C-27 in June 2022.

AIDA is meant to protect Canadians by ensuring high-impact AI systems are developed and deployed in a way that identifies, assesses, and mitigates the risks of harm and bias.

Bill C-27 has already received broad support from a number of Canadian startups and companies eager to adopt the new technology. An open letter in support of Bill C-27 came just as an April survey showed that AI adoption is growing among Canadian companies, with more than one-third (37 percent) now using the technology.

RELATED: CCI calls for Parliamentary Technology and Science Officer to help get AI regulation right

The AI industry has come under pressure to rein in the swift-moving technology, while the federal government has been working to formulate a code of ethics to help govern it.

The new code represents a critical bridge between now and when AIDA comes into force. It outlines measures aligned with six core principles, including safety, human oversight and monitoring, and validity and robustness.

The code is based on the input received from a cross-section of stakeholders, including the Government of Canada’s Advisory Council on Artificial Intelligence, through the consultation on the development of a Canadian code of practice for generative AI systems.

The government will publish a summary of feedback received during the consultation in the coming days. The code will also help reinforce Canada’s contributions to ongoing international deliberations on proposals to address common risks encountered with large-scale deployment of generative AI, including at the G7 and among like-minded partners.

RELATED: Academics, CEOs sign on in support of AI regulation and Bill C-27 as Canadian companies race to adopt the technology

Benjamin Bergen, president of the Council of Canadian Innovators (CCI), said that CCI had been calling for Canada to take a leadership role on AI regulation, and that it should be done in the spirit of collaboration between government and industry leaders.

“The AI Code of Conduct is a meaningful step in the right direction and marks the beginning of an ongoing conversation about how to build a policy ecosystem for AI that fosters public trust and creates the conditions for success among Canadian companies,” Bergen added.

At the time of writing, other AI firms were believed to be on board with the new code, with more signatories preparing to announce their support.

A number of letters and calls for governance have come from high-profile members of the industry itself, including from Geoffrey Hinton—often referred to as the Godfather of Deep Learning—and Yoshua Bengio, co-founder of Montréal-based Mila.

With files from Alex Riehl.

Feature image courtesy Flickr.

Charles Mandel

Charles Mandel's reporting and writing on technology has appeared in Wired.com, Canadian Business, Report on Business Magazine, Canada's National Observer, The Globe and Mail, and the National Post, among many others. He lives off-grid in Nova Scotia.
