Ten years after Geoffrey Hinton published a research paper on deep learning that helped to revolutionize the world of artificial intelligence (AI), a range of industry experts came together in Toronto and discussed where the technology is today.
Just a few blocks from where Hinton developed his AI research at the University of Toronto, Elevate Festival was back in person for the first time since 2019. At the tech conference’s AI stage, the range of topics and speakers showed how widely used AI technology has become a decade after that paper.
From estimating the likelihood of Kawhi Leonard’s series-winning shot against Philadelphia (it was just 13 percent) to climate change applications and understanding baby cries, AI has become widely disseminated.
Following this evolution, the AI industry finds itself on the precipice of significant change: regulation. Government bodies in Canada, the United States (US), and the European Union all currently have legislation on the table that could have implications for the creation and deployment of AI.
In June, the federal Liberal government tabled Bill C-27, wide-ranging privacy legislation that included what would be Canada’s first law regulating the development and deployment of high-impact AI systems.
Similarly, Québec has tabled its own privacy legislation, Bill 64, which could have implications for AI. Internationally, the European Union is working on a proposed Artificial Intelligence Act and the US tabled the “Algorithmic Accountability Act of 2022” in February.
The history of regulating AI in this way is fairly new. Singapore was one of the first countries to act, launching its Model AI Governance Framework in 2019. Andres Rojas, director of applied AI projects at the Vector Institute, called the European Union’s proposed act “mostly aspirational.”
RELATED: Privacy commissioner shares recommendations for regulating artificial intelligence
Regulation of AI globally is in its early stages, but the implications for consumers and companies alike are wide-ranging. For companies creating and using AI, Bill C-27, in particular, outlines criminal prohibitions and penalties for companies regarding the use of data and AI systems.
At the Elevate AI stage (programmed by BetaKit, which is also an Elevate media partner), Angela Power, a senior consultant in data governance, ethics, and privacy at INQ Consulting, highlighted the importance of such legislation in increasing accountability and transparency, as well as reducing potential harm to consumers using AI technology.
She argued that these types of potential regulations combined with current industry focus on the topic could lead to a stronger focus on things like ethical AI, which spans privacy, individual rights, and creating unbiased systems.
Rojas emphasized, however, that based on the way the bill is currently phrased, the penalties could be retroactively applied to companies currently developing AI systems. He highlighted the need for government bodies and regulators to work closely with industry stakeholders to come up with solutions that are tenable from both privacy and innovation perspectives.
According to Rojas, the currently tabled legislation tries to address privacy rules for consumers, such as the right to be forgotten, which refers to personal data being erased after it is no longer necessary for the purpose for which it was collected. But he argued that while many in the industry generally agree with that principle, current AI technologies don’t necessarily have the capacity to operationalize the practice.
While these potentially incoming laws may be new, the topic of conversation around building ethical AI is not. Both Rojas and Power noted that companies they work with have long been considering how to build AI with these principles in mind. As Rojas put it, companies don’t generally set out to create biased AI.
RELATED: Geoffrey Hinton, Yoshua Bengio receive Turing Award, ‘the Nobel Prize of computing’
However, Power also said that governments taking these steps will be beneficial in creating universal standards. Rojas noted that with various governments taking their own unique approaches, there is bound to be conflict, making it harder for companies to understand what they might need to abide by.
One area where there may be less conflict these days is the conversations taking place between those creating potential regulations and industry stakeholders. Rojas argued that government bodies have typically taken a siloed and prescriptive approach to creating rules and regulations for privacy and AI. But he has recently seen and been part of conversations where industry players are being consulted.
Noemi Chanda, a partner and privacy and cyber risk advisor at Deloitte Canada, said the conversations that are taking place are key to building trust with communities outside of the AI sector.
With the various tabled bills still in the early stages, the potential impacts for companies remain foggy. What is clear: the AI sector is headed towards a new era, and all three panellists agreed that companies should not wait for governments to enforce regulation. Rather, they should start to build privacy and ethics into their technology now.