Despite rapid AI growth, CDL conference emphasizes need to move slow, “not break things”


Amid the exponential growth of artificial intelligence in research and business alike, some in the industry are carefully considering how the emerging technology will influence the way we live and work.

This week, 24 industry veterans took the stage at the Machine Learning and the Market for Intelligence conference at the University of Toronto’s Rotman School of Management to deliver their insights on how artificial and machine intelligence will affect power and society.

“We want to think about ways that we can reduce the price of disruption and creation of innovation, and [if we can], we’ll probably get more innovative.”

The conference, now in its fifth year and hosted by Canadian accelerator Creative Destruction Lab, aims to investigate applications of machine intelligence in a variety of domains, such as public health, enterprise strategy, and self-driving vehicles. This year’s event placed a particular focus on the wider societal consequences of AI, including emerging issues such as labour market disruption and income inequality.


What AI can do

Sanctuary AI is a Vancouver-based company with a mission to create ultra human-like robots. Suzanne Gildert, the company’s co-founder, spoke about the achievements and possibilities of this technology, and whether she thinks the workforce could ever be fully automated.

She defined the current workforce by its ‘work primitives’: abilities, such as finger-wrist speed; skills, such as monitoring machinery; and knowledge, such as field-specific facts and figures. She said looking more closely at these primitives could greatly improve the way humans automate work, something they have been doing for centuries.

“Maybe there’s a different way of thinking about automating work,” she said. “Maybe we can look at this level underneath this hidden layer of the secrets of the human worker, and we can go after those abilities, skills, and knowledge categories. Maybe it would be a different or an improved way of optimizing work.”

Gildert used these work primitives to predict what a general-purpose robotic solution would look like, and in her assessment, it would look very much like a human.

“When you go through all the definitions in those abilities, skills and knowledge categories, and you sit down and try and design a product that could move all those sliders in those categories, it’s really hard to come up with a design that doesn’t look like a person,” she said.

Suzanne Gildert

Suzanne Gildert with a robot copy of herself (Image courtesy: suzannegildert.com)

Sanctuary has been working on improving the basic abilities and skills categories in robots, looking at limb movement, object manipulation, hand-eye coordination, and essentially everything in a human’s repertoire of abilities. Other companies have also made big strides in this area in recent years. OpenAI recently demonstrated a robotic hand capable of solving a Rubik’s Cube on its own, with researchers saying the development brings robots a step closer to human-level dexterity. The challenge, Gildert said, is that there is still no good roadmap for artificial intelligence, and no one really knows how to define or measure it.

Her proposal involved a database of 120 categories spanning different abilities, skills, and knowledge. Once these can be measured and improved, she said, companies can create a tech roadmap toward something indistinguishable from the current concept of a worker. Gildert’s roadmap suggested that the workforce can be fully automated with this approach, but it is not clear whether this will produce one general-purpose program or a diverse set of solutions.

“Will it be completely indistinguishable from a human?” she asked. “Or will it be some kind of weird worker that doesn’t have all the qualities that we ascribe to the human? That’s really important to think about, because if it is indistinguishable, it means we are defined by the work that we do. I think that’s a very interesting thing.”

Economic impacts of AI

Joshua Gans, a professor of strategic management and holder of the Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship at the Rotman School of Management, argued that there are two prices to consider when thinking economically about innovation: the creation price and the destruction price.

The creation price refers to the fact that society has to give some people more in order to get them to innovate, while the destruction price reflects the fact that any innovation comes with costs and distributional consequences.

A study from the Stanford Institute for Economic Policy Research suggested that what Gans calls the “creation price” has risen significantly. The study found that companies are investing far more today than they were in the past to achieve the same results. The destruction price can be seen in General Motors shutting down a number of North American plants last year while concurrently upping its investment in automation.

“Like the good economists that we are, we want to think about ways that we can reduce the price of disruption and creation of innovation, and [if we can], we’ll probably get more innovative,” stated Gans.

Joshua Gans

Joshua Gans talks AI and inequality at #MK4INTEL (Courtesy: Creative Destruction Lab)

He said economists think about innovation the same way they think about international trade, in that trade is generally good but also competitive. Reducing uncertainty at earlier stages, overcoming impediments to commercialization, and allowing for more interoperability are approaches economists should consider to lower the costs of innovation, Gans said.

Uncertainty still plays a crucial role in weighing these costs and benefits to innovation, he added, laying out two levels of uncertainty when it comes to innovation. The first pertains to whether an innovation will be successful, and the second is about which jobs and skills are going to be affected.

RELATED: Brookfield Institute reveals career pathways model for jobs disrupted by technology

“What uncertainty tends to encourage is less extreme thinking and more marginal thinking, which softens any debate,” he noted. “It is very, very difficult to argue policies based on incomplete foundations, but there is a language we can use for that, and that’s the language of insurance.”

For Gans, this means thinking about which skills need to be adapted, and when, in order to create flexibility. These skills could include coding, he said, which is a good way of “teaching clear analytical thinking” that is transferable to many sectors. Gans added that increasing young people’s exposure to innovation, and directly building their skills at a young age, is a way society can ensure that an innovation does not increase costs for workers and businesses.

Societal impacts

Jack Clark is the policy director of OpenAI, an AI research company attempting to chart the path to safe artificial general intelligence. He noted that the computation used in breakthrough AI systems has increased 300,000-fold over the last five or six years. This, he said, makes it an important time in history: AI has crossed onto a different trendline, one that will have new effects on society.
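To put that figure in perspective, a back-of-envelope calculation (an illustration, not from the talk; the six-year window is an assumption) shows what a 300,000-fold increase implies for how quickly compute has been doubling:

```python
import math

# Hypothetical illustration: if compute in breakthrough AI systems grew
# 300,000-fold over roughly six years, how often did it double?
growth_factor = 300_000
years = 6

doublings = math.log2(growth_factor)           # ~18.2 doublings in total
doubling_time_months = years * 12 / doublings  # ~4 months per doubling

print(f"{doublings:.1f} doublings, one every {doubling_time_months:.1f} months")
```

A doubling time of a few months is dramatically faster than the roughly two-year cadence associated with Moore’s law, which is why Clark describes AI as having crossed onto a different trendline.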

“The impacts of AI are going to shape society’s views on AI, and how well society can adapt to it.”

“The impacts of AI are going to shape society’s views on AI, and how well society can adapt to it,” he said. “We should maybe assume that unusual things are going to happen in the future, because I don’t know about any of you, but I’ve been surprised for the last few years by the things that AI has done.”

In the last few years, Canadian companies and institutions have called for a solid framework for ethical AI, submitting their own versions of what such a strategy could look like. Alongside international partnerships pursued by the federal government, the Université de Montréal and the Fonds de recherche du Québec announced a set of ethical guidelines for the development of AI. The principles of those guidelines include respect for autonomy and protection of privacy and intimacy.

Clark said one way to approach the impacts of these AI technologies is not to release everything at once, but rather to release it in stages, generating information at each point in time and allowing for more research. He said OpenAI has been described as a company that “wants to move slow and not break anything.”

“I like this idea,” he said, acknowledging the conference audience. “I would like to get the people in this room to think about ways they could move slowly while still generating commercial opportunity, while not breaking things.”

He added that it will not be obvious where the impact of AI is going to come from, because the main impact so far has not been malicious, and the “creative stuff” has outweighed what we should be cautious about. But, he said, we should still be prepared to be surprised.

Image courtesy Creative Destruction Lab

Isabelle Kirkwood


Writer, globetrotter, drone pilot & David Attenborough enthusiast