Q&A: Clearpath Robotics’ Ryan Gariepy on killer robots and Canada’s defence strategy

Clearpath Robotics' Husky Observer
Industry leader says Canada is leaving opportunity on the table in robotics.

Ryan Gariepy has been building Canadian robots for nearly two decades. He has also been an outspoken advocate for how they should and should not be used in a military context.

Gariepy co-founded Kitchener-Waterloo’s Clearpath Robotics in 2009, co-leading the company as CTO until its acquisition by US industrial automation giant Rockwell Automation in 2023, at a reported price of about US$600 million.

These days, Gariepy works as Rockwell’s vice-president of robotics and chairs the Canadian Robotics Council, with a keen eye towards building up the country’s robot-making industry.

BetaKit reporter Josh Scott sat down with Gariepy to unpack his thoughts on the recent killer robot discussions that have been brought to the fore by AI, how robotics can help Canada realize its new defence ambitions, and whether the country is doing enough to capture the opportunity he sees.

The following interview has been edited for length and clarity. 

How would you describe the current state of Canada’s robotics industry? And where are robots making a difference today?

There are a lot of areas where we could be using robots a lot more in Canada, and with that, there could be a lot more robotics companies in Canada. Canada is very well-positioned to be a global leader in robotics. 

Something that Canada has going for it is this mix of a physical, industry-powered economy and a very educated and cosmopolitan populace.

Ryan Gariepy.
Image courtesy LinkedIn

The vast majority of modern mines are going to be using robots to some degree. The same is true for modern manufacturing of any sort, whether it’s cars, whether it’s food, or whether it’s pharma. But they can always be used more.

If you go to any modern plant in Canada, it’s heavily using robotics. But as you go into the broader supply chain, you’re less likely to run into robots. Robots tend to be concentrated in large businesses, not because they can’t help small businesses, but because larger firms have more time, possibly more capital, and more ability to take on that risk.

What do you make of Canada’s new Defence Industrial Strategy, and how important do you think robotics and physical AI will be to Canada executing on its defence ambitions?

We could ask the Ukrainians. We could also ask anyone who’s had to do a long posting in the Arctic. We have a lot more space, we have a lot fewer people, and our environment is a lot more hostile. That is the perfect place for robotics.

As much as we are committed to increasing the size of our military, we’re a small country that does play on the global stage, which means that we will need force multipliers.

It’s good to see how robotics has been identified as a sovereign capability. We may not be able to act as quickly as Ukraine did—which basically retooled their entire economy around building drones—but we can use our relationships with Ukraine, we can use our established manufacturing capabilities and our natural resources to modernize very quickly and modernize for the next conflict, as opposed to the past one.

Over a decade ago, Clearpath became the first robotics company to pledge not to create killer robots. You wrote that “the development of killer robots is unwise, unethical, and should be banned on an international scale.” Where do you stand on the militarization of robots today?

I support using robots in the military. Logistics, search and rescue, reconnaissance, training: all of these are areas where robots should probably be used. Even weaponized robots, to some degree, for military purposes, are things that I support. At the same time, it’s very important to have reasonable controls and reasonable certifications around how these systems are used.

Ten years ago, there were a lot of conversations where we were saying, “AI is going to make mistakes, and it’s going to be confusing and different, and you’re not going to be able to predict it.” And everyone was like, “No, no, that’s not the case.” And now we’re here. Anyone with any sort of media awareness knows that AI makes mistakes. If your AI is, say, misciting an article, and it’s going to make that kind of mistake, are you sure you want that tool deciding whether or not to use lethal force? We have a risk problem there.

There’s also a morality and accountability problem. It’s very important that accountability still lies with a human at some point, and that in the end, you don’t leave people with an out to say, “Oh, it wasn’t me. It was the system that committed that war crime.”

The military is the most experienced when it comes to the appropriate and proportional use of force. We really want to make sure that responsibility [and] accountability remains with the military as opposed to allowing people to push that off on some engineer who wrote some code 10 years ago.

Where should we draw the line in terms of using robots for lethal force?

Over the years, I’ve been part of, or peripheral to, these discussions. People will use the issue as a political football. What’s most important is maintaining a chain of accountability, certification, understanding, and testing of the technology itself.

The US Department of Defense and American AI company Anthropic have been embroiled in a public feud since the company refused to allow the military to use its models for fully autonomous weapons and domestic mass surveillance. What do you make of this?

It’s difficult to understand what has been agreed to and not agreed to because you’ve also got OpenAI adding some noise to the conversation. 

But you don’t designate a company as a supply chain risk and then also say you’re effectively going to nationalize them. There are political factors at play.


On a personal note, I support saying that you should not use the specific kind of technology that Anthropic builds as a key component of autonomous weapons. I would certainly agree that using an LLM for targeting decisions is not the best way forward. I also suspect that there are things the Anthropic team knows that we don’t, which cause them to draw this line. I don’t think anyone in their right mind would decide, particularly these days, to pick a fight with the US Department of Defense if they didn’t need to.

What are you most excited about right now when it comes to robotics?

The thing I’m most excited about is society realizing that robots can help right now. We do not need to wait, and shouldn’t wait until there’s a humanoid knocking on your door to do your laundry. Robots can help, and they can help right now. They can help us be safer, more productive, and more comfortable.

What keeps you up at night?

The thing that keeps me up is how much opportunity Canada is leaving on the table here. We have an opportunity to build a more secure country, and we’re not moving fast enough.

Feature image courtesy Clearpath Robotics.
