Thoughtworks’ Rebecca Parsons on using threat models to prevent AI peril

“I don’t think the world will be destroyed by AI but I’m more worried about it than I was before.”

Echoing the zeitgeist of the broader tech industry, much of this year’s Collision conference programming focused on artificial intelligence (AI) and the promise and perils that come with it.

BetaKit sat down with Thoughtworks’ CTO-Emerita Rebecca Parsons to cut through the noise. A long-time technologist, Parsons has been responsible for steering Thoughtworks’ tech strategy. Prior to Thoughtworks, she was a researcher and college lecturer in computer science.

Parsons spoke about the most significant changes she has seen in AI development over the decades, before weighing in on AI doomsayers and approaches to regulating the technology.

The interview has been edited for length and clarity.

With the rapid pace of AI development, how has [Thoughtworks’] approach changed within the last year?

“If you were Sam Altman, and Google said: ‘Of course, we’ll stop working on this for six months,’ would you believe them?”

Rebecca Parsons: What has changed is that AI is now visible to everyone, instead of just to people doing AI research and specific projects with AI. A lot of what has changed, though, with the exception of the large language model algorithms, is that most of the stuff we’re doing with AI we could have done 20 years ago, except we didn’t have enough memory or enough data, and the computers were too slow. With all of the data now available on the internet, we can compute things we just couldn’t compute before.

It’s hard to describe the pace, but what has been surprising to me is the rate at which everybody is now talking about it. The most common question we get from clients is: “How can we use generative AI to help my business?” and we’re getting that across the spectrum. In the past, it was certain industries, like financial services, that were the early adopters.

Another side to the promises of AI is the potential threats it could bring. There have been several recent open letters and public statements made by experts in the field. What are your thoughts on their messaging?

Take the letter saying we should pause for six months, just as a [practicality]: if you were Sam Altman, and Google said, “Of course, we’ll stop working on this for six months,” would you believe them? Realistically, there’s no way we can stop them from working on it unless you have some kind of verifiable agreement. I think that one’s a non-starter.

I do think there are ways we can control what happens. My big concern is when people like Geoffrey Hinton come out and say we don’t really understand how it’s working, that it could do things we didn’t think it could do. That got me concerned. It’s one thing for someone in a completely different discipline to say, “AI is gonna take over the world, it’s gonna be a robot apocalypse.” It’s another thing for someone who’s been doing this research for 40 years to say it.

I do think there are things to be worried about. These large language models, they know how to code; that’s what Copilot is all about. They have access to the internet, and they have the ability to plan a strategy. All they need is the ability to execute a strategy, and then they could do whatever they want to do.

I do think we need to start applying a threat model perspective to these things so that we can start to say, “How might we mitigate these problems?”

I don’t think in my lifetime the world will be destroyed by artificial intelligence, but I’m more worried about it than I was before. One of the things that makes me feel a bit better is that Sam Altman from OpenAI announced that they don’t think making their models any bigger is going to give them an increase in power and capability, so they have to figure out something else. That says their capabilities are roughly where they’re going to be for a while, which is reassuring. And to continue to increase those capabilities, they’re probably going to have to understand a bit more about what the model is actually doing, which addresses the concern that I have.

What would that threat model perspective look like for developers? What responsibility do they have in using that approach when deploying AI systems?

Threat modelling is an approach that is very common in the security community. The basic premise is that you look at all the assets that are available and identify the key assets that somebody might want to go after. Based on your system’s architecture, you try to figure out the most likely attack vectors, and you prioritize those. That’s what threat modelling, in general, is all about.
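
To make that concrete, here is a minimal, illustrative sketch of that prioritization step in Python. It is not Thoughtworks’ methodology or any particular framework; the assets, attack vectors, and scores are hypothetical, chosen only to show the likelihood-times-impact ranking idea Parsons describes.

```python
# Illustrative threat-modelling sketch: list assets, enumerate plausible
# attack vectors, and rank them by likelihood x impact so the riskiest
# items are mitigated first. All entries and scores are hypothetical.
from dataclasses import dataclass


@dataclass
class Threat:
    asset: str        # what an attacker might go after
    vector: str       # how they might get to it
    likelihood: int   # 1 (rare) to 5 (expected)
    impact: int       # 1 (minor) to 5 (severe)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact


threats = [
    Threat("customer PII store", "prompt injection exfiltrates records", 3, 5),
    Threat("model API keys", "keys leaked in a public repository", 4, 4),
    Threat("generated code", "hallucinated dependency gets installed", 3, 3),
]

# Work down the list from the highest-risk item.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:>2}  {t.asset}: {t.vector}")
```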

“I think it’s part of our professional responsibility to take into account what are the ways that our technology can be misused.”

The same thing would be applied here. Where do we feel the world is most vulnerable to these things? What are the most likely vectors that might come through, and how do we best mitigate against those? For most developers, I think their responsibility is to make sure that people realize how much faith they can put in the answer. Depending on the question and depending on the language model, they make things up. As a result, you can’t necessarily trust what it tells you.

A couple of lawyers had to throw themselves on the mercy of the court because they filed briefs with citations and cases that didn’t exist. They just asked a large language model to [provide them] with a case, it gave them something that didn’t exist, and they didn’t check.

I think it’s part of our professional responsibility to take into account what are the ways that our technology can be misused and whether there are things we can do to make that harder to accomplish, or to put in some other kind of mitigation. Unless you’re a hacker, no one sets out to create irresponsible technology; we do it by instinct. We’re not very good at thinking about what those consequences are because we’re so focused on the problem we’re trying to solve, not the other things that might happen.

You could almost say it’s part of human nature to be so focused on your objective that everything else gets blurry.

Even more so for technologists because we’re inherently problem-solvers. We’re not necessarily the creatives and the ideators and all of that. I think technologists are more susceptible to that in general than some others.

Canada recently tabled legislation that would regulate the development and deployment of AI systems in the country. Part of that is reporting to the government about those potential threats. Do you think Canada is headed in the right direction? What are your thoughts on the pace and progress of AI regulation in Canada?

The problem with regulation of technology in any form is the relative pace of political decision-making and technological change. But we do have some examples, like [the General Data Protection Regulation], which have made a real difference. I worry about the pace of regulation, but I do think it has the potential to at least get minds focused on, “Okay, here are some of the downside risks we want to mitigate against, and you’d better prove to me that you’re mitigating them.”

I do think regulation has a role to play. My worry is the relative pace of policy-making versus the pace of technological change.

What is your opinion on the pace of policy-making in Canada? Is it too late that we’re working on this bill now? Should we have had it before this accelerated pace of advanced AI development in the past year?

I don’t think we knew enough about what we would protect against to have made viable regulations ten years ago or so. We might have been late to the game with our approach to privacy regulations but I don’t think we recognized the scale of the problem. We knew what the problem was but not the scale and speed at which industries would embrace big data.

I don’t know if we could have been any earlier. We released one of our reports on December 5th, and nobody had heard of ChatGPT. By January, the reaction was, “How could you not be paying attention to ChatGPT?” Think about how ubiquitous it has become since the end of last year, while some legislatures aren’t even in session all year long. That’s why I think there is a disconnect with the pace.

You mentioned that governments might be late to addressing the privacy component in regulating AI systems. What responsibility do AI developers have specific to collecting and managing data?

I don’t think it’s any different for AI than for any regular data. There’s a German concept called Datensparsamkeit, which basically means “only collect what you need.” The fundamental principle is that if you’re not storing information, nobody can hack the system and steal it, because it’s not there.

One of the approaches that companies had for a while was, “I’m just gonna hoover up whatever data I can, even if I don’t have a specific use for it yet, because I might want it someday, so I’ll keep it.” So they just have this huge honeypot, just waiting for somebody to go after it. From a development perspective, we want to flip that narrative around and say, “I’m going to keep what I need and no more. That way, I don’t have to protect what I don’t need.” I think that’s the first principle that developers need to take into account.
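
As a purely illustrative sketch of that “keep what you need and no more” idea, and not a description of any specific Thoughtworks practice, the Python below trims an incoming record to an explicit allowlist before anything is stored. The field names and the sample record are hypothetical.

```python
# Datensparsamkeit sketch: store only the fields the application actually
# needs, so there is nothing extra to protect (or to leak) later.
# NEEDED_FIELDS and the sample record are hypothetical.
NEEDED_FIELDS = {"user_id", "plan", "country"}


def minimise(record: dict) -> dict:
    """Drop every field that is not on the explicit allowlist."""
    return {k: v for k, v in record.items() if k in NEEDED_FIELDS}


incoming = {
    "user_id": "u-123",
    "plan": "pro",
    "country": "CA",
    "birthdate": "1990-01-01",    # not needed, so never stored
    "browsing_history": ["..."],  # not needed, so never stored
}

stored = minimise(incoming)
print(stored)  # {'user_id': 'u-123', 'plan': 'pro', 'country': 'CA'}
```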
