The feds get 4,000 website complaints a day. Can a “responsible” AI chatbot untangle the mess?

At the Ottawa Responsible AI Summit, experts debated security, equity, and who gets a seat at the table.

The Canadian government receives up to 4,000 complaints about its website per day, according to Michael Karlin, the acting director of policy at the Canadian Digital Service (CDS). Could an artificial intelligence (AI) chatbot make surfing the government’s 10 million webpages less of a headache? 

“The dataset you collect now may become a weapon in the not-too-distant future.”

Michael Karlin, Canadian Digital Service

Karlin’s team is working to find out, while ensuring the tool remains safe, secure, and equitable. He explained the process of developing the tool, which is still undergoing beta testing, at the inaugural Ottawa Responsible AI Summit on Wednesday afternoon. The event brought together academics, entrepreneurs, and government officials at Bayview Yards for a discussion on the literal and figurative power of AI, trust, and who gets to decide its limits. 

Powered by OpenAI’s GPT-4 model (Karlin said Cohere’s Canadian-made model wasn’t the right fit), the Canadian government’s AI chatbot prototype lets users ask plain-language questions and receive relevant information from the Canadian government’s websites, along with a link to the relevant webpage and a caveat that the user should verify the AI-generated answer. As it stands, these kinds of inquiries put strain on government service call centres and in-person offices, which can also be less accessible for people with “complex needs,” Karlin said. 
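Karlin didn’t detail the prototype’s internals beyond the GPT-4 model, but the behaviour he describes, a plain-language question in, a sourced answer and a link to the relevant page out, is typically built as retrieval-augmented generation. Below is a minimal sketch of that pattern in Python, assuming the official OpenAI client and a hypothetical search_canada_ca() lookup over government pages; the function names, placeholder data, and prompt wording are illustrative, not CDS’s actual implementation.

```python
# Minimal retrieval-augmented sketch of a "Canada.ca navigator" chatbot.
# Hypothetical: search_canada_ca() stands in for whatever page index CDS uses.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def search_canada_ca(question: str, k: int = 3) -> list[dict]:
    """Hypothetical retrieval step: return the k most relevant Canada.ca
    pages as {"url": ..., "text": ...} dicts. A real system would query an
    index built from the government's roughly 10 million webpages; this
    stand-in returns a single placeholder entry."""
    return [{"url": "https://www.canada.ca/...", "text": "placeholder page text"}]


def answer(question: str) -> str:
    """Answer a plain-language question using only retrieved government pages."""
    pages = search_canada_ca(question)
    context = "\n\n".join(f"[{p['url']}]\n{p['text']}" for p in pages)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": (
                "Answer only from the provided Government of Canada pages. "
                "Cite the source URL and tell the user to verify the answer "
                "on that page. If the pages don't cover the question, say so.")},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Grounding the model in retrieved pages rather than letting it answer from memory is also what makes the “verify the answer on the linked page” caveat practical.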


The security and equity considerations behind the chatbot’s development mirrored the broader conversations at the Ottawa Responsible AI Summit, which dealt with how data privacy can be secured in the AI era and how AI tools can be deployed equitably.

“Responsible AI is not just about managing risks, it’s about ensuring that the benefits of AI reach everyone,” Kanata-Carleton MP Jenna Sudds said in opening remarks. “And that our system reflects Canada’s diversity, our values, our social strengths.” 


No account will be required to access the chatbot, and the tool will not accept any personal information placed in the query, like a social insurance number or phone number. Karlin said this was a “design choice,” and that the tool is intended to allow anonymous, depersonalized inquiries with the government until users are ready to identify themselves, such as when they submit the immigration application they were looking for through the tool. 
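The article doesn’t say how that refusal is enforced, but one straightforward way to keep personal details out of the pipeline is to screen the query before it ever reaches the model. A rough sketch under that assumption; the SIN and phone-number patterns and the refusal wording are illustrative, not the CDS tool’s actual checks.

```python
import re

# Illustrative patterns only: a 9-digit SIN (with optional spaces or dashes)
# and a North American phone number. A production filter would cover more
# identifiers (emails, addresses, case numbers) and more formats.
PII_PATTERNS = [
    re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),                    # SIN-like
    re.compile(r"\b(\+1[ -]?)?\(?\d{3}\)?[ -]?\d{3}[ -]?\d{4}\b"),   # phone-like
]


def reject_if_personal(query: str) -> str | None:
    """Return a refusal message if the query appears to contain personal
    information; otherwise return None, meaning the query may proceed."""
    for pattern in PII_PATTERNS:
        if pattern.search(query):
            return ("Please don't include personal information such as a "
                    "social insurance number or phone number in your question.")
    return None
```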

“If you don’t need personal information, don’t collect personal information,” Karlin said in his presentation. 

On a panel, Karlin explained that his team could have collected more data on equity factors like the gender and race of its testers, but they didn’t want “very large data sets sitting around.” 

“The dataset you collect now may become a weapon in the not-too-distant future,” Karlin said. 

A scalpel, not a chainsaw

On top of the potential for abuse, inequitable AI tools can also exacerbate existing societal biases. Hammed Afenifere, the co-founder and CEO of Oneremit, explained in a panel conversation and in an interview with BetaKit that AI models aren’t built for everyone. 

Training data can make an AI chatbot inherently biased, Afenifere said, such as when it provides an entrepreneur with market data for Western countries like the United States, Canada, and the United Kingdom rather than Africa, simply because the model has less African data to work with. Other panellists compared his example to automatic soap dispensers that fail to detect dark skin.

“If we build a responsible AI where it has this context, or [understands] how Africans operate at all, you are able to bring more money into this country,” Afenifere said. 

In his own presentation, Karlin explained that the CDS team is working to ensure that diverse populations using the chatbot get responses relevant to their demographic, like information on programs for Black business owners. The team must also make sure the chatbot’s responses are not negatively biased. 

“That’s a scalpel and not a chainsaw-based process,” Karlin said. The CDS team is about to consult with different communities to get a better idea of how they interact with government services, so that the chatbot can be tested through that lens. A transgender person may interact with the government for a specific, discrete set of services that are unique to them, Karlin explained.   

“We want to make sure those test questions that we would use are generated by that community, so that we’re not just making it up if we don’t have a trans person on our development team,” Karlin said. 

Who defines “responsible” AI?

The scalpel-not-chainsaw approach may have answered one of the summit’s understated questions: who decides what responsible AI is? The phrase “a seat at the table” was invoked throughout the day, as speakers debated who sits at said table or laid out the importance of making sure everyone is represented. 

“Imagine a future shaped with AI, shaped to the community … and also built with all of us at the table,” said Somto Mbelu, the founder and program lead of Ottawa Responsible AI Hub, in his opening remarks. 

“Imagine a future shaped with AI, shaped to the community, … and also built with all of us at the table.”

Somto Mbelu

In an early lecture, Carleton University public policy professor Graeme Auld said formalizing industry standards, such as in AI, is not an easy process. He also questioned who gets to sit at those tables.

Afenifere told BetaKit after the summit that he was glad to make people see responsible AI from a different perspective, but he, too, was confused about who gets a seat.


“For me, I’m still kind of confused: who is responsible for that? Who is ‘we’?” Afenifere said. He speculated that there will be some kind of committee or government organization in the future responsible for implementing responsible AI policies. 

“That conversation is still ongoing,” Afenifere added. 

Karlin’s approach of simply bringing the proverbial table to the communities that will use the government’s AI chatbot is perhaps more pragmatic than forming committees or organizations. The approach reflects the project’s “organic” growth, which began as a proof of concept built by one person on a Friday afternoon, Karlin said.

The CDS is taking a “bubble-based” approach to equitable consultation, starting with communities within the Government of Canada itself, like employees who identify as Black or LGBTQ+, then moving on to more community-based testing. Karlin acknowledged that Indigenous communities have “hyper diverse” perspectives on AI, and there won’t be a monolithic approach to community discussions. 

“I don’t want to internally prescribe a perfect way forward if that’s going to look different from community to community,” Karlin told BetaKit.  

The government website chatbot just finished a trial with 2,700 random users, achieving roughly a 95-percent success rate, and will undertake a trial with 3,500 users next year. However, it has to be prepared to handle millions of queries, and Karlin is conscious of the risk of the government providing harmful or unhelpful answers. Last year, Air Canada was found liable for its AI chatbot giving travellers bad advice. Karlin told BetaKit that the project isn’t guaranteed to leave its beta state or even be formally launched, and the potential cost of the project remains a concern. 

“It seems like a pejorative thing to think about, but how much should taxpayers pay for a web navigation service?” Karlin said. “We’re building it just to see if it’s possible to do.” 

Feature image by Dennis Jarvis via Flickr, licensed under CC BY-SA 2.0.
