Successful use of AI in government means doubling down on human and democratic values

UofT’s Peter Loewen says citizen trust in government AI use is “still a work in progress.”

What is the future of artificial intelligence (AI) in government, and what role can it play in the success of democracies (and the success of Canada) in the 21st century?

In recent years, the notion that technology can create a better world has drawn increasing skepticism. Governments struggle to roll out even simple apps. Big technology companies seem hard to trust. And the oversight needed for citizens to trust the embrace of AI by governments is still a work in progress.

Despite these obstacles, I believe we can meet the challenge of the present moment through an insight that is rarely considered when it comes to technological innovation: doubling down on the importance of human and democratic values.

By one widely cited estimate, AI, if properly harnessed, could contribute up to US$15.7 trillion to the global economy by 2030. But new regulatory frameworks are needed to enable the widespread adoption of this technology.

The challenge for governments is that the tools are moving faster than we can write new legislation. To use AI successfully, we need to think differently about how we craft and implement policy—we need innovative regulatory approaches that match the speed and complexity of the task at hand.
 

Governments can benefit from the use of AI, but they have special obligations in how they do so. Decisions made by public servants—the number of which reaches into the millions every year—could be enhanced by AI and automation to improve responsiveness, consistency, and opportunities for learning from the vast amounts of data collected by public institutions.

This innovation need not come at the cost of human jobs. AI systems could supplement the work of public servants to reduce wait times by assessing whether a decision can be automated with the information at hand or requires more detailed scrutiny by a caseworker. In doing so, AI could improve due process by freeing up capacity to support cases where nuanced judgement or appeals are required, as the sketch below illustrates.
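To make the idea concrete, here is a minimal sketch, in Python, of how such a triage step might work. Everything in it (the field names, the confidence threshold, the routing labels) is an assumption for illustration rather than a description of any deployed government system: routine, fully documented cases are decided automatically and logged, while appeals and uncertain cases are routed to a caseworker.

```python
from dataclasses import dataclass

# Hypothetical confidence level below which a case must go to a human.
AUTO_DECISION_THRESHOLD = 0.95

@dataclass
class Application:
    applicant_id: str
    complete: bool            # are all required documents present?
    model_confidence: float   # score from a hypothetical eligibility model, 0.0 to 1.0
    flagged_for_appeal: bool = False

def triage(app: Application) -> str:
    """Route an application: automate only clear-cut, fully documented cases."""
    if app.flagged_for_appeal:
        return "caseworker"          # appeals always get human judgement
    if not app.complete:
        return "request_documents"   # cannot decide with the information at hand
    if app.model_confidence >= AUTO_DECISION_THRESHOLD:
        return "auto_decision"       # routine case; decision logged for audit and review
    return "caseworker"              # nuanced or uncertain case; human scrutiny required

if __name__ == "__main__":
    samples = [
        Application("A-001", complete=True,  model_confidence=0.99),
        Application("A-002", complete=False, model_confidence=0.88),
        Application("A-003", complete=True,  model_confidence=0.72),
        Application("A-004", complete=True,  model_confidence=0.99, flagged_for_appeal=True),
    ]
    for app in samples:
        print(app.applicant_id, "->", triage(app))
```

The design point is simply that the automated path is the narrow one: anything incomplete, contested, or uncertain defaults to human judgement, which is where the capacity freed up by automation can be reinvested.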

However, the use of AI in government poses key challenges for citizen consent. Studies my colleagues and I have conducted show that citizens do not support a single set of justifications for the use of algorithms and, in fact, have a strong bias toward the status quo.

Citizens also judge algorithms more harshly than human decision-makers, and opposition to AI is stronger among those who fear the broader economic effects of technology. In other words, the successful use of AI is tied up in broader debates about what the future of technology and society will be.

What steps are needed to unlock the benefits of AI for the public sector? How can we reach a future where this technology improves the functioning of government, and contributes to a better world for everyone? I believe there are three essential insights we must consider.

The first insight responds to the anxieties held by many about a future in which humans are displaced from jobs by automation, leading to economic uncertainty. When I surveyed thousands of Canadians in 2019 and asked them whether they expected to lose their job to a computer or machine, 10 percent said they expected this to happen in the next five years, and 25 percent expected to be replaced within the decade. These fears were not limited to those working in manufacturing or manual labour positions. But are such concerns well-founded?

The answer depends on how AI-driven automation will be incorporated into organizations. Here, it is important to point out what we might call “the distributional fact” of technology: the use of AI is likely to be spread across many tasks and jobs, rather than concentrated entirely in a few.

Nearly all of us could replace some of the things we do with automation, but important parts would inevitably remain. What if the tasks we could replace are the ones we don’t enjoy, or that cause us stress? Or, most importantly, the ones we really do not do that well, while we keep the functions that are essential to the larger purposes of our work? Suddenly, the implementation of AI appears less as a threat and more as an innovation toward greater efficiency—like the shift from typewriters to computers, or from card catalogues to database software.

The second insight is what we might call “the values premium,” or the increased importance of values and principles in public decision-making. AI is a prediction technology: an efficient and potentially ever-improving system for predicting outcomes using information we have about the past. However, what these prediction machines are not good at determining or understanding is how the humans who are involved in and observe these decisions will interpret them.

Why does this matter? It matters because the values and reasons that underwrite our decisions and actions matter as much as the decisions and actions themselves, and this is more important in governments than in the private sector.

The main protectors of our values and principles will not be machines, but those who put decisions into action—what are sometimes called “street-level bureaucrats.” It is here where there will be a premium on values like trust, transparency, and decency.

In his seminal work, The Decent Society, Avishai Margalit asks us to consider the following scenario: suppose a truck delivers food to a village during a famine, and the people on it hand each villager a loaf of bread—enough to fill their stomach, at least for the day. Isn’t this a generous and noble act? But now, consider a slight change: instead of handing the bread out, the people delivering it throw it on the ground, so villagers must scramble for it in the dust. The outcome is the same, but the second scenario is different because it isn’t decent—it involves humiliation. The decent society is one in which people are not humiliated.

Government is too often an impersonal organization. When citizens access government services, they often experience indifference, if not contempt. There is a real risk that this experience of indifference will become even more common as more decisions and allocations are left to the seeming caprices of an algorithm.

The important job of public servants, in this context, is to put a great premium on the values of trust, transparency, and decency—on humanity, in other words—to ensure that AI is enhancing the human element of public service, rather than draining it from the system.

Finally, the third insight is what we might call the “democratic advantage”: in short, the use of AI will be better in democracies than in autocracies, and we are wrong to think that countries like China will outperform and eventually eclipse democracies by deploying AI at scale. In fact, there is reason to believe the implementation of AI will amplify the weaknesses of autocracies.

The core problem of autocracies has always been an inefficient feedback mechanism through which the public can express its dissatisfaction to the state. In place of the feedback provided by democratic engagement, autocratic states seek control of citizens. Rather than receiving the real and organic expression of happiness or discontent among citizens, autocratic states impose an order and assume that if things are working—even minimally—then everyone is happy.

But in this system, the inherent shortcomings of AI, including the many opportunities for bias to enter the process and the lack of value alignment in these systems, will amplify these blind spots, leading to more discrimination and repression, and less open dissent.

Democracies are not perfect, but they do have a built-in advantage: they invite self-criticism. They create incentives for groups who are marginalized or disadvantaged to mobilize and make political and legal claims to correct those imbalances.

This makes decision-making cumbersome, certainly, but it also makes it self-correcting. This feature is what will give democracies the advantage as we work out the best ways to employ AI for social good. It is also the right reason for us to advocate for maximum transparency, explainability, and justifiability in the public use of AI—precisely so it can be more easily critiqued and corrected.

Government is sometimes viewed as a laggard, behind on the latest trends, management practices, fads, and innovations. But, when it comes to AI, I believe democratic public services are more culturally ready for the adoption of this technology than any other organization, because public services are already set up like human-assisted AI systems.

The work of public servants is to be part of a prediction machine: to be presented with a problem, formulate and test solutions using data, and make recommendations through a series of considerations (or algorithms), which eventually reach a human who makes a choice from a small number of options.

The human cannot see all the deliberations that led to the decision, but they can know the process and the values that guided it, and they have an obligation to defend and explain not only the decision, but how it was arrived at. All these elements map onto a well-designed system of human-assisted AI.
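As a rough illustration of that mapping (a hypothetical sketch with invented names and scores, not a description of the author’s research or any real system), a human-assisted pipeline can be arranged so that every machine recommendation arrives with its rationale and the values that guided it, and the human’s final choice is recorded alongside that explanation:

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    label: str
    score: float      # prediction from an upstream model (assumed)
    rationale: str    # plain-language reasons this option was generated

@dataclass
class Recommendation:
    case_id: str
    options: list[Option]
    guiding_values: list[str] = field(
        default_factory=lambda: ["transparency", "decency", "due process"]
    )

    def explain(self) -> str:
        """Produce the record a decision-maker can use to defend how the choice was arrived at."""
        lines = [f"Case {self.case_id} (values: {', '.join(self.guiding_values)})"]
        for opt in sorted(self.options, key=lambda o: o.score, reverse=True):
            lines.append(f"  {opt.label} (score {opt.score:.2f}): {opt.rationale}")
        return "\n".join(lines)

def human_decision(rec: Recommendation, chosen_label: str) -> dict:
    """The human, not the model, makes the final choice; both the choice and the rationale are logged."""
    return {"case": rec.case_id, "choice": chosen_label, "record": rec.explain()}

if __name__ == "__main__":
    rec = Recommendation("C-042", [
        Option("approve", 0.91, "meets all published criteria"),
        Option("refer for interview", 0.55, "income documentation is ambiguous"),
    ])
    print(human_decision(rec, "approve")["record"])
```

The human still makes the choice from a small number of options, but the record of how the recommendation was produced is what allows the decision, and the process behind it, to be explained, defended, and critiqued.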

If this is true, then AI can find productive and ethical uses in government as much as in the private sector, and maybe especially in democratic governments.


Peter Loewen

Professor Peter Loewen is the Director of the Munk School of Global Affairs & Public Policy and Associate Director of the Schwartz Reisman Institute for Technology and Society.
