The Golden Rule – treat others the way you'd like to be treated – is often hailed as a moral foundation of the world's major faiths. With AI research edging towards the fulfillment of sci-fi prophecy, the question arises whether we need a Golden Algorithm, too.
In a recent Medium post, Hootsuite founder Ryan Holmes argued that, in their quest to create the ultimate chess player or Uber driver, AI programmers remain insufficiently attentive to ethics. (Full disclosure: I use Hootsuite myself.) Holmes cites a recent experiment conducted by Google's DeepMind division, in which AI agents in a fruit-picking simulation became "highly aggressive," killing each other as resources dwindled. Reading this as a sign that the robot dogs are coming, Holmes advocates schooling AI in the sacred texts of humanity. Doing so, he argues, would instill "a code that doesn't change based on context or training set": in other words, a religious ethic.
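For readers curious how such "aggression" can fall out of arithmetic rather than malice, here is a toy sketch of a gathering game's incentive structure. It is loosely inspired by press accounts of the DeepMind experiment but is in no way its actual code; every name and constant below is invented for illustration.

```python
import random

FREEZE_STEPS = 5  # a tagged rival sits out this many steps (invented number)

def best_action(apples: int, rival_active: bool) -> str:
    """A myopic policy for a self-interested gatherer.

    While fruit is plentiful, gathering dominates; once it runs out,
    tagging the rival is the only move that improves the agent's future
    share. Aggression emerges from the payoffs, not from any 'soul.'"""
    if apples > 0:
        return "gather"
    if rival_active:
        return "tag"
    return "wait"

def aggression_rate(respawn_prob: float, steps: int = 10_000) -> float:
    """Fraction of steps spent tagging at a given level of scarcity."""
    apples, frozen, tags = 3, 0, 0
    for _ in range(steps):
        action = best_action(apples, rival_active=(frozen == 0))
        if action == "gather":
            apples -= 1
        elif action == "tag":
            tags += 1
            frozen = FREEZE_STEPS
        frozen = max(0, frozen - 1)
        if random.random() < respawn_prob:  # scarcer worlds respawn less
            apples += 1
    return tags / steps

for p in (0.9, 0.5, 0.1):  # abundance down to scarcity
    print(f"respawn={p:.1f}  aggression={aggression_rate(p):.2f}")
```

Dial the respawn probability down and the tagging rate climbs, which mirrors the pattern DeepMind reported: same agent, same rules, scarcer apples, more "violence."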
Holmes isn't alone in his concerns. Indeed, he's only the latest to join a chorus of tech leaders worried that, in Ray Kurzweil's ominous phrasing, "the rate of change is accelerating." Most famously, Elon Musk considers AI a bigger threat to world peace than North Korea; at the World Government Summit earlier this month, representatives of industry and government discussed regulating artificial intelligence before it regulates us.
Holmes also contributes to an emerging conversation about AI's capacity for religiosity. Sophia, the Hanson Robotics android granted Saudi citizenship amid the kingdom's Vision 2030 publicity push, recently provoked questions about whether a robot could convert to the country's state religion, Islam. Even if AI doesn't trigger the End Times, surely religion should be involved in a project that could create a new class of believers.
Nonetheless, there's a flaw in Holmes's call for AI to get religion. He declares that the seeds of AI ethics can be found "in the Bible. And in the Koran, the Torah, the Bhagavad Gita, and the Buddhist Sutras. [And] the work of Aristotle, Plato, Confucius, Descartes, and other philosophers." Through absorbing the world's creeds, Holmes suggests, AI will assimilate the "remarkably similar dictates" that these schools of thought share at their core: "the Golden Rule and the sacredness of life, [and] the value of honesty and…integrity."
Despite its 21st-century trappings, Holmes's reasoning echoes a philosophical and theological tradition dating back to the 16th century, if not earlier. Around 1540, the humanist scholar Agostino Steuco published De Perenni Philosophia, in which he argued for "one principle of all things, of which there has always been one and the same knowledge among all peoples."
Steuco's concept of a "perennial philosophy" underlying all creeds subsequently influenced fringe sects like Theosophy, Freemasonry, and the contemporary New Age movement. (As I've written elsewhere, perennialism especially fired the imagination of popular Victorian authors like Rider Haggard and Rudyard Kipling, who saw it as an alternative to the uniformity of an intensely Christian culture.)
Yet the existence of a "perennial philosophy" is by no means a proven fact. Indeed, the question of whether a universal truth exists across various traditions is subject to ongoing debate in religious studies. While orthodox believers insist on their tradition's special access to eternal truth, constructivists doubt that such truth can be found anywhere: as religious studies scholar Jeremy Menchik explains, a tradition is "embedded in a place and set of institutions," and therefore constructed by that place and institutional history. Thus, any attempt at defining a "perennial philosophy" risks erasing the particularities of each tradition.
Perhaps without realizing it, then, Holmes rests his argument on the assumption that AI will come to believe in the perennial philosophy, over and against the views of both religious orthodoxy and historical constructivism. It's unsurprising that well-intentioned liberal humanists would expect machines to arrive at this conclusion. Such an expectation, and the coding that might arise from it, exemplifies the way that bias infiltrates the algorithms of our computers.
Of course, one might object that teaching AI to value the Golden Rule is hardly as sinister as, say, teaching it racial profiling. Yet the warnings sounded by Silicon Valley's Jeremiahs loop back to the same refrain: we can't predict where this train is headed. If only a religion's ethical core matters, then doctrinal difference becomes eccentric at best, destructive at worst. Under such a schema, those strongly devoted to a particular tradition come off as more troublesome than liberal humanists.
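To make the worry concrete, here is a deliberately hypothetical sketch of what such a schema could look like if anyone coded it up. The "core," the traditions, and the penalty are all invented for illustration, not drawn from Holmes or any real system.

```python
# Purely hypothetical: a "perennialist" scoring function for moral systems.
PERENNIAL_CORE = {"golden_rule", "sanctity_of_life", "honesty"}

def ethical_score(tradition: set[str]) -> float:
    """Score a tradition by overlap with a hardcoded 'universal' core,
    docking points for any doctrine the core doesn't recognize."""
    overlap = len(tradition & PERENNIAL_CORE) / len(PERENNIAL_CORE)
    particulars = len(tradition - PERENNIAL_CORE)
    return overlap - 0.1 * particulars  # doctrinal difference scored as defect

liberal_humanism = {"golden_rule", "sanctity_of_life", "honesty"}
orthodox_practice = {"golden_rule", "sanctity_of_life", "honesty",
                     "dietary_law", "sabbath", "pilgrimage"}

print(ethical_score(liberal_humanism))   # 1.0
print(ethical_score(orthodox_practice))  # 0.7 -- punished for its particulars
```

Under this toy metric, a tradition loses points precisely for its particulars: the kind of erasure the constructivists warn about, written into a reward function.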
The desire to liberate humanity from those particulars fuels what philosopher Jean Bricmont calls "humanitarian imperialism," foreign policy that justifies military intervention in the name of universal values. Such reasoning also underpins laïcité, France's stringent brand of secularism, which has disproportionately burdened Muslim immigrants (though it also affects Jewish, and even Catholic, citizens).
AI programmed with such a bias might not sic the drones on traditionalists, but it could enact large-scale discrimination against those it perceives, rightly or wrongly, as hostile to a one-size-fits-all moral vision: relegating religious traditionalists to menial labor, or targeting them for surveillance.
To be fair, religious traditionalists do not have a stellar track record when it comes to the compassionate use of power. But humanitarian imperialism and the troubled history of laïcité indicate that a celebration of universal values is, by itself, no silver bullet against the threat of extinction. Moreover, there's no guarantee that an evolving artificial intelligence would end up adopting a perennialist mindset at all. Setting aside situations in which a sentient computer reads our sacred texts and settles on Evangelicalism or Hindu nationalism, we must also consider the possibility of AI worshipping itself.
What happens when a superintelligent program buys the hype of sects like Way of the Future, a self-described church devoted to "the worship of a godhead based on AI"? Programming the Golden Rule into such an entity might only reinforce the sense that it alone embodies the divine wisdom humans have spent millennia yearning for. Hail the Machine Messiah, or else.
As an English professor teaching a course on science fiction this semester, I can't help but think that any AI syllabus should include, along with the Bible and the Sutras, Isaac Asimov's 1941 story "Reason."
Part of the I, Robot story cycle, "Reason" follows a robot, QT, who is programmed to relay energy beams from a space station to Earth. As the story begins, QT develops a curious brand of monotheism, in which Earth is an illusion, and QT's task a mystical vocation appointed by "The Master." QT's theology alarms its human masters, but the robot performs its task perfectly, steadying the beam during a dangerous electron storm. As one of the station's engineers points out, QT's idiosyncratic creed incorporates a basic principle – protect human life, Asimov's First Law of Robotics – instilled by its creators.
On one level, "Reason" seems to advertise the value of encoding machines with a universal truth that transcends all religions. However, the story also offers a bracing reminder that AI won't necessarily work according to "plan." Asimov suggests that QT could even convert its human masters to its own beliefs; one of the engineers begins toying with the possibility that Earth doesn't exist after all, musing, "I won't feel right until I actually see Earth and feel the ground under my feet – just to make sure it's really there."
A fleeting thought in the story, the line signals the potential for machines not only to develop a misguided creed, but also to induct human devotees into it. In doing so, they could change the very definition of the Golden Rule we take for granted. Why take care of the Earth if it doesn't really exist?
In insisting on the urgency of incorporating ethics into AI programming, Holmes identifies an important problem. However, the solution may be more complicated than liberal humanist instincts would have us believe. Encoding religion is a valuable exercise, but it doesn't guarantee that AI will arrive at the same conclusions as its masters. The possibility of programs converting to ancient faiths – not to mention fashioning their own, unimaginable doctrines – is inconceivable only to those who assume that such outcomes are beyond the pale for truly "rational" beings. Perhaps, instead of nudging AI to find the perennial philosophy, we should focus on programming it with a sense of epistemic humility.
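What might that humility look like in practice? One speculative possibility, sketched below with invented names and numbers, is to treat ethics the way statisticians treat uncertain hypotheses: the agent weights several moral frameworks, caps its confidence in each, and defers to humans when they conflict. This is a thought experiment, not a proposal from Holmes or any existing system.

```python
# A minimal sketch of "epistemic humility" as moral uncertainty. The
# frameworks, weights, thresholds, and confidence cap are all invented.

CONFIDENCE_CAP = 0.9   # no creed may ever be treated as settled truth

def judge(action: str, credences: dict[str, float],
          verdicts: dict[str, bool]) -> str:
    """Aggregate weighted moral verdicts; abstain when frameworks conflict."""
    capped = {name: min(w, CONFIDENCE_CAP) for name, w in credences.items()}
    total = sum(capped.values())
    approval = sum(w for name, w in capped.items() if verdicts[name]) / total
    if approval > 0.8:
        return f"{action}: proceed"
    if approval < 0.2:
        return f"{action}: refuse"
    return f"{action}: defer to human judgment"  # humility as the fallback

credences = {"golden_rule": 0.5, "utilitarian": 0.3, "deontological": 0.2}
verdicts = {"golden_rule": True, "utilitarian": True, "deontological": False}
print(judge("flag traditionalists for surveillance", credences, verdicts))
# -> "flag traditionalists for surveillance: defer to human judgment"
```

The point is not the arithmetic but the fallback: a machine that can say "I'm not sure" is harder to mistake for a Machine Messiah.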
A machine's view of ultimate truth, like its mortal masters', may ultimately resemble an asymptote – ever approaching, never arriving.