In a memorable scene in Ex Machina, the 2015 thriller about lifelike robots, the CEO of a Google-esque monopoly describes how his machines learned to be human. For a few seconds, he says, he secretly turned on smartphone cameras across the planet, then compiled the data: "Boom. A limitless resource of vocal and facial interaction."
Turns out this is hardly sci-fi. As AI grows more sophisticated, developers have recognized that the information, images and videos we voluntarily share on social media and the internet represent one of the richest sources of raw data available: a veritable snapshot of humanity at any given moment.
Data like this, including the billions of messages sent across my platform, may one day be the lifeblood of machine learning: an application of AI whereby machines, given access to enough data, learn for themselves. To "learn," these tools typically need access to a vast training set. Often, it's only after analyzing how an improbably large collection of humans has handled a repetitive behavior (like playing Go or responding to customer service requests or tagging a photo of a dog with the word "dog") that humans can be cut out of the process.
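To make that concrete, here is a minimal sketch in Python, my own toy illustration rather than any system named above: a tiny perceptron fits a handful of human-labeled "dog"/"not dog" examples (the two numeric features are hypothetical stand-ins for whatever a real system would extract from a photo), after which new items get tagged with no human in the loop.

```python
def train_perceptron(examples, epochs=50, lr=0.1):
    """Fit a tiny linear classifier on (features, label) pairs,
    where label is 1 for 'dog' and 0 for 'not dog'."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred                 # learn only from mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The human-labeled part: a handful of examples tagged by people.
labeled = [((0.9, 0.8), 1), ((0.8, 0.9), 1),
           ((0.2, 0.1), 0), ((0.1, 0.3), 0)]
w, b = train_perceptron(labeled)

# From here on, no human in the loop: the model tags new data itself.
def tag(x1, x2):
    return "dog" if w[0] * x1 + w[1] * x2 + b > 0 else "not dog"

print(tag(0.85, 0.75))  # -> dog
print(tag(0.15, 0.20))  # -> not dog
```

The toy captures only the shape of the process: human judgments up front, machine judgments ever after.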
Lots of this, of course, is old hat by now. Apple uses AI that's been trained on countless users to transcribe your voice and power Siri. Facebook uses AI that's learned from past interactions to ensure ads are properly targeted to billions of people. Google has incorporated AI, in some form, into its search engine from the very beginning.
The peril (and promise) of social data
But newer applications of AI, like Microsoft's infamous Tay bot, hint at the challenge of taking this social data at face value. A chatbot deployed on Twitter in early 2016, Tay was supposed to "learn" from user interactions. ("The more you talk, the smarter Tay gets," boasted her profile.) But she was beset by racist, anti-Semitic and misogynistic commentary almost from the start. Learning from her environment, Tay began spitting out a string of inflammatory responses, including, infamously, "bush did 9/11, and Hitler would have done a better job than the monkey we have now." Microsoft developers pulled the plug a mere 16 hours after Tay's release.
This is a simple example. But herein lies the challenge. Yes, billions of people contribute their thoughts, feelings and experiences to social media every single day. But training an AI platform on social media data, with the intent to reproduce a "human" experience, is fraught with risk. You could liken it to raising a baby on a steady diet of Fox News or CNN, with no input from its parents or social institutions. In either case, you might be breeding a monster.
The reality is that while social data may well reflect the digital footprint we all leave, it's neither true to life nor necessarily always pretty. Some social posts reflect an aspirational self, perfected beyond human reach; others, veiled by anonymity, show an ugliness rarely seen "in real life."
Ultimately, social data alone represents neither who we actually are nor who we should be. Deeper still, as useful as the social graph can be in providing a training set for AI, what's missing is a sense of ethics or a moral framework to evaluate all this data. From the spectrum of human experience shared on Twitter, Facebook, and other networks, which behaviors should be modeled and which should be avoided? Which actions are right and which are wrong? What's good … and what's evil?
Coding religion into AI
Grappling with how to build ethics into AI isn't necessarily a new problem. As early as the 1940s, Isaac Asimov was hard at work formulating his Laws of Robotics. (The first law: a robot may not harm a human being or, through inaction, allow a human to come to harm.) But these concerns aren't science fiction any longer. There's a pressing need to find a moral compass to direct the intelligent machines we're increasingly sharing our lives with. (This grows even more critical as AI begins to make its own AI, without human guidance at all, as is already the case with Google's AutoML.) Today, Tay is a relatively harmless annoyance on Twitter. Tomorrow, she may well be devising strategy for our corporations … or our heads of state. What rules should she follow? Which should she flout?
Here's where science comes up short. The answers can't be gleaned from any social data set. The best analytical tools won't surface them, no matter how large the sample size.
But they just might be found in the Bible.
And the Koran, the Torah, the Bhagavad Gita and the Buddhist Sutras. They're in the work of Aristotle, Plato, Confucius, Descartes, and other philosophers both ancient and modern. We've spent literally thousands of years devising rules of human conduct: the basic precepts that allow us (ideally) to get along and prosper together. The most powerful of these principles have survived millennia with little change, a testament to their utility and validity. More importantly, at their core, these schools of thought share some remarkably similar dictates about moral and ethical behavior, from the Golden Rule and the sacredness of life to the value of honesty and the virtues of generosity.
As AI grows in sophistication and application, we need, more than ever, a corresponding flourishing of religion, philosophy, and the humanities. In many ways, the promise (or peril) of this most cutting-edge of technologies is contingent on how effectively we apply some of the most timeless of wisdom. The approach doesn't have to, and shouldn't, be dogmatic or aligned with any one creed or philosophy. But AI, to be effective, needs an ethical underpinning. Data alone isn't enough. AI needs religion: a code that doesn't change based on context or training set.
In place of parents and priests, responsibility for this ethical education will increasingly rest on frontline developers and scientists. Ethics hasn't traditionally factored into the training of computer engineers; this may have to change. Understanding hard science alone isn't enough when algorithms have moral implications. As leading AI researcher Will Bridewell emphasizes, it's critical that future developers are "aware of the ethical status of their work and understand the social implications of what they develop." He goes so far as to advocate the study of Aristotle's ethics and Buddhist ethics so they can "better track intuitions about moral and ethical behavior."
On a deeper level, responsibility rests with the organizations that employ these developers, the industries they're part of, the governments that regulate those industries and, in the end, us. Right now, public policy and regulation on AI remain nascent, if not non-existent. But concerned groups are raising their voices. OpenAI, co-founded by Elon Musk and Sam Altman, is pushing for oversight. Tech leaders have come together in the Partnership on AI to explore ethical issues. Watchdogs like AI Now are popping up to identify bias and root it out. What they're all searching for, in one form or another, is an ethical framework to inform how AI converts data into decisions, in a way that's fair, sustainable and representative of the best of humanity, not the worst.
This isn't a pipe dream. In fact, it's eminently within reach. Sensational reports surfaced recently about Google's DeepMind AI growing "highly aggressive" when left to its own devices. Researchers at Google had AI "agents" face off in 40 million rounds of a fruit-gathering computer game. When apples grew scarce, the agents started attacking each other, killing off the competition: humanity's worst impulses echoed … or so the critics said.
But then researchers switched up the context. Algorithms were deliberately tweaked to make cooperative behavior beneficial. In the end, it was those agents who learned to work together who triumphed. The lesson: AI can reflect the better angels of our nature, if we show it how.
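For a rough sense of what "tweaking the algorithms" can mean, here is a toy sketch in Python, my own illustration rather than DeepMind's actual setup: two simple learners repeatedly choose between gathering and attacking, and a hypothetical coop_bonus term added to the payoff for peaceful gathering is enough to flip the behavior they settle into.

```python
import random

ACTIONS = ["gather", "attack"]

def payoff(own, other, coop_bonus):
    """Toy single-round payoffs. Attacking a gatherer pays off (the rival
    is knocked out and the apples taken); coop_bonus is the deliberate
    tweak, added only when both agents gather peacefully."""
    if own == "attack":
        return 0.0 if other == "attack" else 3.0
    return -1.0 if other == "attack" else 2.0 + coop_bonus

def train(coop_bonus, rounds=20000, eps=0.1, lr=0.1, seed=0):
    """Two independent epsilon-greedy learners, each tracking one value
    estimate per action; returns the action each ends up preferring."""
    rng = random.Random(seed)
    q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]

    def pick(values):
        if rng.random() < eps:                  # explore occasionally
            return rng.choice(ACTIONS)
        return max(values, key=values.get)      # otherwise act greedily

    for _ in range(rounds):
        a0, a1 = pick(q[0]), pick(q[1])
        for i, (mine, theirs) in enumerate([(a0, a1), (a1, a0)]):
            r = payoff(mine, theirs, coop_bonus)
            q[i][mine] += lr * (r - q[i][mine])  # running value estimate
    return [max(values, key=values.get) for values in q]

# With no bonus, attacking strictly dominates and both agents typically
# learn it; a modest bonus makes mutual gathering the better deal.
print(train(coop_bonus=0.0))  # typically ['attack', 'attack']
print(train(coop_bonus=2.0))  # typically ['gather', 'gather']
```

Nothing about the agents themselves changes between the two runs; only the incentives do, which is the whole point of the anecdote.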
Syndicated with permission from Ryan Holmes’ @invoker Medium account