Hootsuite’s Ryan Holmes: We need to get religious about AI before it’s too late


In a memorable scene in Ex Machina, the 2015 thriller about lifelike robots, the CEO of a Google-esque monopoly describes how his machines learned to be human. For a few seconds, he says, he secretly turned on smartphone cameras across the planet, then compiled the data: “Boom. A limitless resource of vocal and facial interaction.”

Turns out this is hardly sci-fi. As AI grows more sophisticated, developers have recognized that the information, images and videos we voluntarily share on social media and the internet represent one of the richest sources of raw data available — a veritable snapshot of humanity at any given moment.

Data like this, including the billions of messages sent across my platform, may one day be the lifeblood of machine learning — an application of AI whereby machines, given access to enough data, learn for themselves. To “learn,” these tools typically need access to a vast training set. Often, it’s only after analyzing how an improbably large collection of humans has handled a repetitive behavior (like playing Go, responding to customer service requests or tagging a photo of a dog with the word “dog”) that humans can be cut out of the process.
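To make that pattern concrete, here is a minimal sketch of supervised learning in Python. It is my own toy illustration, not anything Hootsuite or the companies mentioned below actually run: a handful of human-labeled posts trains a classifier, and from then on new posts get tagged with no human in the loop. The example posts, labels and scikit-learn pipeline are all invented for illustration.

```python
# Toy illustration of learning from a human-labeled training set.
# (Hypothetical data; not any company's real pipeline.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The "training set": posts that humans have already tagged by hand.
posts = [
    "My order never arrived, please help",
    "How do I reset my password?",
    "Love the new update, great work!",
    "This app keeps crashing on my phone",
]
labels = ["support", "support", "praise", "support"]

# Learn the mapping from text to label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# From here on, no human needs to tag each new post.
print(model.predict(["The checkout page crashes every time"]))
```

Scale the same structure up to millions of posts and a larger model and the principle is unchanged: human judgments go in once, and the machine applies them automatically afterward.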

Lots of this, of course, is old hat by now. Apple uses AI that’s been trained on countless users to transcribe your voice and power Siri. Facebook uses AI that’s learned from past interactions to ensure ads are properly targeted to billions of people. Google has incorporated AI, in some form, into its search engine from the very beginning.

The peril (and promise) of social data

But newer applications of AI, like Microsoft’s infamous Tay bot, hint at the challenge of taking this social data at face value. A chatbot deployed on Twitter in early 2016, Tay was supposed to “learn” from user interactions. (“The more you talk, the smarter Tay gets,” boasted her profile.) But she was beset by racist, anti-Semitic and misogynistic commentary, almost from the start. Learning from her environment, Tay began spitting out a string of inflammatory responses, including, infamously, “bush did 9/11, and Hitler would have done a better job than the monkey we have now.” Microsoft developers pulled the plug a mere 16 hours after Tay’s release.


This is a simple example. But herein lies the challenge. Yes, billions of people contribute their thoughts, feelings and experiences to social media every single day. But training an AI platform on social media data, with the intent to reproduce a “human” experience, is fraught with risk. You could liken it to raising a baby on a steady diet of Fox News or CNN, with no input from its parents or social institutions. In either case, you might be breeding a monster.

The reality is that while social data may well reflect the digital footprint we all leave, it’s neither true to life nor necessarily always pretty. Some social posts reflect an aspirational self, perfected beyond human reach; others, veiled by anonymity, show an ugliness rarely seen “in real life.”

Ultimately, social data — alone — represents neither who we actually are nor who we should be. Deeper still, as useful as the social graph can be in providing a training set for AI, what’s missing is a sense of ethics or a moral framework to evaluate all this data. From the spectrum of human experience shared on Twitter, Facebook, and other networks, which behaviors should be modeled and which should be avoided? Which actions are right and which are wrong? What’s good … and what’s evil?

Coding religion into AI

Grappling with how to build ethics into AI isn’t necessarily a new problem. As early as the 1940s, Isaac Asimov was hard at work formulating his Laws of Robotics. (The first law: a robot may not injure a human being or, through inaction, allow a human being to come to harm.) But these concerns aren’t science fiction any longer. There’s a pressing need to find a moral compass to direct the intelligent machines we’re increasingly sharing our lives with. (This grows even more critical as AI begins to make its own AI, without human guidance at all, as is already the case with Google’s AutoML.) Today, Tay is a relatively harmless annoyance on Twitter. Tomorrow, she may well be devising strategy for our corporations…or our heads of state. What rules should she follow? Which should she flout?

Here’s where science comes up short. The answers can’t be gleaned from any social data set. The best analytical tools won’t surface them, no matter how large the sample size.

But they just might be found in the Bible.

And the Koran, the Torah, the Bhagavad Gita and the Buddhist Sutras. They’re in the work of Aristotle, Plato, Confucius, Descartes, and other philosophers both ancient and modern. We’ve spent literally thousands of years devising rules of human conduct — the basic precepts that allow us (ideally) to get along and prosper together. The most powerful of these principles have survived millennia with little change, a testament to their utility and validity. More importantly, at their core, these schools of thought share some remarkably similar dictates about moral and ethical behavior — from the Golden Rule and the sacredness of life to the value of honesty and the virtue of generosity.

As AI grows in sophistication and application, we need, more than ever, a corresponding flourishing of religion, philosophy, and the humanities. In many ways, the promise — or peril — of this most cutting-edge of technologies is contingent on how effectively we apply some of the most timeless of wisdom. The approach doesn’t have to, and shouldn’t, be dogmatic or aligned with any one creed or philosophy. But AI, to be effective, needs an ethical underpinning. Data alone isn’t enough. AI needs religion — a code that doesn’t change based on context or training set.
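One way to read “a code that doesn’t change based on context or training set” is as a fixed layer of hand-written rules that sits above whatever a model has learned and gets the final say, no matter what the data suggested. The snippet below is purely my own toy sketch of that separation of concerns; the rule names and the shape of the model output are invented.

```python
# Toy sketch: an invariant, hand-written rule layer over a learned model's output.
# (Hypothetical rules and output format, for illustration only.)
BANNED_TOPICS = {"violence", "harassment"}  # fixed policy, never re-learned from data

def moderate(model_suggestion: dict) -> str:
    """model_suggestion is a hypothetical model output,
    e.g. {"reply": "...", "topics": ["sports"]}."""
    if BANNED_TOPICS & set(model_suggestion["topics"]):
        return "[withheld: violates fixed policy]"
    return model_suggestion["reply"]

print(moderate({"reply": "Sure, here is the schedule.", "topics": ["sports"]}))
print(moderate({"reply": "Here is how to target them...", "topics": ["harassment"]}))
```

However often the model is retrained, the rule layer stays put; that is the sense in which this “code” does not change with the training set.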

In place of parents and priests, responsibility for this ethical education will increasingly rest on frontline developers and scientists. Ethics hasn’t traditionally factored into the training of computer engineers — this may have to change. Understanding hard science alone isn’t enough when algorithms have moral implications. As emphasized by leading AI researcher Will Bridewell, it’s critical that future developers are “aware of the ethical status of their work and understand the social implications of what they develop.” He goes so far as to advocate the study of Aristotle’s ethics and Buddhist ethics, so that developers can “better track intuitions about moral and ethical behavior.”

On a deeper level, responsibility rests with the organizations that employ these developers, the industries they’re part of, the governments that regulate those industries and — in the end — us. Right now, public policy and regulation on AI remain nascent, if not non-existent. But concerned groups are raising their voices. OpenAI — co-founded by Elon Musk and Sam Altman — is pushing for oversight. Tech leaders have come together in the Partnership on AI to explore ethical issues. Watchdogs like AI Now are popping up to identify bias and root it out. What they’re all searching for, in one form or another, is an ethical framework to inform how AI converts data into decisions — in a way that’s fair, sustainable and representative of the best of humanity, not the worst.

This isn’t a pipe dream. In fact, it’s eminently within reach. Sensational reports surfaced recently about Google’s DeepMind AI growing “highly aggressive” when left to its own devices. Researchers at Google had AI “agents” face off in 40 million rounds of a fruit-gathering computer game. When apples grew scarce, the agents started attacking each other, killing off the competition — humanity’s worst impulses echoed … or so the critics said.

But then researchers switched up the context. Algorithms were deliberately tweaked to make cooperative behavior beneficial. In the end, it was those agents who learned to work together who triumphed. The lesson: AI can reflect the better angels of our nature, if we show it how.
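For readers curious what “tweaking the algorithm to make cooperative behavior beneficial” can look like in code, here is a small, self-contained toy simulation, not DeepMind’s actual experiment: two epsilon-greedy agents repeatedly choose between cooperating and attacking, and the strategy they settle on depends entirely on the payoff numbers, which are invented for this sketch.

```python
# Toy multi-agent example: the learned behavior follows the reward structure.
# (Invented payoffs; not DeepMind's fruit-gathering environment.)
import random

def play(coop_bonus: float, rounds: int = 10000) -> dict:
    """Two epsilon-greedy agents repeatedly pick 'cooperate' or 'attack'."""
    actions = ["cooperate", "attack"]
    values = [{a: 0.0 for a in actions} for _ in range(2)]  # running reward estimates
    counts = [{a: 0 for a in actions} for _ in range(2)]    # how often each action was taken
    for _ in range(rounds):
        picks = []
        for agent in range(2):
            if random.random() < 0.1:                        # explore occasionally
                picks.append(random.choice(actions))
            else:                                            # otherwise exploit the estimate
                picks.append(max(actions, key=values[agent].get))
        for agent in range(2):
            mine, other = picks[agent], picks[1 - agent]
            # Attacking always grabs 1.0; cooperating pays the bonus only when
            # the other agent cooperates too.
            reward = 1.0 if mine == "attack" else (coop_bonus if other == "cooperate" else 0.0)
            counts[agent][mine] += 1
            values[agent][mine] += (reward - values[agent][mine]) / counts[agent][mine]
    return counts[0]  # agent 0's action counts over the run

print(play(coop_bonus=0.5))  # cooperating barely pays: agents drift toward attacking
print(play(coop_bonus=3.0))  # cooperating pays well: agents mostly cooperate
```

The only thing that differs between the two runs is the incentive structure the programmer wrote down, which is the point of the paragraph above: what the agents learn reflects the rules we give them.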

Syndicated with permission from Ryan Holmes’ @invoker Medium account

Ryan Holmes

CEO @hootsuite, Founder @invoke. I like social, startups, grownups, cycling, am learning to walk on hands and addicted to yoga.
