Hootsuite’s Ryan Holmes says social networks have an “algorithmic responsibility” to users

If you’re reading this article, it’s probably because an algorithm picked it for you. Facebook, for instance, factors in around 100,000 variables to determine what shows up in your News Feed. It’s an extraordinarily nuanced (not to mention largely top-secret) process. At its core, however, the Facebook algorithm aims to show you updates you’ll like, based on the ones you’ve liked in the past. On the surface, this makes sense. Mark Zuckerberg compared it to “the perfect personalized newspaper for every person in the world.” There’s just one problem. Newspapers aren’t really meant to be personalized. In fact, in some ways, that may defeat the point altogether.

The role of “fake news” — much of it disseminated on social media platforms — has, justifiably, received a lot of attention in recent months. This is a plague, and a serious one. But overlooked and intertwined with this is an equally worrying issue with our social news sources. As algorithms mature, growing more complex and pulling from a deeper graph of our past behaviour, we increasingly see only what we want to see.

Beating back fake news

Hard-to-believe and salacious news is nothing new. The National Enquirer has long been blaring racy headlines at us from supermarket checkout lines. The difference is that in the past we knew to take tabloids with a grain of salt. By contrast, when stories about the Pope endorsing Trump or Hillary selling weapons to ISIS circulate on platforms like Facebook, all too often these headlines are taken at face value…especially by those inclined to believe them. For many people, separating reliable sources from bunk isn’t as easy as it once was. The kinds of media we’re exposed to — and the multiplicity of channels we have access to — have, in many ways, outpaced our own media savvy.

Here’s where Facebook and its algorithm may have a role to play. To be clear, this role isn’t as any kind of fact-checker or content gatekeeper. In fact, the idea of Facebook “vetting” what constitutes real news brings with it its own set of problems. But Facebook can use its vast data resources to gauge the authority of the source behind a news update. Articles from news sites that are patently fake or suspicious could be devalued accordingly, ensuring that they don’t show up in as many people’s feeds to begin with, and that they don’t spread if they do.

This concept of assessing “domain authority” is nothing new. It comes from the SEO world, where sites are ranked, often on a 0–100 scale, based on everything from their longevity to how many other pages link back. While this may sound technical, in practice, domain authority is a useful proxy for reliability. The New York Times has a domain authority of 99.79/100, according to one measure. In short, it’s trusted. Ending the Fed, by contrast, the source of some of the most widely shared fake news stories during the election, has a domain authority of 44.90/100, at the time of writing. That disparity should speak for itself.

Factoring this into Facebook’s algorithm would seem neither technically difficult nor especially controversial. The concept is tried and true. It is, in many respects, impartial and requires minimal human meddling. And the immediate benefit is that we’ll see less fake news and more of the real thing, an outcome that should please all users, no matter their political persuasion.
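To make the idea concrete, here is a rough sketch, in Python, of how a ranking step might fold domain authority into a post’s score. The authority figures echo the ones cited above, but the floor, the weighting, and the field names are all invented for illustration; a real feed model weighs vastly more signals than this.

```python
# Hypothetical sketch: down-weight posts from low-authority domains.
# Scores, threshold, and weighting are illustrative assumptions only.

from dataclasses import dataclass

# Illustrative domain-authority scores (0-100), echoing the figures
# cited in the article; a real lookup would come from an SEO data source.
DOMAIN_AUTHORITY = {
    "nytimes.com": 99.79,
    "endingthefed.com": 44.90,
}


@dataclass
class Post:
    url_domain: str
    engagement_score: float  # whatever the network's existing model predicts


def adjusted_score(post: Post, authority_floor: float = 60.0) -> float:
    """Scale the predicted engagement score by source authority.

    Sources below the (hypothetical) authority floor are sharply
    devalued rather than removed outright, so they surface less often
    and spread more slowly.
    """
    authority = DOMAIN_AUTHORITY.get(post.url_domain, 50.0)  # unknown -> roughly neutral
    weight = authority / 100.0
    if authority < authority_floor:
        weight *= 0.25  # heavy, but not total, devaluation
    return post.engagement_score * weight


if __name__ == "__main__":
    for post in [Post("nytimes.com", 1.0), Post("endingthefed.com", 1.0)]:
        print(post.url_domain, round(adjusted_score(post), 3))
```

The point of devaluing rather than deleting is exactly the one made above: Facebook wouldn’t be acting as a gatekeeper deciding what exists, only as a filter deciding how far low-authority sources travel.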

Overcoming our confirmation bias

More dangerous than fake news, however, is all the real news that we don’t see. For many people, Facebook, Twitter and other channels are the primary — if not the only — place they get their news. By design, network algorithms ensure you receive more and more stories and posts that confirm your existing impression of the world and fewer that challenge it. If you think certain political views are unfounded and others are valid, your networks will generally show updates that strengthen that perspective.

To be clear, these algorithms don’t have any sinister objective: the overriding goal is merely to filter through the universe of content we could see and select the updates we’re most likely to engage with. With time, however, we end up seeing an ever thinner slice of the world: the one we’re most comfortable with. This isn’t necessarily an issue if we’re just talking about updates from friends. But when all the news we receive is filtered this way, and that in turn warps how we view and interact with the world, then there is real cause for concern.

Over time, we end up in a “filter bubble,” insulated from insights that could enlighten or challenge our perspective. Traditional newspapers and media outlets always acted as a filter, to some extent. Editorial gatekeepers determined what was fit to print, and this was often influenced by the publisher’s political leanings. The big difference now is that our filter bubble is growing ever tighter and the tribes we belong to ever smaller and more isolated. In the algorithm-derived news era, we’re not seeing nearly the full picture. Global events go unnoticed. Seismic shifts in the political landscape are overlooked or dismissed … until it’s too late. Decisions are made without all the evidence on the table. In the end, it’s hard to see how this benefits anyone.

Could there be a way to offset this trend? Could the same algorithms that have narrowed our content universe also be used to selectively open it up again? This concept is already familiar to users of streaming music services, for example. You can have your algorithm deliver nothing but ’80s pop if you want, or you can use it to “discover” new, different genres. Could social networks factor a greater element of “discovery” into their algorithms, infusing our feeds with new and competing ideas, rather than just holding up a mirror to our own? Alongside content we’re sure to “Like,” could a certain percentage touch on themes that are contrary, surprising, or representative of distinct views?
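As a thought experiment, here is a minimal sketch of what that “discovery” slice might look like in code. The 10 percent figure, the scoring fields, and the selection logic are assumptions for the sake of illustration, not a description of how any real network works.

```python
# Hypothetical sketch: reserve a fraction of feed slots for items the
# model predicts the user is *less* familiar with, instead of filling
# the whole feed with the most comfortable content.

import random


def build_feed(candidates, feed_size=20, discovery_fraction=0.10, seed=None):
    """Return a feed that is mostly familiar content plus a slice of
    deliberately unfamiliar items.

    Each candidate is a dict with (assumed) fields:
      - "predicted_engagement": how likely the user is to like/click it
      - "familiarity": how closely it matches the user's past behaviour
    """
    rng = random.Random(seed)
    n_discovery = max(1, int(feed_size * discovery_fraction))

    by_engagement = sorted(candidates, key=lambda c: c["predicted_engagement"], reverse=True)
    by_unfamiliarity = sorted(candidates, key=lambda c: c["familiarity"])

    feed = by_engagement[: feed_size - n_discovery]
    # Draw the discovery items from the least-familiar pool, excluding
    # anything already selected.
    pool = [c for c in by_unfamiliarity if c not in feed]
    feed += rng.sample(pool, min(n_discovery, len(pool)))
    rng.shuffle(feed)
    return feed


if __name__ == "__main__":
    demo = [
        {"id": i, "predicted_engagement": 1.0 - i * 0.02, "familiarity": 1.0 - i * 0.02}
        for i in range(40)
    ]
    feed = build_feed(demo, feed_size=10, discovery_fraction=0.2, seed=1)
    print([item["id"] for item in feed])
```

The design choice worth noting is that the familiar content still dominates; the discovery slice is a small, tunable fraction, closer to a “surprise me” dial than a wholesale rewrite of the feed.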

This is just one idea and it’s not without hurdles. Won’t users get sick of having their assumptions challenged? Don’t we prefer to stay inside that protective filter bubble? Maybe. But I have faith that the vast majority of people are rational. We prefer real news to the fake kind. We’d rather have the full picture than see the world through a straw. Above all, after the US election, it’s no longer possible to dispute the role that social media plays in influencing — even rewriting — world affairs. That’s a responsibility that ought not be taken lightly. Networks right now are uniquely positioned to give users either a full, factual view of the world or a partial, slanted one. At the very least, we should all be able to choose which of these perspectives we prefer.

Syndicated with permission from Ryan Holmes’ Medium account

Ryan Holmes

CEO @hootsuite, Founder @invoke. I like social, startups, grownups, cycling, am learning to walk on hands and addicted to yoga.
