How socialbots are the masters of manipulation

Make no mistake about it: you might be acting like a bot. Responding to the number of likes, retweets, upvotes, and comments can be an impulsive act. The bigger the numbers, the greater the likelihood that content will be shared unquestioningly or unwittingly. It’s also a collective behaviour that’s expanding the ecosystem of fake.

Recent research from Indiana University’s Filippo Menczer and Philip Howard of the Oxford Internet Institute highlights mounting evidence that bots are amplifying and manipulating these social cues. The concerns about the impact of socialbots are many, yet the bigger concern could be our own behaviours.

Socialbots are no longer spammy nuisances; they’re more like pathogens. Fake news and the spread of misinformation are not new, and neither are bots. Sadly, it’s taken the 2016 US election for these interrelated issues to become newsworthy.

In a political context, look no further than Syria (2011), Turkey (2013), Mexico (2014), or Spain (2015) as examples of bots dampening dissent and manipulating political discourse.

Unmasking the socialbots masquerading as real profiles is a complex problem because of their diversity and rapid evolution. Gregory Maus, a computer science grad student at Indiana University, is creating a taxonomy of socialbots and likens them to a “new species.” For instance, profiles whose Twitter bios look and read like a real person’s, yet tweet at inhuman volumes (50-plus per hour), are considered cyborg bots. Acting in concert, these bots can influence the flow of any conversation.
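To make that rate heuristic concrete, here’s a minimal sketch in Python (an illustration only, not Mentionmapp’s or the researchers’ actual tooling) that flags a profile tweeting above the 50-per-hour bar; the `timestamps` input and the `looks_like_cyborg` helper are assumptions made for the example.

```python
from datetime import datetime, timedelta

# Illustrative threshold from the heuristic above: sustained tweeting
# beyond ~50 tweets per hour is implausible for an unassisted human.
CYBORG_THRESHOLD = 50.0

def tweets_per_hour(timestamps):
    """Average tweet rate over the window spanned by the observed tweets."""
    if len(timestamps) < 2:
        return 0.0
    hours = (max(timestamps) - min(timestamps)).total_seconds() / 3600
    if hours == 0:
        return float("inf")  # a burst at a single instant: clearly automated
    return len(timestamps) / hours

def looks_like_cyborg(timestamps, threshold=CYBORG_THRESHOLD):
    """Flag a profile whose observed rate clears the inhuman-volume bar."""
    return tweets_per_hour(timestamps) > threshold

# Example: 120 tweets, one per minute (roughly 60 per hour), trips the flag.
burst = [datetime(2017, 5, 1, 9, 0) + timedelta(minutes=i) for i in range(120)]
print(looks_like_cyborg(burst))  # True
```

A single rate check is only one signal, of course; it’s precisely because real detection has to weigh many such features that a taxonomy of bot types matters.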

There are also profiles with the appearance of the “nice and not-so-noisy neighbour.” On their own, you’d barely notice them in passing, but collectively these profiles amplify a wide variety of misinformation and fake content.

In combination, these different socialbot “personalities” use a variety of news feed mechanics (volume of tweets and mixes of content type) to influence trending topics, push people out of conversations, or drown out other points of view. Believing that technological solutions will fix these problems is like thinking you’ll win whack-a-mole at the county carnival midway.

With no simple way to check and verify the identities behind every retweet or like, it’s easier to reshare content without considering its source or origins. Even someone who chooses to check the social participants can see only a fraction of the engagement: just the first 25 profiles connected to retweets and likes are visible. Essentially, it’s impossible to see how conversations are being manipulated and where bots outnumber real people.
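For readers who want to look behind a tweet themselves, a sketch like the following (Python, using the Tweepy library against Twitter’s v1.1 API; the credential placeholders and `TWEET_ID` are illustrative assumptions) pulls up to 100 recent retweeters of a single tweet and prints the raw signals a human, or a classifier, might inspect:

```python
import tweepy

# Authenticate with standard v1.1 credentials. The four placeholder
# strings are assumptions; substitute your own application's keys.
auth = tweepy.OAuth1UserHandler(
    "API_KEY", "API_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET"
)
api = tweepy.API(auth)

TWEET_ID = 123456789  # hypothetical tweet under investigation

# GET statuses/retweets/:id returns up to 100 recent retweets,
# already more profiles than the web interface surfaces.
for status in api.get_retweets(TWEET_ID, count=100):
    user = status.user
    print(user.screen_name, user.created_at, user.statuses_count)
```

Even then, likes and older retweets stay out of reach, which underlines the point: verification at this scale isn’t practical for ordinary users.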

By looking behind the numbers and analyzing the profiles connected to the retweets, likes, and replies, we are researching today’s human versus non-human interactions on Twitter. One case study reveals socialbots at work in the lead-up to the May 9 British Columbia provincial election. This wasn’t a presidential-scale deluge of socialbots dominating a conversation, but it was a surprise nonetheless.

By documenting 13 separate tweets connected with one (now non-existent) profile and the hashtag #bcpoli, we uncovered a pattern of socialbot activity. In one example, bots were responsible for 43 of 59 retweets (roughly 73 per cent). Seeing this behaviour play out in a localized conversation serves as a canary-in-the-coal-mine moment. No matter how big or small the jurisdiction, event, or conversation, there’s no immunity from potential socialbot infections.

Outing celebrities like Katy Perry for having fake followers, or publishing headlines suggesting President Trump’s profile commands a bot army capable of unleashing a propaganda war, brings nothing meaningful to this discussion. It’s important to discern what behaviours any given profile can control. Who follows you, retweets you, and likes your 140 characters is essentially beyond your control.

On the other hand, buying followers, using automated retweet services, or enabling automatic follow-back features are all behaviours contributing to this problem. It’s a significant global issue to have state actors and their political proxies unleashing socialbots to amplify political misinformation. Yet the people and businesses choosing to use bots to manipulate their audiences and artificially inflate key social metrics are equally culpable in proliferating the ecosystem of fake.

John Gray

John Gray is the co-founder and CEO of Mentionmapp. As a writer, John cares about keeping the humanity in our stories and conversations about technology. He has a B.Ap.Sc. in Communications and a B.A. in English, both from Simon Fraser University.
