Cybersecurity experts tackle privacy protection ahead of Control conference

Over the last several years, many Canadian researchers and news outlets have commented on the rise of political propaganda bots on social media platforms. There are reports of Twitter bot usage in Quebec politics as early as 2012, and yet it seems these trends continue unabated without any meaningful regulation. Reasonable calls for a digital campaigning code of conduct seem to fall on deaf ears, leaving policing of these matters to the very social media outlets that allowed this to happen in the first place. What does it mean for democracy when a propaganda bot is indistinguishable from a human account?

The recent revelations about Cambridge Analytica and its alleged Canadian offshoot AggregateIQ have put these modern political tactics under increased scrutiny, but will anything actually change?

In an effort to get ahead of this issue, I sat down with three of the biggest names in cybersecurity — and speakers at the Canadian Cloud Council’s upcoming Control conference — to analyze the state of play and find out what citizens, governments, and political parties can do to take advantage of these new tools while hedging against their potential ethical and societal risks.

For the next 72 hours, BetaKit readers can register for the event for only $95 by clicking on this link.

Robert Brennan Hart: Are political propaganda bots on your day-to-day radar? What cybersecurity or other mechanisms are being leveraged to contain them?

Robert Herjavec, CEO of Herjavec Group: In light of today’s political climate, absolutely yes, this is top of mind. We have many clients that we engage in threat hunting, social media monitoring, and brand analysis to help detect any mis-messaging, inappropriate use of brand, or negative propaganda.

In most cases, propaganda bots rely on access to big data generated through Facebook or Twitter. As people click “like,” they leave a trail of breadcrumbs that bots combine with algorithms to target their messages at scale. Limiting the impact of this type of propaganda comes down to balancing privacy policy, consumer awareness, and organizational and governmental intervention.

From a corporate perspective, we advocate for protection, detection, and containment, just like any other cybersecurity incident – through a balance of people, process and technology.

Michael Hermus, former CTO of the US Department of Homeland Security: It is important to note that this problem is not limited to only “bots,” or autonomous fake accounts posing as real people. A parallel tactic often used in concert with bots is to leverage real people pretending to be different real people, to spread misinformation or promote a certain agenda. These are “trolls” and/or “sock-puppets,” and the Russian organization recently referenced in the Robert Mueller indictment, called the Internet Research Agency, is a prime example of a troll farm used with tremendous effect.

In general, most people I encounter are quite concerned with this, as citizens of democracies that rely on access to accurate information. Unfortunately, the issue does not necessarily rise to the top of the agenda for many organizations, unless it directly impacts their business model. This includes social media firms, ‘traditional’ media organizations (which are now all partially digital), businesses in the digital advertising ecosystem, and political organizations.

Clearly, some government entities are also quite focused on this problem — and not only from the perspective of protecting our democratic institutions. Law enforcement and national security entities have been monitoring the use of these tactics by terrorist and extremist organizations for propaganda and recruiting for quite some time.

In terms of mitigation and containment, the techniques used to deal with terrorists, unfortunately, don’t work as well for misinformation campaigns – it is a much more insidious threat. Violent and extremist content is often easily identifiable through a combination of automated and manual techniques, and posting such content always violates the terms of service for major platforms. Therefore, these posts can be taken down quickly, and related accounts can be suspended. The technology platforms are usually quite interested in cooperating with law enforcement on this front.

On the other hand, many modern disinformation campaigns deal with politically polarizing topics that have a natural (real human) constituency. Trying to separate fact from fiction in this realm is a very tricky grey area, for both social media platforms and the government (at least in free, democratic societies). However, there are ways to identify patterns of behavior and account characteristics that are typical of bots or sock-puppets, and since the accounts are fake or fraudulent in some material way (which is also against most terms of service), this can be used to shut them down. Unfortunately, as these adversaries become more sophisticated, enabled by advances in technology, the detection algorithms and techniques need to evolve as well, creating a digital arms race.
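
Hermus doesn't spell out what those detection techniques look like, but a minimal, purely illustrative sketch of the kind of account-characteristic heuristics he alludes to might resemble the following. The features, thresholds, and weights here are assumptions for illustration, not any platform's actual detection logic:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Account:
    created_at: datetime          # timezone-aware creation timestamp
    posts_per_day: float
    followers: int
    following: int
    has_profile_photo: bool
    duplicate_post_ratio: float   # share of posts that are near-verbatim copies

def bot_likelihood_score(acct: Account) -> float:
    """Crude heuristic score in [0, 1]; higher means more bot-like.

    Real platforms use far richer signals (network structure, timing
    patterns, content models); these features and weights are hypothetical.
    """
    score = 0.0
    age_days = (datetime.now(timezone.utc) - acct.created_at).days
    if age_days < 30:
        score += 0.2   # very new account
    if acct.posts_per_day > 50:
        score += 0.3   # superhuman posting rate
    if acct.followers < 10 and acct.following > 1000:
        score += 0.2   # follows many accounts, followed by almost no one
    if not acct.has_profile_photo:
        score += 0.1
    if acct.duplicate_post_ratio > 0.5:
        score += 0.2   # mostly repeats identical content
    return min(score, 1.0)
```

In practice, a score like this would only flag accounts for review; as Hermus notes, adversaries adapt, so any fixed set of thresholds decays quickly.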

RBH: What regulations should governments look at to make democracy more resilient to these kinds of campaigns? Are there pragmatic and meaningful ways to legislate on this issue?

Richard Rushing, chief information security officer at Motorola: One must remember that data is power. Even bad data can still be powerful. It is the internet: it is an IP address, it is a faceless user, and trying to validate that user will be hit or miss at some level. All you have to do is cast doubt on a system and the bad guys have won. Just like a data breach, it is hard to earn trust back once it’s lost or brought into question.

Photo via Burst.

Lance James, chief scientist at Flashpoint and founder of the CryptoLocker working group: The solution to this problem will require significant research, as content is becoming the new security problem. But one possibility is for Congress to build a verified “journalist” source repository; one that is digitally signed and that sources can opt into, verifying that content comes from a registered source of journalism. It is hard to tell the difference between an opinion and a news source these days, as some consider blogs to be news, and news to be blogs.
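
James's signed source registry is only a proposal, but a minimal sketch of how opt-in registration and verification could work with standard public-key signatures might look like this. The registry structure, function names, and the choice of Ed25519 via the Python cryptography library are assumptions for illustration:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Hypothetical registry mapping a source's identifier to its registered public key.
registry: dict[str, Ed25519PublicKey] = {}

def register_source(source_id: str) -> Ed25519PrivateKey:
    """A source opts in: generate a key pair and record the public key."""
    private_key = Ed25519PrivateKey.generate()
    registry[source_id] = private_key.public_key()
    return private_key  # the source keeps this secret and signs articles with it

def sign_article(private_key: Ed25519PrivateKey, article_text: str) -> bytes:
    """The registered source signs each piece of content it publishes."""
    return private_key.sign(article_text.encode("utf-8"))

def is_from_registered_source(source_id: str, article_text: str, signature: bytes) -> bool:
    """Check that the article was signed by the key registered for this source."""
    public_key = registry.get(source_id)
    if public_key is None:
        return False  # not an opted-in, registered source
    try:
        public_key.verify(signature, article_text.encode("utf-8"))
        return True
    except InvalidSignature:
        return False
```

A reader-facing tool of the kind James describes later could then simply label a link as coming from a registered source or not, without the platform having to judge the content itself.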

To create resiliency, the reality is that one must use transparency to fight deception. The human condition will respond to its biases, and these attacks take advantage of cognitive dissonance and belief-based thinking. Disinformation focuses on corrupting the decision-making process, and the worst thing to do is literally react or have a reflex (it’s called reflexive control for that very reason). An actual step-back, awareness-based model will have to be implemented, which will require analyzing the root cause of the issues. Banning information is not a sound way to solve this problem; instead, we should encourage informed understanding and ways to identify what is true in an overwhelming age of information.

The best way to do this is through cybersecurity as a detection method, discrediting the information immediately through a platform that users can go to and check whether a source is propaganda (digital signatures, etc.), and creating awareness of how psychological propaganda works and what its effects are. Training the masses to recognize the truth will be difficult at first, but it also means the government will need to be transparent about the objectives of the United States so that everyone is aligned and we have a stable source of truth to work with.

Michael Hermus: As indicated earlier, this can be a bit of a grey area that makes it harder to legislate. Any attempt to control or restrict content obviously runs up against important concepts of free speech.

However, there is one overarching principle that can help solve these problems, which is transparency. The right kind of transparency can quite literally shine a light on users, organizations, and motivations, allowing consumers to be better informed as they make judgments about content. For example, legislation has been introduced in the US to require that political advertisements on digital and social platforms disclose information about who paid for the ads. Additional transparency around social media account owners that make it harder to post anonymously, or under fake identities, would also be tremendously helpful.

Robert Herjavec: Data privacy is the key issue here and it will absolutely require government intervention to be resolved.

We can’t rely on private corporations to make privacy — and in turn, reducing propaganda campaigns — a priority. We also can’t rely on consumers; they want it both ways: the efficiency and experience, as well as the privacy and security. As consumers, we rarely read the terms and conditions; we click “accept,” we download the unauthorized app, and yet we still want privacy and security. Without penalties on organizations to reinforce a policy, that policy is ineffective from the start. We have to consider user opt-in, flexibility to control data access, breach notification, and penalties for not abiding by the regulation.

Private organizations need to feel the pain of not adhering to these policies. The first time we will truly see this in effect is with the EU’s GDPR legislation slated to come into effect May 25, 2018.

RBH: Are social media companies doing enough to safeguard the public against propaganda bots?

Robert Herjavec: Clearly not. Facebook didn’t do enough to safeguard their users’ information, and in retrospect, I’m sure they would agree.

With a platform like Facebook, [Mark] Zuckerberg and his executive team have the opportunity to set an example for corporate America, and really the world, when it comes to data privacy standards. They see that now. That being said, very few companies hold privacy as a top priority. It will require government intervention and strict regulation to see true change. Over the next two years, I expect we will see the US adopt a policy similar to the EU’s General Data Protection Regulation. It is something that we absolutely have to do.

Richard Rushing: As long as social media is advertisement-focused, the consumer will always be in danger. A single cookie could cause a user to change their opinion on a subject because, everywhere they go on the web, they are inundated with biased information or misinformation.

Michael Hermus: The big players (Facebook, Twitter, Google, etc.) certainly put significant resources into combating many kinds of fraudulent accounts and prohibited content. They are fairly aggressive in dealing with violent or terrorist content. However, I think it is safe to say that these organizations could do more to combat trolls, sock puppets, bots, and disinformation campaigns in general.

The Russian election interference and the recent Cambridge Analytica scandal have put a lot of pressure on Facebook, and I don’t think it is a coincidence that CEO Mark Zuckerberg recently came out in support of the Honest Ads Act.

Facebook is also planning to require identity verification for a broad range of issue ads, and create a searchable archive of political ads to aid in transparency. These are very good steps in the right direction.

RBH: How can consumers and businesses protect themselves against digitally weaponized psychology?

Lance James: Consumers should inform themselves, check their sources, and do their homework. You may not agree with a popular news source, but at least you know it’s not foreign-operated. Stick with the sources you trust rather than opening yourself up to just anything on the internet. Fact-check, if possible, before forming an opinion. When reading a headline, ask yourself how you feel when you read it: did it make you emote and react? Why? Who is the source that’s doing this? And lastly, assume everything you see on the internet is untrue until it has been researched and confirmed by credible, mainstream sources.

Michael Hermus: Honestly, some fairly basic axioms come to mind. The most obvious is “Don’t believe everything you see on Facebook or Twitter.” While social media, and digital content in general, have created an environment where everyone has a platform, not all sources should be treated equally. Various “mainstream” media outlets unquestionably have a degree of political bias, but their “hard news” components (as opposed to opinion or commentary) have a pretty good track record of being based on facts. Elevating Facebook posts linking to unknown websites to the same level as actual news media is inherently unhealthy for democracy.

A strong corollary to this is “Come out of your echo chamber every so often.” At every point on the political spectrum, digital media has facilitated an increasing isolation of ideology, such that people socialize with, and consume content from, people who share their own viewpoints. At the same time, they can literally block out those who have different opinions. This environment drives increasing polarization as individuals continually reinforce preconceived beliefs and limit exposure to any contrary evidence or opinion. This is a situation ready-made for exploitation by disinformation campaigns with nefarious agendas.

For the next 72 hours, BetaKit readers can register for the event for only $95 by clicking on this link.

Feature photo via Unsplash.

Robert Brennan Hart

Robert Brennan Hart is the Founder and CEO of the Canadian Cloud Council and Creator and Executive Producer of Control.
