We reached the end of an era several weeks ago when news of the Facebook/Cambridge Analytica scandal broke. Before that, most people had been willing to turn over their data to companies in exchange for access to free products and services, and most of us didn’t think twice about it. We flocked to platforms like Facebook even though that company’s primary business model has always been to harvest our data and use it to target us with products and services.
Unfortunately, as end users and consumers, most of us naively assumed that such companies could be trusted to do the right thing. We skimmed over privacy agreements, and often skipped reading them entirely. Worse yet, we continued to allow these companies to access our data, despite the growing evidence that many of them might be misusing it. In short, we trusted them when we shouldn’t have.
Trust is a critical issue for the technology industry. It allows tech companies to access the data that they need to fuel their machine learning models and ultimately power their artificial intelligence (AI) solutions. Without the trust of customers, companies will lose access to essential customer data. Meanwhile, companies that invest in building trust will gain a huge advantage that compounds over time as trust grows and AI model performance improves.
Why now?
The Facebook/Cambridge Analytica scandal, in which up to 87 million user profiles are likely to have been harvested without permission for political research and targeting, is only the latest in a string of revelations of data misuse and loss. For the past few years there has been a near-constant stream of announcements of massive data breaches. Just last week, the Indian prime minister’s political party was accused of sharing the data of millions of app users without permission.
In the old paradigm that led to this situation, privacy and security were secondary considerations for technology businesses. Companies would collect and store as much data as possible, even if they didn’t have a specific use for it. Rather than proactively educating users, most companies would issue lengthy privacy policies, and safely assume that they would go unread.
At best, companies operating in this way take a reactive, compliance-focused approach to privacy and security. The result is too little focus on how user data is actually being used and monetized, and too little consideration for the rights of the individuals and organizations it represents.
Trust and AI
Every company with ambitions in AI has an opportunity to make building trust part of its core mission and organizational DNA. The good news is that through techniques such as differential privacy, it’s possible to build trust by quantifying the level of privacy you’re providing to your users. That, in turn, encourages customers to share more data, because they are confident that it will remain private.
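To make that concrete, here is a minimal sketch of the Laplace mechanism, the classic building block of differential privacy. It is an illustration only, not any particular company’s implementation: noise is calibrated to a query’s sensitivity and a privacy budget epsilon, so you can state precisely how much any one individual’s data can influence a published result.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return a differentially private answer to a numeric query.

    Adds Laplace noise scaled to sensitivity / epsilon, so a smaller
    epsilon (a stricter privacy budget) means a noisier answer.
    """
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: release the count of users who opted in to a feature.
# A counting query changes by at most 1 when one user is added or
# removed, so its sensitivity is 1.
opted_in = 1_234
private_count = laplace_mechanism(opted_in, sensitivity=1, epsilon=0.5)
print(f"Reported (noisy) count: {private_count:.0f}")
```

The point for trust is that epsilon is a number you can commit to and communicate to users, rather than a vague promise that their data has been “anonymized.”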
AI also presents new challenges for trust. Since much of the data that companies use to train AI is generated by humans who are often themselves biased along racial, gender, or other lines, there’s a risk that this bias becomes embedded in an AI model. To earn user trust, companies must hold their models to the highest standards by proactively anticipating, discovering, and correcting error and bias. And as AI techniques become increasingly complicated, companies should focus on explaining their models’ decisions rather than simply assuming that users are happy to trust a black box.
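One simple, proactive check, shown here only as a sketch with hypothetical column names, is to compare a model’s error rate across demographic groups in held-out evaluation data before shipping it; a large gap between groups is a signal that the model may have absorbed bias from its training data.

```python
import pandas as pd

def error_rate_by_group(df, group_col, label_col, pred_col):
    """Compare a model's error rate across subgroups of the evaluation set."""
    errors = (df[label_col] != df[pred_col]).astype(int)
    return errors.groupby(df[group_col]).mean()

# Hypothetical evaluation data: true labels and model predictions,
# tagged with a demographic attribute such as self-reported gender.
eval_df = pd.DataFrame({
    "gender": ["f", "f", "m", "m", "f", "m"],
    "label":  [1, 0, 1, 0, 1, 1],
    "pred":   [1, 0, 0, 0, 0, 1],
})

print(error_rate_by_group(eval_df, "gender", "label", "pred"))
```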
A new age of trust
The best companies are already starting to build trust into their strategy. In fact, big brands like Apple and Google, as well as emerging tech leaders such as Integrate.ai and Bluecore, are already making trust a central part of what they do. They’re developing comprehensive programs for building trust proactively, and they’re making sure that privacy and security are first-class citizens in the product development process.
As part of this approach, they are adopting techniques such as differential privacy and narrowing the criteria for collecting, using, and sharing data to make sure that these activities are truly necessary to create value for their users. They are also embracing transparency with users by providing explicit and easy-to-understand information about how data is used and by whom, offering mechanisms for opt-out and data export, and emphasizing stewardship of user data rather than claiming corporate ownership of it.
Finally, the best companies are leading discussions around trust with their customers, competitors, partners, regulators, and the public at large. After all, if you’re not building shared marketplace norms for trust, particularly around how data is managed and insights are derived and used, then you’re at risk of somebody else in your industry doing it first.
The current Facebook scandal likely won’t be the last. There are huge challenges to overcome if we are to make genuine progress on issues of privacy, fairness, and bias. But the companies that do figure it out will be able to protect themselves from the sort of backlash and value destruction currently playing out in the media. More importantly, they’ll be able to create disproportionate value for both their customers and themselves.
We will be publishing more on the topic of trust over the coming months. In the meantime, you can read more about differential privacy in this post and listen to how it is being used by one of our Georgian Partners’ portfolio companies, Bluecore, in this podcast.
This article was syndicated with permission from Georgian Partners’ blog.