April #FutureMakers on digital ethics shows need for better data and fewer silos

FutureMakers Talks

In today's climate of data leaks, privacy breaches, and losing $2,000 because your McDonald's app got hacked, there's no better time to discuss the implications – and impact – of digital ethics. How data is created and used made for a perfect topic of discussion at the April edition of RBC FutureMakers Talks, led by a cadre of experts in AI, data science, and much more.

The first speaker of the night was Helen Kontozopoulos, the co-founder of ODAIA.AI and an adjunct professor of computer science at the University of Toronto. Kontozopoulos recounted her time at UofT, sharing a number of troubling stories involving “passing the buck” when it comes to responsibility in machine learning and AI development.

“Ethics is like that elephant in the room that nobody wants to talk about,” she said, noting that many of her students just wanted to start building, figuring they would worry about privacy later. The problem, Kontozopoulos noted, is that no other department up the chain of command wants to take responsibility for ensuring that ethics are considered. “As long as we’re not lying, we’re okay,” she said, imitating a modern tech company’s marketing department.

Kontozopoulos attributed this pass-the-buck mentality to the siloed nature of departments within tech companies, with no single department responsible for decisions as a whole. This disconnect, she said, has led to ethical breaches and massive AI-related errors like racist chatbots.

Kontozopoulos also noted that a similar disconnect exists in the public sphere. The federal government, for example, has been advancing security and privacy initiatives, such as the pan-Canadian AI strategy, but the general public is largely unaware of their intentions or effects. “Tech has disrupted and transformed norms of human behaviour – our interactions and expectations have changed, and the lines between what is private and what is public are blurred now,” Kontozopoulos explained.

The next speaker was Deborah Raji, a student researcher and Google AI mentee, who discussed the need for inclusive AI. Noting that machine learning and AI models defaulting to male Caucasian faces is a massive problem in a multicultural world, Raji gave the room a well-rounded explanation of why this bias occurs and what needs to be done to fix it.

The problem, according to Raji, is that widespread use of commercial AI systems necessitates careful attention to bias and abuse. Some examples of this include gender bias in Google speech recognition, facial recognition inaccuracies for non-white people, voice assistants unable to understand strong accents, and so on.

Incorporating inclusion in model design is the solution. That means bigger datasets with a broader range of data, or “more pixels” to fill the picture, as Raji put it. To that end, Raji noted that some of the larger tech companies are working to “balance the scales a bit,” including IBM’s Diversity in Faces initiative (which created a bigger dataset of varied faces) and Google’s Inclusive Images dataset.

Raji’s own initiative, Project Outreach, looks to dismantle these biases from the ground up, investing in lower-income and underrepresented communities and teaching them about computer science and representation. “I’m very much excited and interested in the idea of reaching out to underrepresented groups, because I think a lot of the changes we see in model administration and making these decisions happens when you bring more diverse voices to the table.”

Masoud Hashemi, a senior data scientist at RBC, followed to speak about ART, the “Accountability, Responsibility, Transparency” model of digital ethics. The problem with machine learning, Hashemi explained, is that building these systems differs fundamentally from traditional software development: because machine learning models evolve their knowledge over time depending on specific datasets, “performance evaluation alone is not sufficient.”

Hashemi went on to discuss how machine learning models require global, not just local, interpretability. The example he used was Google Translate defaulting to the male form when translating “doctor” from gender-neutral languages – not only does this demonstrate a bias towards men, but also the need for machine learning to incorporate more diverse datasets.

However, Hashemi ended on a positive note, adding that there are reasons to be “cautiously optimistic” when it comes to fairness and machine learning, including the opportunity to reconnect with the moral foundations of fairness. It can also be a great excuse to prioritize collaboration across teams. “People from all these backgrounds – social sciences, ethics, law – everyone should work together to find solutions for different applications,” he said.

To close the evening, Sarah Sun, chief data strategist at Goldspot Discoveries Inc., spoke to the room about data privacy and ethics in mining – the kind with rocks, not cryptocurrency. While it may at first seem like an odd topic, Sun was quick to remind the audience that paper maps are some of the earliest examples of recorded data, and the data gathered from the mining industry is just as susceptible to ethical threats as any other industry.

Sun used a history lesson in Bre-X to explain her point. The biggest breach of ethics the mining industry has seen thus far, the scandal led to sweeping changes in industry standards (don’t know the history? There’s a movie!). This shift towards “geo-ethics,” as Sun called it, included professional consequences (people can’t claim to be geologists on LinkedIn without official designations, for example, and can quickly lose that designation if caught using falsified data), stronger reporting standards, and quality control checks.

Yet geo-ethics, Sun noted, is struggling in the face of new technology in the mining industry. Challenges include mistrust of new tech (such as the cloud), misunderstanding of machine learning, and an evolving black market for data, particularly stolen maps from other mining firms.

Sun argued that the solutions lie in stronger education, more research into emerging technologies, and data sharing – the same themes of collaboration and education raised by the previous speakers. The lesson? No matter the industry, ensuring that ethical implications are considered from the beginning is the path to a more inclusive, fair society.

Tickets for the next RBC #FutureMakers event on May 15th range from $7-15, but BetaKit readers can get in FREE using the promo code “BetaKitPROMO”.

BetaKit is a FutureMakers media partner.

Caitlin Hotchkiss

Content coordinator, social media smartypants, wordsmith, Human Workflow™. Exists primarily on coffee, cat pictures, German dance metal, and pro wrestling. I will fight for your right to the Oxford comma.
