Mila, UNESCO warn against use of AI in “harmful contexts” in joint book on AI governance

Yoshua Bengio, Mila
Joint book addresses lack of accountability and transparency in AI models and training processes.

Though 2022 was marked by mass layoffs in tech, it also saw the emergence of generative artificial intelligence (AI), thrust into mass adoption by products like OpenAI’s ChatGPT.

Since the new year started, startups and large corporations alike, including Microsoft and Google, have been competing to release products that make use of generative AI.


A major point of contention in AI, however, is the lack of accountability and transparency in its data models and training processes. There have been multiple reports of both early and recent iterations of ChatGPT producing inappropriate text and reproducing sexist and racist content.

AI-based platforms tend to reflect human biases as well as historical or social inequities. The concern, however, lies in the fact that AI bias can cause real-life harm, like wrongful arrests or denied access to certain services.

In an effort to set the standards for ethical and inclusive use of the technology, Montréal-based AI institute Mila partnered with the United Nations Educational, Scientific, and Cultural Organization (UNESCO) to publish a joint book on AI governance.

With this book, entitled Missing Links in AI Governance, Mila and UNESCO said they want to provide “fruitful perspectives” to help shape the development of AI so that “no one is left behind.”

“This means working towards AI systems that are human-centered, inclusive, ethical, sustainable, as well as upholding human rights and the rule of law,” reads a portion of the book’s introduction.

According to a joint statement by UNESCO and Mila, the book discusses the influence of AI on Indigenous and LGBTQ+ communities, the necessary inclusion of Southern countries in global governance, and the use of AI to support innovation for socially beneficial purposes. It also explores the use of AI in potentially harmful contexts like autonomous weapons or “manipulation of digital content for social destabilization.”


The book comprises 18 articles on AI governance written by academics, civil society representatives, innovators, and policymakers. Canadian AI leader and Mila founder Yoshua Bengio co-authored an article called “Innovation ecosystems for socially beneficial AI” alongside fellow Mila researchers Allison Cohen, Benjamin Prud’homme, Amanda Leal De Lima Alves, and Noah Oder.

Founded in 1993, Mila claims to have a community of over 1,000 researchers, with multiple members conducting research at the intersection of AI and sustainability, health, fairness, ethics, and governance.

Mila has led several initiatives for responsible AI in the past, such as co-leading the Inclusive Dialogue on the Ethics of AI with the University of Montréal’s Algora Lab, as part of the process leading to the adoption of UNESCO’s Recommendation on the Ethics of AI in 2021.

This joint book with UNESCO was partially funded by the Ministère des Relations internationales et de la Francophonie du Québec (Québec’s Ministry of International Relations and La Francophonie) and the Fonds de recherche du Québec (Québec’s research fund), according to Mila.

Featured image courtesy Mila.

Charlize Alcaraz

Charlize Alcaraz is a staff writer for BetaKit.
