Last month, a group of Canadian tech leaders penned an open letter to the Toronto Police Services Board (TPSB) calling for a series of changes to its draft artificial intelligence (AI) policy, which is designed to govern how the force uses such tools.
The December 15 letter comes in response to the TPSB’s call for public feedback on the draft policy. The letter’s signatories include 21 data science, machine learning (ML), and AI experts and entrepreneurs from companies like Shopify, ApplyBoard, PointClickCare, CIBC, and TD Bank, all of whom are members of an ML ethics discussion group created by Toronto startup Aggregate Intellect.
The use of AI technologies in policing is a controversial topic.
The letter’s recommendations include revising the proposed AI policy to better account for compromised data, bolstering the approval and review process of new technologies, and adjusting its risk categorization of certain tools.
The letter’s 21 signatories include: Willie Costello, data scientist at Shopify; Somaieh Nikpoor, AI strategy lead for the Government of Canada; Amir Feizpour, co-founder and CEO of Aggregate Intellect; Anh Dao, co-founder of Healthtek; Daria Aza, data analyst at Manulife; Indrani Bhattacharya, data scientist at StatusNeo; Sina Dibaji, data scientist at ApplyBoard; Soomin Aga Lee, manager of data and AI governance at PointClickCare; Frankline Ononiwu, data scientist at TD Bank; and Suhas Pai, CTO of Bedrock AI; among others (since publishing this story, the letter has been updated with a disclaimer noting “the individuals’ opinions do not reflect the views of their employers”).
Feizpour told BetaKit he decided to co-write the letter because the issues at play were particularly important to the Aggregate Intellect team, whose members come from historically marginalized backgrounds and have, he noted, experienced bias ranging from micro-aggressions to “uninviting environments for people who look ‘different’, all the way to blatant racial aggressions.”
“Being a tech person, I understand from the very basics of how AI and data works that machines are going to simply replicate our societal biases in the best-case scenario and amplify it in most cases, unless we actively design AI systems to counteract that,” said Feizpour. “So, it is important to me on a personal level to make sure AI is leveraged with enough contextual knowledge and in the most responsible way possible.”
Police departments around the world are investing in AI tools like crime prediction and facial recognition software, but the use of AI in policing has become a controversial topic, given that it has been found to violate privacy laws, perpetuate existing biases, and lead to wrongful arrests.
The TPSB said it is developing the AI policy “to create transparency about the [TPS’] use of AI technology, and to ensure that AI technologies are used in a manner that is fair, equitable, and does not breach the privacy or other rights of members of the public.”
“There are so many ways that using ‘AI in policing’ can go wrong that as soon as we saw the call we were simultaneously alarmed (because of everything that was going on in the US last couple of years) and pleased (because at least TPSB is asking the experts’ opinion),” said Feizpour.
As part of the policy, TPSB plans to maintain a public list of all high, medium, and low-risk AI tech in use, roll out a method of collecting public concerns about specific technologies, and set up a continuous review schedule of all technologies not deemed to be of minimal risk. The TPSB claims that this will be “the first Policy of its kind among Canadian Police Boards or Commissions.”
“We applaud the TPSB’s call for public feedback … yet there are also several aspects of the current Policy that we believe must be improved.”
“We applaud the TPSB’s call for public feedback on the draft Policy, and its explicit commitments to transparency, fairness, equity, privacy, and public consultation,” states the letter. “The kinds of assessments and reporting that this Policy sets out provide a powerful framework for ensuring the appropriate and trustworthy implementation of AI technologies by the Toronto Police. Yet there are also several aspects of the current Policy that we believe must be improved.”
For instance, the letter refers to the current descriptions of risk categories as “unsatisfactory.”
As it stands, TPSB’s draft policy groups AI technologies into five risk-based categories: extreme risk, high risk, medium risk, low risk, and minimal risk. Extreme-risk technologies include facial recognition software that derives data from mass surveillance, while the high-risk category lists an analytics system that recommends “where units should be deployed to maximize crime suppression.”
According to the current policy, extreme-risk technologies will not be sanctioned for use; high and medium-risk technologies will be subject to a set of evaluations and consultations; and low and minimal-risk tech will face lighter reporting requirements.
The letter from Canadian AI experts calls for the TPSB to classify any AI tech that uses biased or poor-quality data as extreme risk, rather than high risk, because such data is “inherently compromised.” It also asks the TPSB to make explicit that data collected via social media should not be considered a viable source, and that data acquired through data brokers “cannot be classified as below high risk.”
The letter’s recommendations follow American facial recognition startup Clearview AI’s 2020 exit from Canada, after a joint investigation by Canadian privacy authorities determined that the company’s software, and its use by the RCMP, violated privacy laws by collecting photos of Canadians without their knowledge or consent via social media platforms like Facebook and LinkedIn.
Last month, a CBC report found that the Toronto Police Service, which initially denied then later admitted to using Clearview AI’s tech, deployed it in 84 investigations. The TPSB is a civilian body that oversees the Toronto Police Service.
The tech industry letter also requests that any tech that has created significant concern in the community, or that has been rejected by Toronto citizens, be classified as extreme risk.
“The most fundamental issue that I see with the proposal as-is is lack of provision for more continuous and in-depth interactions with the public and importantly experts.”
-Amir Feizpour, Aggregate Intellect
In addition to these recommendations, the letter identifies room for improvement in the formal review and approval process outlined, highlighting that, as it stands, there are no checks to ensure the initial risk assessment is conducted by qualified experts and relevant stakeholders, including independent and community representation.
The letter also calls for continuous monitoring, rather than limiting monitoring to a year after full deployment, and asks that the results of this reporting be made public and include details about the ownership structure of vendors and any changes in their partnerships, governance, or data management policies.
“The most fundamental issue that I see with the proposal as-is is lack of provision for more continuous and in-depth interactions with the public and importantly experts on how AI would be used in surveillance and judiciary types of activities,” said Feizpour. “These are so sensitive and can really hurt public’s trust in the system if handled poorly that I think an ongoing conversation and extreme transparency is the only way to do this correctly.”
Although such a process would create logistical hurdles, Feizpour believes that “given what’s at stake,” this level of transparency and feedback is warranted. “If done poorly this is guaranteed to hurt some demographics more than others, and that’s the last thing we want as a society,” he said.
The TPSB’s public consultation on the proposed AI policy concluded on December 15. The TPSB plans to use the feedback it gathered during this process to further revise its policy and bring its final draft before the board for consideration and approval at its February 2022 meeting.