Armilla AI has launched with $1.5 million in pre-seed funding and the aim to ferret out and correct biased AI.
Investors in Armilla included the Spearhead Fund, Two Small Fish Ventures, and C2 Ventures. Other investors included Yoshua Bengio, Apstat partners Nicolas Chapados and Jean-François Gagné, and a few undisclosed angel investors. The round closed in the early summer.
The fresh funds are helping with the launch of Armilla AI, and have already gone toward hiring the startup’s first round of engineers.
Armilla provides customers with automated validation tools to test machine learning models for robustness, accuracy, fairness, data drift, bias, and more. The company describes its platform as the first all-in-one quality assurance platform for machine learning.
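To make one of those test categories concrete, the sketch below shows a minimal data-drift check: comparing a feature's distribution at training time against live traffic using a two-sample Kolmogorov-Smirnov statistic. This is an illustrative example, not Armilla's actual platform; the data, thresholds, and function names are all hypothetical.

```python
# Illustrative data-drift check (hypothetical, not Armilla's code):
# compare a feature's training-time distribution against live data
# using the two-sample Kolmogorov-Smirnov statistic, stdlib only.

import bisect
import random

def ks_statistic(a, b):
    """Largest gap between the empirical CDFs of two samples."""
    a, b = sorted(a), sorted(b)

    def cdf(sample, x):
        # Fraction of the (sorted) sample that is <= x.
        return bisect.bisect_right(sample, x) / len(sample)

    return max(abs(cdf(a, x) - cdf(b, x)) for x in a + b)

random.seed(1)
# Synthetic "income" feature: training data vs. two live batches.
train = [random.gauss(50_000, 10_000) for _ in range(2_000)]
live_ok = [random.gauss(50_000, 10_000) for _ in range(2_000)]       # same population
live_drift = [random.gauss(65_000, 10_000) for _ in range(2_000)]    # shifted population

print(f"no drift: KS = {ks_statistic(train, live_ok):.3f}")
print(f"drifted:  KS = {ks_statistic(train, live_drift):.3f}")
```

A QA pipeline would alert when the statistic crosses a threshold, signaling that the model is now scoring a population it was not trained on.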
“AI models are making more critical decisions every day, which means they require new oversight protocols that can ensure they are accurate, fair, and curb potential abuse,” said Yoshua Bengio, the ACM A.M Turing Award recipient, founder of the Québec AI institute Mila, and an Armilla investor.
“This growing need for independent validation requires the same attention and investment used to build models themselves. This is how to responsibly build AI,” Bengio added.
The newly minted startup is working with a few undisclosed partners and “solving real-life problems for them,” according to Armilla’s CPO, Karthik Ramakrishnan.
“The platform runs a rigorous series of tests across a gamut of scenarios,” Ramakrishnan said. For instance, financial institutions that do not want to discriminate against immigrants will typically remove immigration status from the data set.
But according to Ramakrishnan, that’s not enough: a high correlation exists between immigration status and living in multi-tenant units. “Why?” he asked rhetorically. “Because most immigrants tend to live in shared accommodations in the first few years. So if you did not remove the multi-tenancy data as well your model becomes implicitly biased against immigrants now.”
These are the kinds of checks that need to be done before a company makes their AI models public, Ramakrishnan maintained. “These are the challenges we see in real life. These are the kinds of things we do not want to see reflected and create those systemic bias issues.”
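The proxy-bias pitfall Ramakrishnan describes can be sketched in a few lines: even after the protected attribute is dropped, a strongly correlated remaining feature lets a model reconstruct it. Everything below is synthetic and illustrative; the probabilities are invented to mirror his example, not drawn from any real data set or from Armilla's tooling.

```python
# Hypothetical sketch of the proxy-bias check described in the article:
# dropping a protected attribute (immigration status) is not enough if a
# remaining feature (multi-tenancy) is highly correlated with it.
# All data and probabilities here are invented for illustration.

import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)

# Synthetic applicants: immigrants mostly live in multi-tenant units.
is_immigrant = [random.random() < 0.3 for _ in range(10_000)]
multi_tenant = [
    (random.random() < 0.85) if imm else (random.random() < 0.15)
    for imm in is_immigrant
]

r = pearson([int(i) for i in is_immigrant], [int(m) for m in multi_tenant])
print(f"correlation(immigrant, multi-tenant) = {r:.2f}")
# A high r means multi-tenancy acts as a proxy: a model trained without
# the immigration field can still discriminate through this feature.
```

A QA check of this kind would flag any remaining feature whose correlation with a dropped protected attribute exceeds some threshold, so it can be removed or mitigated before the model ships.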
The CPO noted that enterprise companies won’t just put something out “willy-nilly,” and that they have risk and compliance teams who want to ensure the reputational and legal risks are understood.
“We found that there was no system doing this in a systematic way,” Ramakrishnan said. “In our experience we found this was broken. Governments are waking up to this fact as well.”
For instance, the federal Office of the Privacy Commissioner of Canada has been looking at policies around AI, and expressed its concerns over its use: “We are paying specific attention to AI systems given their rapid adoption for processing and analysing large amounts of personal information. Their use for making predictions and decisions affecting individuals may introduce privacy risks as well as unlawful bias and discrimination.”
The European Union is further along in its legislation: it has introduced a proposed legal framework for AI, the Artificial Intelligence Act, which provides for fines of up to six percent of a company’s global annual revenue if a firm is found not to have been rigorous in its use of AI.
Canada is also one of the founding members of The Global Partnership on Artificial Intelligence (GPAI). GPAI aims to share multidisciplinary research and identify key issues among AI practitioners, with the objective of, among other things, promoting trust in and the adoption of trustworthy AI.
“All of this is saying that most of the regulators in government are waking up to the fact that AI is already embedded in our society,” said Ramakrishnan. “And it’s one of those technologies that’s not obvious. It’s not a physical object sitting in front of you. There are decisions being made for you, about you, in the background. It’s going to be seamless. That’s concerning if we don’t know how these platforms or these systems behave, and if they’re not being designed for safety.”
CEO Dan Adamson and CTO Rahm Hafiz formed Armilla along with Ramakrishnan. All three previously worked at AI companies. Adamson was the founder and CEO of OutsideIQ, a company that developed proprietary cognitive computing search and analysis technology employing both natural language processing and machine learning to think and act like a researcher. New York-based Exiger, a company that helps businesses monitor compliance with regulations such as anti-money laundering rules, acquired OutsideIQ in 2017.
Hafiz worked as Exiger’s director and head of cognitive technology, while Ramakrishnan was the VP, head of industry solutions and advisory at Element AI.
Toronto-based Armilla currently has nine employees, and plans to generate revenue through a software-as-a-service (SaaS) model. Launched in 2020, the company was in stealth mode until October 21.
Ramakrishnan said the company faces the normal startup challenges of scaling appropriately and generating revenue. He noted that with new legislation on AI use forthcoming, Armilla doesn’t know how the landscape might change.
“Clients are grappling with this,” he said. “So we need to move very quickly so we’re the first QA [quality assurance] for ML platform. And when you’re the first, it’s uncharted waters.”
Feature image source Unsplash