Maluuba team explains why language is the key to making machines intelligent

Montreal’s status as a global artificial intelligence hub may have been big news for some over the past few months, but one Waterloo-based company was ahead of the rush in getting their Montreal lab up and running in early 2016. Now, with the biggest deep learning lab for language in la Belle Ville (and growing!), Maluuba is delving deep into building AI that can not only understand human language but also communicate effectively, with dozens of applications and opportunities for further research on the horizon.

BetaKit sat down with research scientist Adam Trischler and product manager Rahul Mehrotra to discuss Maluuba's work and growth, and the possible implications of the AI revolution for the world.

How did Maluuba get started, and why the focus on language, versus images or other kinds of machine learning?

Rahul Mehrotra: We actually started about four and a half years ago, around the time Siri was launched by Apple. An equivalent voice assistant didn’t exist on Android yet, so the initial product launch was a Maluuba app that competed with Siri on Android.

With the rise of machine learning and deep learning over the past three years, we transitioned from a voice assistant to solving a fundamental problem: teaching machines to understand human beings. We learned that you can hardcode the way people communicate, but there was little fundamental research being done on how to teach machines to reason and to think. About 18 months ago we hired Adam, one of our first deep learning researchers, and he started what this lab has become today.

And that’s why you focused on language? Because it’s the best way to teach machines to think?

RM: We believe that to really solve intelligence, to build a machine that’s truly intelligent, language has to be at the heart of it.

So would you say that these machines are truly intelligent? I’ve heard from others that the AI we currently have doesn’t even come close to what we would consider true intelligence.

Adam Trischler: Right now so-called artificial intelligence systems are all very specialized. They’re good at one thing that they’re trained on, like recognizing images. You can train a neural network on millions and millions of images over many hours of computer cycles, and it will be able to recognize different breeds of dogs, different types of birds. It can achieve super-human performance on that narrow task, but that’s the only thing it can do.

With existing techniques, if you try to train that network to do something else, it would just forget all of its image recognition capability. In that sense, humans are way beyond what we have [in AI] right now because we are general intelligence entities. We learn new skills and compose skills together without erasing what we’ve already learned.

So has anyone in the AI community managed to combine skills like that?

AT: Not yet, and everyone in the community recognizes this limitation. The problem has its own term: it’s called “catastrophic forgetting”. People are definitely trying to tackle that very problem so that neural networks can extend themselves and add new skills.
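For readers who want a concrete picture of what catastrophic forgetting looks like, here is a minimal PyTorch sketch (an illustration for this article, not Maluuba's code): a small classifier is trained on one toy task, then on a conflicting second task with no rehearsal of the first, and its accuracy on the original task collapses.

```python
# Minimal illustration of catastrophic forgetting (not Maluuba's code):
# train on task A, then on a conflicting task B, and accuracy on A collapses.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(flip_labels):
    # Toy binary task: the label depends on the sign of the first feature.
    x = torch.randn(512, 2)
    y = (x[:, 0] > 0).long()
    return x, (1 - y) if flip_labels else y

task_a = make_task(flip_labels=False)
task_b = make_task(flip_labels=True)  # same inputs, opposite labels

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, steps=300):
    for _ in range(steps):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

train(*task_a)
print("task A accuracy after training on A:", accuracy(*task_a))  # near 1.0
train(*task_b)  # sequential training, no rehearsal of task A
print("task A accuracy after training on B:", accuracy(*task_a))  # collapses
```

The effect is what Trischler describes: the network's weights are overwritten to serve the new task, erasing whatever served the old one.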

Maluuba founders Kaheer Suleman and Sam Pasupalak

How does Maluuba fit into the AI puzzle that’s being pieced together?

RM: We are focused on language, which I think differentiates us from the other labs that do almost everything. They’ll do images and sound and video and text, whereas Maluuba is extremely focused on language, and written language more than anything else.

Also, being in Montreal helps us significantly. We are able to attract some of the best researchers in the world who want to stay local, or who want to come to Canada. Our lab is filled with people from Italy, Germany, China, France, and India. We have one of the best deep learning labs for language in the world today. You might expect it to be difficult competing with Google and Facebook, but I don’t think it has been that difficult, given our narrow focus.

AT: Before Element AI and Google’s announcement of Hugo Larochelle’s lab at the Google Montreal office, we were pretty much the only player in Montreal. It was a very important strategic decision to move here, because the people who go to UdM to do their PhDs (they’re Yoshua Bengio’s students, and the best in the world) end up loving Montreal, so it’s an easy case to make that they can continue doing this great work here and don’t have to move.

You’re clearly already ahead of the game in terms of the size of your lab and the talent you have. What were some of your major accomplishments this past year?

RM: [In 2016] we went from two people in this lab to about 25 full-time employees and six interns. We published 14 research papers, which is a lot for the number of people we have. We also put out data sets. Your algorithms are only as good as the data that’s available to you, so we spent about six months building two really large, complicated data sets that we made available to the research community.

AT: This year has been all about laying the foundation. You never know exactly where you’re going, but by working over the course of the year, we’ve seen our research program crystallize.

In the beginning, we were tackling the problems we thought we could solve because we had to make a name for ourselves and get some results. But now we have our own research program and research vision all focused on the idea of information-seeking agents — agents that can answer your questions, and can also ask their own questions and build up their own store of knowledge.


So what are some of the ways in which your AI could be integrated into people’s lives?

RM: One example of what Adam’s team is doing that has a lot of applications in our daily lives is search. When you search for an answer on Google, the top snippet often includes a lot of irrelevant information. So something we demo’d a few months ago is a system that, when you ask a question, doesn’t just pull up an answer from a knowledge base or database; it actually reads the documents in real time and produces an answer. That way, you have the most recent, most relevant information available to you at all times.

Enterprise is also a huge focus for us. Any enterprise has thousands and thousands of documents, and so much time is wasted looking for information in them. What if you could just ask a question in natural language, let the machine do the reading comprehension, and present you with the answers?

It could be for your business code of conduct, anything related to HR policies, insurance policies, health care benefits. We could easily just read the documents and present you with an answer.

Legal is another use case we’ve talked about. Paralegals have to read all of these past cases and compare them. What if you could make that easier by feeding the documents into a machine and getting the answers you’re looking for?
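To make the document question-answering scenarios Mehrotra describes more concrete, here is a minimal sketch using an off-the-shelf extractive reading-comprehension model from the Hugging Face Transformers library. The policy text and question are invented for illustration; this is not Maluuba's system, just one way a machine can read a document and extract an answer.

```python
# Illustrative sketch only (not Maluuba's system): extractive question
# answering over a short policy document with an off-the-shelf model.
from transformers import pipeline

# Loads a default SQuAD-style extractive QA model.
qa = pipeline("question-answering")

# Hypothetical HR policy text, invented for this example.
policy_text = """
Employees accrue 1.5 vacation days for every month of service.
Unused vacation days may be carried over for up to 12 months.
Vacation requests must be submitted to HR at least two weeks in advance.
"""

result = qa(
    question="How many vacation days do employees accrue each month?",
    context=policy_text,
)
print(result["answer"], result["score"])  # the extracted span and a confidence score
```

The model does not look the answer up in a database; it reads the supplied text and points to the span most likely to answer the question, which is the behaviour described above.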

That sounds like paralegals may soon need to look for new careers. Do you ever worry that the work you’re doing is ultimately going to render humans obsolete?

AT: It’s something that I certainly think about. Obviously our goal is not to make people obsolete. We’re people too! This is definitely becoming a big topic and people are putting forth ideas like universal basic income to compensate for the incoming AI revolution. In the ideal scenario, if people are put out of work it’s because they don’t have to do crappy jobs anymore. They can focus on what they’re interested in — making art, writing, etc. — and we will have enough productivity with AI that it doesn’t matter that people aren’t working.


Lauren Jane Heller

Lauren Jane Heller is a passionate writer and storyteller. With a background in documentary film and journalism, she has now found her niche writing for and about the continually evolving world of technology.
