Between announcements on government investments and lab launches, most in the startup ecosystem are wondering what it will take to make Canada a leader in AI tech.
But for Kindred.AI founder and CSO Suzanne Gildert, the question is much bigger: once AI itself becomes more developed, will we need to establish robot rights?
Gildert tackled the question during the SingularityU Summit’s first Canadian event, which gathered thought leaders in sectors like AI, health, and finance to talk about the effect of exponential tech.
Kindred.AI — which has kept a low profile about its development — is working on humanlike robots that learn from humans performing tasks in exoskeletons. Gildert shared why she thinks robots should be built humanlike: it’s easier to adapt them to a world already built for humans than it would be to build entirely new infrastructure.

But the real reason she’s interested? It would be easier for human bodies to merge with AI if robots are built more humanlike.
“We can more easily merge with AI if it inhabits a similar body to ours,” said Gildert. “I want to augment my body and connect my brain to the cloud, and be part of the singularity. You only need to believe one of these arguments I’ve given to know that humanlike AI is definitely coming. That’s my big science fiction big picture.”
However, she acknowledges that this won’t happen anytime soon. In the meantime, as we build AI, we need to consider the ethical issues in its development. Reinforcement learning — which involves giving robots a sense of good and bad, and therefore programming extreme pleasure and pain so they can learn the difference — could amount to torture.
At the same time, AI minds have data stored electronically in the cloud, and are subject to the same data security issues. “But it’s not just passwords and not just personal information,” said Gildert. “It’s thoughts and dreams and personality.”
And will humans have the right to delete AI minds over the course of their experiments? “Every time I reach for the delete button, I get an uneasy feeling. I’m wiping a being out of existence,” said Gildert. “Simple AI, I’m pretty sure I’m not causing suffering, but who knows in the future. At what point does it become not just an uneasy feeling, but unethical to press that button?”
She added that humans must expect AI to make mistakes, and that organizations like AI safety research committees are at odds with the idea that AI must learn from doing things wrong. To her, robots should be afforded a “childlike” phase.
“As a civilization, we’re giving birth to AI, and I don’t think that will cause us to lose humanity — but I think it’s a test of humanity,” Gildert said. “If we can pass this test and live with these beings and monsters, we can mature as a civilization. We have to give them rights and responsibilities and we shouldn’t be afraid of it, we should be humbled by it. And we as a species have some growing up to do alongside of them.”