Canada is reportedly considering joining Australia in banning youth from accessing social media. But should it?
Last week, the Globe and Mail and Politico reported that Ottawa was considering a ban as one of several ways to better protect young people from online harms such as cyberbullying, addiction, worsening mental health, and the posting of child sexual abuse material (CSAM).
Canada’s privacy commissioner requires platforms to get parental consent before they collect personal information from children under age 13. That has led most apps to require that users sign up with a birthdate showing they are at least 14 years old if they want to do anything beyond just viewing content. An outright ban could, in theory, enforce that rule and require platforms to prove that they aren’t allowing children to access their content.
BetaKit managing editor Sarah Rieger chatted with Josh Tabish, senior director for Canada with tech policy advocacy group Chamber of Progress, about why he thinks a ban isn’t the right move—and what could be done instead.
The following interview has been edited for length and clarity.
Australia recently banned children under age 16 from making social media accounts; the UK has voted to do the same. Denmark, Norway, France, and Spain are considering similar proposals. Why shouldn’t Canada?
The first thing I’d say is these lawmakers are responding to real concerns. I’m a parent, and I think parents are desperate for help. My concern is that while social media bans play well politically, I’m not sure they make for good policy.
It may have a lot of symbolic value, but it may not have a lot of technical teeth, because kids are smart.

What kids really want and need are online spaces that are safe for them: spaces with age-appropriate design, where they can learn, express themselves, and connect with each other. This is particularly true for at-risk, marginalized kids: anyone who is a little different. Internet access can be a real lifeline.
What I worry about is, if you implement a social media ban, what are [companies’] incentives to invest in safety features? If kids aren’t on the platform, that means fewer development cycles. That means less content moderation. That means fewer trust and safety people to help make those spaces better.
Australia’s ban on social media for youth has been in place since December. How is it going so far?
Australia is the first mover here—it will be interesting to see the impact on kids’ digital literacy. Are they learning to stay safe? Does it let social media platforms off the hook?
I’ve seen some reports that suggest kids are finding workarounds. Teenagers are very good at getting around rules. They’re using VPNs (virtual private networks), they’re creating fake accounts, they’re using shared family devices. They’re also flocking to social media platforms that aren’t captured by the regulations. [Editor’s note: Australia’s ban does not capture platforms with social features like YouTube, Roblox, or Lemon8.]
Speaking of workarounds, let’s talk about the mechanism of the ban. It’s pretty much based on age verification, right? Like, punching in your date of birth?
Yeah.
How effective is that as a barrier?
This is a really important point. When you see a ban being considered, the first question is: how is this going to work?
Platforms have some options available to them. In Australia, the law says platforms have to figure out a method for determining age. They can estimate based on data [Editor’s note: Roblox, for example, is using AI to scan children’s faces]. They can ask for ID verification. The big drawback with this is that no matter how you tailor the rules, you are encouraging platforms to collect data on kids and store it. And if you read the news, you know that companies are routinely the subject of data breaches.
Canada is reportedly considering a ban like Australia’s. Where does our regulation currently stand on online harms?
The government’s plans are still a little unclear. They have been feeling, I think, a lot of pressure to act since their previous online harms bill stalled out. Culture minister Marc Miller has said that he plans to bring measures forward, but we don’t really know what that means.
We’ve seen in Bill C-16 a measure to make the distribution of non-consensual deepfake images illegal. That’s a good step in the right direction. But what’s next is the million-dollar question. We don’t think social media bans are the right approach. But if one were implemented, it would need to be paired with other measures, like digital literacy training or other incentives to encourage platforms to create safer spaces.
What solutions are possible here? Is any country, or platform, effectively making social media safer?
Canada is a bit of an outlier in that it’s one of the few G7 countries that doesn’t have a national digital literacy strategy. If you look at countries considering social media bans, like the UK or France, they also have strong strategies to help make sure kids know how to stay safe online.
It seems odd to implement a ban before education measures; awareness usually comes before hard regulations. Say a ban is effective: then you get a generation of kids who have limited experience of social media when they turn 14 or 16. Training for parents is super helpful too. Tech is a hard space to stay on top of.
RELATED: Grok’s non-consensual sexual images highlight gaps in Canada’s deepfake laws
We are seeing some good efforts from platforms. To pick one, Apple has a new kids’ account feature that adds safety settings blocking adult content by default. YouTube just rolled out a parental-controls feature that stops kids from endlessly scrolling short videos. I wish someone would turn that feature on for me.
Where we see good approaches are countries incentivizing platforms to make spaces safer in ways that make sense for their business model.
Speaking of Apple—there’s a question here of the distinct layers of responsibility for online safety. Does it fall to the app store? The social network?
At times it can feel like everyone is pointing the finger at everyone else. It’s like the Spider-Man meme.
Philosophically, I think everyone can agree that companies should bear some responsibility for the stuff they put out into the world. The problem is porn companies say it should be managed elsewhere, Meta says the issue should be solved by the app stores… all these approaches are flawed in the same way: they let the person pointing the finger off the hook.
I think the right approach is to create incentives for platforms to act responsibly; at the end of the day, the responsibility is on developers to make their spaces safe and to enrich the experience for everyone.
The latest platforms that have been raising alarm bells are AI chatbots. Are there any special considerations when it comes to creating regulation to make those tools safer for youth?
I think it’s actually really easy to over-index on risk and imagine every bad thing that could happen with AI. I think chatbots will be super helpful for kids in a variety of ways, like as an educational resource.
Chatbots are probably here to stay for the foreseeable future; teaching kids how to interact with them in safe and reasonable ways is something we should turn our attention to.
RELATED: TikTok and Grok are a mess
The Canadian government has a voluntary code of conduct for AI developers where they ask companies that have signed on to mitigate risks. If you asked me to gaze into a crystal ball, I would assume that voluntary code will form the baseline for future AI safety regulations.
We’re talking a lot about Canada’s role in governance here, but a lot of these platforms are giant US tech companies. Is it even possible for us to govern Meta, or X?
Well, X is a special case.
But, broadly, I think Canada should do what’s right for Canadians. I think policymakers shouldn’t be afraid to pass laws to protect Canadians online. Canada’s sovereignty isn’t up for debate. Laws like the Online News Act remain on the books.
The government faces a tricky balancing act: they need to find policies that protect Canadians and help us grow our tech economy but also don’t jeopardize our relationships. I think they can do that, I just think it’s going to require a lot of careful consideration and creativity.
If you want to learn more about accessing social media spaces safely and responsibly, Tabish recommends checking out MediaSmarts, Canada’s national digital literacy organization, or digital rights group OpenMedia.
