Grok’s non-consensual sexual images highlight gaps in Canada’s deepfake laws

A photo, anonymized by BetaKit, that has been digitally manipulated by Grok to depict a young woman in a bikini made of dental floss, without her consent.
Experts say Canada’s murky privacy and online harm laws leave victims with limited options.

On social media platform X, users have been prompting the AI model Grok to generate non-consensual sexual images of women, and sometimes children. Experts say the laws around those images are murky in Canada, and X is already under investigation by the federal privacy commissioner over how Canadians’ personal information has been used to train Grok.

The backlash against Grok’s image generation has been swift, with international governments and regulators condemning X and its chatbot for both creating and hosting this content (which has included child sexual abuse material, or CSAM). In Canada, tech privacy experts say the legality is unclear as the government looks to update its tech and privacy legislation. 

“Most companies have terms and conditions that prohibit non-consensual intimate images [but] when it comes to synthetic sexual content, there’s a bit of a murkiness.”

Suzie Dunn, Dalhousie University

In a statement posted today on X, federal AI and digital innovation minister Evan Solomon said that “deepfake sexual abuse is violence” and that “[p]latforms and AI developers have a duty to prevent this harm.”

A spokesperson for Solomon’s office told BetaKit in a statement on Wednesday that the government is “committed to the safety of Canadians, especially children and women, who are at a higher risk of exploitation when it comes to non-consensual sexual deepfakes.” The spokesperson added that the minister plans to introduce legislation to protect Canadians’ sensitive data.

In today’s statement, Solomon mentioned the previously introduced Bill C-16, the Protecting Victims Act, which aims to expand the laws banning the non-consensual distribution of intimate images to include non-consensual deepfakes. 

Sexualized Grok images flood X

Grok is the generative AI model created by Elon Musk’s xAI and made available to users of the social media platform X (formerly Twitter). Musk and xAI have positioned Grok as a more permissive chatbot, with fewer rules governing what it will generate in response to prompts than competing AI models. The model has previously generated racist and misogynistic content, including hate speech, and regularly produces factual errors. xAI has faced legal action over alleged copyright infringement from authors and media outlets such as the New York Times. Despite the backlash, xAI announced this week that it has raised $20 billion USD from investors, including chip giant Nvidia.

In late December, sexualized and “nudified” images of women and children, created by Grok in response to user prompts and without the subjects’ consent, began appearing en masse in X users’ feeds. Requests included commands for Grok to put women and children into clothing designed to show as much skin as possible, such as bikinis emblazoned with swastikas or made of dental floss. When prompted, the chatbot posted an apology on Jan. 2, calling the incidents “lapses in safeguards” and directing users to the FBI and a cyber tip line. xAI founder Elon Musk posted that users, not xAI or its chatbot, would bear legal responsibility for illegal material generated through prompts, and that offenders would be permanently banned. Multiple reports indicate that the non-consensual sexual imagery is still being generated.

“Most companies have terms and conditions that prohibit non-consensual intimate images,” Suzie Dunn, an associate professor of law at Dalhousie University, told BetaKit on Wednesday. “When it comes to synthetic sexual content, there’s a bit of a murkiness.”

In Canada, it’s illegal to knowingly publish or distribute an intimate image of someone without their consent. The implications for Canadian users seeing this potentially illegal content on their X feeds are less clear-cut. 

“If you chose to follow a user whose primary purpose was to create CSAM from Grok and you’re actively asking for that content to show up in your feed, that’s an interesting legal question,” Dunn said.

The X Safety account posted on Jan. 3 that the social media platform takes action against illegal content on X, including CSAM, by taking it down, suspending accounts, and “working with local governments and law enforcement as necessary.” 

“Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” the post reads. BetaKit has reached out to xAI for comment. 

Legal recourse

Creating or possessing sexual images of children is illegal in Canada under the Criminal Code, which does not differentiate between photographs and AI-generated images. In 2023, a Québec man was sentenced to three years in prison for creating AI-generated child pornography.

According to Dunn, the primary recourse for adults who have been the victims of AI-generated deepfakes is through the civil system, but laws vary by province. All provinces except Ontario have some form of intimate image protection act, Dunn said, which prohibits sharing private images without the subject’s consent.

Sharon Polsky, president of the Privacy & Access Council of Canada, told BetaKit on Wednesday that taking the civil route against the company hosting the content can be costly and strenuous for individuals.

“You also have to realize that you, the individual, better have very deep pockets,” Polsky said. “It’s a David-and-Goliath situation.” 

Reporting non-consensual deepfakes to local police is the other option. But Polsky cautioned that investigators who specialize in CSAM are overworked, and that cases become more complicated when the victim and the offending party are in different jurisdictions.

In Ontario in November, a provincial judge ruled that distributing a fake, digitally altered nude image was not a crime, reasoning that altered images are not explicitly mentioned in the Criminal Code. Manitoba and Québec have recently updated their legislation to include images that have been altered.

Dunn said that while legislation could be helpful, “what people want is some sort of victim helpline that they can call” to get the content taken down immediately. For example, the Canadian Centre for Child Protection, a charity, operates the Cybertip.ca tip line specifically for reporting the online sexual exploitation of children. 

“More than changing laws, the government should be providing these types of services,” Dunn said.

Canada’s privacy commissioner said it could not comment on concerns about Grok’s image generation because the agency has been investigating a complaint against X since February 2025. The investigation is focused on X’s compliance with Canada’s federal privacy law and will examine whether the company is meeting its obligations with respect to the collection, use, and disclosure of Canadians’ personal information to train AI models, including Grok.

Canada’s legal landscape

Canada doesn’t yet have legislation regulating AI models, and its privacy legislation hasn’t been updated in 40 years. Minister Solomon has said that he would not revive the prior government’s Artificial Intelligence and Data Act (AIDA). The act was part of Bill C-27, which sought to modernize data privacy and protections but died in January 2025 when Parliament was prorogued. The federal government is considering which aspects of the bill might be carried forward.

Another bill seeking to protect users and children from harmful online content also died on the order paper. The proposed Online Harms Act would have created a regulatory body for harmful online content, including sexualized deepfakes, and would have required platform operators to take down content that sexually victimizes children and intimate content circulated without consent.

Solomon has said that updated legislation specifically tackling deepfakes and data privacy is coming this year, as well as a refreshed AI strategy.

The spokesperson said Solomon’s office at Innovation, Science and Economic Development Canada has connected with the Royal Canadian Mounted Police (RCMP). In an email today, the RCMP said that it will generally only confirm an investigation in the event it leads to criminal charges.

If you’ve had images of yourself posted or manipulated by Grok or other AI models without your consent and are open to talking about your experience, please contact BetaKit reporter Madison McLauchlan via Signal at @madisonmcla.12

If you, or someone you know, has been the victim of sexual harassment or abuse, help is available via crisis lines and support centres across Canada.

Feature image via X. The image has been anonymized by BetaKit to protect the identity of those involved.
