This article is real — but AI-generated deepfakes look damn close and are scamming people
How amazing is it that Canadian celebrities like TV chef Mary Berg, crooner Michael Bublé, comedian Rick Mercer and hockey megastar Sidney Crosby are finally revealing their secrets to financial success? That is, until the Bank of Canada tried to stop them.
None of that is true, of course, but it was the figurative bag of magic beans apparent scammers on social media tried to sell, enticing users to click on sensational posts — Berg under arrest, Bublé being dragged away — that led to what looks, at first glance, like a legitimate news story on CTV News’s website.
If you’re further intrigued by what appears to be an AI-generated article, you’ll have ample opportunity to click on the many links — about 225 on a single page — that direct you to sign up and hand over your first investment of $350, which will purportedly increase more than 10-fold in just seven days.
These were just the latest in a batch of deepfake ads, articles and videos exploiting the names, images, footage and even voices of prominent Canadians to promote investment or cryptocurrency schemes.
Lawyers with expertise in deepfake and AI-generated content warn they currently have little legal recourse, and that Canadian laws haven’t advanced nearly as rapidly as the technology itself.
Financial scams and schemes appropriating the likenesses of famous people are nothing new, but rapidly advancing generative AI technology puts “a new twist on a pretty old concept,” said Molly Reynolds, a partner at Torys LLP in Toronto.
And it’s going to get worse before it gets better, she said: developing the tools and laws to prevent it is a game of catch-up that we’re already losing.
Detecting deepfakes
While there is a lot of content on the internet that has obvious signs of being AI-generated, University of Ottawa computer science professor WonSook Lee said some of it is so good now that it’s getting much harder to discern what’s real.
She said even a couple of years ago she could immediately detect an AI-generated image or deepfake video of a person just by glancing at it and noticing differences in pixelation or composition. But some programs can now create near-perfect photos and videos.
What isn’t perfectly generated can be further altered with photo and video editing software, she added.
While we are learning about AI, it’s getting smarter, too.
“If we find a method to detect deepfakes, we are helping the deepfakes to improve,” she said.
Star power
It seems X has curtailed the swarm of Canadian celebrity scam ads, to some extent, and suspended some — but not all — of the accounts sharing them. CBC News attempted to contact a spokesperson for X Corp., the social media platform’s parent company, but only received an automated response.
X and other social media and website-hosting companies may have policies aimed at preventing spam and financial scams on their platforms. But Reynolds said they face a “question of moral obligations versus legal obligations.”
That’s because there aren’t many legal obligations spurring platforms to remove fraudulent materials, she said.
“There are individuals who are deeply impacted with no legal recourse, with no help from the technological companies and maybe without a big, you know, social network to rely on the way that Taylor Swift has,” Reynolds said.
After all, prominent Canadians don’t wield nearly as much influence as Taylor Swift. If they did, perhaps the story would play out differently.
The rapid spread of sexualized AI-generated images of the pop music superstar last month prompted social media companies to take near-immediate action. Even the White House weighed in.
X promptly removed the images and blocked searches for Swift’s name. Within days, U.S. lawmakers tabled a bill to combat such deepfake pornography.
But Reynolds said it’s not just situations involving non-consensual, sexualized imagery that can cause harm — especially when it comes to people whose names and faces are their brands.
CBC News requested interviews with Berg and Mercer to ask if either had taken any action in response to the fake ads appropriating their likenesses. Mercer declined to be interviewed for this story. Berg’s publicist forwarded the request to CTV parent company Bell Media, which turned it down.
New legal landscape
Fame is irrelevant if your image is being used in a way you haven’t consented to, said Pablo Tseng, a Vancouver-based partner specializing in intellectual property at McMillan LLP.
“You are in control of how you should be presented,” Tseng said. “The law will still see this as a wrong that’s been committed against you. Of course, the question is: Do you think it’s worth your while to pursue this in court?”
Canada hasn’t followed the U.S.’s lead on new legislation for deepfakes, but there are some existing torts — laws mostly established by judges in an effort to provide damages to people who have suffered wrongdoing — that could possibly be applied in a lawsuit involving AI-generated deepfakes, according to Tseng.
The tort of misappropriation of personality, he said, could apply because it’s often a case of someone’s image being digitally manipulated or grafted onto another image.
The tort of false light, which pertains to publicly misrepresenting a person, is a more recent option based on U.S. law; it was first recognized in Canada by the Ontario Superior Court in 2019. So far, it has been recognized in only two provinces, Ontario and British Columbia.
Playing the long game
Anyone who wants to pursue any sort of legal action over the production and distribution of deepfakes will have to be in it for the long haul, said Reynolds. Any case would take time to work through the court system — and it’s likely to be expensive.
The fight can, however, pay off.
Reynolds pointed to a recent class-action lawsuit against Meta over “Sponsored Stories,” Facebook advertisements that ran between 2011 and 2014 and used users’ names and profile photos to generate product endorsements without their consent.
Meta proposed a $51-million settlement to Canadian users last month. Lawyers estimate that 4.3 million people who had their real name or photo used in a sponsored story could qualify.
“It’s not a particularly quick avenue for individuals, but it can be more cost effective when you have a class action,” said Reynolds.
But the pursuit of justice or damages also requires knowing who bears responsibility for these deepfake scams. The University of Ottawa’s Lee said identifying the culprits is already challenging, and further advances in generative AI technology will make it nearly impossible.
Much of the research that has been published on artificial intelligence includes source code that is openly accessible, she explained, meaning anyone with the know-how can create their own program without any sort of traceable markers.