
The EU wants to crack down on misinformation online — but here’s why it probably won’t work

The European Union has publicly called on X, Facebook owner Meta and TikTok to deal with false information on their sites. But industry researchers and experts say it could be an impossible task — even as hundreds of accounts linked to Hamas have been removed by X, the social network previously known as Twitter.

On Thursday, Linda Yaccarino, the CEO of X, outlined efforts by her company to combat illegal content on the platform. She was responding to public demands from a top European Union official for information on how X is complying with the EU’s tough new digital rules as conflict escalates between Hamas and Israel.

“X is proportionately and effectively assessing and addressing identified fake and manipulated content during this constantly evolving and shifting crisis,” Yaccarino said in a letter to Thierry Breton, an EU commissioner who often leads the 27-nation bloc’s actions on its Digital Services Act.

Yaccarino said her platform has acted to “remove or label tens of thousands of pieces of content.”

But one former employee who worked with the social network’s trust and safety team said she is not sure it can do much about this problem, after it deliberately decided to roll back its moderation teams.

Linda Yaccarino, CEO of X, formerly known as Twitter, says her platform is ‘proportionately and effectively’ assessing fake and manipulated content on the platform. (Jason Alden/Bloomberg)

“I think Twitter has significantly reduced capacity, by the company’s own choosing, to address these issues,” Theodora Skeadas said in an interview with CBC News, adding that she does not believe it has the ability to be responsive to harmful content on its platform.

Skeadas said that almost the entire trust and safety team, including herself, was laid off in the months after Elon Musk closed the deal in October 2022 to purchase Twitter.

“There aren’t as many people involved in the ecosystem whose day-to-day job was connected to tackling disinformation,” she said.

Not just Twitter, but Facebook and TikTok, too

European authorities announced on Thursday that they are demanding more information from X on how it handles illegal content and complaints about that content. X also needs to provide information by Oct. 18 on how its crisis responses work.

The European Union has also posted letters on social media addressed to Meta, the owner of Facebook and Instagram, and to TikTok.

In a statement to CBC News, a Meta spokesperson said the company has staff who speak both Hebrew and Arabic and are monitoring the situation.

Facebook owner Meta has said it has staff members fluent in the Middle East’s major languages monitoring the situation in Israel and Gaza. (Tony Avelar/The Associated Press)

“Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and co-ordinate with third-party fact checkers in the region to limit the spread of misinformation,” the company said.

When asked repeatedly for comment by CBC News, X responded with an automatic email reply that said: “Busy now, please check back later.”

TikTok has indicated it will be responding to the EU. As well, a company spokesperson told CBC News it has added moderation resources in both Arabic and Hebrew.

Social media researcher Siva Vaidhyanathan said that, given their sheer size, it may be too big a task to expect social platforms to eliminate all misleading or illegal content. As an example, he pointed out that Facebook alone has billions of user accounts.

“That means Facebook is constantly going to be facing many millions of uploads every second. A lot of them are puppy pictures … but a lot of them are going to be misleading videos that might have been taken somewhere else and some other time, but marked to make it seem as if it’s happening in Gaza or it’s happening in Israel right now,” said Vaidhyanathan, director of the Center for Media and Citizenship at the University of Virginia.

Media scholar Siva Vaidhyanathan says the sheer number of social media accounts could make it nearly impossible to fully ensure false content is cleaned up. (Anis Heydari/CBC)

From Vaidhyanathan’s perspective, that sheer volume of content means no platform could hire and pay enough people to weed through and properly assess every piece of content.

“We’ve seen time and time again, no matter how much Facebook says it’s committed to cleaning up its service, that it’s never enough and it’s probably never going to be enough,” he said.

What about the law?

As for the European Union’s legal warnings to social networks, lawyers say that enforcement of its Digital Services Act remains untested.

Paul Bernal, who teaches information technology law in the United Kingdom, said while it’s clear that European authorities want to police online content deemed illegal, it’s not clear whether they can actually compel anything to happen.

Thierry Breton is the European commissioner who wrote to social media platforms, including X (Twitter), Facebook owner Meta and TikTok, demanding they take action to remove content deemed illegal under EU law. (Johannes Simon/Getty Images)

“If it turns out they can’t [enforce these rules], then … it’s really removing their power. They’ll feel like kind of paper tigers who don’t actually have any power to do anything,” said Bernal, a professor at the University of East Anglia law school. Even though EU law does not directly apply in the U.K. at this time, he said, similar laws and regulations are brewing there.

In a statement, the European Commission said it could impose fines on platforms or even ban them as a last resort, but it did not say where things stand with any of the platforms it has sent letters to.

Fake information driven by anger: researcher

Dealing with fake information on social networks may not be possible through fact-checking and testing for legitimacy, said the University of Virginia’s Siva Vaidhyanathan, because readers are driven by emotion.

“The key to understanding any of those moments is not to pay attention to the truth or falsity of the claim, or even the truthfulness or falsity of the source…. Those help, but that’s a loser’s game,” he said, adding that when it comes to politics or crisis situations, social media users often seek out posts with amplified emotions.

WATCH | Can you trust verification badges online anymore?

How will the paid social media verification process affect you? | About That

A growing number of social media companies are changing the way they verify users, with a move to having them pay for the badges. About That producer Kieran Oudshoorn speaks with CBC News senior business reporter Anis Heydari about why it could affect how businesses and consumers interact online.

“You want to get mad. You feel like you need to feel something. You go to Twitter and then you find something that will make you mad and then you will pile on. That’s the dynamic,” he told CBC News, adding that he theorizes that groups such as Hamas deliberately post violent and misleading videos to trigger these responses.

“They are not looking to be loved. They are looking to be hated. And that’s so easy to do. And I think that’s what we’re seeing going on here,” Vaidhyanathan said.

It’s a concern echoed — and extended outside of the online world — by industry players like Theodora Skeadas.

“As false information that is inspired by hate spreads, it can lead people to do [real world] offline harm, which is damaging to everyone,” she said.

Theodora Skeadas, who was laid off from Twitter’s safety team, told CBC News she does not believe the social media company, now called X, has the capacity to address fake content due to its own choices. (CBC)

Skeadas, who continues to work as an independent consultant in online trust and safety, said she remains concerned about what happens with online platforms even outside of the immediate conflicts taking place in Israel and Gaza.

“Disinformation affects elections just as much as it affects times of crisis. And I’m concerned about the capacity of platforms like Twitter to meaningfully address this issue as we move toward a year with major elections,” she said.
