Tech platforms should fight Islamophobia the way they fought ISIS

Facebook and YouTube have helped quash terrorism before, and they can do it again

Mourners attend the funeral of a victim of the Christchurch terrorist attack at Memorial Park Cemetery on March 20th in Christchurch, New Zealand. Photo by Carl Court/Getty Images

After last week’s horrific terrorist attack in New Zealand, early commentary focused on how the shootings at two Christchurch mosques seemed to be purpose-built for spreading on social media. “A mass shooting of, and for, the internet,” Kevin Roose called it in the New York Times:

The details that have emerged about the Christchurch shooting — at least 49 were killed in an attack on two mosques — are horrifying. But a surprising thing about it is how unmistakably online the violence was, and how aware the shooter on the videostream appears to have been about how his act would be viewed and interpreted by distinct internet subcultures.

In some ways, it felt like a first — an internet-native mass shooting, conceived and produced entirely within the irony-soaked discourse of modern extremism.

As Roose notes, the alleged killer promoted the attack on Twitter and 8Chan, and broadcast it live on Facebook. Facebook took down the original video, but not before it could be copied and widely shared. Over the next 24 hours, it would be uploaded to Facebook another 1.5 million times — of which Facebook was able to remove 1.2 million copies at the time of upload. The same thing was happening simultaneously on YouTube, but the company would not share any numbers that might describe the scale of its challenge.

The wide availability of videos of the attacks, both on and off the big tech platforms, has drawn widespread condemnation. On Tuesday, Rep. Bennie Thompson, chairman of the House Homeland Security Committee, called on tech companies to explain themselves in a briefing March 27th:

“Studies have shown that mass killings inspire copycats — and you must do everything within your power to ensure that the notoriety garnered by a viral video on your platforms does not inspire the next act of violence,” Thompson wrote.

But even as the platforms come in for another stern lecture from Congress, others are calling for a deeper look at the bigotry that makes such terrorist attacks possible. Here’s Caroline Haskins in a piece titled The Christchurch Terror Attack Isn’t an ‘Internet’ Terror Attack:

Whitney Philips, a professor of communications at Syracuse University, said that the ideas that we choose to tolerate on the internet are a result of the forces of the masses, not just the actions of people on fringe corners of the internet. If the kind of attack we saw at Christchurch could be neatly blamed on a small, white supremacy forum alone, it would be a far less difficult problem to solve. Sadly, the reality is much more complicated.

“The shifting of the Overton window is not the result of just a small group of extremists,” Philips said. “The window gets shifted because of much broader cultural forces.”

Mike Masnick makes a similar point in Techdirt:

The general theme is that the internet platforms don’t care about this stuff, and that they optimize for profits over the good of society. And, while that may have been an accurate description a decade ago, it has not been true in a long, long time. The problem, as we’ve been discussing here on Techdirt for a while, is that content moderation at scale is impossible to get right. It is not just “more difficult,” it is difficult in the sense that it will never be acceptable to the people who are complaining.

Part of that is because human beings are flawed. And some humans are awful people. And they will do awful things. But we don’t blame “radio” for Hitler (Godwin’d!) just because it was a tool the Nazis used. We recognize that, in every generation, there may be terrible people who do terrible things, using the technologies of the day.

Given the opposing views, how do we move ahead? In my view, the debate highlights a distinction that we make all too rarely in discussing these issues. There are platform problems, and there are internet problems. And we have to consider them separately if we’re going to move beyond the finger-pointing stage of post-disaster conflict.

Platform problems include the issues endemic to corporations that grow audiences of billions of users, apply a light layer of content moderation, and allow the most popular content to spread virally using algorithmic recommendations. Uploads of the attack that collect thousands of views before they can be removed are a platform problem. Rampant Islamophobia on Facebook is a platform problem. Incentives are a platform problem. Subreddits that let you watch people die were a platform problem, until Reddit axed them over the weekend.

Internet problems include the issues that stem from the existence of a free and open network connecting all of humanity together. The existence of forums that allow white supremacists to meet, recruit new believers, and coordinate terrorist attacks is an internet problem. The proliferation of free file-sharing sites that allow users to post copies of gruesome videos is an internet problem. The rush of some tabloids to publish their own clips of the shooting, or analyze the alleged killer’s manifesto, is an internet problem.

Some problems, of course, are a little bit of both.

And in all cases, these “problems” have their upside. A free and open internet — and speech-friendly tech platforms — have been a boon to all sorts of causes, businesses, and artists. What really has Silicon Valley uneasy at the moment is the total uncertainty about how you address the bad that the internet does without crippling the good it does, too.

In the meantime, we are seeing a surge in far-right, white nationalist violence, and it increasingly resembles a coordinated terror campaign. Platforms, to their credit, have begun to treat it this way. The Global Internet Forum to Counter Terrorism, which includes Facebook, Microsoft, Twitter, and YouTube, acted over the weekend to share information about more than 800 distinct videos around the attack.

The forum was formed in 2017, after the platforms faced widespread criticism for failing to recognize how ISIS and other terrorist groups were using them to recruit new members. The platforms acted in concert to remove terrorist content, and the effort appears to have been successful. As Ryan Broderick and Ellie Hall wrote on Tuesday:

Google and Facebook have also invested heavily in AI-based programs that scan their platforms for ISIS activity. Google’s parent company created a program called the Redirect Method that uses AdWords and YouTube video content to target kids at risk of radicalization. Facebook said it used a combination of artificial intelligence and machine learning to remove more than 3 million pieces of ISIS and al-Qaeda propaganda in the third quarter of 2018.

These AI tools appear to be working. ISIS members and supporters’ pages and groups have been almost completely scrubbed from Facebook. Beheading videos are pulled down from YouTube within hours. The terror group’s formerly vast network of Twitter accounts has been almost completely erased. Even the slick propaganda videos, once broadcast on multiple platforms within minutes of publication, have been relegated to private groups on apps like Telegram and WhatsApp.

A similar approach is needed here. Not every problem related to the Christchurch shooting should be laid at the platforms’ feet. But nor can we throw up our hands and say well, that’s the internet for you. Platforms ought to fight Islamophobia with the same vigor that they fight Islamic extremism. Hatred kills, after all, no matter the form it takes.

We also shouldn’t ask the platforms to solve this problem alone. Fighting terrorism has not traditionally been the province of for-profit corporations, and for good reason. When terrorist groups are organizing in plain view on public web forums, governments are responsible for intervening. They won’t stop every attack, but we should ask that they put at least as much pressure on themselves as they’re putting on tech companies.

Democracy

The Case for Investigating Facebook

Rep. David Cicilline, chairman of the House Subcommittee on Antitrust, Commercial and Administrative Law, says Facebook should face an investigation over competition issues. “The F.T.C. is facing a massive credibility crisis,” he writes:

How the commission chooses to respond to Facebook’s repeated abuses will determine whether it is willing or able to promote competition and protect consumers. If the commission does conclude that Facebook has violated the consent order, how it fixes this problem through a legal remedy will be a test of its effectiveness. The commission has the authority to impose substantial fines on Facebook. Given that the corporation had more than $55 billion in revenue in 2018 alone, even a fine in the low billions of dollars will amount to a slap on the wrist, a mere cost of doing business.

Moreover, because Facebook is a repeat offender, it is critical that the commission’s response is strong enough to prevent future violations. America’s laws are not suggestions.

Facebook Takes Steps to Prevent Bias in the Way It Shows Ads

Three years after an enterprising group of journalists used Facebook’s targeting tools to post discriminatory housing ads, the company settled various lawsuits against it and said it would take new steps to prevent abuse of its ad platform. Noam Scheiber and Mike Isaac report:

The company said that anyone advertising housing, jobs or credit — three areas where federal law prohibits discrimination in ads — would no longer have the option of explicitly aiming ads at people on the basis of those characteristics.

The changes are part of a settlement with groups that have sued Facebook over these practices in recent years, including the American Civil Liberties Union, the National Fair Housing Alliance and the Communications Workers of America. They also cover advertising on Instagram and Messenger, which Facebook owns.

Sen. Josh Hawley is making the conservative case against Facebook

Makena Kelly talks to Sen. Josh Hawley (R-MO) about his headline-grabbing criticism of Google, Facebook, and other big tech companies:

HAWLEY: Over the last decade, a consumer’s data has become much more valuable. But consumers don’t necessarily realize it. If you think about these familiar applications like Gmail and Facebook, to the user they look about the same as they did 10 years ago. But the cost to the user is much higher now because those companies are collecting and extracting incredible amounts of personal and private data, and the users have absolutely no idea. They haven’t been informed about it. They haven’t had the option to meaningfully consent.

Trump’s campaign secret weapon: Facebook

President Trump’s 2020 campaign “has spent nearly twice as much as the entire Democratic field combined on Facebook and Google ads,” Sara Fischer reports.

“Spend can only scale with strong performance. We have an experienced team, still together from 2016,” a senior member of the Trump 2020 team tells Axios’ Jonathan Swan. “But most of all, we have Donald Trump and nothing scales and converts like Trump.”

Google hit with €1.5 billion antitrust fine by EU

That’s the bad news for Google. The good news is that it added $17 billion to its market cap today!

Russia’s Putin Signs Into Law Bills Banning ‘Fake News,’ Insults

Here’s another authoritarian ruler cracking down on dissent by using the president’s favorite phrase:

President Vladimir Putin has signed legislation enabling Russian authorities to block websites and hand out punishment for “fake news” and material deemed insulting to the state or the public.

The two bills that critics see as part of a Kremlin effort to increase control over the Internet and stifle dissent were signed by the president on March 18, according to posts on the government portal for legal information.

Locating The Netherlands’ Most Wanted Criminal By Scrutinising Instagram

A fugitive kept posting Instagram photos of himself, so Henk van Ess used a variety of tools outlined here to find the man’s location. Unfortunately, Iran has no extradition treaty with the Netherlands, so it looks like this isn’t going anywhere. But it’s a valuable look at just how much information you can give away about yourself with a single photo.

Elsewhere

Life After Facebook: The Untold Story Of Billionaire Eduardo Saverin’s Highly Networked Venture Firm

Alex Konrad talks to Facebook co-founder Eduardo Saverin about what he does with all his money, and the answer seems to be investing in things that I will never care about, not even a little bit:

One example: Ninja Van. A last-mile logistics provider for delivery services in Southeast Asia, the Singapore-based startup employs 2,000 people and works with 10,000 drivers. It’s an expensive, complicated business, but B Capital stepped in to write a check when others balked. “Eduardo and the team asked the right questions,” says Lai Chang Wen, CEO of Ninja Van. “They’re able to give us a wider perspective across businesses and geographies.”

Can a Facebook Post Make Your Insurance Cost More?

Here’s one more reason to think twice about posting publicly, from Ellen Byron and Leslie Scism:

Did you document your hair-raising rock-climbing trip on Instagram? Post happy-hour photos on Facebook? Or chime in on Twitter about riding a motorcycle with no helmet? One day, such sharing could push up your life insurance premiums.

In January, New York became the first state to provide guidance for how life insurers may use algorithms to comb through social media posts—as well as data such as credit scores and home-ownership records—to size up an applicant’s risk. The guidance comes amid expectations that within years, social media may be among the data reviewed before issuing life insurance as well as policies for cars and property.

Facebook says service hindered by lack of local news

Here’s a bleak story from David Bauder about how Facebook’s efforts to promote local news have been hindered by the fact that many Americans now live in areas that have no local journalism to speak of — a fact exacerbated by Facebook’s growing dominance of the digital ad market.

Some 1,800 newspapers have closed in the United States over the last 15 years, according to the University of North Carolina. Newsroom employment has declined by 45 percent as the industry struggles with a broken business model partly caused by the success of companies on the Internet, including Facebook.

The Facebook service, called “Today In,” collects news stories from various local outlets, along with government and community groups. The company deems a community unsuitable for “Today In” if it cannot find a single day in a month with at least five news items available to share.

Facebook’s Top Representative in China Leaves Firm ($)

It appears Mark Zuckerberg is serious about leaving China for the foreseeable future. Wayne Ma and Sarah Kuranda report:

Facebook’s chief representative in China left the company earlier this year, according to people familiar with the matter, after a series of corporate missteps that led to a deterioration of relations between Facebook and the Chinese government.

The previously unreported departure of Ivy Zhang leaves Facebook with one government affairs employee in China, William Shuai, who remains in talks with Chinese officials about opening a representative office in Shanghai, one of the people said. The person also said there are no immediate plans to replace Ms. Zhang.

Appeals Court Dismisses Freedom of Speech Claim Against Tech Giants

Thank God:

An appeals court sided with Google, Facebook, Twitter and Apple last week when it threw out a lawsuit accusing the companies of conspiring to suppress politically conservative viewpoints.

Kidfluencers’ Rampant YouTube Marketing Creates Minefield for Google

Mark Bergen reports that children’s programming on YouTube is a wasteland of undisclosed sponsorships. As usual, the Federal Trade Commission is nowhere to be found.

Since it was founded in 2005, YouTube has operated beyond the reach of rules that govern advertising on traditional television. But the site has grown so large and influential that the days of light-touch regulation may soon be over. Kids’ programming is where the crackdown is most likely. The problem with sponsored content is that it’s not always clear what’s an ad. Kids are particularly vulnerable to being manipulated by paid clips that masquerade as legitimate content. On TV, the ground rules are clearer: Ads come when the show takes a break.

“The uptick in sponsored content and child influencers is very overwhelming,” said Dona Fraser, director of the Children’s Advertising Review Unit, an industry watchdog funded by companies including Google. “This has exploded in front of our eyes. How do you now wrangle every child influencer out there?”

Maricopa woman, sons, accused of abusing 7 adopted children featured in popular YouTube series

Speaking of kids’ content on YouTube, here’s an absolutely chilling story from Brooke Miller and BrieAnna J Frank, writing for my former employers at the Arizona Republic:

A mother and her two adult sons were arrested Friday on suspicion of repeatedly abusing the mother’s seven adopted children when they did not perform well in the mother’s YouTube videos, city of Maricopa police said.

The children reported being locked in a closet for days without food, water or access to a bathroom, pepper sprayed from head to toe and forced to take ice baths, according to police.

The woman, identified as 48-year-old Machelle Hobson, of Maricopa, has operated a YouTube channel called “Fantastic Adventures,” where episodes featured each of the seven adopted children acting out different scenarios, usually inside a house or backyard, according to a probable cause statement from police.

Devin Nunes sues Twitter for letting “Devin Nunes’ Mom” and “Devin Nunes’ Cow” insult him

Notably, after the congressman filed a lawsuit, his cow avatar surpassed him in Twitter followers.

Did Twitter Help Ground the Boeing 737 MAX?

Social media pressure may have grounded a dangerous plane when the US government wouldn’t, John D. Stoll reports:

Sprout Social, a social-media software firm, estimates at least 870,000 tweets were posted about Boeing’s 737 MAX over the past week, a majority of which trended negative. Investor sentiment hasn’t been kind. Shares of Boeing have lost nearly 15% in total market value since the end of last week.

Boeing and the FAA took their share of criticism, but airlines felt the brunt of the onslaught from concerned passengers who’ve grown all too used to tagging carriers in bitter tweets and angry Facebook posts. Airline executives, who compete in an industry with lots of options and high expectations, felt the heat.

After the porn ban, Tumblr users have ditched the platform as promised

This one’s a few days old, but after all the jokes about how people only used Tumblr for porn, it’s instructive to know that yes, actually, fully 30 percent of Tumblr usage was porn-related. There’s a business opportunity in here somewhere!

China’s new social media craze: Paying random people to shower you with over-the-top compliments

China may be a dystopia, but praise groups are one of my favorite social networking developments of the year. Arjun Kharpal reports on groups inside WeChat that allow you to pay others to compliment you:

One group administrator who spoke to CNBC said they offer a service where you can invite another person into a group, and that individual will be given custom-made compliments. It could be a friend or partner, for example.

The administrator, who asked to remain anonymous, said they charge 15 yuan for three minutes or 25 yuan for five minutes of praise in the WeChat group. You can send in additional information such as details of your relationship with a person and their likes and dislikes. You are then invited to one of the groups on WeChat alongside the other person you have nominated. And then the compliments begin.

Launches

WhatsApp tests in-app reverse image searches to prevent the spread of hoaxes

Ashley Carman reports on an interesting hoax-fighting feature now being tested inside WhatsApp:

The WhatsApp team at Facebook is continuing to build features to help thwart fake news. WABetaInfo reports that a new beta version of WhatsApp includes an in-app web browser and the ability to reverse image search an image that’s sent in a chat so that you can try to figure out where the image really came from.

Facebook is adding quoted replies to Messenger conversations

Sure, why not:

The feature comes as an expansion of the company’s existing reaction emoji. Now, when you hold down on an individual message, in addition to simply adding a reaction, you’ll be able to reply with a new “reply” button, which will attach a quoted version of the original message to your response. The quoted messages aren’t quite their own message threads — they’ll still appear in line with the rest of the chat, but it seems useful enough.

Oculus unveils the Rift S, a higher-resolution VR headset with built-in tracking

“Oculus VR unveiled its next-generation Rift headset,” Nick Statt reports, “a higher-resolution pair of virtual reality goggles that remove the need for external cameras by incorporating built-in tracking. The name of the device, as rumored by numerous reports over the last 12 months, is the Oculus Rift S. In a surprise twist, it’s been developed in partnership with Lenovo. Like Oculus Quest, it will ship this spring for $399.”

Takes

The Attack That Broke the Net’s Safety Net

Tech companies are finding that the surest way to stop bad actors is to cripple their own platforms, The New York Times editorial board notes:

It’s telling that the platforms must make themselves less functional in the interests of public safety. What happened this weekend gives an inkling of how intractable the problem may be. Internet platforms have been designed to monopolize human attention by any means necessary, and the content moderation machine is a flimsy check on a system that strives to overcome all forms of friction. The best outcome for the public now may be that Big Tech limits its own usability and reach, even if that comes at the cost of some profitability. Unfortunately, it’s also the outcome least likely to happen.

Instagram just took advantage of Amazon’s biggest weakness

Jason Del Rey says Instagram’s move into shopping shows where its main ecommerce rival is vulnerable. But it’s risky:

Amazon has repeatedly failed at becoming a destination for discovery shopping — or the kind of shopping common in physical retail where you might uncover something you didn’t need but now covet. That type of shopping is one that many Americans still consider a form of entertainment, and one that still makes up a large chunk of total commerce transactions in the country today.

While the opportunity for Instagram is large, so is the risk. For years, Instagram executives have resisted adding purchasing functionality, in part to avoid being too “in your face” about the commercialization of an app that became hugely popular for noncommercial reasons. Now they have changed course and are moving past one form of commercialization (ads) to add another (commerce). Will that turn off some Instagram users?

And finally ...

Facebook: ‘Identifying Hate Speech Is Difficult Because Some Posts Actually Make Pretty Interesting Points’

You know Facebook’s under the spotlight when its head of product policy gets the Onion treatment:

“At Facebook, we are committed to combating violence and hate speech on our platform, but can you really call these posts hate speech when a lot of them are based on science and logic?” said Monika Bickert, head of global policy management at Facebook, claiming that unless you’re a sheep who just swallows everything the mainstream media sells you, a number of these posts had a lot to consider, and even if you don’t completely agree with the attacks on race, religion, gender, or sexual orientation, it should not be a crime to make people think. 

The most striking thing about this piece is that it’s barely even a satire!

Talk to me

Send me tips, comments, questions, and your favorite internet problems: casey@theverge.com.