Twitter’s fear of making hard decisions is killing it

What the Alex Jones controversy and its move to limit third-party apps have in common

Photo by Michele Doying / The Verge

Why does Twitter move so slowly?

It’s a question that has been on my mind since Monday, as we watched the company belatedly tiptoe into enforcement of its guidelines against inciting violence. It came up again Thursday, as we saw the company move — a staggering six years after first promising to do so — to significantly restrict the capabilities of third-party apps.

Nothing defines Twitter so thoroughly as its bias toward inaction. In February, Bloomberg’s Selina Wang diagnosed the problem in an article titled “Why Twitter can’t pull the trigger on new products.” Wang’s reporting largely laid the blame at the feet of CEO Jack Dorsey:

Dorsey’s leadership style fosters caution, according to about a dozen people who’ve worked with him. He encourages debate among his employees and waits — and waits — for a consensus to emerge. As a result, ideas are often debated ‘ad nauseam’ and fail to come to fruition. ‘They need leadership that can make tough decisions and keep the ball rolling,’ says a former employee who left last year. ‘There are a lot of times when Jack will instead wring his hands and punt on a decision that needs to be made quickly.’

This view closely tracks my own discussions with current and former employees. They’ve described for me the regular hack weeks that take place at Twitter, in which employees mock up a variety of useful new features, almost none of which ever ship in the core product.

It’s true that Twitter has fewer employees, and less money, than its rivals at Facebook. And even its recent glacial pace of development is arguably faster than it was under previous CEO Dick Costolo.

But time and again, Twitter’s move-slow-and-apologize ethos gets it into trouble. Today’s action against third-party apps illustrates the problem.

Once upon a time, Twitter let people build whatever kind of Twitter app they wanted. For a brief, shining time, Twitter was a design playground. Developers making Twitter apps invented new features, such as “pull to refresh” and account muting, that became industry standards.

Then, in 2012, Twitter reversed course. Under Costolo, the company decided that its future lay in Facebook-style feed advertising, which meant consolidating everything into a single native app it could control.

But rather than kill off third-party apps for good, it introduced a series of half-measures designed to bleed them out slowly: denying them new features, for example, or capping the number of users they could acquire by limiting their API tokens. While this spared Twitter some yelling in the short term, the move — which was still hugely unpopular with a vocal segment of the user base — needlessly prolonged the agony.

Even after today’s action, third-party apps aren’t dead. But they can no longer send push notifications, and their timelines will no longer refresh automatically — making them useless to someone like me, a Tweetbot user who relies on a waterfall of tweets cascading down the screen to stay in touch with the day’s news. (As of today I am, God help me, a Tweetdeck user.)

The fate of the third-party apps is a relatively small concern for Twitter; the overwhelming majority of its user base uses the flagship app. The apps are going to die eventually, but Twitter refuses to kill them off once and for all. It’s a prime example of how the company, when presented with an obvious decision, goes out of its way to avoid making it.

That’s why I’ve been baffled this week by Dorsey’s media tour, in which he has sought to explain the company’s ambivalent approach to disciplining Alex Jones. Over the past week, Twitter found that Jones violated its rules eight times, then gave him a one-week suspension in which he could still read tweets and send direct messages.

Here is how Dorsey described that process to The Hill’s Harper Neidig:

“We’re always trying to cultivate more of a learning mindset and help guide people back towards healthier behaviors and healthier public conversation.”

“We also think it’s important to clarify what our principles are, which we haven’t done a great job of in the past, and we need to take a step back and make sure that we are clearly articulating what those mean and what our objectives are.”

Again, presented with an obvious decision, Twitter declines to make it. Then, even more surprisingly, it suggests the problem is that it hasn’t clearly articulated its own policies — when, in fact, its published policies were clear enough that CNN’s Oliver Darcy was able to use them to identify the very instances of rule-breaking that eventually got Jones into trouble.

On Wednesday, Jack Dorsey told the Washington Post that he is “rethinking the core of how Twitter works.” And yet the company’s history suggests that it hasn’t failed for lack of thinking. The problem, rather, is that thinking has so often served as a substitute for action.

Democracy

Google Employees Protest Secret Work on Censored Search Engine for China

Kate Conger and Daisuke Wakabayashi get their hands on a letter signed by 1,400 Googlers protesting the development of a censored search engine and news app. This is shaping up to be a major conflict. Google won’t comment — censorship is considered a state secret in China, so discussing it could scuttle the company’s plans — but as a result, these employees get to define the narrative with no pushback from Google itself.

“We urgently need more transparency, a seat at the table, and a commitment to clear and open processes: Google employees need to know what we’re building,” the letter said.

The letter also called on Google to allow employees to participate in ethical reviews of the company’s products, to appoint external representatives to ensure transparency and to publish an ethical assessment of controversial projects. The document referred to the situation as a “code yellow,” a process used in engineering to address critical problems that impact several teams.

Google Censorship Plan Is “Not Right” and “Stupid,” Says Former Google Head of Free Expression

Lokman Tsui, Google’s head of free expression for Asia and the Pacific from 2011 to 2014, takes a look at Google’s plans for a censored search engine. “This is just a really bad idea, a stupid, stupid move,” he tells Ryan Gallagher. “I feel compelled to speak out and say that this is not right.” Tsui goes on:

“In these past few years things have been deteriorating so badly in China – you cannot be there without compromising yourself,” Tsui said. Google launching a censored search engine in the country “would be a moral victory for Beijing,” he added. “Beijing has nothing to lose. So if Google wants to go back, it would be under the terms and conditions that Beijing would lay out for them. I can’t see how Google would be able to negotiate any kind of a deal that would be positive. I can’t see a way to operate Google search in China without violating widely held international human rights standards.”

Google Staff Tell Bosses China Censorship is “Moral and Ethical” Crisis

Gallagher also reports on an essay written by former Googler Brandon Downey, who worked on Google’s original censored search engine for China:

“I want to say I’m sorry for helping to do this,” Downey wrote. “I don’t know how much this contributed to strengthening political support for the censorship regime in [China], but it was wrong. It did nothing but benefit me and my career, and so it fits the classic definition of morally heedless behavior: I got things and in return it probably made some other people’s life worse.”

“We have a responsibility to the world our technology enables,” Downey adds. “If we build a tool and give it to people who are hurting other people with it, it is our job to try to stop it, or at least, not help it. Technology can of course be a force for good, but it’s not a magic bullet – it’s more like a laser and it’s up to us what we focus it on. What we can’t do is just collaborate, and assume it will have a happy ending.”

Update on Myanmar

Late on Wednesday, following a dire report on its handling of ethnic conflict in Myanmar, Facebook posted an “update” on its work there. Players of talking-points bingo will find “we were slow to act,” “we’re hiring more people,” and “we have more work to do” all represented. But here’s something I didn’t know about Facebook’s problems in Myanmar — they’re exacerbated by a font encoding issue:

We’re also working to make it easier for people to report content in the first place. One of the biggest problems we face is the way text is displayed in Myanmar. Unicode is the global industry standard to encode and display fonts, including for Burmese and other local Myanmar languages. However, over 90% of phones in Myanmar use Zawgyi, which is only used to display Burmese. This means that someone with a Zawgyi phone can’t read websites, posts or Facebook Help Center instructions written in Unicode properly. Myanmar is switching to Unicode, and we’re helping by removing Zawgyi as an option for new Facebook users and improving font converters for existing ones. This will not affect people’s posts but it will standardize how they see buttons, Help Center instructions and reporting tools in the Facebook app.
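
To make the problem concrete: Zawgyi and Unicode occupy the same range of code points, so software can’t tell them apart from the charset alone; it has to guess from which code points (and sequences of them) actually appear in the text. Below is a minimal, illustrative sketch of that guessing in Python. The code-point set is a rough assumption for demonstration purposes; real detectors, such as Google’s open source myanmar-tools library, use trained statistical models instead.

```python
# Toy Zawgyi-vs-Unicode detector, for illustration only.
# Zawgyi repurposes parts of the Myanmar Unicode block (U+1000-U+109F)
# that conformant Burmese text rarely or never uses; seeing many of
# those code points suggests the text is Zawgyi-encoded. The exact set
# below is an illustrative assumption, not a vetted list.

ZAWGYI_HINT_POINTS = set(range(0x1060, 0x1098)) | {0x105A}

def looks_like_zawgyi(text: str, threshold: float = 0.05) -> bool:
    """Guess whether Burmese text is Zawgyi-encoded rather than Unicode."""
    # Consider only characters inside the Myanmar block.
    myanmar = [c for c in text if 0x1000 <= ord(c) <= 0x109F]
    if not myanmar:
        return False  # no Burmese characters to judge
    # Fraction of Myanmar-block characters that fall in Zawgyi-favored ranges.
    hits = sum(1 for c in myanmar if ord(c) in ZAWGYI_HINT_POINTS)
    return hits / len(myanmar) > threshold
```

The practical consequence is the one Facebook describes: a Help Center page or reporting flow rendered in conformant Unicode comes out as gibberish on a Zawgyi phone, so detecting and converting the encoding is a precondition for users being able to report abusive posts at all.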

New WordPress policy allows it to shut down blogs of Sandy Hook deniers

Amid criticism that the company was hosting several blogs that harassed victims of the Sandy Hook shooting, WordPress parent Automattic changed its policy on Thursday and began shutting those blogs down. Sarah Perez reports that WordPress policy now prohibits “malicious publication of unauthorized, identifying images of minors.”

WordPress policies were designed to be more resistant to the strategic use of copyright claims as a means of getting content removed. Longtime web veterans know they were written this way because they were created at a time when large corporations would wield copyright law – like the DMCA – as a weapon to force platforms to take down content about their company that they deemed unfavorable.

But in recent years, the permissiveness of these policies has also created loopholes for those who spread disinformation, incite hatred and violence, and post abusive and offensive content to the web.

Austin pirate radio station that airs Alex Jones faces $15k fine

The latest entity to de-platform Alex Jones — besides WordPress — is the Federal Communications Commission, reports Gary Dinges. (It’s not clear what connection, if any, this station actually has to Jones.)

An Austin pirate radio station that airs controversial host Alex Jones has been knocked off the city’s airwaves – at least temporarily – and the Federal Communications Commission has levied a $15,000 penalty that the station’s operators are refusing to pay.

A lawsuit filed this week in U.S. District Court in Austin accuses Liberty Radio of operating at 90.1 FM without federal consent since at least 2013. Religious programming was airing on that frequency Wednesday, in place of Liberty Radio.

Why Facebook Enlisted This Research Lab to Track Its Trolls

Issie Lapowsky profiles the Atlantic Council’s Digital Forensics Research Lab, which is tasked with explaining the origins of misinformation online. Facebook is leaning heavily on the group as it works to understand the influence campaign that is now unfolding on the service:

But for Facebook, giving money away is the easy part. The challenge now is figuring out how best to leverage this new partnership. Facebook is a $500 billion tech juggernaut with 30,000 employees in offices around the world; it’s hard to imagine what a 14-person team at a non-profit could tell them that they don’t already know. But Facebook’s security team and DFRLab staff swap tips daily through a shared Slack channel, and Harbath says that Brookie’s team has already made some valuable discoveries.

Elsewhere

How Snap Is Becoming Twitter

Believe it or not, “Snap is the new Twitter” used to be considered something of a hot take. But the numbers don’t lie: it’s another company that vacillates between slow growth and outright decline, Tom Dotan reports:

For now, Snap’s ad revenue is growing quickly, as advertisers flock to what remains a relatively new platform. In the June quarter, the company’s revenue of $262 million was up 44% over the same period last year, blowing past analyst projections. But if it follows Twitter, Snap’s ad revenue growth will slow sharply next year.

A Mark Zuckerberg-backed nonprofit is helping separated migrant families

Silicon Valley immigration advocacy group FWD.us, which seems to have dramatically underperformed expectations, recently invested millions of dollars in reuniting separated families of migrants, Heather Kelly reports. Good for FWD.

The group spent two weeks in July in New Mexico, Texas, and Arizona booking flights for reunited parents and their children who were just out of federal custody. The multi-million dollar effort, called Flights for Families, required long hours on the phone booking some 1,300 tickets and attending to countless other details, such as lining up prepaid cell phones, connecting families with lawyers, and keeping the kids entertained.

What Am I Worth to Advertisers? My Obsessive Quest to Put a Price on My Attention

Bryan Menegus was served 319 online ads one Tuesday in July, costing advertisers about $2.69, he estimates.
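
For scale, and assuming his $2.69 estimate is in the right ballpark: $2.69 spread across 319 ads works out to roughly $0.0084 per impression, or an effective CPM (cost per thousand impressions) of about $8.43 for one person’s full day of browsing.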

Launches

Facebook cracks down on opioid dealers after years of neglect

Facebook is now suggesting resources to people who search for fentanyl and other opioids, as well as removing more drug dealers from search results, Josh Constine reports.

Takes

Facebook’s failure in Myanmar is the work of a blundering toddler

Olivia Solon is not impressed with Facebook’s recent statements about its work in Myanmar:

When the Guardian asked how the notoriously metrics-focused company would measure the success of the policy, the answer was characteristically mealy-mouthed: “Our goal is to get better at identifying and removing abuses of our platform that spread hate and can contribute to offline violence or harm, so people in Myanmar can safely enjoy the benefits of connectivity.”

When pushed again to specify how it would measure this, a spokeswoman said “that’s difficult”.

And finally ...

An Ad Network That Helps Fake News Sites Earn Money Is Now Asking Users To Report Fake News

Revcontent makes one of those awful chum boxes that attach to the bottom of more reputable news stories, enticing you to learn about one weird trick to cure belly fat, or 12 former child stars who now look terrible, or whatever. After BuzzFeed’s Craig Silverman asked the company about various fake news stories contained in its chum boxes, Revcontent grudgingly removed a few of them — but not before denouncing BuzzFeed itself as fake news.

An ad network launched a new initiative to “continue the fight against fake news” at the same time it was working with 21 websites that have published fake news stories, according to a review conducted by BuzzFeed News.

When contacted for comment, Revcontent subsequently removed four of the sites from its network, and in a statement suggested that a previous BuzzFeed News story about ad networks on fake news sites could itself be considered “fake news.”

The story above is from 2017, and Revcontent lets me know that it is now working with an international fact-checking network. Progress!

Talk to me

Send me tips, questions, comments: casey@theverge.com.

Update, 10:37 a.m.: This story has been updated to note that Revcontent has begun working with a fact checker.