
If Facebook controls your mind, so do a lot of other tech companies

Claims that it’s easy to sway us misunderstand how hard it is to change minds

Illustration by Alex Castro / The Verge

Can Facebook’s algorithms control our politics, even the population at large? Yes, and Facebook is uniquely bad on this front, says François Chollet, an artificial intelligence researcher at Google and author of the well-known machine learning library Keras. But in hyping up the power of AI, Chollet underestimates how hard it is to change our minds, and the distinction he draws between Facebook and other tech companies is a weak one.

In a widely shared Twitter thread, Chollet argues that Facebook is capable of “mass population control” at the political level, and that its AI doesn’t even need to be that sophisticated to pull it off. This is possible, he argues, because our lives are increasingly lived online, which gives Facebook massive power to guide what we see. Our “static” brains are vulnerable to being influenced, and there’s no way to fight it; AI is only getting smarter, and the world hasn’t ended yet because we’re still bad at AI. Chollet singles out Facebook specifically — not his own employer, Google, nor Amazon or Apple — because of its history of experiments, its “morally bankrupt leadership,” and because he believes it is uniquely embedded in our social lives. (More on this later.)

First, it’s not clear that Facebook is as politically powerful as Chollet makes it out to be. He’s right that relying on the internet makes it easier for companies to manipulate us — and yes, Facebook already has. This is a genuine problem, but it’s also one we’re trying to solve now, not a scary future apocalypse we’re all blind to. For Chollet’s argument to be truly frightening, Facebook needs to be the sole source of news. That’s unlikely, and Pew numbers show that people still get their news from a variety of sources.

Other research has shown that online “echo chambers” aren’t as common as we think, according to Brendan Nyhan, a professor of political science at Dartmouth College who studies information and politics. And, in a paper published last November in the Proceedings of the National Academy of Sciences, economists found that polarization actually increased most quickly among the Americans least likely to use the internet: the elderly. In fact, other research suggests that polarization is more likely to be caused by television than by the internet.

Even then, the effects are small. One estimate shows that the net effect of exposure to an additional ad shifts the partisan vote of approximately two people out of 10,000, according to Nyhan. “Persuasion is very difficult,” he wrote in an email to The Verge. “We should worry about possible exceptions, but in general people’s minds are hard to change en masse about high-profile candidates and issues.”

So how do we square this with all the studies showing that we’re prone to cognitive biases? All this talk of vulnerable minds and easy manipulation conflates different things. It’s easy for us to be taken in by a news story, whether it’s deliberately fake or accurate but overblown (like the tiny possibility that a Chinese space station will fall on someone’s head). We’re bad at assessing risk, we get taken in by novelty, and we’re cognitively lazy. Being fooled by bad information happens pretty frequently. But it’s hard for facts — or, for that matter, lies — to speak to our beliefs and emotions. We already know that just stating facts won’t change people’s minds.

We have always lived in a world where someone is trying to influence us: television, billboards, the entire traditional advertising industry. Virtually all of us have been persuaded by something a friend said — so yes, we can absolutely be influenced. But as the psychologist Jonathan Haidt has written, many of our beliefs are emotional and intuitive, and we arrive at rational reasons for them after the fact. Partisanship is powerful, and it’s much harder to change your parents’ vote than to get them to switch brands, because politics reflects identity more strongly than brands do. Party identification, for example, is increasingly tied to someone’s position in the income distribution, not to whether they use Facebook. That’s why Chollet’s point about political population control is confusing. Algorithms can serve up plenty of information that reinforces our existing beliefs, but they’re far less likely to turn conservatives into liberals.

And when it comes to getting served information, it’s strange to single out Facebook. Facebook is certainly more personal than YouTube, but Americans don’t trust Facebook much anyway. And his argument that the other companies have yet to explicitly experiment on users isn’t reassuring. Google and Amazon aren’t embroiled in election scandals yet, but that doesn’t mean their technology couldn’t land them in one eventually. (And if we’re relying on the virtue of leaders, both of those companies are only one bad leader away from becoming menaces themselves.) Google’s powerful search algorithm can unwittingly serve up false facts and answers, and the company collects personal data and sells advertising built on that data. Home assistants like Alexa have access to our intimate lives, and that information could be used against us. These companies usually want to sell us things rather than influence our politics, but they sometimes wind up doing the latter anyway. Take Google-owned YouTube: without any malicious intent, its recommendation algorithm routinely points viewers toward extremist content.

So while it’s easy to see why Chollet’s tweets were widely shared — they confirm a lot of people’s worst fears — the situation is more complicated than he suggests. Facebook isn’t as powerful as he presents it to be, and other tech companies have the means to engage in similarly bad behavior. And if the only thing preventing Google — or Amazon, or anyone else — from being actively bad for society is good leadership, we may be in trouble when that leadership changes. It all leads to one conclusion: the best way to guarantee safety is regulation, whether that’s restrictions on how companies collect and use our data or laws mandating transparency about how their algorithms work. These companies aren’t identical, and we should keep those nuances in mind, but the future Chollet fears is more likely to come to pass if we don’t cast a critical eye on all of them.