Tear down the new institutions

They’re just like the old ones.

Ben Werdmuller
Published in Ethical Tech
Oct 10, 2017 · 8 min read

We used to talk about gatekeepers.

For a while, the internet was all about smashing gatekeepers in order to give everyone a voice.

Record labels were gatekeepers: they decided who could release an album. Book publishers were gatekeepers: they decided who could publish. Before the web transformed the way we communicate, media companies seemed like gatekeepers to culture itself.

The web brought about a genuine revolution of thought and discourse. Once I’ve finished writing this piece, I’ll hit Publish and you’ll be able to read it from anywhere in the world — all for free. If you want to, you can reply below, without any kind of human filter deciding whether your comment is worth reading. All of this seems commonplace now, but when blogging was new, it was scandalous. Let anyone publish?! What are their credentials?! How will you find anything good to read?!

Of course, the record labels and the book publishers were mostly white men, publishing things that white men wanted to publish; the gatekeepers were holding back diverse communities that had never really been heard.

Unfortunately, that struggle continues: those open comments became places where white men left torrents of abuse for these newly-empowered voices that were threatening their status quo.

There were two schools of thought. In the first, utopian view, the internet would disrupt the entire concept of gatekeepers and culture would become a democratic, peer-to-peer mesh, as it always should have been. Nobody would be in a position of determining what could be popular and who could be heard ever again. Everybody would own the discourse. Everything would be free.

In the other school of thought, we just needed new gatekeepers: a technological vanguard that was better suited to harnessing the internet than the previous generation. This elite would disrupt incumbent industries and replace them, establishing new seats of power and making billions of dollars in the process.

This is the school of thought that won.

Relevant connections

Fourteen years ago, I co-wrote a paper that imagined a world where people would get their information through social networks. Their "relevant connections" would act as a kind of filter through which knowledge would flow. You would connect to people you found interesting, and you would learn through what they posted and re-shared.

It’s hard to remember that blogging was a revolution: suddenly anyone could publish. In 2003, platforms like Blogger and Movable Type were enabling voices who could never have found audiences before. Livejournal had invented an intimate feed of your friends and communities, its granular access controls making it safer to speak frankly online. Companies like Technorati were taking the pulse of this new social realm.

What if you could marry these things together? It seemed obvious that this was going to be the future of information.

Those two schools of thought took off in different directions. In the one I was part of, we immediately set to work building distributed, open source technologies that could be deployed anywhere. The network software I co-founded, Elgg, was used by non-profits and universities to share resources. People like Brad Fitzpatrick, who had previously started Livejournal, worked on the hard problem of decentralized identity, building technology like OpenID that could allow anyone to log into anything, without any single party owning the platform.

Meanwhile: Facebook.

Facebook is what you get when you don’t have any ethical quandaries about building a platform where engagement is powered by addiction, proprietary algorithms and rules decide what you should and shouldn’t read, and advertising is allowed to target users to a disturbing degree. The torrents of abuse that women and underrepresented communities endure on the internet are empowered by rules written by rich, white men, who can’t quite imagine such a different set of lived experiences:

Facebook’s rules constitute a legal world of their own. They stand in sharp contrast to the United States’ First Amendment protections of free speech, which courts have interpreted to allow exactly the sort of speech and writing censored by the company’s hate speech algorithm. But they also differ — for example, in permitting postings that deny the Holocaust — from more restrictive European standards.

At the same time, it can be beautiful. Its centralization allows you to find people you haven’t seen in decades; should you want that, it can lead to emotional reunions. This weekend, I connected with family members over the platform to remember a loved one, and I arranged to meet up with someone I went to school with in Oxford twenty-eight years ago. Maybe we would have achieved that on the decentralized web; maybe we wouldn’t. Facebook gives us that.

Regardless, Facebook does enable abuse, and it can swing elections. In 2010, Facebook ads were used by Republicans opposing a Florida ballot proposition. The opposition failed, but when polling firms dug into whether the ads had been an efficient use of funds, they were surprised by what they found:

[…] What the polling firms found was that heavy Web users who were on Facebook were 10 points more likely to vote no on 8 than Democrats (who may or may not have seen the ads) were. “The people who self identify as Democrats who should have voted our way were 10 percent less likely to vote for it than people who were heavy web users on Facebook,” said Koster. “And people who were heavy web users on Facebook, including Republicans and independents, outperformed Democrats on a Democratic issue because of the online ads.” Not only that, but heavy Web users were often able to repeat back the messaging slogans they’d seen verbatim.

In 2010, Facebook had 400 million active users. Today, it has 2 billion.

Swinging elections with ad targeting

Earlier this year, Scout reported that this technique was used at scale in the 2016 US election.

By leveraging automated emotional manipulation alongside swarms of bots, Facebook dark posts, A/B testing, and fake news networks, a company called Cambridge Analytica has activated an invisible machine that preys on the personalities of individual voters to create large shifts in public opinion. Many of these technologies have been used individually to some effect before, but together they make up a nearly impenetrable voter manipulation machine that is quickly becoming the new deciding factor in elections around the world.

This summer, McClatchy reported that Congressional investigators are checking into whether Jared Kushner helped supply information that Russian companies used to target ads in key swing states. As the Financial Times reported:

The Russian-bought Facebook ads cost $100,000 and reached 10m Americans, targeting swing states with propaganda on everything from race to guns to gay rights. And the threat has not gone away: last month Russian internet trolls sought to stir up passions in a debate over American football players kneeling in protest during the national anthem, according to James Lankford, a senator from Oklahoma.

I don’t believe these ads solely swung the election toward Trump. There were deep-seated social issues and widening income inequality, and the DNC didn’t help their case with the contents of their leaked emails (although there’s no evidence that they went so far as to rig the primaries against Sanders). Nonetheless, they were a significant factor, and the fact that a foreign power can place ads to swing an election is clearly an enormous security problem.

The US, of course, often involves itself in foreign elections:

For decades, American intelligence agencies have historically used clandestine tactics to put leaders into office who are favorable to U.S. national interests. This practice of meddling dates back to the early days of the CIA and was seen as a necessary strategy to contain the Soviet Union during the Cold War.

This strategy might not seem like a problem until it happens to you. Similarly, allowing companies to place ads in order to swing opinion towards their brand might not seem like a problem until the brand is a political party and the company is connected to a foreign government. The problem is not just that Russian companies placed the ads (although, to reiterate, this is an enormous problem); it’s that this kind of ad placement could swing public opinion at all, for anybody.

Rules need to be written so that even the worst actor does the right thing. And social media — the operating system for international popular politics in the age of the internet — needs better rules.

Better rules

There’s a term for rules that apply to everybody in software: source code.

In light of the 2016 election, and the upcoming 2018 primaries, we need to revisit the utopian vision of the web. But whereas the 2003 vision of the web was enabled solely by technology, our reimagined version needs to incorporate the humanities at a deep level.

If we concentrate solely on the technical ability to create decentralized systems, we run the risk of building platforms that only technical people will use, or that repeat the mistakes of the centralized platforms when it comes to abuse and the ability to be gamed. Fighting abuse and deeply understanding how culture and other human factors bias algorithms will be key.

It’s a tricky path to walk. Mastodon has become the most popular of the decentralized social platforms, but in part for very uncomfortable reasons: its large Japanese userbase is largely into child erotica. However, we need to reframe our thinking about these kinds of standards-based projects. Yes, you can publish horrific content on the web, and send it via email. That doesn’t inherently mean the web and email are bad. Nobody owns them. You can despise child erotica while not despising the web. And with enough social insight, you can build community standards into decentralized software.

The new generation of social platforms has become a set of gatekeepers that dictate exactly what we can learn about, and how. Most importantly, they have decided to make money by letting third parties influence what we learn. It has become painfully clear that this is in opposition to democracy itself.

There has never been a clearer argument for building new kinds of platforms that have all the virality and human beauty of social media, but protect society from being gamed. Technologists, artists, journalists, academics, open source activists and sociologists need to come together to build something new — or better yet, lots of new things.

As we’re fond of saying at my day job: nothing will work, but everything might. Now is the time for lots and lots of experiments.

Facebook was a great proof of concept, and a great warning. Let’s move on.

This is a personal blog post, which doesn’t necessarily reflect the views of my employer.

But: I’m looking for mission-driven startups helping to build a more informed, inclusive and empathetic society. And if you’re helping to solve the problems discussed in this post, so much the better. Apply today.
