How tech design is always political

Social media companies have made many mistakes over the past 15 years. What if they’re repeated in the so-called AI revolution?

Facebook has a long-maligned yet still active feature called “People You May Know.” It scours the network’s data troves, picks out the profiles of likely acquaintances, and suggests that you “friend” them. But not everyone you know is a friend.

Anthropologist Dragana Kaurin told me this week about a strange encounter she had with it some years back.

“I opened Facebook and I saw a face and a name I recognized. It was my first grade teacher,” she told me. Kaurin is Bosnian and fled Sarajevo as a child, at the start of the war and genocide that took roughly 100,000 lives between 1992 and 1995. One of Kaurin’s last memories of school life in Sarajevo was of that very same teacher separating children in the classroom on the basis of their ethnicity, as if to foreshadow the ethnic cleansing campaign that soon followed.

“It was widely rumored that our teacher took up arms and shot at civilians, and secondly, that she had died during the war,” she said. “So it was like seeing a ghost.” The teacher, now at retirement age, had a profile that showed her membership in a number of ethno-nationalist groups on Facebook.

Kaurin spent the rest of that day feeling stunned, motionless. “I couldn’t function,” she said.

The people who designed the feature probably didn’t anticipate that it would have such effects. But the “People You May Know” feature is still there today, even after more than a decade of reporting by journalists like The New York Times’ Kashmir Hill on the harms it can inflict: Facebook has suggested that women “friend” their stalkers, that sex workers “friend” their clients, and that patients of psychiatrists “friend” one another.

From her desk in lower Manhattan, Kaurin now runs Localization Lab, a nonprofit organization that works with underrepresented communities to make technology accessible through collaborative design and translation. She sees the “People You May Know” story as an archetypal example of a technology designed with little input from beyond the gleaming Silicon Valley offices in which it was conceived.

“Design is always political,” Kaurin told me. “It enacts underlying policies, biases and exclusion. Who gets to make decisions? How are decisions made? Is there space for iterations?” And then, of course, there’s the money. When a feature helps drive growth on a social media platform, it usually sticks around.

This isn’t a new story. But it is top of mind for me these days because of the emerging consensus that many of the same design mistakes that social media companies have made over the past 15 years will be repeated in the so-called “AI revolution.” And given its opacity, its ubiquity and its ability to manufacture a false sense of social trust, artificial intelligence could bring about far worse harms than anything we’ve seen from social media over the past decade. Should we worry?

“Absolutely,” said Kaurin. And it’s happening at a far bigger scale and a far faster pace, she pointed out.

Cybersecurity guru Bruce Schneier and other prominent thinkers have argued that governments should institute “public AI” models that could function as a counterweight to corporate, profit-driven AI. Some states are already trying this, including China, the U.K. and Singapore. I asked Kaurin and her colleague Chido Musodza whether they thought state-run AI models might be better equipped to represent the interests of more diverse sets of users than the ones built in Silicon Valley.

Both researchers wondered who would actually be building the technology and who would use it. “What is the state’s agenda?” Kaurin asked. “How does that state treat minority communities? How do users feel about the state?”

Musodza, who joined our conversation from Harare, Zimbabwe, considered the idea in the southern African context: “When you look at how some national broadcasters have an editorial policy with a political slant aligned towards the government of the day, it’s likely that AI will be aligned towards the same political slant as well,” she said.

She’s got a point. Researchers testing Singapore’s model found that when asked questions about history and politics, the AI tended to offer answers that cast the state in a favorable light.

“I think it would be naive for us to say that even though it’s public AI that it will be built without bias,” said Musodza. “It’s always going to have the bias of whoever designs it.”

Musodza said that for her, the question is: “Which of the evils are we going to pick, if we’re going to use the AI?” That led us to consider that a third way might be possible, depending on a person’s circumstances: to simply leave AI alone.

This piece was originally published as an edition of the weekly Authoritarian Tech newsletter.