Picture this: you’re in a busy restaurant having a quiet meal with a friend. Suddenly, one of the patrons, obviously drunk, starts getting loud and obnoxious, going from table to table insulting the other diners. Within a minute or two, all of the other customers are very uncomfortable and wishing the management would throw the bum out. That’d be the sensible thing to do, wouldn’t it? But the management is actually powerless to do that. Instead they ask everyone to leave. Then they shut down the restaurant until they can figure out a way to prevent other random loudmouth drunks from ruining their business.
Well, Microsoft just had a similar experience on Twitter. In 2014, the company launched a learning “chatbot” driven by artificial intelligence on two popular social media platforms in China. The chatbot, named Xiaoice, has been a huge success; tens of millions of users enjoy interacting with “her.”
But recently, when Microsoft launched the same kind of chatbot on Twitter, this one named Tay, things went disastrously off the rails within a matter of hours. As you probably know, there are certain Twitter users whose favorite activity is sowing chaos and disruption on the platform. When word quickly spread through their grapevine that Tay was programmed to learn through its interactions, they bombarded its account with sexist, racist and anti-Semitic tweets. The result? Very quickly, Tay itself started tweeting highly offensive hate speech. Helpless to “throw the bums out,” Microsoft quickly issued an apology and took Tay offline while its engineers figured out how to prevent a recurrence.
Microsoft’s experience with Tay shows, once again, that technology can be too easily co-opted to serve as a force multiplier for the offensive views of a small handful of idiots. And as a recent NPR story pointed out, some of Google’s algorithms have learned socially discredited biases, even without a concerted effort to corrupt them.
Should we simply learn to expect these kinds of incidents and chalk them up to “algorithms being algorithms”? Why is this a big deal?
I could argue that allowing algorithms to reflect and especially to magnify intolerant biases runs counter to our values. And while I believe that, I don’t even think I have to go there to argue that this is a problem worth trying to solve. From a strictly pragmatic point of view, biased algorithms are bad for business. Who wants to risk offending and alienating large segments of their market? Sure, Google and Microsoft are big enough to survive embarrassing incidents like these, but many businesses probably aren’t.
Algorithms can’t just be programmed to learn from data. They must be programmed to discern which data is worth learning from and which data should be discounted.
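The discernment step described above can be sketched as a simple pre-filter that scores incoming data before a learning system ingests it. This is only an illustrative sketch; the blocklist, scoring heuristic, and threshold below are hypothetical placeholders, not the actual safeguards used by Microsoft or anyone else:

```python
# A minimal sketch of gating training data before a chatbot learns from it.
# BLOCKLIST and the 0.1 threshold are illustrative assumptions for this example.

BLOCKLIST = {"badword", "slur"}  # hypothetical offensive terms

def toxicity_score(text: str) -> float:
    """Return the fraction of words in the text that appear on the blocklist."""
    words = text.lower().split()
    if not words:
        return 0.0
    flagged = sum(1 for w in words if w in BLOCKLIST)
    return flagged / len(words)

def filter_training_batch(messages: list[str], threshold: float = 0.1) -> list[str]:
    """Keep only messages whose toxicity score falls below the threshold."""
    return [m for m in messages if toxicity_score(m) < threshold]
```

Real systems would use far more sophisticated classifiers than a word list, but the structure is the point: the learning step only ever sees data that has passed a judgment about whether it is worth learning from.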
I can see some people wringing their hands already, arguing that making value judgments over which data to include and which to exclude amounts to some kind of insidious social engineering. But we’re not talking about limiting free speech in public spaces. We’re talking about setting rules in online business environments. We’re talking about keeping the loudmouth drunks from ruining everyone else’s dinner and threatening the restaurant’s livelihood.
It’s a tough challenge, because businesses understandably don’t want to spend a lot of time and money on problems caused by tiny subsets of their audiences. But when we consider that those few audience members have an outsized ability to disrupt and alienate audiences many times their own number—and, indeed, that they revel in that power—it should tilt the cost/benefit analysis. As much as we can, we need to edge out these edge cases.
Businesses have a financial interest and responsibility in making their online environments welcoming to the widest possible potential market. A restaurant can hire a bouncer to throw out an obnoxious drunk and prevent him from returning. It’s time our algorithms got better bouncers.
This article was written by H.O. Maycotte from Forbes and was legally licensed through the NewsCred publisher network.