I woke up one morning to find that my Twitter account, @GordonSBrooks, had been locked because, in tweets responding to a reprehensible person trolling Buzz Aldrin, I called her (I assume a her from the profile picture, but there's no guarantee on these things) an "attention whore," meaning someone who is essentially selling themselves in exchange for attention.
And Twitter's algorithms locked me out. They didn't lock the account of the person who insulted and trolled a great American hero, whose feed is full of the worst kind of conspiracy tripe, and who accuses the Speaker of the US House of Representatives of being Mafia-connected.
I just deleted my tweets because it's not worth the fight in this case. But at some point we have to refuse to let algorithms set the rules. Because this isn't just a problem with Twitter. This is a problem with our attitudes about artificial intelligence.
Companies like Twitter use AI to save money. I understand that. In Twitter's case they pretty much have no choice. The problem comes in the degree to which we believe that AI is doing a pretty good job of being intelligent, and so we let decisions that really should be left to human beings fall to AI instead, with only a passing nod to human review.
I can envision how an appeal like this gets reviewed. The AI catches the word "whore" without the context of the modifying adjective "attention" and freaks out. The harried human who, at some point, reviews the tweet also sees the word "whore" and doesn't have time to consider the context, because with hundreds of tweets to review by the end of the shift, thinking doesn't really come into the equation. The human has a quota to meet, and Twitter gets to say that the tweets were "reviewed" by a real human being. And we all rail against Twitter's rules without realizing that the whole human-review portion of this is a sham.
The real danger of artificial intelligence is not that it's going to take away our jobs, although it will do that in a lot of cases. The danger is that we will, given our tendency to see patterns of intelligence where intelligence doesn't really exist, assign to AI tasks for which it is wholly unsuited.
In the world of social media, this means tainted discourse, which is not a trivial problem. But what happens when we let AI make decisions about larger matters, matters that truly involve life-and-death decisions?
Before we deploy AI, we need to make sure that it really is as smart as we think it is.