
PC Pro – Computing in the Real World



Posted on January 22nd, 2014 by Barry Collins

Sorry, Stan Collymore, you can’t beat Twitter racists with an algorithm

[Image: Collymore BBC report]

Twitter is, once again, taking a mauling in the mainstream media for failing to tackle abuse. I’ve just watched ex-footballer Stan Collymore on the BBC Breakfast sofa, describing how he received racist abuse and death threats for daring to suggest a Liverpool player dived for a penalty. Earlier this week, Olympic medallist Beth Tweddle took some appalling, misogynistic abuse in a live Twitter Q&A about women in sport. Twitter’s ability to amplify the opinions of the dregs of our society remains undiminished.

Twitter, Collymore and others argue, is not doing enough to tackle the abusers. I wholeheartedly agree. What I don’t agree with is Collymore’s assertion that tackling racist comments is a simple matter of tapping out a few lines of code:

[Image: Collymore's tweet]

Collymore’s a footballer, not a computer scientist, so his faith in the ability of algorithms to filter out abuse is understandable, if misplaced. No “simple” script or algorithm could accurately discern the context of tweets; it’s a brutally complex task to make computers interpret human language. The continued existence of the Loebner Prize – an annual competition which will be scrapped the moment the judges cannot distinguish a computer from a real human in a Turing test that includes deciphering and understanding text – is proof of that.

Several of the abusive tweets fired at Collymore contained the word “nigger”, for example. You might think it reasonable for Twitter to simply block any tweet that contains such an offensive word, but such a policy would also ensnare entirely benign tweets such as this, which is just one of several non-abusive tweets I found within seconds of tapping “nigger” into Twitter’s search engine:

[Image: non-abusive tweet containing the word]

The word “yid” is used as a form of abuse against Jewish people, but as the Jewish writer and comedian David Baddiel explained on last night’s Newsnight, in reference to the Nicolas Anelka case, the word has also been “reclaimed” by Tottenham Hotspur supporters, in much the same way “nigger” has been by black comedians such as Chris Rock and Reginald D Hunter.

“Spurs fans are completely correct to say that they think they do it [use the word 'yid'] in a different way to the way that Chelsea fans do it,” Baddiel explained. “That is absolutely right. All that has to go into the mix when you’re trying to get to a place, at the end of the day, where anti-Semitism isn’t on the terraces any more. It is complicated, it is nuanced.”

It’s even more complicated when you’re dealing with millions of messages every minute. There is simply no conceivable way that an algorithm could, in real-time or otherwise, accurately discern the racist intent of a tweet.
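To make the difficulty concrete, here is a minimal, hypothetical sketch – in Python, with a word list and example tweets of my own invention, not real tweets – of the kind of keyword blocklist that “a few lines of code” would amount to:

```python
# A sketch of the "simple algorithm" Collymore imagines: flag any
# tweet containing a blocklisted word. Word list and tweets are
# illustrative only.

BLOCKLIST = {"yid"}  # one of the contested words discussed above

def is_abusive(tweet: str) -> bool:
    """Flag a tweet if any word in it appears on the blocklist."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return not BLOCKLIST.isdisjoint(words)

# An abusive use and a reclaimed, self-referential use look
# identical to the filter -- it has no notion of speaker or intent.
print(is_abusive("You yid, get off Twitter"))    # True (correctly flagged)
print(is_abusive("Proud to be a Yid Army fan"))  # True (false positive)
print(is_abusive("Y1d scum"))                    # False (trivially evaded)
```

The filter flags the reclaimed usage just as readily as the abuse, while a trivial misspelling slips straight past it – exactly the false positives and false negatives that make this approach unworkable at Twitter’s scale.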

Human intervention

That’s not to say Twitter is powerless to prevent such abuse. Collymore claims that, six weeks after reporting a previous incident of racist abuse to the police, Twitter has yet to provide details of the account holder to police. And many of the tweets that Collymore claims to have reported to the site remain online, days and weeks after they were published.

The journalist Caroline Criado-Perez, who was subjected to vile threats for doing nothing more provocative than campaigning for Jane Austen to appear on a banknote, has also claimed Twitter is too slow to respond to reports of abuse. It took a fellow journalist – not the police or Twitter – to identify the two offenders, who were eventually prosecuted.

I shudder to think how many reports of abuse Twitter receives each day. I suspect it’s tens of thousands. But Twitter’s not a poor company, nor a fledgling start-up any more. It patently needs to hire more staff to deal with abuse, because – unfortunately – this isn’t a job it can outsource to machines.


Posted in: Newsdesk



19 Responses to “ Sorry, Stan Collymore, you can’t beat Twitter racists with an algorithm ”

  1. Sinudeity Says:
    January 22nd, 2014 at 1:47 pm

    No, people just need to grow a pair. You need to accept the fact that there will be anonymous kiddies parked behind their PCs, trolling random people. Just learn to deal with it. Learn not to take anonymous people hiding behind the safety of their PCs seriously.

  2. Terry Says:
    January 22nd, 2014 at 2:48 pm

    Where does this ridiculous idea come from that it is the job of the site to ‘prevent’ or ‘tackle’ this “abuse”? Twitter is, as far as I can gather, a blank wall, offering the opportunity for people to express their thoughts. If those thoughts are unpalatable or even illegal then that is entirely an issue with the writer.

    If I receive a nasty letter through the post, I don’t complain to Basildon Bond and ask them what they intend to do to stop it happening again. Nor do I blame Microsoft if I receive a nasty message in my hotmail.

    Last time I looked it wasn’t actually compulsory for anyone to have a Twitter account (I don’t). The solution for these celebs is simple. Log off and delete and retreat back into the fantasy world they occupied before where everybody thinks them to be wonderful.

  3. Jerry Says:
    January 22nd, 2014 at 6:01 pm


    While I agree that it shouldn’t be mandatory for any website to police its content, when it comes to Twitter, doing so is obviously in their own self-interest.

    If celebrities were to do as you suggest and delete their accounts, then so would a huge number of other users who would have no reason to stay. It would be death by atrophy for the site.

    The solution isn’t impossibly complex algorithms though, as the author states. The obvious difference between Twitter and any site which is relatively free of this abuse is anonymity.

  4. tech3475 Says:
    January 22nd, 2014 at 6:07 pm

    This doesn’t surprise me; even the government thinks a ‘simple algorithm’ can solve ‘all the problems’ on the internet.

  5. Tim Says:
    January 22nd, 2014 at 6:55 pm

    Twitter are being slow because they don’t want to spend a fortune tracking down silly teenagers for no good reason. They probably think the police would be better off catching real criminals. In a country full of drunks and guns a fat kid typing in his backroom is pretty low on the priority list.

  6. Chris Says:
    January 22nd, 2014 at 8:47 pm

    I’ve got a suggestion for Twitter: ban any account that is judged to have crossed the line, block the IP for repeat offences with new accounts, and have other members of the family affected by the block see a notice on the site saying “Your IP has been blocked because of the following comments…”

    A little bit of parenting would put an end to the majority of the nonsense.

  7. adolfobama Says:
    January 23rd, 2014 at 6:00 am

    So, what are the chances the same ‘hidden people’, not that daft frontwoman M.P. Perry, who ultimately pulled the levers that pushed through the ‘protect the kids’ filters will dream up new internet controls? It’s a dead certainty in my opinion. Porn, trolls and the internet have all been going strong for ages, but we are suddenly seeing all these changes being introduced now. If an agenda wasn’t being pushed, then it wouldn’t be all over the MSM. Hmm…

  8. Rob Says:
    January 23rd, 2014 at 7:47 am

    If this was a complaint from an ordinary Joe it would not be of any interest to anyone. Why do people with TV profiles think they are more precious than the rest of us? Grow up, SC!

  9. Dan Says:
    January 23rd, 2014 at 8:29 am

    I deal with offensive people every day just walking the street; can the police filter out the morons who aren’t on the Internet first?

  10. Bill Maslen Says:
    January 23rd, 2014 at 10:26 am

    This is precisely why Stephen Fry ‘retired’ from Twitter, isn’t it? Although it goes against the grain, I have to agree with those who argue that if you don’t like it, don’t use it. But it’s sad to witness the unexpectedly effective role of social media in exposing the noxious, grimy underbelly of human culture – those aspects of human nature that drive one to conclude that we’re innately nasty rather than innately nice.

  11. PM Says:
    January 23rd, 2014 at 10:38 am

    What age is Stan Collymore? He sounds like a whining little child who needs to man the f*ck up. Or even better Stan, how about doing something more useful than wasting your time on something so mundane and juvenile as Twatter?

  12. It'sMe Says:
    January 23rd, 2014 at 12:03 pm

    Just wondering if a few simple algorithms could’ve stopped him kicking the shhh out of his girlfriend?

  13. Phil Lee Says:
    January 23rd, 2014 at 3:53 pm

    Twitter refuse to provide the police with account details of the criminals, citing US laws on freedom of speech.
    Well, if they want to do business anywhere other than the US, they need to learn to abide by the laws of the countries they do business in, and that includes not impeding a police investigation into death threats.
    The simple answer is to firewall twitter at the transatlantic link level, until they get their act together, because at the moment they are not only facilitating crime, they are shielding the criminals.
    I suggest warrants should be issued for the arrest of all members of the twitter board of directors, and the reciprocal arrangements with the US to be invoked.
    After all, we know that if someone here breaks one of their laws, the US do exactly that in a heartbeat.

  14. Dar Says:
    January 23rd, 2014 at 5:51 pm

    @It’sMe ROFL, Ulrika found it “ironic” how he was now claiming to be the victim.
    Sorry Stan, if you don’t like the program, switch channels! lol

  15. Gimboid Says:
    January 27th, 2014 at 11:30 pm

    Social media is drivel, what’s the surprise ?

  16. Rob Says:
    January 29th, 2014 at 6:29 pm

    @Chris “block the IP for repeat offences with new accounts”

    Bad idea for a few reasons: For most users, their IP address is leased rather than “owned”, and can change frequently or just be re-allocated to someone else. Sometimes, also, many users share the same IP address (not just family members) – they would all be blocked, even if they didn’t know the offender. And anyway, it’s quite easy to get a new IP address – just go down to your local McDonalds and use their free WiFi, for instance – so this would be unlikely to stop anyone who was determined to cause offense.

  17. James Says:
    January 30th, 2014 at 11:36 am

    @Rob “Bad idea for a few reasons” – not necessarily. ISPs own groups of IP addresses – and if one were blocked – complaints would force the ISP to find and deal with the culprit. Although for this to happen would require the notice of the offense to be date-stamped and any block to expire after a time limit. Much the same would apply to local WiFi, as users are usually logged.

  18. MonoTache Says:
    February 5th, 2014 at 8:04 pm

    Let’s play hypotheticals here. If a site becomes responsible for policing its own content, and can via “new legislation” be “shut down”, what happens if someone attacks the site by posting offensive messages? How easy would it be to take down a site with an Alf Garnett spam bot?

  19. Nick Says:
    February 9th, 2014 at 12:27 pm

    If Twitter used a phone/SMS verification for user accounts I’m guessing a lot of the anonymous idiots wouldn’t post if they thought it could be linked back to their home or mobile phone… (Granted, it still wouldn’t stop a determined idiot!)

