Carnegie Mellon is training computers to identify sarcasm on Twitter

    Twitter contains multitudes. On any given day you’ll find earnest and passionate rumination, breaking news and analysis, silly hashtag games, horribly abusive idiots spewing hate and much, much more. Another constant across the platform is sarcastic reactions to all manner of events big and small. Indeed, when you’re fully ensconced in the echo chamber that is Twitter, it can sometimes be hard to tell what’s real and what’s not. Fortunately, some researchers at Carnegie Mellon University have our backs: they’re training computers to recognize sarcasm on Twitter, and they’ve had some solid success so far.

    Authors David Bamman and Noah A. Smith from CMU’s School of Computer Science noted that while most computational approaches to detecting sarcasm simply analyze the linguistics, sarcasm is all about context — and including that context on Twitter has made their detection methods much more reliable. As they write in their research paper, “the relationship between author and audience is central for understanding the sarcasm phenomenon.” But things get trickier on social media, because the notion of “audience” becomes much more complicated: on social media, “a user’s ‘audience’ is often unknown, underspecified or ‘collapsed’, making it difficult to fully establish the shared ground required for sarcasm to be detected, and understood, by its intended (or imagined) audience.”

    To test for sarcasm properly, the researchers built a model around several sets of signals. Each tweet was analyzed not only for its content, but also for details from the author's profile, the author's historical tweets and the makeup of that author's audience. It's a complicated bit of modeling, but combining the tweet, its author, its audience and its response pushed the researchers' sarcasm detector to 85 percent accuracy, significantly higher than the 75 percent it reached when analyzing just the content of a tweet without those additional factors.
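    To make the idea concrete, here is a minimal, purely illustrative sketch of how contextual signals might be combined with a tweet's text in one classifier. This is not Bamman and Smith's actual model: the toy tweets, the contextual feature values and the choice of a simple logistic regression are all assumptions made for the example.

    ```python
    # Illustrative sketch only: NOT the CMU authors' model. It shows the
    # general idea of combining a tweet's text with contextual features
    # (author history, audience) in a single classifier. All data is made up.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from scipy.sparse import hstack, csr_matrix

    # Toy training tweets and a sarcasm label (1 = sarcastic, 0 = not).
    tweets = [
        "oh great, another Monday",           # sarcastic
        "wonderful news, congratulations!",   # sincere
        "love being stuck in traffic",        # sarcastic
        "the launch went really well today",  # sincere
    ]
    # Hypothetical contextual features, e.g. how often the author has been
    # sarcastic before and how familiar the audience is with the author.
    context = [[0.9, 0.2], [0.1, 0.8], [0.8, 0.3], [0.2, 0.7]]
    labels = [1, 0, 1, 0]

    # Text features alone (roughly the "linguistics only" baseline).
    vectorizer = TfidfVectorizer()
    text_features = vectorizer.fit_transform(tweets)

    # Combine text features with the contextual features and train.
    features = hstack([text_features, csr_matrix(context)])
    clf = LogisticRegression().fit(features, labels)

    # Score a new tweet together with its (made-up) context.
    new_text = vectorizer.transform(["just love waiting on hold for an hour"])
    new_features = hstack([new_text, csr_matrix([[0.85, 0.25]])])
    print(clf.predict(new_features))  # e.g. [1] -> flagged as sarcastic
    ```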
