Posted by Brij Bhushan, Friday, 8 September 2017


Three Cornell researchers built a language-model algorithm to study gender bias in sports journalism, and it is apparently capable of flagging inappropriate questions that humans miss. Their research paper was originally published last year and was discussed by the New York Times this week. The algorithm was built specifically to ferret out questions unrelated to the tennis being played. By comparing in-game commentary with post-game questions and noting the differences, the researchers trained it to recognize which questions were more on topic than others. They discovered that women were more likely than men to receive these atypical questions. Several…

This story continues at The Next Web
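The core idea described above, training a language model on in-game commentary and then scoring how "game-typical" a post-game question is, can be sketched with a toy unigram model and perplexity scoring. This is a minimal illustration under assumed data, not the Cornell team's actual model or corpus; the function names and sample sentences are hypothetical:

```python
# Hypothetical sketch: score how "game-typical" a question is by training
# a unigram language model on in-game commentary and measuring the
# perplexity of post-game questions. All data here is toy/assumed.
import math
from collections import Counter

def train_unigram(corpus_tokens):
    """Return add-one-smoothed unigram log-probabilities and total mass."""
    counts = Counter(corpus_tokens)
    vocab = set(corpus_tokens)
    total = len(corpus_tokens) + len(vocab)  # add-one smoothing denominator
    model = {w: math.log((counts[w] + 1) / total) for w in vocab}
    return model, total

def perplexity(model, total, tokens):
    """Lower perplexity -> more typical of the training commentary."""
    unk = math.log(1 / total)  # smoothed probability for unseen words
    logp = sum(model.get(w, unk) for w in tokens)
    return math.exp(-logp / len(tokens))

# Toy in-game commentary (assumed data, not the researchers' corpus).
commentary = ("great serve down the line a forehand winner breaks serve "
              "second set ace on match point").split()
model, total = train_unigram(commentary)

on_topic = "how did your serve hold up in the second set".split()
off_topic = "who designed the dress you wore last night".split()

# The game-related question scores closer to the commentary distribution.
print(perplexity(model, total, on_topic) < perplexity(model, total, off_topic))
# → True
```

A question whose words rarely appear in match commentary earns a higher perplexity, which is the intuition behind flagging atypical questions; the published work used a richer model and real transcripts, but the scoring principle is the same.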
