Posted by Brij Bhushan, Tuesday, 27 February 2018


A team of international researchers recently taught an AI system to justify its reasoning and point to the evidence behind its decisions. The ‘black box’ is becoming transparent, and that’s a big deal. Figuring out why a neural network makes the decisions it does is one of the biggest open problems in artificial intelligence. This so-called black box problem is a major obstacle to trusting AI systems. The team comprised researchers from UC Berkeley, the University of Amsterdam, MPI for Informatics, and Facebook AI Research. The new research builds on the group’s previous work, but this time…
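To make the idea of ‘pointing to evidence’ concrete, here is a minimal sketch of one common explanation technique, gradient-based saliency, written in PyTorch. This is an illustration of the general approach, not the team’s actual method; the model, input, and shapes below are placeholder assumptions.

```python
import torch
import torchvision.models as models

# Placeholder model and input; any image classifier would do.
model = models.resnet18(pretrained=True).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass: get class scores for the input.
logits = model(image)
top_class = logits.argmax(dim=1).item()

# Backpropagate the winning class's score to the input pixels.
logits[0, top_class].backward()

# The gradient magnitude acts as a crude evidence map: large values
# mark the pixels the decision was most sensitive to.
saliency = image.grad.abs().max(dim=1)[0]  # shape: (1, 224, 224)
print(saliency.shape)
```

A saliency map of this kind only shows where a model was looking; the research described above aims further, pairing that kind of visual pointing with a justification of the reasoning itself.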

This story continues at The Next Web
