Researchers at IBM Research U.K., the U.S. Military Academy and Cardiff University have recently proposed a technique called Local Interpretable Model-Agnostic Explanations (LIME) for better understanding the conclusions reached by machine learning algorithms. Their paper, published in the SPIE Digital Library, could inform the development of artificial intelligence (AI) tools that provide thorough explanations of how they reached a particular outcome or conclusion.
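The article describes LIME only at a high level. As a rough illustration (not the authors' code), the core idea is to perturb an instance, query the black-box model on the perturbations, and fit a simple surrogate model weighted by proximity to the instance. The sketch below assumes a hypothetical `black_box_predict` function standing in for any opaque classifier:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical black-box model: class-1 probability driven mostly by feature 0.
def black_box_predict(X):
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 0.5 * X[:, 1])))

rng = np.random.default_rng(0)
x0 = np.array([0.2, -0.1])  # the instance whose prediction we want to explain

# 1. Perturb the instance with Gaussian noise around it.
samples = x0 + rng.normal(scale=0.5, size=(500, 2))

# 2. Weight each perturbed sample by its proximity to x0 (an RBF kernel).
dists = np.linalg.norm(samples - x0, axis=1)
weights = np.exp(-(dists ** 2) / (2 * 0.5 ** 2))

# 3. Fit a weighted linear surrogate to the black box's outputs.
surrogate = Ridge(alpha=1.0)
surrogate.fit(samples, black_box_predict(samples), sample_weight=weights)

# The surrogate's coefficients serve as the local explanation: here,
# feature 0 should dominate, matching the black box's true behaviour.
print(surrogate.coef_)
```

The surrogate is faithful only near `x0`; that locality, together with not needing access to the black box's internals, is what makes the approach model-agnostic.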
from News on Artificial Intelligence and Machine Learning http://bit.ly/2HWWr9W
An approach to enhance machine learning explanations