Over the past decade, deep neural networks have achieved impressive results on a variety of tasks, including image recognition. Despite these advances, the networks are highly complex, which makes it difficult, and sometimes impossible, to interpret what they have learned or to trace the processes behind their predictions. This lack of interpretability makes deep neural networks hard to trust in practice.
from News on Artificial Intelligence and Machine Learning https://ift.tt/3oGdYYb
Concept whitening: A strategy to improve the interpretability of image recognition models
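The teaser above does not describe the mechanics of concept whitening, but the core idea builds on a standard whitening transform: decorrelate a layer's latent activations so each axis carries independent information, then align those axes with human-understandable concepts. Below is a minimal, illustrative sketch of the whitening (decorrelation) step only, using ZCA whitening on a toy activation matrix; `zca_whiten` is a hypothetical helper written for this example, not code from the paper.

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """Whiten rows of X (n_samples x n_features) so features are decorrelated.

    Illustrative only: concept whitening additionally learns a rotation that
    aligns the whitened axes with predefined concepts, which is omitted here.
    """
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = Xc.T @ Xc / (len(Xc) - 1)         # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # symmetric eigendecomposition
    # ZCA matrix: rotate into the eigenbasis, rescale, rotate back
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Xc @ W

rng = np.random.default_rng(0)
# Toy "activations": 500 samples of 4 correlated features
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))
Z = zca_whiten(X)
cov_Z = np.cov(Z, rowvar=False)  # covariance is (numerically) the identity
```

After whitening, the covariance of `Z` is the identity matrix, so no single learned feature is entangled with another; concept whitening exploits this to dedicate individual axes to individual concepts.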