In recent years, many artificial intelligence (AI) and robotics researchers have been trying to develop systems that can provide explanations for their actions or predictions. The idea behind their work is that as AI systems become more widespread, explaining why they act in particular ways or why they made certain predictions could increase transparency and consequently users' trust in them.
Do explanations for data-based predictions actually increase users' trust in AI?

from News on Artificial Intelligence and Machine Learning https://ift.tt/2GibgqR