A neural network learns when it should not be trusted

Increasingly, artificial intelligence systems known as deep learning neural networks are used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis. These networks are good at recognizing patterns in large, complex datasets to aid in decision-making. But how do we know whether their predictions are correct? Alexander Amini and his colleagues at MIT and Harvard University wanted to find out.
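The article gives no implementation details, but the general idea of a model flagging its own low-confidence predictions can be illustrated with a simple bootstrap ensemble in plain NumPy. This is a generic uncertainty sketch, not the method Amini and colleagues developed: where members of the ensemble disagree, the prediction is less trustworthy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3x + noise, observed only on the interval [0, 1].
x = rng.uniform(0.0, 1.0, size=200)
y = 3.0 * x + rng.normal(0.0, 0.1, size=200)

def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b, returned as (a, b)."""
    A = np.column_stack([xs, np.ones_like(xs)])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

# Ensemble: each member is trained on a different bootstrap resample.
members = []
for _ in range(20):
    idx = rng.integers(0, len(x), size=len(x))
    members.append(fit_linear(x[idx], y[idx]))

def predict_with_uncertainty(x_new):
    """Mean prediction and ensemble spread (std) at x_new."""
    preds = np.array([a * x_new + b for a, b in members])
    return preds.mean(), preds.std()

mean_in, std_in = predict_with_uncertainty(0.5)    # inside the training range
mean_out, std_out = predict_with_uncertainty(10.0) # far outside it

# Member disagreement grows away from the training data, so the
# spread acts as a signal that the prediction should not be trusted.
print(f"x=0.5:  pred={mean_in:.2f}, spread={std_in:.4f}")
print(f"x=10.0: pred={mean_out:.2f}, spread={std_out:.4f}")
```

An ensemble needs many forward passes per prediction; part of the appeal of the MIT/Harvard approach is estimating uncertainty in a single pass, which matters for time-critical settings like autonomous driving.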

from News on Artificial Intelligence and Machine Learning https://ift.tt/3lMKarh