The risks of attacks that involve poisoning training data for machine learning models

A growing number of studies suggest that machine learning models can leak a considerable amount of the information contained in their training data through their parameters and predictions. As a result, adversaries with only query access to a model can in many cases reconstruct or infer sensitive records from the training dataset, ranging from simple demographic attributes to bank account numbers.
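One well-studied form of such leakage is a membership inference attack: an overfit model tends to be more confident on records it was trained on, so an attacker who can observe prediction probabilities can guess whether a given record was in the training set. The sketch below (not from the article; the dataset, model, and threshold are illustrative assumptions) shows a minimal confidence-threshold version of this attack:

```python
# Illustrative sketch: a confidence-threshold membership inference attack.
# Assumptions: synthetic data, a deliberately overfit random forest, and
# an arbitrary 0.75 confidence threshold chosen for demonstration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Deliberately overfit: fully grown trees tend to memorise training records.
model = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
model.fit(X_train, y_train)

def confidence(clf, data):
    """Max predicted class probability for each record."""
    return clf.predict_proba(data).max(axis=1)

# Attack: guess "member of training set" when confidence exceeds a threshold.
threshold = 0.75
member_rate = (confidence(model, X_train) > threshold).mean()
nonmember_rate = (confidence(model, X_out) > threshold).mean()
print(f"flagged as members: train={member_rate:.2f}, held-out={nonmember_rate:.2f}")
```

On an overfit model the gap between the two rates is what leaks membership; defenses such as regularisation or differentially private training aim to shrink it.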

from News on Artificial Intelligence and Machine Learning https://ift.tt/Ay1mUeb