It turns out that even language models "think" they are biased. Prompted on the question, ChatGPT responded: "Yes, language models can have biases, because the training data reflects the biases present in society from which that data was collected. For example, gender and racial biases are prevalent in many real-world datasets, and if a language model is trained on that, it can perpetuate and amplify these biases in its predictions." A well-known but dangerous problem.
from News on Artificial Intelligence and Machine Learning https://ift.tt/J1W7DPM
Large language models are biased. Can logic help save them?