Researchers at the Electronics and Telecommunications Research Institute (ETRI) in South Korea have developed a neural network model that generates sequences of co-speech gestures. Trained end-to-end on 52 hours of TED talks, the model produced human-like gestures that matched the speech content.
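At a high level, such a model maps a sequence of speech words to a sequence of body poses with an encoder-decoder network. The following is a minimal, hedged sketch of that idea, not the authors' implementation: the layer sizes, the vanilla-RNN cells, and the untrained random weights are all hypothetical stand-ins chosen for illustration.

```python
import numpy as np

# Hypothetical sizes; the article does not give the real model's dimensions.
EMB = 16    # word-embedding size
HID = 32    # recurrent hidden size
POSE = 10   # pose vector per frame (e.g., upper-body joint coordinates)

rng = np.random.default_rng(0)

def rnn_step(x, h, Wx, Wh, b):
    """One vanilla-RNN step: tanh(Wx @ x + Wh @ h + b)."""
    return np.tanh(Wx @ x + Wh @ h + b)

def encode(words, Wx, Wh, b):
    """Fold a sequence of word embeddings into one context vector."""
    h = np.zeros(HID)
    for x in words:
        h = rnn_step(x, h, Wx, Wh, b)
    return h

def decode(h, n_frames, Wh, Wo, bo):
    """Emit one pose vector per output frame from the context vector."""
    poses = []
    for _ in range(n_frames):
        h = np.tanh(Wh @ h)      # evolve the decoder hidden state
        poses.append(Wo @ h + bo)  # linear read-out to joint coordinates
    return np.stack(poses)

# Random (untrained) weights stand in for parameters learned from TED talks.
Wx_e = rng.normal(0, 0.1, (HID, EMB))
Wh_e = rng.normal(0, 0.1, (HID, HID))
b_e = np.zeros(HID)
Wh_d = rng.normal(0, 0.1, (HID, HID))
Wo = rng.normal(0, 0.1, (POSE, HID))
bo = np.zeros(POSE)

# A 5-word utterance as random embeddings; 20 gesture frames out.
utterance = rng.normal(0, 1, (5, EMB))
context = encode(utterance, Wx_e, Wh_e, b_e)
gesture = decode(context, n_frames=20, Wh=Wh_d, Wo=Wo, bo=bo)
print(gesture.shape)  # (20, 10): one pose vector per frame
```

In a trained system the weights would be fit so that decoded pose sequences match human gestures recorded alongside the speech, and the vanilla-RNN cells would typically be replaced by gated recurrent units.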
from News on Artificial Intelligence and Machine Learning https://ift.tt/2K7MQxK
End-to-end learning of co-speech gesture generation for humanoid robots