End-to-end learning of co-speech gesture generation for humanoid robots

Researchers at the Electronics and Telecommunications Research Institute (ETRI) in South Korea have recently developed a neural network model that generates sequences of co-speech gestures. Their model, trained on 52 hours of TED talks, produced human-like gestures matched to the content of the accompanying speech.
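At a high level, a speech-to-gesture model of this kind maps a sequence of words to a sequence of pose frames. The sketch below is a minimal toy illustration of that idea, assuming an encoder-decoder recurrent architecture with randomly initialised weights; the dimensions, vocabulary, and helper names are invented for illustration and are not the actual ETRI model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and dimensions (all hypothetical, chosen for illustration)
VOCAB = {"<pad>": 0, "hello": 1, "world": 2, "big": 3, "idea": 4}
EMBED_DIM, HIDDEN_DIM, POSE_DIM = 8, 16, 10  # POSE_DIM: joint values per frame

# Randomly initialised parameters; a real model would learn these from data
W_embed = rng.normal(size=(len(VOCAB), EMBED_DIM))
W_enc = rng.normal(size=(EMBED_DIM + HIDDEN_DIM, HIDDEN_DIM)) * 0.1
W_dec = rng.normal(size=(POSE_DIM + HIDDEN_DIM, HIDDEN_DIM)) * 0.1
W_out = rng.normal(size=(HIDDEN_DIM, POSE_DIM)) * 0.1

def encode(words):
    """Run a simple RNN over word embeddings; return the final hidden state."""
    h = np.zeros(HIDDEN_DIM)
    for w in words:
        x = W_embed[VOCAB.get(w, 0)]
        h = np.tanh(np.concatenate([x, h]) @ W_enc)
    return h

def decode(h, n_frames):
    """Autoregressively emit one pose vector per frame, conditioned on h."""
    pose = np.zeros(POSE_DIM)
    frames = []
    for _ in range(n_frames):
        h = np.tanh(np.concatenate([pose, h]) @ W_dec)
        pose = h @ W_out
        frames.append(pose)
    return np.stack(frames)

gestures = decode(encode(["hello", "world"]), n_frames=30)
print(gestures.shape)  # (30, 10): 30 frames of a 10-dimensional pose
```

The resulting array of pose frames would drive a humanoid robot's joints over time; in practice the learned mapping, not random weights, is what makes the gestures match the speech.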

from News on Artificial Intelligence and Machine Learning https://ift.tt/2K7MQxK