Human-Robot Interaction Based on Machine Learning
In this project, we learn hierarchical knowledge of different human interactions from demonstration videos using MCMC-based inference, and generate the corresponding motions for a robot (Baxter).
For details of the project, please refer to the paper: Tianmin Shu*, Xiaofeng Gao, Michael S. Ryoo, Song-Chun Zhu. "Learning Social Affordance Grammar from Videos: Transferring Human Interactions to Human-Robot Interactions," published in the Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA). As the second author, I focused on programming, simulation, dataset collection, and testing our model on Baxter.
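To give a flavor of the MCMC-based learning mentioned above, here is a minimal, generic Metropolis-Hastings sketch. It is purely illustrative and not the paper's actual grammar-learning procedure: the toy target `log_post` (a score peaked at structure size 3) and the proposal over sizes 1..10 are stand-ins for a real grammar structure score and structure proposals.

```python
import math
import random

def metropolis_hastings(log_post, propose, init, n_iters=5000, seed=0):
    """Generic Metropolis-Hastings sampler with a symmetric proposal."""
    rng = random.Random(seed)
    x = init
    lp = log_post(x)
    samples = []
    for _ in range(n_iters):
        cand = propose(x, rng)
        lp_cand = log_post(cand)
        # Accept with probability min(1, p(cand) / p(x)), computed in log space.
        if math.log(rng.random() + 1e-300) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Toy target: discrete score over "grammar sizes" 1..10, peaked at 3
# (a hypothetical stand-in for a structure posterior).
def log_post(k):
    return -abs(k - 3)

def propose(k, rng):
    # Random walk on the integers, clamped to [1, 10] (symmetric in the interior).
    return min(10, max(1, k + rng.choice([-1, 1])))

samples = metropolis_hastings(log_post, propose, init=8)
# After burn-in, the chain concentrates around k = 3.
```

In the actual work, the sampled object would be a grammar structure rather than a single integer, and the score would combine data likelihood with a prior over structures; the accept/reject loop itself is the part this sketch illustrates.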