{"610957":{"#nid":"610957","#data":{"type":"news","title":"What If Robots Could Learn Skills from Scratch?","body":[{"value":"\u003Cp\u003EAny machine can learn to move with enough engineering, according to \u003Cstrong\u003EKaren Liu, \u003C\/strong\u003Ebut imagine\u003Cstrong\u003E \u003C\/strong\u003Ewhat could happen if machines were able to evolve and learn new motions over time with very little instruction, just like a human child does.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELiu, an associate professor in the \u003Cstrong\u003ESchool of Interactive Computing\u003C\/strong\u003E and member of the \u003Cstrong\u003EMachine Learning Center at Georgia Tech,\u003C\/strong\u003E conducts research on simulating and controlling human and animal movements in the digital world with virtual \u0026ldquo;agents\u0026rdquo; or using actual robots in the lab.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECreating moving agents in a digital landscape has been around for many years but Liu and her team are teaching agents to move by using artificial intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn previous iterations, robots and agents have been taught using reinforcement learning (RL), which requires extensive coding and algorithmic development for each movement, no matter how big or small.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn contrast to the common approach of mimicking motion trajectories, Liu\u0026rsquo;s lab wanted to create a virtual agent that learns how to walk on its own.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy combining RL with deep learning, the recent advancement in deep RL has demonstrated that it is possible to use a \u0026ldquo;minimalist\u0026rdquo; approach to learn locomotion, but the resulting motion appears unnatural.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELiu\u0026rsquo;s team proposed to train the agent using curriculum learning with adjustable physical aid to create more natural animal locomotion using the minimalist learning approach.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECurriculum learning is, as it sounds, very similar to how a person goes through their educational process. An agent is given a simpler task at the beginning of the learning process and once it masters the skill, it is able to progress to the next lesson.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the challenges researchers face is making sure the agent\u0026rsquo;s motion looks natural.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Without motion trajectory to mimic, most locomotion produced by deep RL methods are too energetic or asymmetrical.\u0026rdquo; said Liu.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo help combat these issues, Liu and her team have introduced a virtual spring to assist an agent to provide physical aid during the training process.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor instance, if the agent needs to walk forward, the spring helps to propel it forward. If it is about to fall, the spring pushes it back up. Because the spring is a physical force, its stiffness can easily be adjusted, making the lesson more or less difficult. As the agent learns the skill, the spring is adjusted before eventually being taken out completely.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor Liu, creating generative models for natural animal motion has always been a fascinating research area. \u0026ldquo;We have been trying to mimic the kinematics and the dynamic characteristics of real animal movements. 
For Liu, creating generative models for natural animal motion has always been a fascinating research area. "We have been trying to mimic the kinematics and the dynamic characteristics of real animal movements. Thanks to the recent developments in deep reinforcement learning, for the first time we are able to also mimic 'how' real animals acquire motion skills."

Liu and co-authors Wenhao Yu and Greg Turk recently presented their paper, "Learning Symmetric and Low Energy Locomotion" (https://arxiv.org/abs/1801.08093), at SIGGRAPH 2018 (https://s2018.siggraph.org/attend/vancouver/) in Vancouver, BC, Canada.

Media Contact:
Allie McFadden
Communications Officer
allie.mcfadden@cc.gatech.edu
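For readers who want a concrete sense of what "symmetric and low energy" could mean as a reward signal, the sketch below shows one plausible way to encode the two problems Liu mentions, overly energetic motion and left/right asymmetry, as penalties. It is a hypothetical illustration, not the formulation from the paper; the weights, names, and mirroring scheme are assumptions.

```python
# Rough, hypothetical illustration of reward terms that encourage symmetric,
# low-energy gaits, in the spirit of the paper's title. This is NOT the
# authors' formulation; weights, names, and the mirroring scheme are made up.

import numpy as np


def energy_penalty(joint_torques: np.ndarray) -> float:
    """Penalize effort: large torques make for an 'energetic' looking gait."""
    return float(np.sum(joint_torques ** 2))


def symmetry_penalty(action: np.ndarray, mirrored_action: np.ndarray) -> float:
    """Penalize left/right asymmetry.

    `mirrored_action` is the action the policy outputs for the left/right
    mirrored observation; for a symmetric gait the two should agree once
    mapped back into the original joint ordering.
    """
    return float(np.sum((action - mirrored_action) ** 2))


def reward(forward_velocity: float,
           joint_torques: np.ndarray,
           action: np.ndarray,
           mirrored_action: np.ndarray,
           w_energy: float = 0.001,
           w_symmetry: float = 0.1) -> float:
    """Total reward: move forward, but do it economically and symmetrically."""
    return (forward_velocity
            - w_energy * energy_penalty(joint_torques)
            - w_symmetry * symmetry_penalty(action, mirrored_action))


# Example: at the same speed, a symmetric, low-torque step scores higher
# than a lopsided, high-torque one.
torques = np.array([1.0, 1.0, 1.0, 1.0])
a = np.array([0.3, -0.2, 0.3, -0.2])
print(reward(1.0, torques, a, a))            # symmetric, economical
print(reward(1.0, torques * 5, a, a + 0.5))  # energetic, asymmetric
```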