{"667812":{"#nid":"667812","#data":{"type":"news","title":"Researchers Use Novel Approach to Teach Robot to Navigate Over Obstacles","body":[{"value":"\u003Cp\u003EQuadrupedal robots may be able to step directly over obstacles in their paths thanks to the efforts of a trio of Georgia Tech Ph.D. students.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen it comes to robotic locomotion and navigation, Naoki Yokoyama says most four-legged robots are trained to regain their footing if an obstacle causes them to stumble. Working toward a larger effort to develop a housekeeping robot, Yokoyama and his collaborators \u2014 Simar Kareer and Joanne Truong \u2014 set out to train their robot to walk over clutter it might encounter in a home.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe main motivation of the project is getting low-level control over the legs of the robot that also incorporates visual input,\u201d said Yokoyama, a Ph.D. student within the School of Electrical and Computer Engineering. \u201cWe envisioned a controller that could be deployed in an indoor setting with a lot of clutter, such as shoes or toys on the ground of a messy home. 
Whereas blind locomotion controllers tend to be more reactive \u2014 if they step on something, they\u2019ll make sure they don\u2019t fall over \u2014 we wanted ours to use visual input to avoid stepping on the obstacle altogether.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo achieve their goal, the researchers took a novel training approach of fusing a high-level visual navigation policy with a visual locomotion policy.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn a\u0026nbsp;\u003Ca href=\u0022http:\/\/www.joannetruong.com\/projects\/vinl.html\u0022\u003Epaper\u003C\/a\u003E\u0026nbsp;advised by Interactive Computing Associate Professor Dhruv Batra and Assistant Professor Sehoon Ha, Kareer, Yokoyama, and Truong show that their two-policy approach enables successful robotic navigation over obstacles in simulation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThey call their approach ViNL (Visual Navigation and Locomotion), and so far, it has guided robots through novel simulated cluttered environments with a 72.6% success rate. The team will present its paper, ViNL: Visual Navigation and Locomotion Over Obstacles, at the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.icra2023.org\/\u0022\u003EIEEE International Conference on Robotics and Automation\u003C\/a\u003E, which is being held May 29-June 2 in London.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBoth policies are model-free \u2014 the robot learns on its own in simulation and doesn\u2019t mimic any pre-existing behavioral patterns \u2014 and can be combined without any additional co-training.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThis work uniquely combines separate locomotion and navigation policies in a zero-shot manner,\u201d said Kareer, who along with Truong is a Ph.D. student within the School of Interactive Computing. \u201cIf we come up with an improved navigation policy, we can just take that, do no extra work, and deploy that to our robot. That\u2019s a scalable approach. 
You can plug and play these things together with very little fine-tuning. That\u2019s powerful.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe visual navigation policy teaches the robot through goal-driven rewards. It gives the robot an objective of navigating from one place to another while avoiding any obstacles. The robot receives a score based on how successfully it completes its task. If it stumbles over an obstacle, it is penalized.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWe gave it an environment that had very few obstacles, and then slightly more and slightly more,\u201d Kareer said. \u201cThis gradual approach is helpful to its learning. When you just toss it into an environment with a million obstacles, it fails a lot. But if you show it one or two obstacles and say, \u2018try to learn these,\u2019 it\u2019s much more stable.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe locomotion policy teaches the robot how to use its limbs to step over an object, including how high it should lift its legs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBecause a real-world quadruped will only be able to see what its front camera sees, obstacles will disappear from its view as it gets closer to them. The team accounted for this by incorporating memory and spatial awareness into their network architecture to teach the robot exactly when and where to step over the obstacle.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe robot has a rich understanding of where its entire limb is relative to the obstacles,\u201d Kareer said. \u201cWhen you see it walking over obstacles, it\u2019s not just deciding to put its foot down on spots where there are no obstacles. 
It\u2019s remembering where all the obstacles are relative to its body and keeping its limbs out of the way until it\u2019s passed over them.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd if an obstacle is too tall to step over, the robot can also choose to go around it.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWe saw that it was very good at navigating, and even in cases where it might take a wrong turn, it knows that it can backtrack and go back where it came from,\u201d Truong said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFinally, the group taught the robot specifically what types of objects it should be looking to step over in a house, such as toys, and ones that it should go around, such as a chair. This also helps the robot to know how high it will need to lift its legs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWhat\u2019s important for navigation is to be able to have the experience of navigating in real-world houses, so we train our navigation policy with photo-realistic scans of apartments,\u201d Truong said. \u201cWe used scans of over 1,000 apartments for training and evaluated the robot in scenarios it had never seen before. We zero-shot deploy it into a new environment, so you can take a new robot, put it in a new house, and it will be able to do this as well.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers agree their paper is multi-faceted and has numerous implications that fall outside its focus but are nonetheless important. Their work could lead to robots\u0026nbsp;\u003Ca href=\u0022https:\/\/ai.googleblog.com\/2023\/05\/indoorsim-to-outdoorreal-learning-to.html\u0022\u003Enavigating openly in the outdoors\u003C\/a\u003E, selectively picking paths based on the user\u2019s preference to avoid muddy ground or rocky terrain.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cNormally, it matters much less how you get from Point A to Point B,\u201d Truong said. \u201cYou just need to know that Point B is valid. 
With overcoming obstacles, not only do Point A and Point B need to be valid, but how you get from Point A to Point B also matters.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team\u2019s paper also won a best paper award at the Learning for Agile Robotics Workshop at the 2022 Conference on Robot Learning in December.\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers are using a new approach to train their robot to walk over clutter it might encounter in a home.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers are using a new approach to train their robot to walk over clutter it might encounter in a home."}],"uid":"32045","created_gmt":"2023-05-18 16:58:30","changed_gmt":"2023-05-18 19:19:51","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-05-18T00:00:00-04:00","iso_date":"2023-05-18T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"670836":{"id":"670836","type":"image","title":"208A9510 copy.jpg","body":null,"created":"1684429431","gmt_created":"2023-05-18 17:03:51","changed":"1684429431","gmt_changed":"2023-05-18 17:03:51","alt":"From left to right, Simar Kareer, Joanne Truong, and Naoki Yokoyama work together on developing a quadrupedal robot that can navigate over obstacles. 
(Photos by Kevin Beasley\/College of Computing)","file":{"fid":"253773","name":"208A9510 copy.jpg","image_path":"\/sites\/default\/files\/2023\/05\/18\/208A9510%20copy.jpg","image_full_path":"http:\/\/www.tlwarc.hg.gatech.edu\/\/sites\/default\/files\/2023\/05\/18\/208A9510%20copy.jpg","mime":"image\/jpeg","size":363067,"path_740":"http:\/\/www.tlwarc.hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/05\/18\/208A9510%20copy.jpg?itok=3HH-uczy"}}},"media_ids":["670836"],"groups":[{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"8862","name":"Student Research"}],"keywords":[{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications Officer I\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003Enathan.deen@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}