{"636275":{"#nid":"636275","#data":{"type":"news","title":"Robots Gain Ability to Master Object Manipulation with Context-Aware Technique","body":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers have developed one of the most robust research methods currently available to allow robots to correctly pick up common objects based on how they should be used.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhereas humans might touch a hot pan on a stove once and never forget the lesson, training robots to apply such knowledge universally across different situations is far more complex.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new technique, called CAGE, or Context-Aware Grasping Engine, takes into consideration a range of factors \u0026ndash; such as the task the object will be used for, whether the object is full or empty, what it\u0026rsquo;s made of, and its shape \u0026ndash; so that a robot can learn the right way to grasp various objects in a given context. For example, it allows a robot to learn not to hold a hot cup of tea by its opening, or to handle a cooking pot differently based on whether it just left a stovetop or a cabinet.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In order for robots to effectively perform object manipulation, a broad sense of contexts, including object and task constraints, needs to be accounted for,\u0026rdquo; said\u0026nbsp;\u003Cstrong\u003EWeiyu Liu\u003C\/strong\u003E, lead researcher on CAGE and Ph.D. 
student in robotics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing CAGE, a robot is able to apply what it has learned to objects it\u0026rsquo;s never seen. For example, if trained to grasp a spatula by the handle to make a scooping motion, the robot is able to generalize this knowledge and know to grasp a mug by the handle and use it to scoop \u0026mdash; if that was the programmed task \u0026mdash; even if the robot has never encountered a mug before.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research team, from the Robot Autonomy and Interactive Learning (RAIL) lab at Georgia Tech, validated their approach against three existing methods for teaching robots to handle objects. The team used a novel dataset consisting of 14,000 grasps for 44 objects, 7 tasks, and 6 different object states (e.g., objects containing solids, liquids, or nothing).\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECAGE outperformed the other methods in a simulation by statistically significant margins, according to the researchers, highlighting the model\u0026rsquo;s ability to collectively reason about contextual information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECAGE had an 86 percent success rate when averaged across tests looking at how well it identified context-aware grasps and whether the model could generalize to new objects a robot had not seen previously. Among the existing methods, the highest success rate averaged 69 percent.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELiu said that the team\u0026rsquo;s model can rank grasp \u0026ldquo;candidates\u0026rdquo; for various contexts, ensuring that more suitable candidates are ranked higher than less suitable ones in a given context. 
So a robot might, for example, learn to hand a sharp metal knife to a person handle-first, but hand over a plastic knife in any orientation due to its relative safety.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA final experiment evaluated the effectiveness of CAGE using a Fetch robot equipped with a camera, a movable arm, and a parallel-jaw gripper. The robot performed almost perfectly in judging how to grasp objects for several distinct tasks, including scooping, pouring, lifting, and handing over an object. In every case where no suitable grasp existed for the given situation, the robot correctly made no attempt.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work, developed by Liu,\u0026nbsp;\u003Cstrong\u003EAngel Daruna\u003C\/strong\u003E, and\u0026nbsp;\u003Cstrong\u003ESonia Chernova\u003C\/strong\u003E, was accepted to the\u0026nbsp;International Conference on Robotics and Automation, taking place virtually this June. The paper is titled\u0026nbsp;\u003Ca href=\u0022http:\/\/rail.gatech.edu\/assets\/files\/Liu_ICRA20.pdf\u0022 rel=\u0022noopener noreferrer\u0022 target=\u0022_blank\u0022\u003ECAGE: Context-Aware Grasping Engine\u003C\/a\u003E\u0026nbsp;and the research data is publicly available at\u0026nbsp;\u003Ca href=\u0022https:\/\/github.com\/wliu88\/rail_semantic_grasping\u0022\u003Ehttps:\/\/github.com\/wliu88\/rail_semantic_grasping\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EThis work is supported in part by NSF IIS 1564080, NSF GRFP DGE-1650044, and ONR N000141612835. 
Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.\u003C\/em\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers have developed one of the most robust research methods currently available to allow robots to correctly pick up common objects based on how they should be used.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Institute of Technology researchers have developed one of the most robust research methods currently available to allow robots to correctly pick up common objects based on how they should be used."}],"uid":"27592","created_gmt":"2020-06-16 21:01:58","changed_gmt":"2020-06-16 21:15:10","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-06-16T00:00:00-04:00","iso_date":"2020-06-16T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"636277":{"id":"636277","type":"image","title":"One Step Closer to Domestic Robots | ICRA 2020","body":null,"created":"1592341649","gmt_created":"2020-06-16 21:07:29","changed":"1592341649","gmt_changed":"2020-06-16 21:07:29","alt":"","file":{"fid":"242101","name":"robot coffee graphic_mercury.png","image_path":"\/sites\/default\/files\/images\/robot%20coffee%20graphic_mercury.png","image_full_path":"http:\/\/www.tlwarc.hg.gatech.edu\/\/sites\/default\/files\/images\/robot%20coffee%20graphic_mercury.png","mime":"image\/png","size":430675,"path_740":"http:\/\/www.tlwarc.hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/robot%20coffee%20graphic_mercury.png?itok=v1ushW4X"}},"636276":{"id":"636276","type":"image","title":"Sonia Chernova with robot arm","body":null,"created":"1592341598","gmt_created":"2020-06-16 
21:06:38","changed":"1592341598","gmt_changed":"2020-06-16 21:06:38","alt":"","file":{"fid":"242100","name":"sonia chernova.jpg","image_path":"\/sites\/default\/files\/images\/sonia%20chernova.jpg","image_full_path":"http:\/\/www.tlwarc.hg.gatech.edu\/\/sites\/default\/files\/images\/sonia%20chernova.jpg","mime":"image\/jpeg","size":230142,"path_740":"http:\/\/www.tlwarc.hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/sonia%20chernova.jpg?itok=Jy3khAuK"}}},"media_ids":["636277","636276"],"related_links":[{"url":"https:\/\/www.youtube.com\/watch?v=EnHUHQv8hr0\u0026feature=emb_logo","title":"CAGE: Context-Aware Grasping Engine"}],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu?subject=CAGE%20algorithm%3B%20ICRA%202020\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\nGVU Center and College of Computing\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}