{"453021":{"#nid":"453021","#data":{"type":"news","title":"Researchers Develop Deep-Learning Method to Predict Daily Activities","body":[{"value":"\u003Cp\u003EResearchers from the School of Interactive Computing and the Institute for Robotics and Intelligent Machines developed a new method that teaches computers to \u201csee\u201d and understand what humans do in a typical day.\u003C\/p\u003E\u003Cp\u003EThe technique gathered more than 40,000 pictures taken every 30 to 60 seconds over a six-month period by a wearable camera and predicted with 83 percent accuracy what activity that person was doing. Researchers taught the computer to categorize images across 19 activity classes. The test subject wearing the camera could review and annotate the photos at the end of each day (deleting any as needed for privacy) to ensure that they were correctly categorized.\u003C\/p\u003E\u003Cp\u003E\u201cIt was surprising how the method\u2019s ability to correctly classify images could be generalized to another person after just two more days of annotation,\u201d said Steven Hickson, a Ph.D. candidate in Computer Science and a lead researcher on the project.\u003C\/p\u003E\u003Cp\u003E\u201cThis work is about developing a better way to understand people\u0027s activities, and build systems that can recognize people\u0027s activities at a finely-grained level of detail,\u201d said Edison Thomaz, co-author and graduate research assistant in the School of Interactive Computing. \u201cActivity tracking devices like the Fitbit can tell how many steps you take per day, but imagine being able to track all of your activities \u2013 not just physical activities like walking and running. This work is moving toward full activity intelligence. 
At a technical level, we are showing that it\u0027s becoming possible for computer vision techniques alone to be used for this.\u201d\u003C\/p\u003E\u003Cp\u003EThe group believes it has gathered the largest annotated dataset of first-person images to demonstrate that deep learning can understand human behavior and the habits of a specific person.\u003C\/p\u003E\u003Cp\u003EDaniel Castro, a Ph.D. candidate in Computer Science and a lead researcher on the project, helped present the method earlier this month at UBICOMP 2015 in Osaka, Japan. He says the reaction from conference-goers was positive.\u003C\/p\u003E\u003Cp\u003E\u201cPeople liked that we had a method that combines time and images,\u201d Castro says. \u201cTime (of activity) can be especially important for some activity classes. This system learned how relevant images were because of people\u2019s schedules. What does it think the image is showing? It sees both time and image probabilities and makes a better prediction.\u201d\u003C\/p\u003E\u003Cp\u003EThe ability to literally see and recognize human activities has implications in a number of areas \u2013 from developing improved personal assistant applications like Siri to helping researchers explain links between health and behavior, Thomaz says.\u003C\/p\u003E\u003Cp\u003ECastro and Hickson believe that within the next decade we will have ubiquitous devices that can improve our personal choices throughout the day.\u003C\/p\u003E\u003Cp\u003E\u201cImagine if a device could learn what I would be doing next \u2013 ideally predict it \u2013 and recommend an alternative?\u201d Castro says. 
\u201cOnce it builds your own schedule by knowing what you are doing, it might tell you there is a traffic delay and you should leave sooner or take a different route.\u201d\u003C\/p\u003E\u003Cp\u003EThe research, \u201cPredicting Daily Activities From Egocentric Images Using Deep Learning,\u201d can be found at \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/cpl\/projects\/dailyactivities\/\u0022\u003Ehttp:\/\/www.cc.gatech.edu\/cpl\/projects\/dailyactivities\/\u003C\/a\u003E. Authors are Castro, Hickson, Vinay Bettadapura, and Thomaz, along with School of Interactive Computing Professors Gregory Abowd, Henrik Christensen, and Irfan Essa.\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Researchers from the School of Interactive Computing and the Institute for Robotics and Intelligent Machines developed a new method that teaches computers to \u201csee\u201d and understand what humans do in a typical day."}],"uid":"28124","created_gmt":"2015-09-28 13:07:15","changed_gmt":"2016-10-08 03:19:40","author":"Tyler Sharp","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2015-09-28T00:00:00-04:00","iso_date":"2015-09-28T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"453011":{"id":"453011","type":"image","title":"UBICOMP 2015","body":null,"created":"1449256297","gmt_created":"2015-12-04 19:11:37","changed":"1475895197","gmt_changed":"2016-10-08 02:53:17","alt":"UBICOMP 2015","file":{"fid":"203401","name":"paper_figure.png","image_path":"\/sites\/default\/files\/images\/paper_figure_0.png","image_full_path":"http:\/\/www.tlwarc.hg.gatech.edu\/\/sites\/default\/files\/images\/paper_figure_0.png","mime":"image\/png","size":208651,"path_740":"http:\/\/www.tlwarc.hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/paper_figure_0.png?itok=Q1xBm3zB"}}},"media_ids":["453011"],"groups":[{"id":"47223","name":"College of 
Computing"}],"categories":[],"keywords":[{"id":"171488","name":"School of Interactive Computing; Institute for Robotics and Intelligent Machines; IRIM; Edison Thomaz; Daniel Castro; Irfan Essa; Vinay Bettadapura; Henrik Christensen; Gregory Abowd; Steven Hickson"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:tlabouff@cc.gatech.edu\u0022\u003ETara La Bouff\u003C\/a\u003E\u003Cbr \/\u003ENews and Media Relations Manager\u003Cbr \/\u003E404.894.7253\u003C\/p\u003E","format":"limited_html"}],"email":["tlabouff@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}}}