{"620364":{"#nid":"620364","#data":{"type":"news","title":"People May Be Able to Find Images on a Computer Based Solely on Their Eye Movements","body":[{"value":"\u003Cp\u003EWhen humans try to recall images from memory, they involuntarily move their eyes in a pattern that is similar to when they are actually looking at the image.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EJames Hays\u003C\/strong\u003E, an associate professor in the \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E and the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech\u003C\/a\u003E, and researchers from TU Berlin and Universit\u0026auml;t Regensburg, are looking at how these patterns, known as gaze patterns, can be used to retrieve images from memory so that it\u0026rsquo;s easier to find that same image \u0026ndash; like an adorable dog photo \u0026ndash; stashed away in the digital cloud.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThrough a controlled lab experiment and a real-world scenario, Hays and his co-authors have developed a matching technique using machine learning to help computers understand what image someone is thinking of, and accurately retrieve it from a computer folder \u0026ndash; based solely on eye movements.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing eye-tracking software in the lab, the researchers recorded the eye movements of 30 participants as they looked at 100 different indoor and outdoor images, ranging from picturesque lighthouse scenes to cozy living rooms. 
Participants were then asked to look at a blank screen and recall any of the 100 images they just saw.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers also tested a more realistic scenario by setting up a mock museum with 20 posters of various sizes and orientations spread throughout the \u0026ldquo;museum.\u0026rdquo; They outfitted each participant with a headset fitted with a \u003Ca href=\u0022https:\/\/pupil-labs.com\/pupil\/\u0022\u003EPupil mobile eye tracker\u003C\/a\u003E, which includes two eye cameras and one front-facing camera. Participants were asked to walk around the museum and look at all of the images, taking however long they liked and in whatever order they preferred. They spent anywhere from a few seconds to over a minute looking at each poster.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter looking at all of the images, participants were asked to look at a blank whiteboard and recall as many of their favorite images as possible, in any order. Participants remembered between 5 and 10 of the 20 poster images.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe results from both experiments indicated that the gaze patterns of people looking at a photograph contain a unique signature that computers can use to accurately determine the corresponding photo.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing the data collected from the experiments, the researchers created spatial histograms, or heat maps, that could be analyzed by their new machine learning technique to determine which photo someone was thinking about. Hays and his co-authors also used a \u003Ca href=\u0022https:\/\/en.wikipedia.org\/wiki\/Convolutional_neural_network\u0022\u003Econvolutional neural network (CNN)\u003C\/a\u003E to analyze the 2,700 collected heat maps.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The ability to retrieve images using eye movements would be beneficial to those who are disabled or unable to search for images using their hands or voice,\u0026rdquo; said Hays. 
\u0026ldquo;Also, wearable technology is a huge industry right now, and we believe that tracking motion with the eyes would be a natural by-product of that boom.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn Hays\u0026rsquo; previous research, \u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1801.02753\u0022\u003ESketchyGAN\u003C\/a\u003E, people are able to draw (rather than type) what they are looking for to get image search results. But if images are mislabeled or people can\u0026rsquo;t draw well, search results are not useful. Other attempts at image retrieval have included various types of brain scans, but those are often too expensive and impractical for everyday use.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile this new research may prove helpful, it is not without limitations, the researchers note. The scalability of the model depends in part on image content and how many images are in the database. The more images the database holds, the more likely it is that several different photos will produce extremely similar gaze patterns.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne proposed workaround is asking people to make more extensive eye movements than they normally would. At the moment, participants are not asked to do anything more intentional or out of the norm when looking at the images. The researchers think that putting a small amount of effort back on the user would help the computer find the correct image.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnother foreseen problem is working with people\u0026rsquo;s memories. As memories grow weaker with time or age, it will be harder to get a crisp gaze pattern and accurately return the right image. 
The team plans to explore these potential issues by looking into the influence of memory decay on image retrieval from long-term memory.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe authors are also looking into combining gaze tracking with a speech interface, as that could be a rich resource for information. No matter which direction they go, the team believes that eye-movement image retrieval is not only possible but also a significant next step toward improving human-computer interaction.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne might even say that before long, people will be able to find that favorite dog photo in the blink of an eye.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFurther details on this approach to image retrieval can be found in the paper, \u003Ca href=\u0022http:\/\/cybertron.cg.tu-berlin.de\/xiwang\/files\/mi.pdf\u0022\u003E\u0026ldquo;The Mental Image Revealed by Gaze Tracking,\u0026rdquo;\u003C\/a\u003E which has been accepted at the ACM Conference on Human Factors in Computing Systems (CHI 2019), May 4-9.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"What if we could find images on our computer just by tracking our eye movements? 
ML@GT assistant professor James Hays explores this idea in new research that will be presented next month at CHI 2019."}],"uid":"34773","created_gmt":"2019-04-12 14:42:21","changed_gmt":"2019-04-12 20:51:03","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-12T00:00:00-04:00","iso_date":"2019-04-12T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620361":{"id":"620361","type":"image","title":"Machine Learning at Georgia Tech and School of Interactive Computing associate professor James Hays collaborated with researchers from TU Berlin and Universit\u00e4t Regensburg to create new eye-tracking software.","body":null,"created":"1555079754","gmt_created":"2019-04-12 14:35:54","changed":"1555102299","gmt_changed":"2019-04-12 20:51:39","alt":"","file":{"fid":"236216","name":"Screen Shot 2019-04-12 at 10.33.09 AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-04-12%20at%2010.33.09%20AM.png","image_full_path":"http:\/\/www.tlwarc.hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-04-12%20at%2010.33.09%20AM.png","mime":"image\/png","size":951664,"path_740":"http:\/\/www.tlwarc.hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-04-12%20at%2010.33.09%20AM.png?itok=DnahJC51"}},"620363":{"id":"620363","type":"image","title":"In one experiment, participants were outfitted with a Pupil mobile eye tracker and asked to observe art in a fake museum.","body":null,"created":"1555079859","gmt_created":"2019-04-12 14:37:39","changed":"1555079859","gmt_changed":"2019-04-12 14:37:39","alt":"","file":{"fid":"236217","name":"Screen Shot 2019-04-12 at 10.33.34 
AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-04-12%20at%2010.33.34%20AM.png","image_full_path":"http:\/\/www.tlwarc.hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-04-12%20at%2010.33.34%20AM.png","mime":"image\/png","size":1804726,"path_740":"http:\/\/www.tlwarc.hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-04-12%20at%2010.33.34%20AM.png?itok=Lo8NF-00"}}},"media_ids":["620361","620363"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"134","name":"Student and Faculty"},{"id":"135","name":"Research"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}