{"651299":{"#nid":"651299","#data":{"type":"event","title":"Georgia Tech Neuro Seminar Series","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003E\u003Cem\u003E\u0026quot;Can Deep Neural Networks Model the Robustness of Human Object Recognition?\u0026quot;\u003C\/em\u003E\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u003Ca href=\u0022http:\/\/www.psy.vanderbilt.edu\/tonglab\/web\/Home.html\u0022\u003EFrank Tong, Ph.D.\u003C\/a\u003E\u003Cbr \/\u003E\r\nProfessor\u003Cbr \/\u003E\r\nPsychology Department\u003Cbr \/\u003E\r\nVanderbilt University\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003E\u003Cstrong\u003ETong Research\u003C\/strong\u003E\u003C\/em\u003E\u003Cbr \/\u003E\r\nThe goal of Frank Tong\u0026#39;s research is to investigate, characterize, and model the neural mechanisms that mediate human visual perception and cognition. What allows us to detect the presence of a clump of weeds in a lawn, to recognize an animal hiding behind a bush, or to remember the precise hue and texture of an ocean surface during sunset? A core assumption in his work is that early visual representations have a powerful but underappreciated role in higher cognitive operations, and that higher-level mechanisms of attention and working memory serve to modulate processing at early visual sites to select and maintain task-relevant visual information. Characterizing and modeling the interplay between early visual representations and higher-order representations represents a long-term goal of this work. The work relies on behavioral and psychophysical methods, high-resolution fMRI, and advanced computational approaches for both data analysis and modeling. The lab has developed novel methods for decoding feature-selective responses from patterns of fMRI activity in the human visual cortex (Kamitani \u0026amp; Tong, Nature Neuroscience, 2005; Current Biology, 2006; Tong \u0026amp; Pratte, Annual Review of Psychology, 2012), and shown how these approaches can be used to characterize the neural bases of visual working memory (Harrison \u0026amp; Tong, Nature, 2009; Pratte et al., 2014) and object-based attentional selection (Pratte et al., J Neurophysiology, 2013; Cohen \u0026amp; Tong, Cerebral Cortex, 2015). In ongoing work, the Tong lab is developing, training, and testing deep convolutional neural networks as potential models for understanding the neural bases of human visual processing.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"\u0022Can Deep Neural Networks Model the Robustness of Human Object Recognition?\u0022 - Frank Tong, Ph.D. - Vanderbilt University"}],"uid":"27195","created_gmt":"2021-10-01 13:40:25","changed_gmt":"2021-10-01 13:45:01","author":"Colly Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2021-11-15T11:15:00-05:00","event_time_end":"2021-11-15T12:15:00-05:00","event_time_end_last":"2021-11-15T12:15:00-05:00","gmt_time_start":"2021-11-15 16:15:00","gmt_time_end":"2021-11-15 17:15:00","gmt_time_end_last":"2021-11-15 17:15:00","rrule":null,"timezone":"America\/New_York"},"extras":[],"groups":[{"id":"1292","name":"Parker H. Petit Institute for Bioengineering and Bioscience (IBB)"},{"id":"1254","name":"Wallace H. Coulter Dept. of Biomedical Engineering"}],"categories":[],"keywords":[{"id":"187423","name":"go-bio"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"1795","name":"Seminar\/Lecture\/Colloquium"}],"invited_audience":[{"id":"78761","name":"Faculty\/Staff"},{"id":"177814","name":"Postdoc"},{"id":"78771","name":"Public"},{"id":"174045","name":"Graduate students"},{"id":"78751","name":"Undergraduate students"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:terry.kauffman@bme.gatech.edu\u0022\u003ETerry Kauffman\u003C\/a\u003E - event inquiries\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}