{"610978":{"#nid":"610978","#data":{"type":"news","title":"Georgia Tech to Present Nine Poster Papers at ECCV 2018","body":[{"value":"\u003Cp\u003ENext week, a group of Georgia Tech students and faculty will travel to Munich, Germany, to attend the \u003Ca href=\u0022https:\/\/eccv2018.org\/\u0022\u003EEuropean Conference on Computer Vision (ECCV) 2018\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMore than 700 organizations from industry, academia, and government are represented at the 2018 conference, which is held every two years. Georgia Tech will present nine papers during poster sessions at the premier event, and it is among the top 3 percent of participating institutions based on accepted research.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlong with presenting several papers, Georgia Tech faculty members also helped organize ECCV 2018. \u003Cstrong\u003EDevi Parikh\u003C\/strong\u003E, \u003Cstrong\u003EIrfan Essa\u003C\/strong\u003E, \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E, and \u003Cstrong\u003EFuxin Li\u003C\/strong\u003E served as area chairs for the event.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;ECCV is an exciting conference to participate in. There\u0026rsquo;s a lot of good work that gets presented from top computer vision labs in the world, and it is great that Georgia Tech is one of them! 
It is a great venue to share our latest ideas and hear what others in the research community are thinking about these days,\u0026rdquo; said \u003Cstrong\u003EDevi Parikh\u003C\/strong\u003E, assistant professor in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech organized the first \u003Ca href=\u0022https:\/\/visualdialog.org\/challenge\/2018\u0022\u003EVisual Dialog Challenge\u003C\/a\u003E, designed to find methods for artificial intelligence agents to hold a meaningful dialog with humans in natural, conversational language about visual content. Winners will be announced at the conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference takes place Sept. 8 through 14 in the heart of Munich at the Gasteig Cultural Center.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo see an interactive visualization of the entire ECCV 2018 program, click \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/ECCV2018-MainProgram\/Dashboard1?:embed=y\u0026amp;:display_count=yes\u0026amp;:showVizHome=no\u0022\u003Ehere\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor an interactive visualization of ECCV 2018 by institutions with accepted research, click \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/ECCV2018-Top3\/Dashboard2?:embed=y\u0026amp;:display_count=yes\u0026amp;:showVizHome=no\u0022\u003Ehere\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAn interactive visualization of ECCV 2018 by people and institutions can be viewed \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/ECCV2018-MainProgram\/Dashboard1?:embed=y\u0026amp;:display_count=yes\u0026amp;:showVizHome=no\u0022\u003Ehere\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBelow are the titles of Georgia Tech\u0026rsquo;s 
research being presented this week.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGeorgia Tech at ECCV 2018\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1804.04259.pdf\u0022\u003ELearning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Zhaoyang Lv*, Georgia Tech; Kihwan Kim, NVIDIA; Alejandro Troccoli, NVIDIA; Deqing Sun, NVIDIA; Jan Kautz, NVIDIA; James Rehg, Georgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ERead our blog post about this paper on the ML@GT blog \u003Ca href=\u0022https:\/\/mlatgt.blog\/2018\/09\/06\/learning-rigidity-and-scene-flow-estimation\/\u0022\u003Ehere\u003C\/a\u003E.\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/web.engr.oregonstate.edu\/~lif\/1925.pdf\u0022\u003EMulti-object Tracking with Neural Gating Using Bilinear LSTMs\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Chanho Kim*, Georgia Tech; Fuxin Li, Oregon State University; James Rehg, Georgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/papers\/Yin_Li_In_the_Eye_ECCV_2018_paper.pdf\u0022\u003EIn the Eye of Beholder: Joint Learning of Gaze and Actions in First Person Vision\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Yin Li*, CMU; Miao Liu, Georgia Tech; James Rehg, Georgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1808.02861.pdf\u0022\u003EChoose Your Neuron: Incorporating Domain Knowledge through Neuron Importance\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Ramprasaath Ramasamy Selvaraju*, Georgia Tech; Prithvijit Chattopadhyay, Georgia Institute of 
Technology; Mohamed Elhoseiny, Facebook; Tilak Sharma, Facebook; Dhruv Batra, Georgia Tech \u0026amp; Facebook AI Research; Devi Parikh, Georgia Tech \u0026amp; Facebook AI Research; Stefan Lee, Georgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ERead our blog post about this paper on the ML@GT blog \u003Ca href=\u0022https:\/\/mlatgt.blog\/2018\/09\/05\/choose-your-neuron-incorporating-domain-knowledge-through-neuron-importance\/\u0022\u003Ehere\u003C\/a\u003E.\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022http:\/\/users.ece.cmu.edu\/~skottur\/papers\/corefnmn_eccv18.pdf\u0022\u003EVisual Coreference Resolution in Visual Dialog using Neural Module Networks\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Satwik Kottur*, Carnegie Mellon University; Jos\u0026eacute; M. F. Moura, Carnegie Mellon University; Devi Parikh, Georgia Tech \u0026amp; Facebook AI Research; Dhruv Batra, Georgia Tech \u0026amp; Facebook AI Research; Marcus Rohrbach, Facebook AI Research\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1808.00191.pdf\u0022\u003EGraph R-CNN for Scene Graph Generation\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Jianwei Yang*, Georgia Institute of Technology; Jiasen Lu, Georgia Institute of Technology; Stefan Lee, Georgia Institute of Technology; Dhruv Batra, Georgia Tech \u0026amp; Facebook AI Research; Devi Parikh, Georgia Tech \u0026amp; Facebook AI Research\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ERead our blog post about this paper on the ML@GT blog \u003Ca href=\u0022https:\/\/mlatgt.blog\/2018\/09\/04\/what-is-graph-r-cnn\/\u0022\u003Ehere\u003C\/a\u003E.\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022http:\/\/wyliu.com\/papers\/LiuECCV18.pdf\u0022\u003ESEAL: A Framework Towards Simultaneous Edge Alignment and 
Learning\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Zhiding Yu*, NVIDIA; Weiyang Liu, Georgia Tech; Yang Zou, Carnegie Mellon University; Chen Feng, Mitsubishi Electric Research Laboratories (MERL); Srikumar Ramalingam, University of Utah; B. V. K. Vijaya Kumar, Carnegie Mellon University; Jan Kautz, NVIDIA\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022http:\/\/www.eye.gatech.edu\/swapnet\/paper.pdf\u0022\u003ESwapNet: Image Based Garment Transfer\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Amit Raj, Georgia Tech; Patsorn Sangkloy, Georgia Tech; Huiwen Chang, Princeton; James Hays, Georgia Tech; Duygu Ceylan, Adobe; and Jingwan Lu, Adobe\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/papers\/Eunji_Chong_Connecting_Gaze_Scene_ECCV_2018_paper.pdf\u0022\u003E\u003Cstrong\u003EConnecting Gaze, Scene, and Attention: Generalized Attention Estimation via Joint Modeling of Gaze and Scene Saliency\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Eunji Chong, Nataniel Ruiz, Yongxin Wang, Yun Zhang, Agata Rozga, James M. 
Rehg, Georgia Tech\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech faculty and students will travel to Munich, Germany to present their research at the European Conference on Computer Vision (ECCV)."}],"uid":"34773","created_gmt":"2018-09-06 16:16:33","changed_gmt":"2018-09-07 17:56:07","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-09-06T00:00:00-04:00","iso_date":"2018-09-06T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"610984":{"id":"610984","type":"image","title":"ECCV 2018 will be held in Munich, Germany","body":null,"created":"1536253387","gmt_created":"2018-09-06 17:03:07","changed":"1536253387","gmt_changed":"2018-09-06 17:03:07","alt":"","file":{"fid":"232621","name":"Munich_skyline_1-1 copy.jpg","image_path":"\/sites\/default\/files\/images\/Munich_skyline_1-1%20copy.jpg","image_full_path":"http:\/\/www.tlwarc.hg.gatech.edu\/\/sites\/default\/files\/images\/Munich_skyline_1-1%20copy.jpg","mime":"image\/jpeg","size":667391,"path_740":"http:\/\/www.tlwarc.hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Munich_skyline_1-1%20copy.jpg?itok=J4QOyPKU"}}},"media_ids":["610984"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications 
Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}