<node id="610978">
  <nid>610978</nid>
  <type>news</type>
  <uid>
    <user id="34773"><![CDATA[34773]]></user>
  </uid>
  <created>1536250593</created>
  <changed>1536342967</changed>
  <title><![CDATA[Georgia Tech to Present Nine Poster Papers at ECCV 2018]]></title>
  <body><![CDATA[<p>Next week, a group of Georgia Tech students and faculty will travel to Munich, Germany to attend the <a href="https://eccv2018.org/">European Conference on Computer Vision (ECCV) 2018</a>.</p>

<p>More than 700 organizations from industry, academia, and government are represented at the 2018 conference, which is held every two years. Georgia Tech will present nine papers during poster sessions at the premier event and is among the top 3 percent of participating institutions based on accepted research.</p>

<p>Along with presenting several papers, Georgia Tech faculty members have also participated in organizing ECCV 2018. <strong>Devi Parikh</strong>, <strong>Irfan Essa</strong>, <strong>Dhruv Batra</strong>, and <strong>Fuxin Li</strong> served as area chairs for the event.</p>

<p>&ldquo;ECCV is an exciting conference to participate in. There&rsquo;s a lot of good work that gets presented from top computer vision labs in the world, and it is great that Georgia Tech is one of them! It is a great venue to share our latest ideas and hear what others in the research community are thinking about these days,&rdquo; said <strong>Devi Parikh</strong>, assistant professor in Georgia Tech&rsquo;s <a href="https://www.ic.gatech.edu/">School of Interactive Computing</a>.</p>

<p>Georgia Tech organized the first <a href="https://visualdialog.org/challenge/2018">Visual Dialog Challenge</a>, designed to find methods for artificial intelligence agents to hold a meaningful dialog with humans in natural, conversational language about visual content. Winners will be announced at the conference.</p>

<p>The conference takes place Sept. 8 through 14 in the heart of Munich at the Gasteig Cultural Center.</p>

<p>To see an interactive visualization of the entire ECCV 2018 program, please click <a href="https://public.tableau.com/views/ECCV2018-MainProgram/Dashboard1?:embed=y&amp;:display_count=yes&amp;:showVizHome=no">here.</a></p>

<p>For an interactive visualization of ECCV 2018 by institutions with accepted research, please click <a href="https://public.tableau.com/views/ECCV2018-Top3/Dashboard2?:embed=y&amp;:display_count=yes&amp;:showVizHome=no">here.</a></p>

<p>Below are the titles of the Georgia Tech papers being presented at the conference.</p>

<p><strong>Georgia Tech at ECCV 2018</strong></p>

<p><strong><a href="https://arxiv.org/pdf/1804.04259.pdf">Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation</a></strong></p>

<p>By Zhaoyang Lv*, Georgia Tech; Kihwan Kim, NVIDIA; Alejandro Troccoli, NVIDIA; Deqing Sun, NVIDIA; Jan Kautz, NVIDIA; James Rehg, Georgia Institute of Technology</p>

<p><strong>Read our blog post about this paper on the ML@GT blog <a href="https://mlatgt.blog/2018/09/06/learning-rigidity-and-scene-flow-estimation/">here.</a> </strong></p>

<p><strong><a href="https://web.engr.oregonstate.edu/~lif/1925.pdf">Multi-object Tracking with Neural Gating Using Bilinear LSTMs</a></strong></p>

<p>By Chanho Kim*, Georgia Tech; Fuxin Li, Oregon State University; James Rehg, Georgia Institute of Technology</p>

<p><strong><a href="http://openaccess.thecvf.com/content_ECCV_2018/papers/Yin_Li_In_the_Eye_ECCV_2018_paper.pdf">In the Eye of Beholder: Joint Learning of Gaze and Actions in First Person Vision</a></strong></p>

<p>By Yin Li*, CMU; Miao Liu, Georgia Tech; James Rehg, Georgia Institute of Technology</p>

<p><strong><a href="https://arxiv.org/pdf/1808.02861.pdf">Choose Your Neuron: Incorporating Domain Knowledge through Neuron Importance</a></strong></p>

<p>By Ramprasaath Ramasamy Selvaraju*, Georgia Tech; Prithvijit Chattopadhyay, Georgia Institute of Technology; Mohamed Elhoseiny, Facebook; Tilak Sharma, Facebook; Dhruv Batra, Georgia Tech &amp; Facebook AI Research; Devi Parikh, Georgia Tech &amp; Facebook AI Research; Stefan Lee, Georgia Institute of Technology</p>

<p><strong>Read our blog post about this paper on the ML@GT blog <a href="https://mlatgt.blog/2018/09/05/choose-your-neuron-incorporating-domain-knowledge-through-neuron-importance/">here.</a></strong></p>

<p><strong><a href="http://users.ece.cmu.edu/~skottur/papers/corefnmn_eccv18.pdf">Visual Coreference Resolution in Visual Dialog using Neural Module Networks</a></strong></p>

<p>By Satwik Kottur*, Carnegie Mellon University; Jos&eacute; M. F. Moura, Carnegie Mellon University; Devi Parikh, Georgia Tech &amp; Facebook AI Research; Dhruv Batra, Georgia Tech &amp; Facebook AI Research; Marcus Rohrbach, Facebook AI Research</p>

<p><strong><a href="https://arxiv.org/pdf/1808.00191.pdf">Graph R-CNN for Scene Graph Generation</a></strong></p>

<p>By Jianwei Yang*, Georgia Institute of Technology; Jiasen Lu, Georgia Institute of Technology; Stefan Lee, Georgia Institute of Technology; Dhruv Batra, Georgia Tech &amp; Facebook AI Research; Devi Parikh, Georgia Tech &amp; Facebook AI Research</p>

<p><strong>Read our blog post about this paper on the ML@GT blog <a href="https://mlatgt.blog/2018/09/04/what-is-graph-r-cnn/">here.</a></strong></p>

<p><strong><a href="http://wyliu.com/papers/LiuECCV18.pdf">SEAL: A Framework Towards Simultaneous Edge Alignment and Learning</a></strong></p>

<p>By Zhiding Yu*, NVIDIA; Weiyang Liu, Georgia Tech; Yang Zou, Carnegie Mellon University; Chen Feng, Mitsubishi Electric Research Laboratories (MERL); Srikumar Ramalingam, University of Utah; B. V. K. Vijaya Kumar, Carnegie Mellon University; Jan Kautz, NVIDIA</p>

<p><strong><a href="http://www.eye.gatech.edu/swapnet/paper.pdf">SwapNet: Image Based Garment Transfer</a></strong></p>

<p>By Amit Raj, Georgia Tech; Patsorn Sangkloy, Georgia Tech; Huiwen Chang, Princeton; James Hays, Georgia Tech; Duygu Ceylan, Adobe; and Jingwan Lu, Adobe</p>

<p><a href="http://openaccess.thecvf.com/content_ECCV_2018/papers/Eunji_Chong_Connecting_Gaze_Scene_ECCV_2018_paper.pdf"><strong>Connecting Gaze, Scene, and Attention: Generalized Attention Estimation via Joint Modeling of Gaze and Scene Saliency</strong></a></p>

<p>By Eunji Chong, Nataniel Ruiz, Yongxin Wang, Yun Zhang, Agata Rozga, and James M. Rehg, Georgia Tech</p>
]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2018-09-06T00:00:00-04:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Georgia Tech faculty and students will travel to Munich, Germany to present their research at the European Conference on Computer Vision (ECCV).]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="610984">
            <nid>610984</nid>
            <type>image</type>
            <title><![CDATA[ECCV 2018 will be held in Munich, Germany]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>232621</fid>
                  <filename><![CDATA[Munich_skyline_1-1 copy.jpg]]></filename>
                  <filepath><![CDATA[/sites/default/files/images/Munich_skyline_1-1%20copy.jpg]]></filepath>
                  <file_full_path><![CDATA[http://www.tlwarc.hg.gatech.edu//sites/default/files/images/Munich_skyline_1-1%20copy.jpg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[<p>Allie McFadden</p>

<p>Communications Officer</p>

<p>allie.mcfadden@cc.gatech.edu</p>
]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <links_related> </links_related>
  <files> </files>
  <og_groups>
          <item>47223</item>
          <item>1299</item>
          <item>576481</item>
          <item>50876</item>
      </og_groups>
  <og_groups_both>
          <item>
        <![CDATA[Computer Science/Information Technology and Security]]>
      </item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>153</tid>
        <value><![CDATA[Computer Science/Information Technology and Security]]></value>
      </item>
      </field_categories>
  <core_research_areas>
          <term tid="39501"><![CDATA[People and Technology]]></term>
      </core_research_areas>
  <field_news_room_topics>
      </field_news_room_topics>
  <og_groups_both>
          <item><![CDATA[College of Computing]]></item>
          <item><![CDATA[GVU Center]]></item>
          <item><![CDATA[ML@GT]]></item>
          <item><![CDATA[School of Interactive Computing]]></item>
      </og_groups_both>
  <field_keywords>
      </field_keywords>
  <field_userdata>
      <![CDATA[]]>
  </field_userdata>
</node>
