<node id="627425">
  <nid>627425</nid>
  <type>news</type>
  <uid>
    <user id="34773"><![CDATA[34773]]></user>
  </uid>
  <created>1570650888</created>
  <changed>1570709503</changed>
  <title><![CDATA[Premier Computer Vision Conference Accepts 10 Georgia Tech Papers]]></title>
  <body><![CDATA[<p>From helping chair umpires make better line calls in professional tennis to teaching robots to &ldquo;see&rdquo;, the field of computer vision continues to expand and become a part of people&rsquo;s everyday lives. A subfield of artificial intelligence, computer vision teaches computers to understand and interpret the visual world through photos or videos.</p>

<p>The <a href="http://iccv2019.thecvf.com/">International Conference on Computer Vision (ICCV)</a> takes place from Oct. 27 to Nov. 2 and brings together researchers from Georgia Tech and around the world to discuss recent breakthroughs and research in the field. Researchers in the <a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech (ML@GT)</a> have 10 papers accepted to the conference.</p>

<p><a href="http://www.ic.gatech.edu/">School of Interactive Computing (IC)</a> and ML@GT associate professor <strong>Devi Parikh</strong> leads Georgia Tech with seven accepted papers. Her work spans from <a href="https://www.voguebusiness.com/technology/facebook-ai-fashion-styling">using artificial intelligence (AI) to help people make more stylish outfit choices</a> to <a href="http://bit.ly/2ndC6qv">embodied visual recognition</a>.</p>

<p>IC assistant professor <strong>Judy Hoffman</strong> and professor <strong>James Rehg</strong> are 2019 area chairs.</p>

<p>&ldquo;As the computer vision field continues to expand and create novel ideas, conferences like ICCV become increasingly important. There was a lot of impressive work submitted to the conference this year. With computer vision being one of ML@GT&rsquo;s strongest areas, I&rsquo;m thrilled to see the center&rsquo;s presence in this premier conference,&rdquo; said Hoffman.</p>

<p>Other work from Georgia Tech includes papers on <a href="https://mlatgt.blog/2019/09/10/overcoming-large-scale-annotation-requirements-for-understanding-videos-in-the-wild/">lessening the need for additional annotation in videos</a>, making vision and language models more grounded, and <a href="http://bit.ly/2ndC6qv">agents learning to move to better perceive objects</a>.</p>

<p>&quot;Having a paper accepted, especially as an oral presentation, especially in a top conference gives me lots of confidence and encouragement for my Ph.D. research. I can&#39;t wait to attend ICCV to share my work, talk with other talented people, and learn other interesting topics in both academic and industrial areas,&quot; said <strong>Min-Hung Chen</strong>, a sixth-year electrical and computer engineering Ph.D. student.</p>

<p>Organized by IEEE, ICCV is one of the premier international computer vision conferences and will take place at the COEX Convention Center in Seoul, South Korea.</p>

<p>For more information on ML@GT&rsquo;s involvement with the conference, visit <a href="http://bit.ly/339BYaS">http://bit.ly/339BYaS</a>.</p>
]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2019-10-10T00:00:00-04:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[The Machine Learning Center will make a splash at the International Conference on Computer Vision later this month.]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="627424">
            <nid>627424</nid>
            <type>image</type>
            <title><![CDATA[Seoul, South Korea]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>238886</fid>
                  <filename><![CDATA[sunyu-kim-HjsWTyyVDgg-unsplash.jpg]]></filename>
                  <filepath><![CDATA[/sites/default/files/images/sunyu-kim-HjsWTyyVDgg-unsplash.jpg]]></filepath>
                  <file_full_path><![CDATA[http://www.tlwarc.hg.gatech.edu//sites/default/files/images/sunyu-kim-HjsWTyyVDgg-unsplash.jpg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[allie.mcfadden@cc.gatech.edu]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[<p>Allie McFadden</p>

<p>Communications Officer</p>

<p>allie.mcfadden@cc.gatech.edu</p>
]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <links_related> </links_related>
  <files> </files>
  <og_groups>
          <item>576481</item>
          <item>1299</item>
          <item>50876</item>
      </og_groups>
  <field_categories>
      </field_categories>
  <core_research_areas>
      </core_research_areas>
  <field_news_room_topics>
      </field_news_room_topics>
  <og_groups_both>
          <item><![CDATA[ML@GT]]></item>
          <item><![CDATA[GVU Center]]></item>
          <item><![CDATA[School of Interactive Computing]]></item>
      </og_groups_both>
  <field_keywords>
      </field_keywords>
  <field_userdata>
      <![CDATA[]]>
  </field_userdata>
</node>
