<node id="671058">
  <nid>671058</nid>
  <type>event</type>
  <uid>
    <user id="27707"><![CDATA[27707]]></user>
  </uid>
  <created>1699991608</created>
  <changed>1699991608</changed>
  <title><![CDATA[PhD Defense by Patrick Grady]]></title>
  <body><![CDATA[<p><span><span><strong>Title</strong>: Sensing Touch from Vision for Humans and Robots</span></span></p>

<p>&nbsp;</p>

<p><span><span><strong>Date</strong>: Tuesday, November 28, 2023</span></span></p>

<p><span><span><strong>Time</strong>: 2:00pm-4:00pm EST</span></span></p>

<p><span><span><strong>Location</strong>: Klaus 1116<br />
<strong>Zoom</strong>: <a href="https://gatech.zoom.us/j/96467976963?pwd=MkxSUDFnaFJ6eFp0dXBkMHNQU3BtZz09">https://gatech.zoom.us/j/96467976963?pwd=MkxSUDFnaFJ6eFp0dXBkMHNQU3BtZz09</a></span></span></p>

<p>&nbsp;</p>

<p><span><span><strong>Patrick Grady</strong></span></span></p>

<p><span><span>Robotics PhD Student</span></span></p>

<p><span><span>School of Electrical and Computer Engineering</span></span></p>

<p><span><span>Georgia Institute of Technology</span></span></p>

<p><span><span><strong>Committee</strong>:</span></span></p>

<p><span><span>Dr. James Hays (Advisor) – School of Interactive Computing, Georgia Tech</span></span></p>

<p><span><span>Dr. Charlie Kemp (Advisor) – CTO, Hello Robot</span></span></p>

<p><span><span>Dr. Seth Hutchinson – School of Interactive Computing, Georgia Tech</span></span></p>

<p><span><span>Dr. Animesh Garg – School of Interactive Computing, Georgia Tech</span></span></p>

<p><span><span>Dr. Chengcheng Tang – Meta Reality Labs</span></span></p>

<p>&nbsp;</p>

<p><span><span><strong>Abstract</strong>: </span></span></p>

<p><span><span>To affect their environment, humans and robots use their hands and grippers to push, pick up, and manipulate the world around them. At the core of this interaction is physical contact, which determines the underlying mechanics of the grasp. While contact is useful in understanding manipulation, it is difficult to measure. In this thesis, we explore methods to estimate contact between humans, robots, and objects using easy-to-collect imagery. First, we demonstrate a method that leverages subtle visual changes to infer the pressure between a human hand and a surface using RGB images. We initially explore this work in a constrained laboratory setting, but also develop a weakly-supervised data collection technique to estimate hand pressure in less constrained settings. A parallel approach allows us to estimate the pressure and force that soft robotic grippers apply to their environments, enabling precise closed-loop control of a robot. Finally, we develop a joint pose and contact estimator which may generalize to internet-scale images. Our model leverages multiple heterogeneously labeled datasets and images with contact labeled by human annotators. Overall, this thesis makes progress towards understanding human and robot manipulation from only visual sensing.</span></span></p>
]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Sensing Touch from Vision for Humans and Robots]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p><span><span>Sensing Touch from Vision for Humans and Robots</span></span></p>
]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2023-11-28T14:00:00-05:00]]></value>
      <value2><![CDATA[2023-11-28T16:00:00-05:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
      </field_extras>
  <field_audience>
          <item>
        <value><![CDATA[Public]]></value>
      </item>
      </field_audience>
  <field_media>
      </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[Klaus 1116]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
            <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>221981</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[Graduate Studies]]></item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>1788</tid>
        <value><![CDATA[Other/Miscellaneous]]></value>
      </item>
      </field_categories>
  <field_keywords>
          <item>
        <tid>100811</tid>
        <value><![CDATA[PhD Defense]]></value>
      </item>
      </field_keywords>
  <userdata><![CDATA[]]></userdata>
</node>
