<node id="669338">
  <nid>669338</nid>
  <type>news</type>
  <uid>
    <user id="32045"><![CDATA[32045]]></user>
  </uid>
  <created>1693572680</created>
  <changed>1693573083</changed>
  <title><![CDATA[New Technology Promises More Efficient and Practical Virtual Reality Systems]]></title>
  <body><![CDATA[<p>Glitchy games and bulky headsets may soon be things of the past thanks to a new eye-tracking system for virtual reality and augmented reality (VR/AR).</p>

<p>Eye tracking is an essential component of AR/VR systems, but current designs have limitations: they are bulky because they rely on lens-based cameras, and communication between the camera and the backend system is costly.</p>

<p>Georgia Tech School of Computer Science Associate Professor Yingyan (Celine) Lin, Ph.D. student Haoran You, and postdoctoral researcher Yang (Katie) Zhao have developed a new eye-tracking system that works around these limitations by combining a recently developed lensless camera with co-designed algorithm and acceleration processor designs.</p>

<p>“The current VR headsets are too heavy, gaming can lag, and using the controller is cumbersome. Combined, this prevents users from having a truly immersive experience. We mitigate all these problems,” said You.</p>

<p>This new system, <em>EyeCoD: An Accelerated Eye Tracking System via FlatCam-based Algorithm &amp; Accelerator Co-Design</em>, replaces the traditional camera lens with FlatCam, a lensless camera that is 5–10x thinner and lighter. With FlatCam, the team’s system performs eye tracking at a reduced size and with improved efficiency, without sacrificing the accuracy of the tracking algorithm. Because it omits a lens-based camera, the system could also enhance user privacy.</p>

<p>Another feature of the <em>EyeCoD</em> system is that it renders in high resolution only the portion of the screen a user’s eyes are focused on. It does this by predicting where a user’s eyes will land, then instantly rendering those areas in high resolution. These computational savings, plus a dedicated accelerator, underpin <em>EyeCoD</em>’s ability to boost processing speed and efficiency.</p>
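
<p>The gaze-driven rendering step can be pictured with a minimal sketch (an illustration of the general idea, not the EyeCoD implementation): a small window around the predicted gaze point is shaded at full resolution, while the periphery is shaded at a reduced rate. The screen size, foveal radius, and downscale factor below are assumed values chosen for the example.</p>

<pre><code># Illustrative sketch of gaze-driven foveated rendering (not the EyeCoD code).
SCREEN_W, SCREEN_H = 3840, 2160   # assumed per-eye display resolution
FOVEAL_RADIUS = 400               # assumed half-width of the full-res window, in pixels
PERIPHERY_DOWNSCALE = 4           # assumed periphery downscale factor per axis

def foveal_window(gaze_x, gaze_y):
    """Box (left, top, right, bottom) rendered at full resolution around the gaze point."""
    left   = max(0, gaze_x - FOVEAL_RADIUS)
    top    = max(0, gaze_y - FOVEAL_RADIUS)
    right  = min(SCREEN_W, gaze_x + FOVEAL_RADIUS)
    bottom = min(SCREEN_H, gaze_y + FOVEAL_RADIUS)
    return left, top, right, bottom

def shaded_pixels(gaze_x, gaze_y):
    """Approximate pixels shaded per frame with foveated rendering."""
    left, top, right, bottom = foveal_window(gaze_x, gaze_y)
    foveal = (right - left) * (bottom - top)
    periphery = (SCREEN_W * SCREEN_H - foveal) / PERIPHERY_DOWNSCALE ** 2
    return foveal + periphery

full = SCREEN_W * SCREEN_H
foveated = shaded_pixels(1920, 1080)   # gaze predicted at screen center
print(f"{foveated:,.0f} pixels shaded vs. {full:,} at full resolution")
</code></pre>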

<p>The team received the <a href="https://licensing.research.gatech.edu/" target="_blank">Office of Technology Licensing’s</a> Tech Ready Grant for its efforts earlier this year. Tech Ready Grants offer $25,000 to help faculty transition projects from the lab to the marketplace.</p>

<p>The team hopes to use the funds to integrate the current demos into a compact eye-tracking system for use in commercial VR/AR headsets.</p>

<p>Along with winning the Tech Ready Grant, the team presented <em>EyeCoD</em> at the International Symposium on Computer Architecture (ISCA) 2022. IEEE Micro included the work in its <em>Top Picks from the Computer Architecture Conferences</em> for 2023. The annual publication highlights “significant research papers in computer architecture based on novelty and potential for long-term impact.”</p>

<p><em>EyeCoD</em> is a collaborative work. Collaborators include Rice University Professor Ashok Veeraraghavan, whose team provided technical support and the design of the FlatCam camera in <em>EyeCoD</em>, and Ziyun Li of Meta, who provided technical input to ensure the EyeCoD system aligns with industry AR/VR specifications.</p>
]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2023-08-30T00:00:00-04:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[A new eye-tracking system developed at Georgia Tech uses a lensless camera to reduce the size and weight of VR/AR headsets, improve efficiency, and enhance user privacy.]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>A new eye-tracking system called EyeCoD, developed by researchers at Georgia Tech, uses a lensless camera to reduce the size and weight of VR/AR headsets, improve efficiency, and enhance user privacy while selectively rendering high-resolution screen areas based on where the user is focusing at any given moment.</p>
]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="671567">
            <nid>671567</nid>
            <type>image</type>
            <title><![CDATA[A closeup of glass panels on the College of Computing's Binary Bridge]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>254652</fid>
                  <filename><![CDATA[news-default-image - New.png]]></filename>
                  <filepath><![CDATA[/sites/default/files/2023/09/01/news-default-image%20-%20New.png]]></filepath>
                  <file_full_path><![CDATA[http://www.tlwarc.hg.gatech.edu//sites/default/files/2023/09/01/news-default-image%20-%20New.png]]></file_full_path>
                  <filemime>image/png</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[A closeup of glass panels on the College of Computing's Binary Bridge]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[<p>Morgan Usry, Communications Officer I</p>

<p>School of Computer Science</p>

<p>morgan.usry@cc.gatech.edu</p>
]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <links_related> </links_related>
  <files> </files>
  <og_groups>
          <item>50875</item>
          <item>47223</item>
          <item>50876</item>
      </og_groups>
  <og_groups_both>
          <item>
        <![CDATA[Research]]>
      </item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>135</tid>
        <value><![CDATA[Research]]></value>
      </item>
      </field_categories>
  <core_research_areas>
          <term tid="39501"><![CDATA[People and Technology]]></term>
      </core_research_areas>
  <field_news_room_topics>
      </field_news_room_topics>
  <og_groups_both>
          <item><![CDATA[School of Computer Science]]></item>
          <item><![CDATA[College of Computing]]></item>
          <item><![CDATA[School of Interactive Computing]]></item>
      </og_groups_both>
  <field_keywords>
          <item>
        <tid>10199</tid>
        <value><![CDATA[Daily Digest]]></value>
      </item>
      </field_keywords>
  <field_userdata>
      <![CDATA[]]>
  </field_userdata>
</node>
