<node id="62604">
  <nid>62604</nid>
  <type>news</type>
  <uid>
    <user id="27174"><![CDATA[27174]]></user>
  </uid>
  <created>1289218840</created>
  <changed>1475896062</changed>
  <title><![CDATA[Georgia Tech Engaged in $100 Million Next-Generation Computing Initiative]]></title>
  <body><![CDATA[<p>Imagine that one of the world's most powerful high performance 
computers could be packed into a single rack just 24 inches wide and 
powered by a fraction of the electricity consumed by comparable current 
machines.  That would allow an unprecedented amount of computing power 
to be installed on aircraft, carried onto the battlefield for commanders
 -- and made available to researchers everywhere.</p>
<p>Putting this computing power into a small and energy-efficient 
package, and making it reliable and easier to program, are among the 
goals of the new DARPA Ubiquitous High Performance Computing (UHPC) 
initiative.  Georgia Tech researchers from three different units are 
supporting key components of this $100 million challenge, which will 
require development of revolutionary approaches not bound by existing 
computing paradigms.
</p>
<p>If UHPC meets its ambitious eight-year goals, the new approaches and 
technologies it develops could redefine the way that computing systems 
are envisioned, designed and used.
</p>
<p>"The opportunity we have is to go far beyond the current product 
roadmaps," said David Bader, a professor in Georgia Tech's School of 
Computational Science and Engineering.  "We really have the opportunity 
to change the industry and to design our applications with new computing
 architectures.  For the first time in the history of computing, we will
 be able to work with a clean slate."
</p>
<p>To attain the program's ambitious goals, DARPA funded four groups -- 
led by NVIDIA Corp., Intel Corp., the Massachusetts Institute of 
Technology and Sandia National Laboratories -- to develop UHPC 
prototypes.  A fifth group, led by the Georgia Tech Research Institute 
(GTRI), will develop applications, benchmarking and metrics that will be
 used to drive UHPC system design considerations and support performance
 analysis of the developing system designs.
</p>
<p>"Our team is developing a set of five difficult problems of a size 
and scope that the machines they are talking about should be able to 
accomplish," said Dan Campbell, a GTRI principal research engineer who 
is co-principal investigator of the benchmarking initiative.  "Our 
challenge is picking the right problems and specifying them at the right
 level of abstraction to allow innovation and properly represent what 
the DoD will need in 2018."
</p>
<p>The five problems highlight the unique computing needs of the U.S. military:
</p>
<p>• Analysis of the vast streams of data originating from widespread 
sensor systems, unmanned aerial vehicles and new generations of radar 
systems.  The data will be analyzed for nuggets of useful information in 
ways that are not possible today.
</p>
<p>• A dynamic graph challenge, in which many entities interact to 
create a problem of "connecting the dots."  That could mean analyzing 
relationships in social media to find possible adversaries, or 
understanding network traffic for cyber-security challenges.
</p>
<p>• The decision tree, comparable to a chess game in which many 
possible interconnected options, each with complex implications, must be
 analyzed quickly.  This could help field commanders or corporate CEOs 
make better decisions.
</p>
<p>• Materials shock and hydrodynamics issues, challenges important to improving future generations of materials.
</p>
<p>• Molecular dynamics simulations, which use high-performance 
computers to understand interactions between very large systems, such as
 protein folding.
</p>
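<p>The dynamic graph challenge above can be sketched in miniature: interactions arrive over time as edges, and "connecting the dots" becomes a reachability query on the growing graph. The following is an illustrative sketch only (the entity names are hypothetical and the real benchmark is far larger and specified differently):</p>

```python
from collections import defaultdict, deque

# Hypothetical stream of observed interactions ("edges") arriving over time.
edges = [("alice", "bob"), ("bob", "carol"), ("dave", "erin"), ("carol", "dave")]

graph = defaultdict(set)

def add_edge(u, v):
    """Incrementally update the graph as each new interaction arrives."""
    graph[u].add(v)
    graph[v].add(u)

def connected(start, goal):
    """Breadth-first search: are these two dots connected right now?"""
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for nxt in graph[node] - seen:
            seen.add(nxt)
            frontier.append(nxt)
    return False

for u, v in edges:
    add_edge(u, v)

print(connected("alice", "erin"))  # True: a chain of interactions links them
```

<p>At UHPC scale the same question must be answered over billions of continually changing edges, which is what makes the problem a useful stress test for new architectures.</p>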
<p>"We need to be able to take in a lot more data and understand it a 
lot more thoroughly than we can now," said Mark Richards, a principal 
research engineer in the Georgia Tech School of Electrical and Computer 
Engineering and co-principal investigator of the benchmarking team.  
"That might allow us to find adversaries we can't find now because we're
 unable to tease that information out of the data flow."
</p>
<p>While the benefits of making such computing power widely available 
are obvious, how these machines will be designed, built and reliably 
operated is not.
</p>
<p>"Meeting these very ambitious program goals will pose significant 
technical challenges," said Bader, who leads application development on 
the NVIDIA team and is part of the benchmarking group.  "The technology 
roadmaps in such areas as interconnection networks, microprocessor 
design and technology fabrication will be pushed to their limits."
</p>
<p>Meeting power limitations of just 57 kilowatts per rack -- the amount
 of electricity produced by a portable military generator -- may be the 
toughest among them.  The fastest computer currently in operation 
requires seven megawatts of power.  
</p>
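<p>Back-of-the-envelope arithmetic from the figures quoted above shows how aggressive the target is. This sketch assumes the per-rack goal is one petaflop (10<sup>15</sup> operations per second); the article does not state the program's exact efficiency target:</p>

```python
# Rough energy-efficiency arithmetic from the figures in the article.
target_flops = 1e15    # assumed per-rack goal: one petaflop
rack_power_w = 57e3    # 57 kilowatts -- a portable military generator

efficiency = target_flops / rack_power_w   # operations per second per watt
print(f"{efficiency / 1e9:.1f} GFLOPS per watt")

# A circa-2010 leadership machine draws about seven megawatts.
current_power_w = 7e6
print(f"power ratio: {current_power_w / rack_power_w:.0f}x")
```

<p>Roughly 17.5 GFLOPS per watt in a budget more than a hundred times smaller than a contemporary machine's, which is why Richards points to device-level noise as the price of that efficiency.</p>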
<p>"Reducing the power consumption means less energy per computation," 
noted Richards.  "But as we lower the device voltage, we get closer to 
the physical noise.  That will allow more errors due to the physics of 
the devices, and all kinds of things will have to be done to address 
that."
</p>
<p>And the entire machine will have to fit into a 24-inch wide, 78-inch high and 40-inch deep cabinet.
</p>
<p>But the physical implementation of the machines is just one part of 
the challenge, Bader noted.  How people will work with them poses a 
perhaps more difficult challenge because it will require thinking about 
computers in a new way.
</p>
<p>"Over the past 20 or 30 years, we've taken a single computing design 
and kept tweaking it through advances like miniaturizing parts," he 
said.  "But we really haven't changed the global nature of how the 
machine works. To meet DARPA's power efficiency goals, we really will 
need to change the way we program the machine."
</p>
<p>That also affects the humans who interact with these highly parallel 
machines, which could have as many as a half-million separate threads 
operating at the same time.  DARPA's initial goal is to build machines 
capable of petaflop speed -- a quadrillion operations per second -- which 
could lead to the next generation of exascale computers a thousand 
times more capable.
</p>
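<p>The scales involved follow directly from the SI prefixes, as this quick sanity check shows:</p>

```python
# Sanity check on the scales mentioned above (standard SI prefixes).
teraflop = 10**12   # a trillion operations per second
petaflop = 10**15   # a quadrillion -- the initial per-rack goal
exaflop  = 10**18   # the "exascale" generation
print(exaflop // petaflop)  # a thousand times more capable
```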
<p>"We will need to find new ways of thinking about computers that will 
make it feasible for humans to comprehend what is going on inside," 
Campbell said. "It's a huge programming challenge."
</p>
<p>To encourage collaboration in solving these complex problems, DARPA 
has embraced the idea of open innovation.  It expects the organizations 
to work together on common critical topics, creating a collaborative 
environment to address the system challenges.  New technology generated 
by the program -- believed to be today's largest DoD computing research 
initiative -- is likely to move quickly into industry.
</p>
<p>"There is certainly an expectation among the companies that what they
 are doing in this project is going to change how we do mainstream 
computing," Bader said. "The technology transfer implications are 
certainly obvious."
</p>]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[DARPA Program Will Put Petascale Computer into a 24-inch Cabinet]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2010-11-08T00:00:00-05:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Georgia Tech is supporting a major new computing initiative.]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>Georgia Tech researchers are engaged in a $100 million DARPA program to fit a high performance petaflop computer into a single rack just 24 inches wide and power it with a fraction of the electricity consumed by comparable current machines. <em>Source: GT Research News</em></p>]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="62602">
            <nid>62602</nid>
            <type>image</type>
            <title><![CDATA[Georgia Tech UHPC researchers]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>191520</fid>
                  <filename><![CDATA[tmv30679.jpg]]></filename>
                  <filepath><![CDATA[/sites/default/files/images/tmv30679_0.jpg]]></filepath>
                  <file_full_path><![CDATA[http://www.tlwarc.hg.gatech.edu//sites/default/files/images/tmv30679_0.jpg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[Georgia Tech UHPC researchers]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[<p>Stefany Sanders</p><p>College of Computing</p><p><a href="mailto:stefany@cc.gatech.edu">stefany@cc.gatech.edu</a></p><p>404-312-6620</p>]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <links_related> </links_related>
  <files> </files>
  <og_groups>
          <item>47223</item>
      </og_groups>
  <field_categories>
      </field_categories>
  <core_research_areas>
      </core_research_areas>
  <field_news_room_topics>
      </field_news_room_topics>
  <og_groups_both>
          <item><![CDATA[College of Computing]]></item>
      </og_groups_both>
  <field_keywords>
          <item>
        <tid>3427</tid>
        <value><![CDATA[High performance computing]]></value>
      </item>
      </field_keywords>
  <field_userdata>
      <![CDATA[]]>
  </field_userdata>
</node>
