{"51209":{"#nid":"51209","#data":{"type":"news","title":"Multithreaded Supercomputer Seeks Software For Data-Intensive Computing","body":[{"value":"\u003Cp\u003ENew collaboration to develop software for advanced supercomputers\u003C\/p\u003E\n\u003Cp\u003ERICHLAND, Wash. (July 14, 2008) -- The newest breed of supercomputers have hardware set up not just for speed, but also to better tackle large networks of seemingly random data. And now, a multi-institutional group of researchers has been awarded $4.0 million to develop software for these supercomputers. Applications include anywhere complex webs of information can be found: from internet security and power grid stability to complex biological networks.\u003C\/p\u003E\n\u003Cp\u003EThe difference between the new breed and traditional supercomputers is how they access data, a difference that significantly increases computing power. But old software won\u0027t run on the new hardware any more than a PC program will run on a Mac. So, the Department of Defense provided the funding this month to seed the Center for Adaptive Supercomputing Software, a joint project between the Department of Energy\u0027s Pacific Northwest National Laboratory and Cray, Inc, in Seattle.\u003C\/p\u003E\n\u003Cp\u003E\u0022The system will allow much faster analysis of complex problems, like understanding and predicting how the power grid behaves -- one of the most complex engineering systems ever built,\u0022 said Moe Khaleel, director of Computational Sciences and Mathematics at PNNL, which is leading the project.\u003C\/p\u003E\n\u003Cp\u003EOther researchers in the software collaboration hail from Sandia National Laboratories, Georgia Institute of Technology, Washington State University and the University of Delaware.\u003C\/p\u003E\n\u003Cp\u003EThese new machines are built with so-called \u0022multithreaded processors\u0022 that enable multiple, simultaneous processing compared with the linear and slower approach of conventional 
systems. The Center will focus on applications for the multithreaded Cray XMT, one of which Cray delivered to PNNL in September 2007 (\u003Ca href=\u0022http:\/\/www.pnl.gov\/news\/release.asp?id=271\u0022 title=\u0022http:\/\/www.pnl.gov\/news\/release.asp?id=271\u0022\u003Ehttp:\/\/www.pnl.gov\/news\/release.asp?id=271\u003C\/a\u003E).\u003C\/p\u003E\n\u003Cp\u003E\u0022Traditional supercomputers are not well suited for certain kinds of data analysis, so we want to explore this advanced architecture,\u0022 said PNNL computational scientist Daniel Chavarr\u00eda.\u003C\/p\u003E\n\u003Cp\u003EIn previously published work, PNNL computational scientist Jarek Nieplocha used a predecessor of the Cray XMT to run typical software programs that help operators keep the power grid running smoothly. Adapted to the advanced hardware, these programs ran 10 times faster on the multithreaded machine. \u0022That was the best speed ever reported. We\u0027re getting closer to being able to track the grid in real time,\u0022 said Nieplocha.\u003C\/p\u003E\n\u003Cp\u003EIn biology, another complex web is woven by genes (or their protein products) working together inside people\u0027s cells. \u201cWe have discovered genes implicated in breast cancer using a massively multithreaded algorithm on the Cray XMT,\u201d said Georgia Tech computational scientist and engineer David A. Bader. \u201cIt\u2019s like finding a needle in a haystack. The algorithm searches for genes whose removal quickly causes networks and pathways in the cell to break down.\u201d\u003C\/p\u003E\n\u003Cp\u003EThe processors and computer memory in the advanced computers interact in a novel way. In traditional supercomputers, each processing chip gets a dollop of memory to use for its computations. To perform a calculation, the chip dips into the memory, does its work, then accesses the memory again for its next calculation, like an elephant dipping its trunk into a bag of peanuts and eating them one at a time. 
Each processor-memory unit is linked together over a network, and performance improvements come with more and faster processors and sleek network connections.\u003C\/p\u003E\n\u003Cp\u003EThe Cray XMT multithreaded system lumps all the memory together, and the processors freely access the much larger memory pool. But like an elephant with many trunks, each processor has multiple threads: it dips into memory with one thread, and while that thread is performing the calculation at hand, another thread goes into the memory, and another.\u003C\/p\u003E\n\u003Cp\u003EBy the time all the threads have dipped, the original thread has finished its calculation and is ready for another trip to the memory bank. A many-trunked elephant would have a distinct speed advantage plowing through a bag of peanuts over its hungrier zoo-mate, just as a multithreaded system does.\u003C\/p\u003E\n\u003Cp\u003E\u0022The processors are doing useful work all the time, so the computer can be faster,\u0022 said Chavarr\u00eda. Each Cray XMT processor has 128 hardware threads with which to access the shared memory.\u003C\/p\u003E\n\u003Cp\u003EConceptually, this advantage translates into the machines being able to handle complex, random networks of data. Mainstream machines split up the data, assigning parcels of data to individual processing units. For example, a supercomputer trying to model how a community of microbes behaves would subdivide the community spatially.\u003C\/p\u003E\n\u003Cp\u003EThe computer would then analyze what goes on within each subdivision, but it couldn\u0027t reach across other subdivisions to find out what happened to the microbe that wandered off to the other side of its habitat. 
Multithreaded machines, however, can examine the whole space at once, essentially assigning each thread to a microbe.\u003C\/p\u003E\n\u003Cp\u003E\u0022If all of your microbes move to the other side of the territory, it doesn\u0027t matter, because the threads still have access,\u0022 said Chavarr\u00eda.\u003C\/p\u003E\n\u003Cp\u003EAnother advantage multithreaded machines have over mainstream computers is in power consumption. Although the Cray XMT has not yet been tested, other multithreaded machines have shown reduced energy usage compared to traditional architectures.\u003C\/p\u003E\n\u003Cp\u003EThe Computational Science \u0026amp; Engineering division at the Georgia Institute of Technology (CSE) was established in 2005 to strengthen and better reflect the critical role that computation plays in the science and engineering disciplines. CSE supports interdisciplinary research and education in computer science and applied mathematics. The Georgia Tech CSE program is designed to innovate and create new expertise, technologies, and practitioners in areas including high performance and grid computing, modeling and simulation, and data analysis and mining. \u003Ca href=\u0022http:\/\/www.cse.cc.gatech.edu\u0022 title=\u0022http:\/\/www.cse.cc.gatech.edu\u0022\u003Ehttp:\/\/www.cse.cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\n\u003Cp\u003ESandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin company, for the U.S. Department of Energy\u2019s National Nuclear Security Administration. 
With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R\u0026amp;D responsibilities in national security, energy and environmental technologies, and economic competitiveness.\u003C\/p\u003E\n\u003Cp\u003EPacific Northwest National Laboratory is a Department of Energy Office of Science national laboratory where interdisciplinary teams advance science and technology and deliver solutions to America\u0027s most intractable problems in energy, national security and the environment. PNNL employs 4,000 staff, has an $855 million annual budget, and has been managed by Ohio-based Battelle since the lab\u0027s inception in 1965. \u003Ca href=\u0022http:\/\/www.pnl.gov\/news\u0022 title=\u0022http:\/\/www.pnl.gov\/news\u0022\u003Ehttp:\/\/www.pnl.gov\/news\u003C\/a\u003E\u003C\/p\u003E\n\u003Cp\u003E\n\u003C\/p\u003E\n\u003Cp\u003EMedia Contacts:\u003C\/p\u003E\n\u003Cp\u003EMary Beckman, for PNNL\u003Cbr \/\u003E\u003Ca href=\u0022mailto:marybeckman@pnl.gov\u0022\u003Emarybeckman@pnl.gov\u003C\/a\u003E\u003Cbr \/\u003E(509) 375-3688\u003C\/p\u003E\n\u003Cp\u003EStefany Wilson, for Georgia Tech\u003Cbr \/\u003E\u003Ca href=\u0022mailto:stefany@cc.gatech.edu\u0022\u003Estefany@cc.gatech.edu\u003C\/a\u003E\u003Cbr \/\u003E404-312-6620\u003C\/p\u003E\n\u003Cp\u003EChristopher Miller, for Sandia\u003Cbr \/\u003E\u003Ca href=\u0022mailto:cmiller@sandia.gov\u0022\u003Ecmiller@sandia.gov\u003C\/a\u003E\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ERICHLAND, Wash. (July 14, 2008) -- Georgia Tech is part of a multi-institutional collaboration that received $4 million from the U.S. Department of Defense to develop software for the newest generation of supercomputers. 
This was announced today by Pacific Northwest National Laboratory.\u003Cbr \/\u003E\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":"","uid":"27154","created_gmt":"2010-02-09 21:40:52","changed_gmt":"2016-10-08 03:04:36","author":"Louise Russo","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2008-07-14T00:00:00-04:00","iso_date":"2008-07-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"groups":[{"id":"47223","name":"College of Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}}}