{"595380":{"#nid":"595380","#data":{"type":"news","title":"Georgia Tech Awarded $1.2M NSF Grant to Protect Machine Learning Based Systems from Malicious Attacks","body":[{"value":"\u003Cp\u003EGeorgia Tech is leading a $1.2 million project to develop a system to protect the security of machine learning (ML) based systems. School of Computational Science and Engineering (CSE) Assistant Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~dchau\/\u0022\u003EDuen Horng (Polo) Chau\u003C\/a\u003E leads the project, funded by the \u003Ca href=\u0022https:\/\/www.nsf.gov\u0022\u003ENational Science Foundation\u003C\/a\u003E (NSF), alongside Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/wenke-lee\u0022\u003EWenke Lee\u003C\/a\u003E,\u0026nbsp;Associate\u0026nbsp;Professor\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~lsong\/\u0022\u003ELe Song\u003C\/a\u003E, and Assistant Professor\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/taesoo-kim\u0022\u003ETaesoo Kim\u003C\/a\u003E of Georgia Tech.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFrom applications in education to science and technology as a whole, the profound reach and use of machine learning is undeniable and ubiquitous. This, in turn, means that any damage caused by ML based systems can be extensive and devastating. Already, attackers can poison ML models by intentionally injecting maliciously crafted training data, causing the model to make wrong decisions. 
The history of cybersecurity suggests that attacks in which adversaries render machine learning based security analysis ineffective, by gaining control of the input data or computation procedures, will soon become more prevalent in the real world.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe project team has extensive accomplishments and experience in machine learning, systems and network security, botnet and intrusion detection, and malware analysis. The project itself, titled \u003Cem\u003ESaTC: CORE: Medium: Understanding and Fortifying Machine Learning Based Security Analytics\u003C\/em\u003E, undertakes the challenge of developing a systematic, foundational, and practical framework to understand attacks, quantify vulnerabilities, and fortify machine learning based security analytics. The ultimate aim of the four-year project is to change how machine learning based systems are designed, developed, and deployed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The ever-increasing volume of data that can be collected and made available for security analysis presents both great opportunities and great challenges. We can now apply powerful data analysis techniques, in particular, machine learning algorithms, that have been developed in recent years to gain new security insights and develop new solutions,\u0026rdquo; said Chau. 
\u0026ldquo;However, preliminary research has demonstrated that by gaining control of the input training data or the classification process, attackers can render machine learning based security analysis ineffective.\u0026rdquo; \u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESong explained further, \u0026ldquo;To determine how adversaries can attack ML based security analytics, we will study the theoretical vulnerabilities of ML algorithms, such as how adversaries may smartly select the most uncertain examples to optimize exploratory attacks, and how they may launch sophisticated causative attacks even when the choices of ML models and algorithms are not known.\u0026rdquo;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe findings from this research may lead to new kinds of adaptive cyberdefense systems. These systems would be both resilient and efficient in the face of future cybersecurity attacks, helping protect the nation and its citizens from harm. In a very tangible way, the proposed ideas in this NSF project push the envelope of state-of-the-art machine learning research, shaping systems now and into the future.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2016, Intel Corporation gave Georgia Tech a $1.5 million gift to establish a new research center \u0026ndash; the \u003Ca href=\u0022http:\/\/istc-arsa.iisp.gatech.edu\u0022\u003EIntel Science \u0026amp; Technology Center for Adversary-Resilient Security Analytics\u003C\/a\u003E (ISTC-ARSA) \u0026ndash;\u0026nbsp;dedicated to the emerging field of ML cybersecurity. The new center focuses on strengthening the analytics behind malware detection and threat analysis. The research exploration with Intel helped the research team identify and formalize important new research questions that form the pillars of this NSF project. 
These include the crucial need for developing a theoretical machine learning framework to formally quantify the level of impact of different types of attacks, and for using this theoretical thinking to guide and strengthen defender systems in a principled way.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe NSF project will leverage multiple channels to accelerate knowledge dissemination and tech transfer. From \u003Ca href=\u0022https:\/\/www.symantec.com\u0022\u003ESymantec\u003C\/a\u003E, a leading security solution provider, to various industry partners, the project team has obtained strong commitments to collaborate on the proposed research. Chau explained, \u0026ldquo;They will share with us malware samples, and help transition the developed research into practice, into their malware analysis engine, via Intel Software Guard Extensions. We will open-source all developed software and datasets.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":[{"value":"NSF Grant Awarded to College of Computing Faculty for Machine Learning Based Systems Security"}],"field_summary":"","field_summary_sentence":[{"value":"Georgia Tech is leading a $1.2 million project to develop a system to protect the security of machine learning (ML) based systems. 
"}],"uid":"34540","created_gmt":"2017-08-31 19:30:47","changed_gmt":"2017-09-25 15:48:02","author":"Kristen Perez","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-08-31T00:00:00-04:00","iso_date":"2017-08-31T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"595381":{"id":"595381","type":"image","title":"cybersecurity_machinelearning","body":null,"created":"1504208307","gmt_created":"2017-08-31 19:38:27","changed":"1504208307","gmt_changed":"2017-08-31 19:38:27","alt":"","file":{"fid":"226902","name":"cybersecurity.jpg","image_path":"\/sites\/default\/files\/images\/cybersecurity.jpg","image_full_path":"http:\/\/www.tlwarc.hg.gatech.edu\/\/sites\/default\/files\/images\/cybersecurity.jpg","mime":"image\/jpeg","size":219807,"path_740":"http:\/\/www.tlwarc.hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/cybersecurity.jpg?itok=6dVvq3Iu"}}},"media_ids":["595381"],"groups":[{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"47223","name":"College of Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"9167","name":"machine learning"},{"id":"170215","name":"cyberattacks"},{"id":"1404","name":"Cybersecurity"},{"id":"13277","name":"anti-malware"},{"id":"76231","name":"Computational Science and Engineering"}],"core_research_areas":[{"id":"145171","name":"Cybersecurity"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":["kristen.perez@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}}}