{"606678":{"#nid":"606678","#data":{"type":"news","title":"Georgia Tech Teams up with Intel to Protect Artificial Intelligence from Malicious Attacks Using SHIELD","body":[{"value":"\u003Cdiv\u003E\r\n\u003Cp\u003EWhat if a self-driving car read a stop sign as a yield sign,\u0026nbsp;or worse yet,\u0026nbsp;could not see the sign at all? The consequences could be catastrophic.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA team of Georgia Tech researchers has developed a fast and practical way to protect artificial intelligence (AI) systems \u0026ndash; such as those used in self-driving cars \u0026ndash; from malicious attacks on image recognition software that could lead to such a disastrous event.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cse.gatech.edu\/\u0022\u003ESchool of Computational Science and Engineering\u003C\/a\u003E\u0026nbsp;(CSE) Ph.D. student\u0026nbsp;\u003Cstrong\u003ENilaksh Das\u003C\/strong\u003E, who leads this investigation, deep neural networks (DNNs), which are used to train AI systems, are highly vulnerable to maliciously generated images or pixel manipulation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis type of manipulation could result in targeted attacks, such as misleading machines into reading stop signs as yield signs, or untargeted attacks, which could render the system unable to see the stop sign at all. 
Simply put, pixel manipulation prevents machines from performing the right tasks, such as stopping, because the system is unable to read the symbol correctly.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo combat these types of potential attacks, Das and his collaborators created\u0026nbsp;\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1802.06816.pdf\u0022\u003ESHIELD\u003C\/a\u003E, an AI defense framework that stands\u0026nbsp;for\u0026nbsp;\u003Cem\u003ESecure Heterogeneous Image Ensemble with Local Denoising.\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESHIELD, set to be presented in August at\u0026nbsp;\u003Ca href=\u0022http:\/\/www.kdd.org\/kdd2018\/\u0022\u003EKDD 2018\u003C\/a\u003E, the most prestigious data mining conference, offers a novel and efficient approach that uses JPEG compression to vaccinate DNNs against malicious pixel manipulation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The threat of adversarial attack casts a shadow over deploying DNNs in security and safety-critical applications. There is an urgent need to resolve this threat with fast, practical approaches, for which we leverage JPEG compression in this work, which is already a widely-used and mature technique,\u0026rdquo; said Das.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJPEG compression is a commonly used method of compression for digital images, particularly in digital photography. This method gives users the option to adjust the quality of an image while discarding information that the human eye cannot see. Higher-quality images are less compressed; lower-quality images are more compressed. This compression also removes malicious pixel manipulation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is a fast and practical way to protect AI. 
There has been a lot of research into coming up with methods to harm or attack AI, but much less on how to protect them, and even less on designing fast and practical methods \u0026ndash; and SHIELD targets this need,\u0026rdquo; said CSE Associate Professor and SHIELD researcher\u0026nbsp;\u003Cstrong\u003EPolo Chau\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EChau continued, \u0026ldquo;To immunize a DNN model from artifacts introduced by compression, SHIELD vaccinates a model by re-training it with compressed images.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESHIELD also provides an additional layer of protection that uses randomization at test time, making it harder for an adversary to estimate the transformation performed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team plans to explore the feasibility of their approach on more hardware platforms and in more types of compression scenarios, such as audio compression for voice recognition software.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EChau and Das are joined in this research by team members and fellow CSE Ph.D. students\u0026nbsp;\u003Cstrong\u003EShang-Tse Chen\u0026nbsp;\u003C\/strong\u003Eand\u0026nbsp;\u003Cstrong\u003EFred Hohman\u003C\/strong\u003E,\u0026nbsp;\u003Ca href=\u0022https:\/\/www.scs.gatech.edu\/\u0022\u003ESchool of Computer Science\u003C\/a\u003E\u0026nbsp;M.S. 
student\u0026nbsp;\u003Cstrong\u003EMadhuri Shanbhogue\u003C\/strong\u003E, undergraduate researcher\u0026nbsp;\u003Cstrong\u003ESiwei Li\u003C\/strong\u003E, co-primary investigator and lead at\u0026nbsp;\u003Ca href=\u0022http:\/\/istc-arsa.iisp.gatech.edu\/\u0022\u003EIntel Science and Technology Center for Adversary-Resilient Security Analytics\u003C\/a\u003E\u0026nbsp;\u003Ca href=\u0022http:\/\/istc-arsa.iisp.gatech.edu\/pages\/li-chen.html\u0022\u003E\u003Cstrong\u003ELi Chen\u003C\/strong\u003E\u003C\/a\u003E, and Intel Research Scientist\u0026nbsp;\u003Ca href=\u0022http:\/\/istc-arsa.iisp.gatech.edu\/pages\/michael-kounavis.html\u0022\u003E\u003Cstrong\u003EMichael\u0026nbsp;E. Kounavis\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\u003C\/div\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A team of Georgia Tech researchers has developed a fast and practical way to protect artificial intelligence (AI) systems \u2013 such as those used in self-driving cars \u2013 from malicious attacks on image recognition software that could lead to a disastrous event."}],"uid":"34540","created_gmt":"2018-06-01 15:27:59","changed_gmt":"2018-06-07 13:56:23","author":"Kristen Perez","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-06-01T00:00:00-04:00","iso_date":"2018-06-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"606392":{"id":"606392","type":"image","title":"SHIELD - Diagram: Secure Heterogeneous Image Ensemble with Localized Denoising","body":null,"created":"1527085937","gmt_created":"2018-05-23 14:32:17","changed":"1527085937","gmt_changed":"2018-05-23 14:32:17","alt":"SHIELD - Diagram: Secure Heterogeneous Image Ensemble with Localized 
Denoising","file":{"fid":"231276","name":"SHIELD.jpg","image_path":"\/sites\/default\/files\/images\/SHIELD.jpg","image_full_path":"http:\/\/www.tlwarc.hg.gatech.edu\/\/sites\/default\/files\/images\/SHIELD.jpg","mime":"image\/jpeg","size":148547,"path_740":"http:\/\/www.tlwarc.hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/SHIELD.jpg?itok=EQJo3KhY"}}},"media_ids":["606392"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"145171","name":"Cybersecurity"},{"id":"39431","name":"Data Engineering and Science"},{"id":"39481","name":"National Security"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EKristen Perez\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer I\u003C\/p\u003E\r\n\r\n\u003Cp\u003Ekristen.perez@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["kristen.perez@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}}}