{"620251":{"#nid":"620251","#data":{"type":"news","title":"Georgia Tech\u2019s Newest AI System Explains Its Decisions to People in Real-Time to Understand User Preferences","body":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers, in collaboration with Cornell and University of Kentucky, have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions. The work is designed to give humans engaging with AI agents or robots confidence that the agent is performing the task correctly and can explain a mistake or errant behavior.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe agent also uses everyday language that non-experts can understand. The explanations, or \u0026ldquo;rationales\u0026rdquo; as the researchers call them, are designed to be relatable and inspire trust in those who might be in the workplace with AI machines or interact with them in social situations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If the power of AI is to be democratized, it needs to be accessible to anyone regardless of their technical abilities,\u0026rdquo; said \u003Cstrong\u003EUpol Ehsan\u003C\/strong\u003E, Ph.D. student in the School of Interactive Computing at Georgia Tech and lead researcher.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As AI pervades all aspects of our lives, there is a distinct need for human-centered AI design that makes black-boxed AI systems explainable to everyday users. Our work takes a formative step toward understanding the role of language-based explanations and how humans perceive them.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers developed a participant study to determine if their AI agent could offer rationales that mimicked human responses. 
Spectators watched the AI agent play the video game Frogger and then ranked three on-screen rationales in order of how well each described the AI\u0026rsquo;s game move.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOf the three anonymized justifications for each move \u0026ndash; a human-generated response, the AI-agent response, and a randomly generated response \u0026ndash; the participants ranked the human-generated rationales first, but the AI-generated responses were a close second.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFrogger offered the researchers the chance to train an AI in a \u0026ldquo;sequential decision-making environment,\u0026rdquo; which is a significant research challenge because decisions the agent has already made influence future decisions. Explaining that chain of reasoning to experts is therefore difficult, and even more so when communicating with non-experts, according to the researchers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe human spectators understood the goal of Frogger \u0026ndash; getting the frog safely home without being hit by moving vehicles or drowned in the river. 
The simple game mechanics of moving up, down, left, or right allowed the participants to see what the AI was doing and to evaluate whether the rationales on the screen clearly justified the move.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe spectators judged the rationales based on:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EConfidence\u003C\/strong\u003E \u0026ndash; the person is confident the AI can perform its task\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EHuman-likeness\u003C\/strong\u003E \u0026ndash; looks like it was made by a human\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EAdequate justification\u003C\/strong\u003E \u0026ndash; adequately justifies the action taken\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EUnderstandability\u003C\/strong\u003E \u0026ndash; helps the person understand the AI\u0026rsquo;s behavior\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EThe AI-generated rationales that participants ranked highest were those that showed recognition of environmental conditions and adaptability, as well as those that communicated awareness of upcoming dangers and planned for them. Redundant information that merely stated the obvious, or that mischaracterized the environment, was found to have a negative impact.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This project is more about understanding human perceptions and preferences of these AI systems than it is about building new technologies,\u0026rdquo; said Ehsan. \u0026ldquo;At the heart of explainability is sensemaking. 
We are trying to understand that human factor.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA second, related study validated the researchers\u0026rsquo; decision to design their AI agent to offer one of two distinct types of rationales:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EConcise, \u0026ldquo;focused\u0026rdquo; rationales\u003C\/strong\u003E or\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EHolistic, \u0026ldquo;complete picture\u0026rdquo; rationales\u003C\/strong\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EIn this second study, participants were offered only AI-generated rationales after watching the AI play Frogger. They were asked to select the answer they preferred in a scenario where an AI made a mistake or behaved unexpectedly. They did not know the rationales were grouped into the two categories.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy a 3-to-1 margin, participants favored answers classified in the \u0026ldquo;complete picture\u0026rdquo; category. Responses showed that people appreciated the AI thinking about future steps rather than just the moment at hand, since an AI focused only on the moment might be more prone to another mistake. 
People also wanted to know more so that they might directly help the AI fix the errant behavior.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The situated understanding of the perceptions and preferences of people working with AI machines gives us a powerful set of actionable insights that can help us design better human-centered, rationale-generating, autonomous agents,\u0026rdquo; said \u003Cstrong\u003EMark Riedl\u003C\/strong\u003E, professor of Interactive Computing and lead faculty member on the project.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA possible future direction for the research is to apply the findings to autonomous agents of various types, such as companion agents, and to study how they might respond based on the task at hand. Researchers will also look at how agents might respond in different scenarios, such as during an emergency response or when aiding teachers in the classroom.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research was \u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=9L4CZ5n7rQY\u0022\u003Epresented in March\u003C\/a\u003E at the Association for Computing Machinery\u0026rsquo;s Intelligent User Interfaces 2019 Conference. The paper is titled \u003Cem\u003EAutomated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions\u003C\/em\u003E. 
Ehsan will present a position paper highlighting the design and evaluation challenges of human-centered Explainable AI systems at the upcoming \u003Cem\u003EEmerging Perspectives in Human-Centered Machine Learning\u003C\/em\u003E workshop at the ACM CHI 2019 conference, May 4-9, in Glasgow, Scotland.\u003C\/p\u003E\r\n\r\n\u003Cdiv\u003E\u0026nbsp;\u003C\/div\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers, in collaboration with Cornell and University of Kentucky, have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions. The work is designed to give humans engaging with AI agents or robots confidence that the agent is performing the task correctly and can explain a mistake or errant behavior.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Institute of Technology researchers have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions."}],"uid":"27592","created_gmt":"2019-04-09 19:42:53","changed_gmt":"2019-04-09 20:06:57","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-09T00:00:00-04:00","iso_date":"2019-04-09T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620255":{"id":"620255","type":"image","title":"Explainable AI for Frogger","body":null,"created":"1554840392","gmt_created":"2019-04-09 20:06:32","changed":"1554840392","gmt_changed":"2019-04-09 20:06:32","alt":"AI study with Frogger","file":{"fid":"236161","name":"Explainable 
AI.png","image_path":"\/sites\/default\/files\/images\/Explainable%20AI.png","image_full_path":"http:\/\/www.tlwarc.hg.gatech.edu\/\/sites\/default\/files\/images\/Explainable%20AI.png","mime":"image\/png","size":48748,"path_740":"http:\/\/www.tlwarc.hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Explainable%20AI.png?itok=sA2GtADe"}}},"media_ids":["620255"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGVU Center, College of Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003E678.231.0787\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}