{"619641":{"#nid":"619641","#data":{"type":"event","title":"GT Neuro Seminar Series","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003E\u0026ldquo;Integrating New Knowledge into a Neural Network without Catastrophic Interference: Computational and Theoretical Investigations in a Hierarchically Structured Environment\u0026rdquo;\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EJames L. McClelland, Ph.D.\u003Cbr \/\u003E\r\nLucie Stern Professor in the Social Sciences\u003Cbr \/\u003E\r\nDirector, Center for Mind, Brain and Computation\u003Cbr \/\u003E\r\nDepartment of Psychology\u003Cbr \/\u003E\r\nStanford University, Stanford, CA\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to complementary learning systems theory, integrating new memories into a multi-layer neural network without interfering with what is already known depends on interleaving presentation of the new memories with ongoing presentations of items previously learned. This putative dependence is both costly for machine learning and biologically implausible for real brains, which are unlikely to have sufficient time for such massive interleaving, even during sleep. We use deep linear neural networks in hierarchically structured environments previously analyzed by Saxe, McClelland, and Ganguli to gain new insights into how integration of new knowledge might be made more efficient. For this type of environment, the content can be described by the singular value decomposition (SVD) of the environment\u0026#39;s input-output covariance matrix, in which each successive dimension corresponds to a categorical split in the hierarchical environment. 
Prior work showed that deep linear networks are sufficient to learn the content of the environment, and that they do so in a stage-like way, with each dimension's strength rising from near zero to its maximum after a delay inversely proportional to the strength of the dimension, as previously demonstrated by Saxe et al., capturing patterns previously observed in deeper non-linear neural networks by Rogers and McClelland (2004). Several observations then become accessible when we consider learning a new item not previously encountered in the micro-environment. (1) The item can be examined in terms of its projection onto the existing structure, and the degree to which it adds a new categorical split. (2) To the extent the item projects onto existing structure, including it in the training corpus leads to rapid adjustment of the representation of the categories involved, and effectively no adjustment occurs to categories onto which the new item does not project at all. (3) Learning a new split, however, is slow, and its learning dynamics show the same delayed rise to maximum that depends on the dimension\u0026#39;s strength. These observations then motivate the development of ideas about how the new information might be acquired efficiently, combining interleaved learning with other strategies.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Cem\u003EThis presentation can be viewed via BlueJeans: https:\/\/bluejeans.com\/824485104\/\u003C\/em\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"\u201cIntegrating New Knowledge into a Neural Network without Catastrophic Interference: Computational and Theoretical Investigations in a Hierarchically Structured Environment\u201d - James L. 
McClelland, Ph.D."}],"uid":"27349","created_gmt":"2019-03-26 13:35:30","changed_gmt":"2019-03-26 13:35:30","author":"Floyd Wood","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2019-04-15T12:15:00-04:00","event_time_end":"2019-04-15T13:15:00-04:00","event_time_end_last":"2019-04-15T13:15:00-04:00","gmt_time_start":"2019-04-15 16:15:00","gmt_time_end":"2019-04-15 17:15:00","gmt_time_end_last":"2019-04-15 17:15:00","rrule":null,"timezone":"America\/New_York"},"extras":[],"related_links":[{"url":"http:\/\/www.neuro.gatech.edu","title":"GT Neuro"},{"url":"https:\/\/stanford.edu\/~jlmcc\/","title":"McClelland profile"}],"groups":[{"id":"1292","name":"Parker H. Petit Institute for Bioengineering and Bioscience (IBB)"}],"categories":[],"keywords":[{"id":"126571","name":"go-PetitInstitute"},{"id":"248","name":"IBB"},{"id":"172970","name":"go-neuro"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"1795","name":"Seminar\/Lecture\/Colloquium"}],"invited_audience":[{"id":"78761","name":"Faculty\/Staff"},{"id":"177814","name":"Postdoc"},{"id":"174045","name":"Graduate students"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:crozell@gatech.edu\u0022\u003EChris Rozell,\u003C\/a\u003E faculty host\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}