{"666516":{"#nid":"666516","#data":{"type":"event","title":"AI4OPT Tutorial Lectures: Sanjay Shakkottai","body":[{"value":"\u003Cp\u003EDates: From Monday, March 13 to Friday, March 17, between the hours of 10:00 AM to 12:00 PM (noon).\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELocation:\u0026nbsp;See locations\u0026nbsp;below in \u0026#39;Schedule\u0026#39;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELive stream link:\u0026nbsp;\u003Ca href=\u0022https:\/\/gatech.zoom.us\/j\/99381428980\u0022\u003Ehttps:\/\/gatech.zoom.us\/j\/99381428980\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Ch2\u003ECausal Inference Course\u003C\/h2\u003E\r\n\r\n\u003Cp\u003ESpeaker:\u0026nbsp;\u003Ca href=\u0022https:\/\/sites.google.com\/view\/sanjay-shakkottai\/\u0022\u003ESanjay Shakkottai\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMoving away from decision-making based on observed correlations in data, causal inference develops the mathematical foundations for reasoning about the direction of implication \u0026mdash; aka cause and effect \u0026ndash; for observed dependencies in data. These foundations lead to tools and techniques that can be used for improved models and better decision-making for emerging data-driven systems. 
This short course covers the motivation, mathematical foundations, and machine learning algorithms for causal reasoning.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESchedule\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Col\u003E\r\n\t\u003Cli\u003EMon, Mar 13: Lecture 1, 10 am \u0026ndash; noon,\u0026nbsp;\u003Ca href=\u0022https:\/\/goo.gl\/maps\/qmWirzco7rpoYjNX8\u0022\u003ESkiles\u003C\/a\u003E\u0026nbsp;006 (\u003Cem\u003ECoffee and snacks provided\u003C\/em\u003E)\u003C\/li\u003E\r\n\t\u003Cli\u003ETue, Mar 14: Lecture 2, 10 am \u0026ndash; noon,\u0026nbsp;\u003Ca href=\u0022https:\/\/goo.gl\/maps\/YQmVNP6KuWocLtUN9\u0022\u003EGroseclose\u003C\/a\u003E\u0026nbsp;119 (\u003Cem\u003ELunch provided\u003C\/em\u003E)\u003C\/li\u003E\r\n\t\u003Cli\u003EWed, Mar 15: Lecture 3, 10 am \u0026ndash; noon,\u0026nbsp;\u003Ca href=\u0022https:\/\/goo.gl\/maps\/whhrD1CVaLbNDGaj9\u0022\u003ELove Manufacturing Building\u003C\/a\u003E\u0026nbsp;184 (\u003Cem\u003ECoffee and snacks provided\u003C\/em\u003E)\u003C\/li\u003E\r\n\t\u003Cli\u003EThu, Mar 16: Lecture 4, 10 am \u0026ndash; noon,\u0026nbsp;\u003Ca href=\u0022https:\/\/goo.gl\/maps\/YQmVNP6KuWocLtUN9\u0022\u003EGroseclose\u003C\/a\u003E\u0026nbsp;119 (\u003Cem\u003ELunch provided\u003C\/em\u003E)\u003C\/li\u003E\r\n\t\u003Cli\u003EFri, Mar 17: Lecture 5, 10 am \u0026ndash; noon,\u0026nbsp;\u003Ca href=\u0022https:\/\/goo.gl\/maps\/whhrD1CVaLbNDGaj9\u0022\u003ELove Manufacturing Building\u003C\/a\u003E\u0026nbsp;184 (\u003Cem\u003ECoffee and snacks provided\u003C\/em\u003E)\u003C\/li\u003E\r\n\u003C\/ol\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ETopics\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Col\u003E\r\n\t\u003Cli\u003EOverview\u0026nbsp;\r\n\t\u003Cul\u003E\r\n\t\t\u003Cli\u003EMotivation, Examples, Interventions\u003C\/li\u003E\r\n\t\u003C\/ul\u003E\r\n\t\u003C\/li\u003E\r\n\t\u003Cli\u003EIndependence, Conditional Independence and D-Separation\r\n\t\u003Cul\u003E\r\n\t\t\u003Cli\u003EConditional Independence 
(CI)\u003C\/li\u003E\r\n\t\t\u003Cli\u003EDirected Acyclic Graphs (DAGs)\u003C\/li\u003E\r\n\t\t\u003Cli\u003ED-separation Properties\u003C\/li\u003E\r\n\t\t\u003Cli\u003EGlobal Markov Property\u003C\/li\u003E\r\n\t\u003C\/ul\u003E\r\n\t\u003C\/li\u003E\r\n\t\u003Cli\u003EMathematical Formalism\r\n\t\u003Cul\u003E\r\n\t\t\u003Cli\u003EStructural Causal Model (SCM)\u003C\/li\u003E\r\n\t\t\u003Cli\u003EGraphical Representation\u003C\/li\u003E\r\n\t\u003C\/ul\u003E\r\n\t\u003C\/li\u003E\r\n\t\u003Cli\u003EInterventions Overview\r\n\t\u003Cul\u003E\r\n\t\t\u003Cli\u003EObservational vs. Interventional SCM\u003C\/li\u003E\r\n\t\t\u003Cli\u003E\u0026lsquo;Do\u0026rsquo; Operation With SCM\u003C\/li\u003E\r\n\t\t\u003Cli\u003ETypes Of Interventions\u003C\/li\u003E\r\n\t\t\u003Cli\u003EAlternate Representations of \u0026lsquo;Do\u0026rsquo;\u003C\/li\u003E\r\n\t\t\u003Cli\u003ETotal Causal Effect\u003C\/li\u003E\r\n\t\u003C\/ul\u003E\r\n\t\u003C\/li\u003E\r\n\t\u003Cli\u003EInterventions Calculus\r\n\t\u003Cul\u003E\r\n\t\t\u003Cli\u003EComputing the intervention distribution using the observational distribution\r\n\t\t\u003Cul\u003E\r\n\t\t\t\u003Cli\u003ETruncated factorization theorem\u003C\/li\u003E\r\n\t\t\t\u003Cli\u003EAverage Causal Effect (ACE)\u003C\/li\u003E\r\n\t\t\t\u003Cli\u003EKidney stone example (Simpson\u0026rsquo;s paradox)\u003C\/li\u003E\r\n\t\t\u003C\/ul\u003E\r\n\t\t\u003C\/li\u003E\r\n\t\t\u003Cli\u003EAdjustment\r\n\t\t\u003Cul\u003E\r\n\t\t\t\u003Cli\u003EDefinition of confounding\u003C\/li\u003E\r\n\t\t\t\u003Cli\u003EValid adjustment set\u003C\/li\u003E\r\n\t\t\t\u003Cli\u003EInvariant conditionals\u003C\/li\u003E\r\n\t\t\t\u003Cli\u003EAdjustment theorem (parental adjustment, backdoor criterion)\u003C\/li\u003E\r\n\t\t\u003C\/ul\u003E\r\n\t\t\u003C\/li\u003E\r\n\t\t\u003Cli\u003EDo-calculus\r\n\t\t\u003Cul\u003E\r\n\t\t\t\u003Cli\u003EGeneral rules for deriving the intervention distribution from the observational distribution (this generalizes the 
adjustment theorem)\u003C\/li\u003E\r\n\t\t\t\u003Cli\u003EFront-door theorem\u003C\/li\u003E\r\n\t\t\u003C\/ul\u003E\r\n\t\t\u003C\/li\u003E\r\n\t\u003C\/ul\u003E\r\n\t\u003C\/li\u003E\r\n\t\u003Cli\u003ELearning Causal Models\r\n\t\u003Cul\u003E\r\n\t\t\u003Cli\u003ELearning with infinite samples\r\n\t\t\u003Cul\u003E\r\n\t\t\t\u003Cli\u003ELearning up to Markov equivalence (CPDAG)\u003C\/li\u003E\r\n\t\t\t\u003Cli\u003EFaithfulness\u003C\/li\u003E\r\n\t\t\u003C\/ul\u003E\r\n\t\t\u003C\/li\u003E\r\n\t\t\u003Cli\u003EAlgorithms for structure learning\r\n\t\t\u003Cul\u003E\r\n\t\t\t\u003Cli\u003EPC Algorithm for CPDAG\u003C\/li\u003E\r\n\t\t\t\u003Cli\u003EICA algorithm for LiNGAM\u003C\/li\u003E\r\n\t\t\u003C\/ul\u003E\r\n\t\t\u003C\/li\u003E\r\n\t\u003C\/ul\u003E\r\n\t\u003C\/li\u003E\r\n\t\u003Cli\u003EHidden Variables (Latent confounders)\r\n\t\u003Cul\u003E\r\n\t\t\u003Cli\u003EInstrumental variables and the 2SLS method\u003C\/li\u003E\r\n\t\u003C\/ul\u003E\r\n\t\u003C\/li\u003E\r\n\t\u003Cli\u003EConditional Independence (CI) Testing\r\n\t\u003Cul\u003E\r\n\t\t\u003Cli\u003EHardness of CI testing\u003C\/li\u003E\r\n\t\t\u003Cli\u003EPartial correlation coefficient\u003C\/li\u003E\r\n\t\t\u003Cli\u003EKernel-based methods\u003C\/li\u003E\r\n\t\t\u003Cli\u003EConditional randomization\u003C\/li\u003E\r\n\t\t\u003Cli\u003EClassifier-based testing\u003C\/li\u003E\r\n\t\u003C\/ul\u003E\r\n\t\u003C\/li\u003E\r\n\u003C\/ol\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EBio:\u003C\/strong\u003E\u0026nbsp;Sanjay Shakkottai received his Ph.D. from the ECE Department at the University of Illinois at Urbana-Champaign in 2002. Shakkottai is a professor in the Department of Electrical and Computer Engineering at the University of Texas at Austin and holds the Cockrell Family Chair in Engineering #15. He received the NSF CAREER award (2004) and was elected an IEEE Fellow in 2014. He was a co-recipient of the IEEE Communications Society William R. 
Bennett Prize in 2021 and is currently the Editor-in-Chief of IEEE\/ACM Transactions on Networking. Shakkottai\u0026rsquo;s research interests lie at the intersection of algorithms for resource allocation, statistical learning, and networks, with applications to wireless communication networks and online platforms.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more information, click \u003Ca href=\u0022https:\/\/www.ai4opt.org\/news-events\/ai4opt-tutorial-lectures-sanjay-shakkottai\u0022\u003Ehere\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EMoving away from decision-making based on observed correlations in data, causal inference develops the mathematical foundations for reasoning about the direction of implication \u0026mdash; that is, cause and effect \u0026mdash; in observed dependencies in data. These foundations lead to tools and techniques that support improved models and better decision-making for emerging data-driven systems. This course will cover the motivation, mathematical foundations, and machine learning algorithms for causal reasoning.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"AI4OPT is offering a short course on \u0022Causal Inference\u0022 the week of March 13. 
"}],"uid":"36348","created_gmt":"2023-03-07 21:07:42","changed_gmt":"2023-03-07 21:43:38","author":"Breon Martin","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2023-03-13T11:00:00-04:00","event_time_end":"2023-03-17T13:00:00-04:00","event_time_end_last":"2023-03-17T13:00:00-04:00","gmt_time_start":"2023-03-13 15:00:00","gmt_time_end":"2023-03-17 17:00:00","gmt_time_end_last":"2023-03-17 17:00:00","rrule":null,"timezone":"America\/New_York"},"extras":[],"related_links":[{"url":"https:\/\/twitter.com\/Ai4Opt\/status\/1633205103586082816?s=20","title":"Twitter post"},{"url":"https:\/\/www.linkedin.com\/feed\/update\/urn:li:activity:7038970811340259328","title":"LinkedIn post"},{"url":"https:\/\/www.ai4opt.org\/news-events\/ai4opt-tutorial-lectures-sanjay-shakkottai","title":"Website post "}],"groups":[{"id":"1214","name":"News Room"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"1789","name":"Conference\/Symposium"},{"id":"1795","name":"Seminar\/Lecture\/Colloquium"},{"id":"26411","name":"Training\/Workshop"}],"invited_audience":[{"id":"78761","name":"Faculty\/Staff"},{"id":"177814","name":"Postdoc"},{"id":"78771","name":"Public"},{"id":"174045","name":"Graduate students"},{"id":"78751","name":"Undergraduate students"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}}}