Abstract: As AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. This requires AI systems to exhibit behavior that is explainable to humans. Synthesizing such behavior requires AI systems to reason not only with their own models of the task at hand, but also with the mental models of their human collaborators. Using several case studies from our ongoing research, I will discuss how such multi-model planning forms the basis for explainable behavior. I will also touch on the cognitive intelligence aspects of human-AI interaction by discussing how explicit shared knowledge and vocabularies are critical (and how it is important for AI researchers to resist Polanyi's revenge; cf. https://bit.ly/2ZdAXye).
Bio: Subbarao Kambhampati is a professor of computer science at Arizona State University. Kambhampati studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery, and was an NSF Young Investigator. He was the president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, and a founding board member of the Partnership on AI. Kambhampati's research, as well as his views on the progress and societal impacts of AI, has been featured in multiple national and international media outlets. He writes a column for The Hill on the societal and policy implications of advances in Artificial Intelligence. He can be followed on Twitter @rao2z.