Abstract: Our field has shifted from traditional machine learning techniques, mostly based on pattern recognition, to sequence-to-sequence models. The future of universal personal assistance for discovery and learning is upon us. How will multimodal (image, video, and audio) understanding and the reasoning abilities of large foundation models change how we build these systems? I will shed some initial light on this topic by discussing several trends: first, the move to a single multimodal large model with reasoning abilities; second, fundamental research on personalization and user alignment; third, the combination of System 1 and System 2 cognitive abilities into a single universal assistant.
Bio: Ed H. Chi is a Distinguished Scientist at Google DeepMind, leading machine learning research teams working on large language models (from LaMDA leading to the launch of Bard/Gemini) and neural recommendation agents. With 39 patents and ~200 research articles, he is also known for research on user behavior in web and social media. As the Research Platform Lead, he helped launch Bard/Gemini, a conversational AI experiment, and delivered significant improvements for YouTube, News, Ads, and the Google Play Store at Google, with >660 product improvements since 2013.
Abstract: The US National Science Foundation (NSF) has been a leading funder of artificial intelligence research since the field's earliest days. This talk provides a window into the ways the NSF has supported, is supporting, and plans to support efforts to develop and understand the ideas behind trustworthy and responsible automated systems as they become more and more central to computing, science, and society as a whole.
Bio: Michael L. Littman is currently serving as Division Director for Information and Intelligent Systems at the National Science Foundation. The division is home to the programs and program officers that support researchers in artificial intelligence, human-centered computing, data management, and assistive technologies, as well as those exploring the impact of intelligent information systems on society. Littman is also University Professor of Computer Science at Brown University, where he studies machine learning and decision-making under uncertainty. He has earned multiple university-level awards for teaching and his research has been recognized with three best-paper awards and three influential paper awards. Littman is a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery.
Abstract: While Congress continued to study the risks and possibilities of AI, the Biden-Harris Administration took bold action: driving agency actions to address risks to civil rights, equity, competition, economic opportunity, and national security; establishing new federal guidance to guide agency development, use, and procurement of AI, along with a new AI Safety Institute; and boosting the capacity of government to use and regulate AI by bringing in new tech and tech-related talent and driving public and private investments in the growing public interest tech ecosystem. Drawing on my eighteen months of service in the White House Office of Science and Technology Policy during the Biden-Harris Administration, I will describe key AI initiatives and, drawing on my prior research, describe how these initiatives pave the way for the government to purposefully use technology to embed values or set policy, what I call "governance-by-design," in a manner that supports fundamental democratic governance norms of intentional, deliberative, participatory, and expert public decision making, free from capture or caprice, and centers the public's rights and safety over private interests. Lastly, I will explain why these new directions in tech governance make growing the cultural and institutional supports for public service across the computing field an important shared national priority.
Bio: Mulligan served as Principal Deputy U.S. Chief Technology Officer at the White House Office of Science and Technology Policy, and Director of the National Artificial Intelligence Initiative Office (NAIIO), in the Biden-Harris Administration. At OSTP, Mulligan led the Technology Team that worked to advance technology and data to benefit all Americans. Under her leadership, the Tech Team leveraged technology and data to equitably deliver services, brought technology and data expertise to federal policy formation and implementation, and ensured that America led the world in values-driven technological research and innovation.
Abstract: Recent technological trends in cybersecurity, data platforms, AI, and IoT have rapidly accelerated the transformation of society. From TDK’s perspective, this provides opportunities to serve society better by bringing value to our extended business portfolio, developed through our history of continuous innovation and venture spirit. In this keynote, I will introduce TDK as a company, our long-term vision - "TDK Transformation" - and how Digital Transformation (DX) is accelerating the realization of this vision through innovation and value creation.
Bio: Dr. Roshan Thapliya is Corporate Officer, Chief Digital Transformation Officer (CDXO), and General Manager of Management Systems HQ at TDK Corporation, headquartered in Tokyo, Japan. He is responsible for global strategy, corporate policies, and implementation of digital and IT technologies throughout the TDK Group, which covers Europe, the Americas, and Asia. His responsibilities include developing and promoting IT/digital technologies for the TDK Group's core businesses in the Automotive, ICT, and Industrial & Energy sectors, which drive Sustainability, Digital Transformation (DX), and Energy Transformation (EX). As CDXO, he is also responsible for operational and process transformation through DX within the TDK Group to support new value creation and enhance operational efficiency through global collaboration. Dr. Thapliya has experience developing and implementing digital technologies in a variety of industries throughout his career. He was the Chief Digital Director and Division Head at Bridgestone Corporation, headquartered in Tokyo, where his roles and responsibilities included global strategy formulation, execution, and business transfer in areas of tire-centric and mobility solutions, with a special focus on promoting DX through organizational transformation and customer-facing value creation. There he helped develop SaaS and tire sensor/AI technologies to remotely monitor tire health in situ for new business models. He was also Group Manager at the former Fuji Xerox Co., Ltd. (currently Fujifilm Business Innovation Corp.), where he led incubation teams in the fields of IoT, edge computing, robotics, and bandwidth allocation algorithms in cellular telecommunications. There, he was responsible for establishing research in social robotics for office applications and served as an Advisory Board Member at the National Facility for Human-Robot Interaction Research at the University of New South Wales, Sydney.
Abstract: Recent years have seen an astounding growth in the deployment of AI systems in critical domains such as autonomous vehicles, criminal justice, and healthcare, where decisions taken by AI agents directly impact human lives. Consequently, there is increasing concern over whether these decisions can be trusted. How can we deliver on the promise of the benefits of AI while addressing scenarios that have life-critical consequences for people and society? In short, how can we achieve trustworthy AI? Under the umbrella of trustworthy computing, employing formal methods to ensure trust properties such as reliability and security has led to scalable success. Just as for trustworthy computing, formal methods could be an effective approach for building trust in AI-based systems. However, we would need to extend the set of properties to include fairness, robustness, interpretability, and others, and to develop new verification techniques to handle new kinds of artifacts, e.g., data distributions and machine-learned models. This talk poses a new research agenda, from a formal methods perspective, for increasing trust in AI systems.
Bio: Jeannette M. Wing is Executive Vice President for Research and Professor of Computer Science at Columbia University. She previously served as Avanessians Director of the Data Science Institute. Her current research interests are in trustworthy AI. Her areas of research expertise include security and privacy, formal methods, programming languages, and distributed and concurrent systems. She is widely recognized for her intellectual leadership in computer science, and more recently in data science. Wing’s seminal essay, titled “Computational Thinking,” was published more than fifteen years ago and is credited with helping to establish the centrality of computer science to problem-solving in all other disciplines. Wing came to Columbia from Microsoft, where she served as Corporate Vice President of Microsoft Research, overseeing research labs worldwide. Before joining Microsoft, she was on the faculty at Carnegie Mellon University, where she served as Head of the Department of Computer Science and as Associate Dean for Academic Affairs of the School of Computer Science. During a leave from Carnegie Mellon, she served at the National Science Foundation as Assistant Director of the Computer and Information Science and Engineering Directorate, where she oversaw the federal government’s funding of academic computer science research. Wing has been recognized with distinguished service awards from the Computing Research Association and the Association for Computing Machinery. She is a Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the Association for Computing Machinery, and the Institute of Electrical and Electronics Engineers. She holds bachelor’s, master’s, and doctoral degrees from MIT.