Panels for IEEE CIC/CogMI/TPS


Panel 1: Bias in Evolving Collaborative Computing

Wednesday, Dec. 14th, 2022, 12:10pm – 1:40pm (US EST, GMT-5), Zoom Meeting

Change is the only constant. This timeless adage from Heraclitus has acquired new meaning, urgency, and impact during the COVID-19 pandemic. Instead of an occasional interruption followed by a return to normal, the continual rise of the new normal has forever transformed our economy, customs, and living conditions. The challenge of the new normal is constant change, where each prevailing normality becomes an episode in a real-life series. As illustrated by the mutations of the coronavirus, each wave of the new normal replaced the previously “new” normal: the Delta variant became dominant in 2021, only to be replaced by Omicron in 2022, followed by a succession of prevailing subvariants such as BA.4 and BA.5. The new normal reflects the evolution of bias over a wide range of time scales. In contrast to virus mutations over weeks and months, the new normal of the civil rights and gender equality movements shows social transformations continuing for generations, leading to an evolution (a revolution for some) of the perception and understanding of bias. While the socio-political dimensions of bias are very significant issues, this panel will focus on the quantitative and statistical side of bias, particularly the evolution of bias data collection and analytics over time. As concrete examples, evolving demographics (e.g., higher birth rates among minorities) and migration patterns (e.g., accelerating urbanization) have changed the definition of representative samples for each decadal census. For population-related studies, any statistically representative data set collected in the 1950s (and each decade since) would have become increasingly biased over time because (parts of) the population have evolved, moved, and changed. As a starting point, we will discuss two distinguishing properties of the new normal: (1) never-seen-before novelty, and (2) short-lived abundance.
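The census example above can be made concrete with a small sketch. The numbers below are hypothetical (illustrative only, not actual census figures): a sample that perfectly matched the population's urban share in 1950 accumulates bias in each later decade simply because the population moved.

```python
# Illustrative sketch with made-up numbers: a sample drawn in 1950 stays
# fixed while the population it was meant to represent keeps evolving.

# Hypothetical fraction of the population living in urban areas, per decade.
urban_share = {1950: 0.40, 1970: 0.50, 1990: 0.60, 2010: 0.72}

sample_1950 = 0.40  # a perfectly representative sample, frozen at 1950

for year, truth in urban_share.items():
    bias = sample_1950 - truth  # grows from 0.00 to -0.32
    print(f"{year}: true urban share {truth:.2f}, sample bias {bias:+.2f}")
```

The data set itself never changed; only the world did, which is exactly why a one-time notion of "representative" cannot be timeless.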
By definition, the arrival of never-seen-before novelty (e.g., the Omicron variants) introduces out-of-distribution data that previously trained classifiers could never know. To aggravate the problem, the new normal becomes statistically significant soon after it arises. Classifiers that remain completely ignorant of significant never-seen-before novelty will exhibit behavior increasingly similar to that of a fixed-minded dogmatist. The panel will consider the effects of the timelessness assumption in gold-standard machine learning evaluation based on fixed (closed) data sets, e.g., through k-fold cross-validation over the entire data set. In an evolving world, as new-normal phenomena emerge and fade away, biases may change accordingly (and significantly) in the evolving context. Unfortunately, the evolution of bias as the population changes would disappear into a single point when evaluated on fixed data sets. Methods to avoid the mistreatment of bias (or missing biases altogether) due to ignorance of change will be discussed, including potential remedies such as the introduction of time awareness into the measurement and evaluation of biases.
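The "single point" problem can be sketched in a few lines. The toy data below is hypothetical: a trivial "classifier" fixed at training time is scored once on the pooled (closed) data set, in the spirit of shuffled k-fold evaluation, and then again per time window, which exposes the decay that the pooled number hides.

```python
# Sketch (hypothetical labels): labels drift over time, so a classifier
# frozen at t=0 degrades, but pooled closed-world evaluation averages the
# degradation away into a single, stable-looking score.

windows = [
    [0, 0, 0, 0, 1],  # t=0: class 0 dominates ("old normal")
    [0, 0, 0, 1, 1],  # t=1
    [0, 0, 1, 1, 1],  # t=2: the "new normal" emerges
    [0, 1, 1, 1, 1],  # t=3: class 1 dominates
]

train = windows[0]
majority = max(set(train), key=train.count)  # classifier fixed at t=0: predict 0

def accuracy(labels, pred):
    return sum(1 for y in labels if y == pred) / len(labels)

pooled = [y for w in windows for y in w]
print("pooled accuracy:", accuracy(pooled, majority))  # 0.5 -- one number, no trend
for t, w in enumerate(windows):
    print(f"window {t} accuracy:", accuracy(w, majority))  # decays 0.8 -> 0.2
```

A time-aware evaluation (scoring per window, as in the loop above) is one minimal form of the remedy the panel discusses: it keeps the trajectory visible instead of collapsing it into one aggregate.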

Panel Moderator

Calton Pu
Professor, Georgia Institute of Technology, USA



Panelists

R. Stuart Geiger
University of California San Diego, USA
James Joshi
NSF and University of Pittsburgh, USA
Fred Morstatter
USC Information Sciences Institute, USA
Hanghang Tong
UIUC, USA




Panel 2: Securing the Election Infrastructure: Challenges and Opportunities

Thursday, Dec. 15th, 2022, 11:45am - 1:15pm (US EST, GMT-5), Zoom Meeting

Fair and secure elections are the bedrock of our democracy. In today’s world, voting and elections rely on a complex infrastructure comprising voter registration databases, multiple types of electronic devices (voting machines, electronic pollbooks, optical scanners, etc.), protocols to securely transmit data from polling places to central processing facilities, various software applications to count, tabulate, and analyze votes, and physical facilities to securely store ballots and voting equipment. People’s confidence in the results of elections heavily relies on a nation’s ability to secure such complex infrastructure and guarantee the integrity and confidentiality of the vote. The Cybersecurity and Infrastructure Security Agency (CISA), a United States agency charged with securing the nation’s cyber and physical infrastructure, classified election infrastructure as “critical infrastructure”. In fact, election infrastructure and processes are subject to attack by malicious actors just like any other critical infrastructure (e.g., energy systems, transportation systems, and financial systems). Recent events have shown how attacks against voting systems and election infrastructure, disinformation and misinformation campaigns, and claims of election fraud, whether founded or not, can affect people’s confidence in the integrity of the system and alienate voters. As threats evolve and become more sophisticated, industry, government entities, and the research community are called to find novel solutions to ensure the security of the election infrastructure and the confidentiality and integrity of the vote. This panel will bring together experts from industry, government, and academia to discuss the challenges that the election infrastructure is facing today and will continue to face in the foreseeable future.
Panelists will offer insights into how collaboration between industry, government, and academia can create opportunities to solve or mitigate these challenges and ensure people’s confidence in the results of elections.

Panel Moderator

Massimiliano Albanese
Associate Professor, George Mason University, USA


Bio - Dr. Albanese is an Associate Professor and the Associate Chair for Research in the Department of Information Sciences and Technology at George Mason University, and he also serves as an Associate Director of the Center for Secure Information Systems. Dr. Albanese received his Ph.D. degree in Computer Science and Engineering in 2005 from the University of Naples Federico II and joined George Mason University in 2011 after serving as a Postdoctoral Researcher at the University of Maryland, College Park. His research interests are in the area of Information and Network Security, with particular emphasis on Graph-based Modeling and Detection of Cyber Attacks, Cyber Situational Awareness, Network Hardening, Moving Target Defense, Configuration Security, and Vulnerability Metrics. Dr. Albanese has served on the technical program committee of numerous conferences and serves as an Associate Editor for the IEEE Internet of Things Journal, Springer International Journal of Information Security, and IET Information Security. Dr. Albanese is involved in several initiatives in the broad area of election security: (i) he organized the 1st International Workshop on Election Infrastructure Security (EIS 2022), in conjunction with the 27th European Symposium on Research in Computer Security (ESORICS 2022); (ii) he leads the Mason team in a multi-university funded internship program with the Virginia Department of Elections; and (iii) he organized the 1st Election Security Hackathon (Eleckathon™️ 2022) at George Mason University, which was held on October 28, 2022.


Panelists

Josh Benaloh
Senior Principal Cryptographer, Microsoft Research, USA

Bio - Josh Benaloh is the Senior Principal Cryptographer at Microsoft Research and an Affiliate Professor in the Allen School of Computer Science and Engineering at the University of Washington. His 1987 doctoral dissertation, “Verifiable Secret-Ballot Elections,” introduced the use of homomorphic encryption to enable election verifiability, and he has published and spoken extensively on election technologies and systems. Dr. Benaloh is an author of numerous studies and reports including the 2015 U.S. Vote Foundation report on “The Future of Voting,” the 2018 U.S. National Academies of Science, Engineering, and Medicine report “Securing the Vote – Protecting American Democracy,” and a soon-to-be-published report on the feasibility of Internet voting by the Goldman School of Public Policy of the University of California at Berkeley. He currently chairs the over 200-member Election Verification Network and is the principal designer of Microsoft’s free, open-source ElectionGuard toolkit, which is being used by numerous vendors to incorporate end-to-end verifiability into their election systems.

Matt Bernhard
Research Engineer, VotingWorks, USA

Bio - Matt Bernhard is a Research Engineer at VotingWorks, a non-profit, open-source election technology organization. Matt earned his Ph.D. from the University of Michigan, writing his dissertation on election security. His areas of focus include post-election auditing, voting system security, and human-centered research into security technologies.

Chad Houck
Chief Deputy Secretary of State, Idaho, USA

Bio - Chad Houck is the founder of Opearent, a nonprofit created to provide states with the tools to better administer elections. Finishing his final month as Idaho’s Chief Deputy Secretary of State, and with a former consulting focus on operational and process improvements in IT and commerce, Mr. Houck was responsible for all aspects (Fiscal, Elections, IT, and Corporate Divisions) of the Secretary of State Office, including both the state appropriated budget and IDSOS Federal Grants programs. Mr. Houck served on the 2017 and 2022 Idaho Governor’s Cybersecurity Task Forces and has assisted in the execution of both virtual and in-person elections cybersecurity exercises in partnership with DHS, BSU, Harvard, and the state of Idaho. As a founder and Adjunct Faculty of the INSURE Elections Cybersecurity Center at Boise State University, Mr. Houck has provided testimony and presented at various security conferences and legislative hearings around the nation for CISA, the National Association of Secretaries of State, the Idaho Legislature, the Great State of Louisiana, and the Naval Postgraduate School, where Houck received his Master’s in Homeland Security Studies in 2021.

Mark Lindeman
Policy & Strategy Director, Verified Voting, USA

Bio - Mark Lindeman is Policy & Strategy Director at Verified Voting and has been working to make elections more secure and verifiable for over a decade. Widely known and respected in the elections community, Mark worked on risk-limiting audits (RLAs) before they had a name and advises legislators, election officials, and other decision makers on audit methods. Mark has helped with RLA rulemaking and implementation in Georgia, Pennsylvania, California, Colorado, Rhode Island, and other states. He has co-authored several papers on RLAs including “A Gentle Introduction to Risk-Limiting Audits” and served as executive editor on the white paper “Risk-Limiting Audits: Why and How.” Mark has also served on the Coordinating Committee of the Election Verification Network since 2010. Mark has a Ph.D. in political science from Columbia University. He has frequently taught undergraduate and graduate courses in quantitative methods, public opinion, and various topics in American politics.

Arielle Schneider
Privacy Officer, Virginia Department of Elections, USA

Bio - As the Virginia Department of Elections' (ELECT) first Privacy Officer, Arielle Anderson Schneider has spent the last two years building a privacy program for state and local elections from the ground up. A firm believer that cybersecurity predicates meaningful privacy, she regularly advises local elections offices and ELECT’s technology and business divisions on privacy by design, election law and locality election security. As the Chair of the Virginia Voter Registration Systems Security Advisory Workgroup, she’s spearheading a substantial revision of existing cybersecurity standards for local election offices, slated for approval by the State Board of Elections in November 2022. Her expanded focus on real-world election technology, security and privacy allows her to explore solutions that promote privacy in elections at a national level; she most recently spoke about election privacy program management at the MS-ISAC/EI-ISAC Annual Meeting in August 2022. A native of Virginia since law school, Arielle received undergraduate degrees from the University of North Florida, an M.Sc. from the London School of Economics, and a J.D. from the University of Virginia School of Law, while completing a Florida Gubernatorial Fellowship in 2011, a Google Fellowship in 2013, and a Kennedy Fellowship in 2016.

Philip B. Stark
Distinguished Professor, University of California, Berkeley, USA

Bio - Philip B. Stark is Distinguished Professor of Statistics at the University of California, Berkeley, where he has served as department chair and associate dean. In 2007, he invented “risk-limiting election audits” (RLAs), endorsed by the National Academies and the American Statistical Association, among others, and now required or authorized by law in about 15 states. He designed and helped conduct the first dozen pilot RLAs. He has worked with the Secretaries of State of California and Colorado to create RLA procedures, laws, and regulations, and he helped draft RLA legislation for several states. In 2012, he and David Wagner introduced the notion of “evidence-based elections.” He has consulted for USDOJ, FTC, USDA, US Census Bureau, HUD, U.S. Department of Veterans Affairs, the California Attorney General, the California Highway Patrol, the Georgia Department of Law, the Illinois State Attorney, the New Hampshire Secretary of State, and the New Hampshire Attorney General. He has testified to the U.S. House of Representatives Subcommittee on the Census; the State of California Senate Committee on Elections, Reapportionment and Constitutional Amendments; the California Assembly Committee on Elections and Redistricting; the California Senate Committee on Natural Resources; and the California Little Hoover Commission. Stark serves on the Board of Directors of the Election Integrity Foundation, the Strategic Board of Advisors of the Open-Source Election Technology (OSET) Institute, and on the Board of Advisors of the U.S. Election Assistance Commission.





Panel 3: Robust and Fair AI

Friday, Dec. 16th, 2022, 3:30 PM - 5:00 PM (US EST, GMT-5), Zoom Meeting

In the current era, people and society have grown increasingly reliant on artificial intelligence (AI) technologies. AI has the potential to drive us toward a future in which all of humanity flourishes. However, as we enjoy more advances in AI and machine learning, there is a growing concern about whether and to what extent AI can be robust and fair. On one hand, robustness and fairness share some theoretical foundations such as causality; on the other hand, there may be an inherent tension between ensuring both fairness and robustness. This panel consists of top-notch research experts in fair AI and robust AI. They will share their vision and perspectives on the intersection of fairness and robustness, addressing research questions such as: Are fairness and robustness inherently conflicting? Can black-box models be both fair and robust? How might causality bridge the gap between them, if one exists? What technological advances do you expect to see?

Panel Moderator

Aidong Zhang
Professor, University of Virginia, USA


Bio - Dr. Aidong Zhang is currently a William Wulf Faculty Fellow and Professor of Computer Science in the School of Engineering and Applied Sciences at the University of Virginia (UVA). She also holds joint appointments with the Department of Biomedical Engineering and the School of Data Science at UVA. Her research interests include machine learning, data mining, bioinformatics, and health informatics. Dr. Zhang is a fellow of the ACM and IEEE.


Panelists

Emily Black
Stanford University, USA

Bio - Dr. Emily Black is currently a postdoc at Stanford's RegLab with Dan Ho and will start a faculty position at Barnard in Fall 2023. Her research centers around understanding the impacts of machine learning and deep learning models in society. In particular, her research focuses on showing ways in which commonly used machine learning models may act unfairly; finding ways to pinpoint when models are behaving in a harmful manner in practice; and developing ways to mitigate harmful behavior when possible. Currently, she is especially interested in the intersection between fairness, model stability, and procedural justice.

Thomas Dietterich
Oregon State University, USA

Bio - Dr. Dietterich is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. Dietterich is one of the pioneers of the field of Machine Learning and has authored more than 200 refereed publications and two books. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability. He also serves as head moderator of cs.LG, the machine learning section of arXiv.

Xia Hu
Rice University, USA

Bio - Dr. Xia “Ben” Hu is an Associate Professor in the Department of Computer Science at Rice University. Dr. Hu has published over 100 papers in major academic venues, including NeurIPS, ICLR, KDD, WWW, IJCAI, and AAAI. An open-source package developed by his group, AutoKeras, has become the most used automated deep learning system on GitHub (with over 8,000 stars and 1,000 forks). In addition, his work on deep collaborative filtering, anomaly detection, and knowledge graphs has been included in the TensorFlow package, Apple production systems, and Bing production systems, respectively.

Kush Varshney
IBM

Bio - Dr. Varshney is a distinguished research scientist and manager with IBM Research at the Thomas J. Watson Research Center, Yorktown Heights, NY, where he leads the machine learning group in the Foundations of Trustworthy AI department. He is the founding co-director of the IBM Science for Social Good initiative. He applies data science and predictive analytics to human capital management, healthcare, olfaction, computational creativity, public affairs, international development, and algorithmic fairness, which has led to the Extraordinary IBM Research Technical Accomplishment for contributions to workforce innovation and enterprise transformation, and IBM Corporate Technical Awards for Trustworthy AI and for AI-Powered Employee Journey.





Evening Session 1: Experiences in Academia and Research

Wednesday, Dec. 14th, 2022, 6:30 PM - 8:00 PM (US EST, GMT-5), Zoom Meeting

Panel Moderator

Abhilasha Bhargav-Spantzel
Microsoft Security Partner Architect, Ex-Intel Principal Engineer

Bio - Abhilasha Bhargav-Spantzel is a Partner Security Architect at Microsoft. Previously she was at Intel for 14 years, focusing on hardware-based security product architecture. She completed her doctorate at Purdue University, where she focused on identity and privacy protection using cryptography and biometrics. Abhilasha drives thought leadership and the future evolution of cybersecurity platforms through innovation, architecture, and education. She has given numerous talks at conferences and universities as part of distinguished lecture series and workshops. She has written 5 book chapters and 30+ ACM and IEEE articles and has 25+ patents. Abhilasha leads multiple D&I initiatives and actively drives the retention and development of women in technology. She is passionate about STEM K-12 cybersecurity education initiatives and co-organizes regular camps and workshops in support of them.


Panelists

Gaowen Liu
Cisco Research

Bio - Gaowen Liu received her Ph.D. in Computer Science from the University of Trento in 2017 and her M.S. degrees from the University of Trento and Nanjing University of Science and Technology. She was a visiting scholar at Carnegie Mellon University and the University of Michigan. She has published 20+ research papers in the fields of computer vision, machine learning, and multimedia. She received the IBM Best Student Paper Award at ICPR 2014 and an ICMR 2014 Student Travel Grant. Her main research interests relate to the investigation and implementation of new techniques in the fields of computer vision and multimedia. Specifically, she addresses a large spectrum of themes including model compression, human-behavior analysis, action recognition, and object detection. Her specific research topics include cross-media retrieval, multi-modal learning, social media analysis, and cross-modal generation.

Karthikeyan Shanmugam
Google Research India

Bio - Karthikeyan Shanmugam is currently a Research Scientist on the Machine Learning and Optimization team at Google Research India. Previously, he was a Research Staff Member with IBM Research AI, NY, from 2017 to 2022 and a Herman Goldstine Postdoctoral Fellow at IBM Research, NY, from 2016 to 2017. He obtained his Ph.D. in ECE from UT Austin in 2016. He is a recipient of the IBM Corporate Technical Award in 2021 for his work in Explainable AI and Causal Inference. His research focuses on causal inference, online learning, and interpretability in machine learning. He is also interested in Information Theory and Coding Theory.

Dave Zage
Intel Product Security Architect

Bio - David Zage is a Security Architect in the Networking and Edge (NEX) Group, focusing on defining security architectures for edge platforms and creating security capabilities for future usage. Prior to NEX, he was part of the Transportation Solution Division, enabling and creating the hardware and software building blocks needed to provide defense-in-depth solutions for the connected, software-defined vehicle. Before joining Intel, David worked at Sandia National Laboratories as a security lead on various projects, including securing cloud storage and the theory and practical application of write-optimized data structures. He obtained his BS and PhD in computer science from Purdue University. His research interests span multiple areas including self-healing systems, fault-tolerant protocols, applied machine learning, and large-scale data analytics. David has authored over twenty peer-reviewed publications, holds multiple patents, and regularly serves on the program committees of academic conferences. Outside of work, David enjoys spending time hiking, reading, and traveling.





Evening Session 2: Startups, Innovation and Entrepreneurship

Thursday, Dec. 15th, 2022, 6:30 PM - 8:00 PM (US EST, GMT-5), Zoom Meeting

Panel Moderator

Mummoorthy Murugesan
Normalyze Inc

Bio - Mummoorthy Murugesan is currently the founding Director of Engineering at Normalyze Inc. Earlier, he worked at Teradata R&D, where he developed the incremental planning and execution of queries. He has worked at start-ups such as Netskope and Turn, building highly scalable systems. Before Normalyze, he led the cloud infrastructure initiatives for Workday's Prism analytics. Dr. Murugesan's interests span data, analytics, security, and cloud infrastructure. He received his Ph.D. in Computer Science from Purdue University and his Master's degree from Syracuse University.


Panelists

Debabrata Dash
Arista Networks

Bio - Debabrata Dash (Dash) is a Distinguished Data Scientist at Arista Networks. Before Arista, he co-founded Awake Security - a network detection and response company. Dash works on ingesting, analyzing, and interactively exploring vast network data on minimal hardware footprints. Before co-founding Awake, he led the engineering team at CipherCloud and was a distinguished technologist in Hewlett-Packard’s enterprise security division. At HP, he was an early member of the ArcSight engineering team, building market-leading technologies in event correlation, event management, and real-time analytics. Dash earned a degree in computer science and engineering from the Indian Institute of Technology and a Ph.D. from Carnegie Mellon University.

Pei-Yun Sabrina Hsueh
Pfizer Inc

Bio - Pei-Yun Sabrina Hsueh (Ph.D., FAMIA) has over a decade of experience innovating, operationalizing, and productizing AI solutions, either by partnering with start-ups from big corporations or directly in start-ups with top ventures, e.g., A16Z. In her roles, she is actively leading the industry's best practices in health AI, focusing on establishing a responsible AI governance framework and operationalizing AI in workflows. She is currently the Director of Ethical AI and External Innovation at Pfizer Inc., serving on the Practitioners Board of the ACM and as co-Chair of the KDD Applied Health Data Science Workshop, Vice-Chair of the AMIA 2022 SPC, and Co-Chair of the AMIA AI Evaluation Showcase 2023. She has also served on the IEEE editor search committee and standard committee for AI nudging. She led and owned a series of initiatives in AI Evaluation and Governance and RWD Evidence Strategy, achieving a 10X speedup from code to clinic for deploying AI/ML models in clinical settings with cross-functional teams. Previously at IBM Research, she co-chaired the Health Informatics Professional Community and was elected as an IBM Academy of Technology Member. With a focus on both entrepreneurship and intrapreneurship, she has participated in the founding of a global collaborator and a behavioral analytics group to develop the innovation and commercialization strategy for Wellness Analytics capabilities on cloud platforms and push the boundaries of AI innovation in a responsible framework. Her dedication has won her recognitions such as the AMIA Distinguished Paper Award, Fellow of the AMIA, Google European Anita Borg Scholar, High-Value Inventions, Eminence and Excellence, and Manager Choice awards. She is on the Editorial Board of Sensors Journal, Frontiers in Public Health, and JAMIA OPEN Special Issue on Precision Medicine.
Her commitment has led to 20+ patents, 50+ technical articles, and two new textbooks: Machine Learning for Medicine and Healthcare (in prep.) and Personal Health Informatics - Patient Participation in Precision Health (in print by Springer Nature).