Speaker: Yongfeng Zhang, Rutgers University, Piscataway, NJ

Title: Explainable AI for Science

Abstract: AI has become increasingly integrated into the scientific discovery process and has achieved several important results, such as discovering symbolic rules underlying physical observational data, predicting the chemical properties of drug molecules, and predicting the folding structure of proteins. However, existing AI for science research mostly addresses the "what" problem rather than the "why" problem. More specifically, such systems can make good predictions from massive data using complicated black-box models, but they are less capable of explaining the prediction results and revealing the underlying insights to human scientists. Yet science is not only about the know-how, but also about the know-why. In many cases, understanding the "why" behind a result is even more important than knowing the result itself, because knowing the why implies real growth of knowledge and helps in making critical decisions. Furthermore, if AI accumulates more and more knowledge that is not understandable to humans, which is already happening, it may eventually lead to a singularity where humans lag behind in the conquest of knowledge. This talk highlights the importance of Explainable AI for scientific research, which requires AI to not only make good scientific predictions but also explain its results to human scientists. We will use two examples to illustrate the idea and research methodology: one is rediscovering Kepler's and Newton's laws from Tycho Brahe's data using Explainable AI, and the other is explaining AI-predicted molecular properties in drug discovery. Through this talk, we hope to draw the community's attention to the importance of Explainable AI for Science and inspire new research directions in this emerging area.

Bio: Yongfeng Zhang is an Assistant Professor in the Department of Computer Science at Rutgers University. His research interests are in Machine Learning, Machine Reasoning, Explainable AI, AI for Science, and AI Ethics. Previously he was a postdoc at UMass Amherst, and he received his PhD and BE in Computer Science from Tsinghua University, along with a BS in Economics from Peking University. He serves as an associate editor for ACM Transactions on Information Systems (TOIS), ACM Transactions on Recommender Systems (TORS), and Frontiers in Big Data. He is a Siebel Scholar of the class of 2015 and an NSF CAREER awardee (2021).