Speakers
Jeff Dean
Keynote Speaker
Exciting Directions in Machine Learning for Computer Systems
Jeff Dean joined Google in mid-1999 and is currently Google's Chief Scientist, focusing on AI advances for Google DeepMind and Google Research. His areas of focus include machine learning, AI, and applications of AI to problems that help billions of people in societally beneficial ways. Jeff co-founded Google Brain and is a co-designer and implementer of TensorFlow, MapReduce, BigTable, and Spanner. He has been involved in several ML for Systems projects, including Learned Index Structures and ML for chip floorplanning.
Richard Ho
Hardware
Navigating Scaling and Efficiency Challenges of ML Systems
Richard is Head of Hardware at OpenAI, working to co-optimize ML models and the massive compute hardware they run on. Richard was one of the early engineers working on Google TPUs and helped lead the team through TPUv5. Before Google, Richard was part of the D. E. Shaw Research team that built the Anton 1 and Anton 2 molecular dynamics simulation supercomputers, both of which won the Gordon Bell Prize. Richard started his career as co-founder and Chief Architect of 0-In Design Automation, a pioneer in formal verification tools for chip design that was acquired by Mentor Graphics/Siemens. Richard has a Ph.D. in Computer Science from Stanford University and an M.Eng. and B.Sc. from the University of Manchester, UK.
Tim Kraska
Retrospective
ML and Generative AI for Data Systems
Tim Kraska is a director of applied science at Amazon Web Services (AWS), a professor of Electrical Engineering and Computer Science (EECS) in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), co-director of MIT's Data Systems and AI Lab (DSAIL@CSAIL), and a co-founder of Instancio and Einblick Analytics (both acquired). His current research focuses on using ML and generative AI for data systems. Before joining MIT, Tim was an Assistant Professor at Brown University and spent time at Google Brain. Tim is a 2017 Alfred P. Sloan Research Fellow in computer science and has received several awards, including the VLDB Early Career Research Contribution Award, the Intel Outstanding Researcher Award, the VMware Systems Research Award, the university-wide Early Career Research Achievement Award at Brown University, an NSF CAREER Award, as well as several best paper and demo awards at VLDB, SIGMOD, and ICDE.
Natasha Jaques
Special Topic: MARL
Leveraging Deep Multi-agent Reinforcement Learning (MARL) for NP-hard Combinatorial Optimization
Natasha Jaques is an Assistant Professor of Computer Science and Engineering at the University of Washington and a Senior Research Scientist at Google DeepMind. Her research focuses on Social Reinforcement Learning in multi-agent and human-AI interactions. During her PhD at MIT, she developed techniques for learning from human feedback signals to train language models, which OpenAI later built on in its work on Reinforcement Learning from Human Feedback (RLHF). In the multi-agent space, she has developed techniques for improving coordination through the optimization of social influence, and adversarial environment generation for improving the robustness of RL agents. Natasha's work has received various awards, including Best Demo at NeurIPS, an honourable mention for Best Paper at ICML, and the Outstanding PhD Dissertation Award from the Association for the Advancement of Affective Computing. Her work has been featured in Science Magazine, MIT Technology Review, Quartz, IEEE Spectrum, Boston Magazine, and on CBC Radio, among others. Natasha earned her Master's degree from the University of British Columbia and undergraduate degrees in Computer Science and Psychology from the University of Regina, and completed a postdoc at UC Berkeley.
Ahmed El-Kishky
Special Topic: CodeGen
OpenAI o1 Competing in the International Olympiad in Informatics
Ahmed El-Kishky is a Research Lead at OpenAI, where he focuses on advancing language models and improving AI reasoning through reinforcement learning. He was instrumental in developing OpenAI o1, a model built for complex problem-solving, and led the creation of OpenAI o1-IOI, which competed in prestigious programming competitions such as the International Olympiad in Informatics. Ahmed earned his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign, where his research centered on scalable machine learning algorithms and natural language processing.