Workshop on ML for Systems at NeurIPS 2024, December 15, Vancouver Convention Center, West Room 201

Speakers

Jeff Dean

Keynote Speaker

Exciting Directions in Machine Learning for Computer Systems

Jeff Dean joined Google in mid-1999 and is currently Google's Chief Scientist, focusing on AI advances for Google DeepMind and Google Research. His areas of focus include machine learning and AI, and the application of AI to problems that help billions of people in societally beneficial ways. Jeff co-founded Google Brain and is a co-designer and implementer of TensorFlow, MapReduce, BigTable, and Spanner. He has been involved in several ML for Systems projects, including Learned Index Structures and ML for chip floorplanning.

Richard Ho

Hardware

Navigating Scaling and Efficiency Challenges of ML Systems

Richard is Head of Hardware at OpenAI, working to co-optimize ML models and the massive compute hardware they run on. Richard was one of the early engineers working on Google TPUs and helped lead the team through TPUv5. Before Google, Richard was part of the D. E. Shaw Research team that built the Anton 1 and Anton 2 molecular dynamics simulation supercomputers, both of which won the Gordon Bell Prize. Richard started his career as co-founder and Chief Architect of 0-In Design Automation, a pioneer in formal verification tools for chip design that was acquired by Mentor Graphics/Siemens. Richard has a Ph.D. in Computer Science from Stanford University and an M.Eng. and B.Sc. from the University of Manchester, UK.

Tim Kraska

Retrospective

ML and Generative AI for Data Systems

Tim Kraska is a director of applied science at Amazon Web Services (AWS), a professor of Electrical Engineering and Computer Science (EECS) in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), co-director of MIT's Data Systems and AI Lab (DSAIL@CSAIL), and was a co-founder of Instancio and of Einblick Analytics (both acquired). Currently, his research focuses on using ML and generative AI for data systems. Before joining MIT, Tim was an Assistant Professor at Brown and spent time at Google Brain. Tim is a 2017 Alfred P. Sloan Research Fellow in computer science and has received several awards, including the VLDB Early Career Research Contribution Award, the Intel Outstanding Researcher Award, the VMware Systems Research Award, the university-wide Early Career Research Achievement Award at Brown University, an NSF CAREER Award, as well as several best paper and demo awards at VLDB, SIGMOD, and ICDE.

Natasha Jaques

Special Topic: MARL

Leveraging Deep Multi-agent Reinforcement Learning (MARL) for NP-hard Combinatorial Optimization

Natasha Jaques is an Assistant Professor of Computer Science and Engineering at the University of Washington and a Senior Research Scientist at Google DeepMind. Her research focuses on Social Reinforcement Learning in multi-agent and human-AI interactions. During her PhD at MIT, she developed techniques for learning from human feedback signals to train language models, which were later built on by OpenAI's series of work on Reinforcement Learning from Human Feedback (RLHF). In the multi-agent space, she has developed techniques for improving coordination through the optimization of social influence, and adversarial environment generation for improving the robustness of RL agents. Natasha's work has received various awards, including Best Demo at NeurIPS, an honourable mention for Best Paper at ICML, and the Outstanding PhD Dissertation Award from the Association for the Advancement of Affective Computing. Her work has been featured in Science Magazine, MIT Technology Review, Quartz, IEEE Spectrum, Boston Magazine, and on CBC radio, among others. Natasha earned her master's degree from the University of British Columbia, undergraduate degrees in Computer Science and Psychology from the University of Regina, and completed a postdoc at UC Berkeley.

Ahmed El-Kishky

Special Topic: CodeGen

OpenAI o1: Competing in the International Olympiad in Informatics

Ahmed El-Kishky is a Research Lead at OpenAI, where he focuses on advancing language models and improving AI reasoning through reinforcement learning. He was instrumental in developing OpenAI o1, a model built for complex problem-solving, and led the creation of OpenAI o1-IOI, which competed in prestigious programming competitions such as the International Olympiad in Informatics. Ahmed earned his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign, where his research centered on scalable machine learning algorithms and natural language processing.

What To Expect

The ML for Systems workshop presents cutting-edge work on ML in computer systems and aims to develop a unified methodology for the field.

Machine Learning (ML) for Systems describes the application of machine learning techniques to problems in computer systems. By leveraging supervised learning and reinforcement learning (RL) approaches, machine learning can replace the longstanding heuristics that currently drive many of these systems. This covers a wide range of topics, including multi-objective tasks such as designing new data structures [1], integrated circuits [2, 3], or design verification [20, 21], as well as implementing control algorithms for applications such as compilers [12, 13, 19], databases [8], memory management [9, 10], or ML frameworks [11]. While the systems community increasingly recognizes the importance of ML in solving a variety of different systems problems [23], ML for Systems remains an emerging area without widely established best practices, methods, and strategies for the application of state-of-the-art machine learning techniques [22]. The goal of this workshop is to provide an interdisciplinary venue for ML and Systems experts to push this boundary and start new directions within the ML for Systems area.
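As a concrete illustration of replacing a heuristic with a learned model, the sketch below shows a minimal, hypothetical learned index in the spirit of the learned index structures cited above: a simple model predicts where a key sits in a sorted array, and a bounded local search corrects the prediction. The class name, the linear model, and the error-bound correction are illustrative assumptions, not a reference implementation from any cited work.

```python
import bisect
import numpy as np

class LearnedIndex:
    """Hypothetical sketch: a model predicts a key's position in a sorted
    array, replacing a full binary search with a bounded local search."""

    def __init__(self, sorted_keys):
        self.keys = np.asarray(sorted_keys, dtype=np.float64)
        positions = np.arange(len(self.keys))
        # Fit a simple linear model mapping key -> position (least squares).
        self.slope, self.intercept = np.polyfit(self.keys, positions, deg=1)
        predictions = self.slope * self.keys + self.intercept
        # Record the worst-case prediction error to bound the local search.
        self.max_err = int(np.ceil(np.max(np.abs(predictions - positions))))

    def lookup(self, key):
        # Predict the position, then search only within the error window.
        guess = int(self.slope * key + self.intercept)
        lo = max(0, guess - self.max_err)
        hi = min(len(self.keys), guess + self.max_err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < len(self.keys) and self.keys[i] == key else None

# Illustrative usage on synthetic data.
keys = np.sort(np.random.default_rng(0).uniform(0, 1e6, size=100_000))
index = LearnedIndex(keys)
assert index.lookup(keys[12_345]) == 12_345
```

In this toy setting the linear model works because uniformly distributed keys have a near-linear cumulative distribution; real learned index designs use richer models and hierarchies, but the pattern of "predict, then correct within a bound" is the same.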

Workshop Direction

In the previous six editions, we showcased specific approaches and frameworks to solve these problems, bringing together researchers and practitioners at NeurIPS from both the ML and systems communities. While breaking new ground, we encouraged collaborations and development across a broad range of ML for Systems work, much of which was later published in top-tier conferences [11, 13, 14, 15, 16, 17, 18]. This year, we plan to continue on this path while encouraging work in key emerging areas such as Large Language Model (LLM) training and serving, and unifying benchmarks on key problems such as scheduling and compiling through a competition.

Recently, the rise of Large Language Models (LLMs) has presented new opportunities and challenges within the domain of computer systems. Our community is well positioned to produce science and stimulate discussion around adapting to this new paradigm, in particular how LLMs can be used to solve systems problems and how ML can address the systems issues that emerge from LLM training and serving. Additionally, as the field matures, we emphasize keeping the research open and the science reproducible. To that end, we are supplementing our main program with a competition track to crystallize the field's progress.

Workshop Goals

NeurIPS provides a unique opportunity to bring together systems researchers and researchers from other sub-areas of ML who had not previously considered applying their techniques in a computer systems context. We see our workshop as serving the following two objectives:

  • Opening up connections between research areas that were not previously considered, connecting the ML and systems communities, growing the scope of ML for Systems work, and unlocking new research opportunities.
  • Developing best practices, methodologies and benchmarks for the ML for Systems field.

To build common ground on the topic of LLMs interacting with computer systems, our invited speakers include seasoned researchers and practitioners giving talks on emerging trends in LLM training and serving. Our call for papers also includes topics at the intersection of systems and LLMs.

Our program includes contributed talks and poster sessions drawn from selected works. Our schedule is available. We invited researchers to submit relevant papers through our call for papers.

Organizing Committee

Steering Committee

Contact Us

Contact us at mlforsystems@googlegroups.com.