Workshop on ML for Systems at NeurIPS 2022, December 3rd, New Orleans

Schedule Overview

Time                 Section
                     Opening Remarks
8:40AM - 9:30AM      Invited Speaker 1: Jeff Dean
9:30AM - 10:15AM     Invited Speaker 2: Dawn Song
10:20AM - 10:50AM    Poster Session #1
10:50AM - 11:55AM    Contributed Talks
12:00PM - 1:00PM     Lunch
1:05PM - 1:45PM      Contributed Talks
1:45PM - 2:15PM      Poster Session #2
2:15PM - 3:00PM      Invited Speaker 3: Steve Keckler
3:05PM - 3:50PM      Invited Speaker 4: Newsha Ardalani
3:55PM - 4:40PM      Invited Speaker 5: Riyadh Baghdadi
5:40PM               Closing

Invited Speakers

Jeff Dean

Keynote Speaker

Senior Fellow and SVP, Google AI and Google Health. Lead and co-founder of Google Brain. Co-designer and implementer of TensorFlow, MapReduce, BigTable, and Spanner.

In this talk, I'll touch on the myriad ways that machine learning is being used to dramatically rethink how we approach computer systems. I'll highlight research covering a variety of problems in ASIC chip design, computer architecture, distributed systems, database systems, compilers, content delivery systems, and more. I'll also highlight how building simple interfaces that allow "learned choices" to be integrated into the middle of existing hand-coded software can dramatically expand the breadth and ease with which machine learning can be applied to many kinds of decisions, including many at the core of computer systems. This talk presents work by a great number of Google Research colleagues and is meant as an overview of the exciting advances in applying ML to computer systems problems.
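The "learned choices" idea above can be sketched as a minimal interface (this is an illustration, not code from the talk): a single call site where hand-coded software can consult a learned model while keeping the existing heuristic as a fallback. All names here are hypothetical.

```python
class LearnedChoice:
    """Wraps a hand-coded heuristic so a learned predictor can replace it."""

    def __init__(self, heuristic, model=None):
        self.heuristic = heuristic  # existing hand-coded decision rule
        self.model = model          # optional learned replacement

    def decide(self, features):
        # Prefer the learned model when one has been plugged in;
        # otherwise fall back to the original heuristic.
        if self.model is not None:
            return self.model(features)
        return self.heuristic(features)


# Hypothetical example: choosing a cache entry to evict. The heuristic
# evicts the entry with the largest age (least recently used).
lru = lambda f: max(f["ages"], key=f["ages"].get)
choice = LearnedChoice(lru)
print(choice.decide({"ages": {"a": 3, "b": 1, "c": 7}}))  # -> c
```

Because the interface is a plain callable, a trained model can be dropped in later without touching the surrounding system code.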

Dawn Song

Dawn Song is the Faculty co-Director of the UC Berkeley Center on Responsible Decentralized Intelligence (RDI). She is also part of the Berkeley Artificial Intelligence Research (BAIR) Lab, the Berkeley Deep Drive (BDD), and Berkeley Center for Human-Compatible AI. Her research interests include deep learning, security, and blockchain.

Steve Keckler

Dr. Stephen W. Keckler is the Vice President of Architecture Research at NVIDIA and an Adjunct Professor of Computer Science at the University of Texas at Austin, where he served on the faculty from 1998-2012. His research interests include parallel computer architectures, high-performance computing, energy-efficient architectures, and embedded computing. Dr. Keckler is a Fellow of the ACM, a Fellow of the IEEE, an Alfred P. Sloan Research Fellow, and a recipient of the NSF CAREER award, the ACM Grace Murray Hopper award, the President's Associates Teaching Excellence Award at UT-Austin, and the Edith and Peter O’Donnell award for Engineering. He earned a B.S. in Electrical Engineering from Stanford University and M.S. and Ph.D. degrees in Computer Science from the Massachusetts Institute of Technology.

Applying ML to Practical System Design

While machine learning is actively used in fields ranging from computer graphics to language understanding, its use in computer systems is still in its early stages. This talk will describe practical uses of machine learning in both electronic design automation and computer architecture, illustrating how NVIDIA is developing and using ML techniques to accelerate the development of better computer systems.

Newsha Ardalani

Newsha Ardalani is a Research Scientist at Meta AI (FAIR). She received her Ph.D. in Computer Sciences from the University of Wisconsin-Madison. Her research interests lie at the intersection of machine learning, systems, hardware, and data. She is currently working on data quality and its implications for large-scale model and system design.

While the role of model architecture and hardware system on training performance is well-understood and appreciated, the role of data quality and quantity is often overlooked. In this talk, I will highlight the performance implications of data quality, particularly on speed of training and scaling efficiency, and argue that it is time to shift our paradigm from HW/SW co-design to HW/SW/Data tri-design.

Riyadh Baghdadi

Riyadh Baghdadi is an assistant professor at NYUAD (New York University Abu Dhabi, UAE) and a research affiliate at MIT CSAIL (Massachusetts Institute of Technology, USA). His research lies at the intersection of applied machine learning and compilers, with a focus on compilers and programming models for high-performance computing and compute-intensive domains.

Enabling compilers to automatically optimize code has been a longstanding goal for the compiler community. Efficiently solving this problem requires precise cost models: models that predict whether applying a sequence of code transformations reduces a program's execution time. Building such an analytical cost model is hard on modern x86 architectures due to the complexity of the microarchitecture. In this talk, I will present a novel deep-learning-based cost model for automatic code optimization. The model was integrated into a search method and implemented in the Tiramisu compiler to select the best code transformations. Its input is a set of simple features representing the unoptimized code together with a sequence of code transformations; it predicts the speedup expected when those transformations are applied. Unlike previous models, it works on full programs and does not rely on any heavy feature engineering. The model achieves only 16% mean absolute percentage error in predicting speedups on full programs, and it enables Tiramisu to automatically find code transformations that match or outperform state-of-the-art compilers without requiring the heavy feature engineering those compilers depend on.
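The shape of such a cost model can be sketched in a few lines. The example below is a toy stand-in, not the Tiramisu model: it uses a linear least-squares fit on synthetic data, where the real work uses a deep network trained on measured speedups, and all feature names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy program features (e.g. loop depth, trip count, arrays accessed)
# plus a binary transformation vector (e.g. tiled, unrolled, parallelized).
X_prog = rng.uniform(0, 1, size=(200, 3))
X_trans = rng.integers(0, 2, size=(200, 3))
X = np.hstack([X_prog, X_trans.astype(float)])

# Synthetic ground truth: each feature/transformation contributes linearly
# to the observed speedup, plus measurement noise.
true_w = np.array([0.1, 0.0, 0.2, 0.8, 0.5, 1.2])
y = X @ true_w + rng.normal(0, 0.05, size=200)

# Least-squares fit as a stand-in for the deep cost model.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict the speedup of one candidate: a program with features
# [0.5, 0.5, 0.5] under tiling + parallelization (no unrolling).
candidate = np.array([0.5, 0.5, 0.5, 1.0, 0.0, 1.0])
print(f"predicted speedup: {candidate @ w:.2f}")
```

A search method can then rank candidate transformation sequences by this predicted speedup instead of compiling and timing each one.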

Detailed Schedule

What to expect

Machine Learning (ML) for Systems is an important direction for applying ML in the real world. ML has been shown to replace long-standing heuristics in computer systems using supervised learning and reinforcement learning (RL). The computer systems community recognizes the importance of ML in tackling challenging multi-objective tasks such as designing new data structures [1], integrated circuits [2,3], or schedulers, as well as implementing control algorithms for applications such as compilers [12,13], databases [8], memory management [9,10], or ML frameworks [6].
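As one concrete flavor of "ML replacing a heuristic", the sketch below shows a toy learned index in the spirit of the learned data structures line of work: a linear model learns where keys sit in a sorted array and predicts a position, so only a short local search remains. This is a simplified illustration under the assumption of integer keys, not any published system's implementation.

```python
import bisect

keys = sorted(range(0, 1000, 7))  # sorted integer keys: 0, 7, 14, ...
n = len(keys)

# Fit position ~ a * key + b by simple linear regression over (key, index),
# i.e. learn an approximation of the keys' cumulative distribution.
mean_k = sum(keys) / n
mean_i = (n - 1) / 2
a = sum((k - mean_k) * (i - mean_i) for i, k in enumerate(keys)) / \
    sum((k - mean_k) ** 2 for k in keys)
b = mean_i - a * mean_k

def lookup(key, err=2):
    # The model predicts an approximate slot; a bounded local search
    # (here via bisect within +/- err) corrects any prediction error.
    guess = round(a * key + b)
    lo, hi = max(0, guess - err), min(n, guess + err + 1)
    i = bisect.bisect_left(keys, key, lo, hi)
    return i if i < n and keys[i] == key else -1

print(lookup(49))  # key 49 = 7*7, stored at index 7 -> 7
```

Because these keys are perfectly linear in their index, the model is exact here; on real key distributions the error bound `err` would come from the model's worst-case residual.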

General Workshop Direction. This is the fifth iteration of this workshop. Previous editions showcased approaches and frameworks for solving systems problems, bringing together researchers and practitioners at NeurIPS from both the ML and systems communities. While breaking new ground, we encouraged collaboration and development across a broad range of ML for Systems work, much of it later published in top-tier conferences [6,13,14,15,16,17,18]. This year, we plan to continue on this path while expanding our call for papers to encourage emerging work on minimizing energy footprint, reaching carbon neutrality, and using machine learning for system security and privacy.

Focusing the Workshop on Unifying Works. As the field of ML for Systems is maturing, we are adapting the focus and format of the workshop to evolve with it. The community has seen several efforts to consolidate different subfields of ML for Systems [4,5,6,7]. However, such efforts need more support. To boost recent advances in shared methodology, tools, and frameworks, this year we welcome submissions presenting datasets, simulators, or benchmarks that can facilitate research in the area.

The workshop will host five invited speakers, and we invite researchers to submit relevant papers through our call for papers.

Accepted Papers

Organizing Committee

Steering Committee

Contact Us

Contact us at mlforsystems@googlegroups.com.