Workshop on ML for Systems at NeurIPS 2018, December 8th, 8:30AM-6:00PM, Room 510 AC

Overview

Designing specialized hardware for deep learning is a topic that has received significant research attention in both industrial and academic settings, leading to exponential increases in the compute capability of GPUs and accelerators. However, using machine learning to optimize and accelerate software and hardware systems is a lightly explored but promising field, with broad implications for computing as a whole. Very recent work has outlined a broad scope where deep learning vastly outperforms traditional heuristics, in areas such as scheduling [1, 2], data structure design [3], microarchitecture [4], compilers [5], and control of warehouse-scale computing systems [6].

This workshop aims to expand upon this recent work and build a community focused on applying machine learning to computer systems problems. We seek to improve the state of the art in areas where learning has already proven to outperform traditional heuristics, as well as to expand to new areas throughout the system stack, such as hardware/circuit design and operating/runtime systems.

By forming a community of academic and industrial researchers who are excited about this area, we seek to build towards intelligent, self-optimizing systems and to answer questions such as: How do we generate and share high-quality datasets that span the layers of the system stack? Which learned representations best capture code performance and runtime behavior? Which simulators and simulation methodologies provide a tractable proving ground for techniques like reinforcement learning?

To this end, the target audience for this workshop ranges from state-of-the-art researchers in machine learning to domain experts in computer systems design. We have invited a broad set of expert speakers to present the potential impact of combining deep learning research with computer systems. We hope that by providing a formal venue for researchers from both fields to meet and interact, the workshop will yield both fundamental research in ML and real-world impact on computer systems design and implementation.

The workshop hosted six speakers, and we invited researchers to submit relevant papers through our call for papers. The speakers, along with other relevant stakeholders, were invited to participate in a panel discussion that closed the workshop. See the schedule for details and recordings of the last two talks, and find the accepted papers here.

Speakers

Jeff Dean

Keynote Speaker

Senior Fellow, Google AI. Google Brain lead and co-founder. Co-designer and implementer of TensorFlow, MapReduce, BigTable, and Spanner.

Song Han

Song Han is an assistant professor in the EECS Department at the Massachusetts Institute of Technology (MIT) and PI of HAN Lab: Hardware, AI and Neural-nets. Dr. Han's research focuses on energy-efficient deep learning and domain-specific architectures. He proposed "Deep Compression," which has had a wide impact on industry. He was the co-founder and chief scientist of DeePhi Tech, which was acquired by Xilinx. Dr. Han received his Ph.D. in Electrical Engineering from Stanford University, advised by Prof. Bill Dally, and his B.S. in Electrical Engineering from Tsinghua University.

Sanjay Krishnan

Assistant Professor of Computer Science at the University of Chicago.

Partha Ranganathan

Parthasarathy (Partha) Ranganathan is currently at Google, designing their next-generation systems. Before this, he was an HP Fellow and Chief Technologist at Hewlett Packard Labs, where he led their research on systems and datacenters. Dr. Ranganathan's research interests are in systems architecture and manageability, energy efficiency, and systems modeling and evaluation. He has done extensive work in these areas, including key contributions around energy-aware user interfaces, heterogeneous multi-core processors, power capping and power-aware server designs, federated enterprise power management, energy modeling and benchmarking, disaggregated blade server architectures, and most recently, storage hierarchy and systems redesign for non-volatile memory. He was also one of the primary developers of the publicly distributed Rice Simulator for ILP Multiprocessors (RSIM).

Eric Schkufza

Eric Schkufza is a researcher with the VMware Research Group. He is interested in applying the tools of large-scale data analysis and machine learning to the design of optimizing compilers. His work focuses on the analysis and optimization of low-level machine code in the absence of its original source, most recently in the context of hardware accelerators.

Neeraja J. Yadwadkar

Neeraja recently graduated with a PhD in Computer Science from the University of California, Berkeley. Her thesis focused on automatic resource management in the datacenter and the cloud. She is now a post-doctoral researcher in the Computer Science Department at Stanford University, where she continues to work on distributed systems, cloud computing, and machine learning.

Organizing Committee

Program Committee

  • Anna Goldie, Google Brain
  • Azalia Mirhoseini, Google Brain
  • Jonathan Raiman, OpenAI
  • Kevin Swersky, Google Brain
  • Milad Hashemi, Google
  • Simon Kornblith, Google
  • Nicholas Frosst, Google
  • Zhan Shi, University of Texas at Austin
  • Will Hang, Stanford
  • Amir Yazdanbakhsh, Google
  • Azade Nazi, Google
  • Alex Ray, OpenAI
  • Andrew Gibiansky, Voicery
  • James Bradbury, Google
  • Sharan Narang, Google Brain
  • Martin Maas, Google
  • Carlos Villavieja, Google

Contact Us

Contact us at mlforsystems@googlegroups.com.