The Vision Behind MLPerf: A Broad ML Benchmark Suite for Measuring the Performance of ML Software Frameworks, ML Hardware Accelerators, and ML Cloud and Edge Platforms

Presentation Date: Tuesday, October 16, 2018

Location: Samsung Technology Forum in Austin at Samsung Austin Research Center (SARC)

Deep learning is transforming the field of machine learning (ML) from theory to practice. It has also sparked a renaissance in computer system design, fueled by the industry's need to improve ML accuracy and performance rapidly. Despite this fast pace of innovation, a key issue affects the industry at large: how to enable fair and useful benchmarking of ML software frameworks, ML hardware accelerators, and ML platforms. The field needs systematic ML benchmarking that is both representative of real-world use cases and useful for fair comparisons across different software and hardware platforms. MLPerf seeks to address this need. MLPerf is a machine learning benchmark standard and suite driven by industry and the academic research community at large. It began as a collaboration among researchers at Baidu, Google, Harvard, and Stanford, based on their early experiences. Since then, MLPerf has grown to include many companies and universities worldwide, along with hundreds of individual participants. The talk presents the principles behind MLPerf and discusses the challenges and opportunities in developing an industry-wide ML benchmark.
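
As a concrete illustration of what "useful for fair comparisons" can mean in practice, the sketch below times a training run until it reaches an agreed target quality, one common way to score end-to-end ML performance. This is a minimal Python sketch under assumed interfaces, not MLPerf's official harness: time_to_target_quality, train_epoch, and evaluate are hypothetical names introduced here for illustration.

    import time

    def time_to_target_quality(train_epoch, evaluate, target_accuracy, max_epochs=100):
        """Measure wall-clock time for training to reach a target quality.

        train_epoch and evaluate are caller-supplied callables; the
        target-accuracy stopping rule is an illustrative stand-in for a
        benchmark's scoring rule, not an official MLPerf interface.
        """
        start = time.perf_counter()
        for epoch in range(max_epochs):
            train_epoch()                    # one pass over the training data
            accuracy = evaluate()            # held-out quality metric in [0, 1]
            if accuracy >= target_accuracy:  # stop at the agreed quality threshold
                return time.perf_counter() - start
        raise RuntimeError("target quality not reached within max_epochs")

    # Example usage with toy stand-ins (a real run would wrap an actual model):
    accs = iter([0.60, 0.72, 0.81])
    elapsed = time_to_target_quality(
        train_epoch=lambda: time.sleep(0.01),
        evaluate=lambda: next(accs),
        target_accuracy=0.80,
    )
    print(f"reached target in {elapsed:.3f}s")

Scoring time-to-quality rather than raw throughput ties the measurement to a real-world outcome, which is one way a benchmark can stay representative while still permitting fair cross-platform comparison.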