About me

Jeff (Jun) Zhang is a Postdoctoral Fellow at Harvard University, working with Prof. David Brooks and Prof. Gu-Yeon Wei. Jeff received his Ph.D. degree from the Electrical and Computer Engineering Department at New York University in 2020, supervised by Prof. Siddharth Garg. He also has research internship experience with Samsung Semiconductor and Microsoft Research. 

Jeff's general research interests are in deep learning, computer architecture, embedded systems, and EDA, with particular emphasis on energy-efficient and fault-tolerant design for AI/ML systems and hardware accelerators. He received Best Paper Award nominations at DATE 2022 and VTS 2018, and a Best Presentation Award nomination at the DATE Ph.D. Forum 2020. Jeff serves on the technical program committees of several top conferences in computer engineering and computer hardware, and has served as a reviewer for several IEEE and ACM journals. (CV, Research Statement)

News

**I joined the School of ECEE at Arizona State University as a tenure-track assistant professor in Spring 2023. I am looking for motivated Ph.D. and M.S. students, postdoctoral scholars, and undergraduate researchers. If you are interested in my research, please email me with a copy of your CV to start a conversation.** (Chinese version)

01/2023: Joined ASU as an assistant professor of ECEE! Please visit here for updates.

01/2023: Our paper "Path Planning Under Uncertainty to Localize mmWave Sources" has been accepted at ICRA 2023!

12/2022: We'll be organizing the ACM CADathlon ("Olympic games of EDA") again at ICCAD'23.

10/2022: Serving as a TPC member for DAC'23.

10/2022: Our paper "A 12nm 18.1TFLOPs/W Sparse Transformer Processor with Entropy-Based Early Exit, Mixed-Precision Predication, and Fine-Grained Power Management" has been accepted at ISSCC 2023!

09/2022: Serving as a session chair for "Performance, Power and Temperature Aspects in Deep Learning" at ICCAD'22. Join us in San Diego!

09/2022: Our paper "End-to-end Synthesis of Dynamically Scheduled Machine Learning Accelerators" has been accepted by IEEE Transactions on Computers.

08/2022: Our invited paper "A Scalable Methodology for Agile Chip Development with Open-Source Hardware Components" will appear at ICCAD'22.

07/2022: Presented an invited talk at DACPS 2022.

06/2022: Our paper "Bridging Python to Silicon: The SODA Toolchain" has been accepted at IEEE Micro 2022! 

06/2022: Our paper "A 12nm Agile-Designed SoC for Swarm-Based Perception with Heterogeneous IP Blocks, a Reconfigurable Memory Hierarchy, and an 800MHz Multi-Plane NoC" has been accepted at ESSDERC/ESSCIRC 2022!

05/2022: Serving as an Artifact Evaluation co-chair for HPCA'23. Please submit your best work!

05/2022: We're organizing the 1st ACM SIGDA Job Fair at ICCAD'22. Please submit your CV!

04/2022: Serving as a session chair for "Accelerating the Inference: Transformers, Graphs and Others" at DAC'22. Find me in San Francisco! 

04/2022: We're organizing the ACM CADathlon ("Olympic games of EDA") at ICCAD'22. Please join the competition!

04/2022: Our paper "ASAP: Automatic Synthesis of Area-efficient and Precision-aware CGRA" (in collaboration with PNNL) has been accepted at ICS'22!

04/2022: Serving as a TPC member for ICCD'22.

03/2022: Two papers were accepted at DATE'22; one of them was nominated for the Best Paper Award!

02/2022: We're organizing the NOPE workshop at ASPLOS 2022. Please consider submitting your NOPE papers!

02/2022: Our paper on "Millimeter Wave Wireless-assisted Robotic Navigation" has been accepted at the IEEE Open Journal of the Communications Society!

12/2021: Serving as an External Reviewer for MLSys'22.

10/2021: We successfully taped out our second 12nm domain-specific SoC for autonomous driving (in collaboration with IBM, Columbia, and UIUC)!

10/2021: Serving as a TPC member for the AI/ML Design: Circuits and Architecture track of DAC'22.

09/2021: Our RecPipe work passed the MICRO 2021 Artifact Evaluation process and received all three badges (Artifact Available, Functional, Reproduced)! Please try it out here!

07/2021: Presented our work "Towards automatic and agile ML/AI accelerator design with end-to-end synthesis" (in collaboration with PNNL) at ASAP'21! [Paper] [Slides]

07/2021: Our paper "RecPipe: Co-designing Models and Hardware to Jointly Optimize Recommendation Quality and Performance" has been accepted at MICRO'21!

07/2021: Serving as a TPC member for the Test, Security, and Verification track of ICCD'21.

06/2021: Serving as a TPC member for IISWC'21.

11/2020: Presented two posters at the NSF Workshop on Machine Learning Hardware. [Poster 1] [Poster 2]

10/2020: Serving as a TPC member for the AI/ML system design track of DAC'21.

10/2020: Serving as a TPC member for the architecture track of IPDPS'21.

08/2020: Two U.S. patents on data compression for AI hardware were published with Microsoft Research and the HoloLens team.

08/2020: Selected as a member of the Artifact Evaluation Committee for USENIX OSDI'20

08/2020: Became an alumnus of NYU EuSuRe research group and joined the Harvard Architecture, Circuits, and Compilers Group as a Postdoctoral Fellow!

07/2020: Presented my thesis work at the ACM SIGDA Ph.D. Forum at the 57th Design Automation Conference (DAC) 2020.

07/2020: Successfully defended my Ph.D. thesis: Towards Energy Efficient and Reliable Deep Learning Inference. Thesis committee members: Prof. Siddharth Garg, Prof. Ramesh Karri, Prof. Tushar Krishna, Prof. Sundeep Rangan, and Prof. Brandon Reagen.  [Slides][Video]

07/2020: Presented our work "Model-Switching: Dealing with Fluctuating Workloads in MLaaS Systems" (in collaboration with Microsoft Research and Microsoft Advertising) at USENIX HotCloud'20! [Paper] [Slides] [Video]

06/2020: Our proposal on "Resource Constrained Mobile Data Analytics Assisted by the Wireless Edge" was funded by NSF and Intel!

03/2020: Participated in the semifinals of the TTTC E. J. McCluskey Best Doctoral Thesis Award 2020 at the IEEE VLSI Test Symposium.

03/2020: Attended the ACM SIGDA DATE 2020 Ph.D. Forum and was nominated for the Best Presentation Award.