 Zheng Zhang's Homepage


Associate Professor [IEEE-Style Short Biography, Full CV (PDF)]
Department of Electrical & Computer Engineering, University of California, Santa Barbara (UCSB)
Department of Computer Science (joint appointment), UCSB
Department of Mathematics (joint appointment, effective 07/2019), UCSB
Education
Ph.D. in EECS, Massachusetts Institute of Technology
M.Phil. in EE, The University of Hong Kong
B.Eng. in EE, Huazhong University of Science & Technology
Contact
Email: zhengzhang [AT] ece [dot] ucsb [dot] edu Phone: 805-893-7294
Address: 4109 Harold Frank Hall, University of California, Santa Barbara, CA 93106

Multiple Post-doc Openings: We have multiple openings related to (1) efficient large language model (LLM) pre-training, (2) energy-efficient on-device training, and (3) scientific machine learning for EDA. We are collaborating with leading research groups from industry (Intel, Amazon, Meta, HP Research Labs, Cadence) and government research labs (ANL, NIST) on these topics. Currently, we are looking for candidates with any of the following technical backgrounds:

  • Numerical optimization: stochastic optimization (e.g., the theory of SGD and its variants), derivative-free optimization (e.g., zeroth-order optimization; see the sketch after this list), and distributed optimization. Note: we are looking for candidates with a strong theoretical and numerical background. Candidates whose work focuses on engineering applications of optimization methods do NOT match this position.

  • High-performance computing (HPC) and GPU optimization: extensive experience in parallel/distributed computing for large-scale training of deep learning models, or for scientific computing on massive GPU clusters.

  • Edge AI hardware accelerators: hardware accelerator design with integrated photonics or with FPGA. We focus on training accelerator design rather than inference engines. 

  • Scientific machine learning for EDA: physics-informed neural networks and/or operator learning for PDE simulation and/or PDE-constrained optimization, small-data learning for design modeling and optimization, uncertainty quantification and uncertainty-aware optimization for chip design.
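For context on the first bullet, here is a minimal sketch of a two-point (Gaussian-smoothing) zeroth-order gradient estimator, a standard construction in derivative-free optimization; the test function, step size, and sample count are illustrative choices, not the setup of any specific project in our group:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_samples=20):
    """Estimate grad f(x) using only function evaluations
    (two-point, Gaussian-smoothing zeroth-order estimator)."""
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)                    # random probe direction
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_samples

# Usage: a few "ZO-SGD" steps on f(x) = ||x||^2. The iterate approaches 0
# using only function values -- no analytic or autodiff gradient is needed.
f = lambda x: float(x @ x)
x = np.ones(5)
for _ in range(100):
    x = x - 0.05 * zo_gradient(f, x)
print(f(x))  # close to 0
```

Because such estimators need only forward evaluations, they can sharply reduce memory during fine-tuning; this is the idea behind the memory-efficient zeroth-order LLM fine-tuning work mentioned in the news below.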

Interested candidates should send me the following information via email: a CV, representative publications, and contact information for two referees.

We also welcome PhD applicants with prior experience in the above fields. Prior master's research experience is a plus but not mandatory. PhD applicants should submit their applications via our online graduate application system and mention my name in the application.


Checklist for paper writing: I have prepared a detailed checklist to help science/engineering graduate students improve their paper writing.

To prospective PhD students: Please read this document if you are thinking about pursuing a PhD degree. The skill set required for PhD research is very different from that required for undergraduate study. An undergraduate student learns existing knowledge created by others (possibly a few hundred years ago), whereas a PhD student is expected to create new knowledge. A student does not have to be super smart or have a perfect GPA to be an excellent PhD student, but he/she needs to be self-motivated for scientific research, curious about unknown/new fields, open-minded to different opinions, and persistent when facing research challenges (or even failures).

 


RESEARCH INTERESTS

We work at the intersection of computational data science (e.g., uncertainty quantification, tensor computation, scientific machine learning) and hardware systems. Currently we focus on two broad directions:

  • Design automation: (1) uncertainty-aware design automation for electronics, photonics, and quantum circuits; (2) small-data and data-free scientific machine learning for the multi-physics design of 3D ICs and chiplets.

  • Responsible AI systems: (1) tensor-compressed methods for sustainable training of large AI models [e.g., foundation models (or large language models)] and for resource-constrained on-device learning (see the sketch after this list); (2) self-healing machine learning systems.
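As a minimal illustration of the tensor-compressed idea in the second bullet, the sketch below reshapes a 1024 x 1024 weight matrix into a 4-way tensor and factorizes it with a plain TT-SVD; the mode sizes and the fixed rank are arbitrary illustrative choices, not the settings of any of our papers:

```python
import numpy as np

dims, rank = [32, 32, 32, 32], 8      # reshape 1024 x 1024 into a 4-way tensor
W = np.random.randn(1024, 1024)
rem, r_prev, cores = W.reshape(dims), 1, []

# Plain TT-SVD: sweep the modes left to right, truncating each unfolding
# to the target rank and keeping the truncated factor as a TT core.
for d in dims[:-1]:
    mat = rem.reshape(r_prev * d, -1)
    U, S, Vt = np.linalg.svd(mat, full_matrices=False)
    r = min(rank, S.size)
    cores.append(U[:, :r].reshape(r_prev, d, r))
    rem, r_prev = S[:r, None] * Vt[:r], r
cores.append(rem.reshape(r_prev, dims[-1], 1))

print(sum(c.size for c in cores), "TT parameters vs", W.size, "dense parameters")
# ~4.6K vs ~1.05M: training updates the small cores rather than the dense matrix.
```

Storing and updating the small TT cores instead of the dense weight matrix is what makes training memory- and compute-efficient; choosing the ranks adaptively during training is the subject of the rank-adaptive CoMERA work in the news below.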

Our research is supported by both government funding agencies (e.g., NSF, DOE, NIST) and industry (e.g., Meta, Intel, Amazon, and Samsung). We actively collaborate with industrial research teams (e.g., Meta, Intel, Cadence, HP, Amazon, and NVIDIA) to make practical impact.


RECENT NEWS:

  • [NeurIPS'2024] 09/25/2024: Our work CoMERA (see the paper), a computing- and memory-efficient rank-adaptive tensor-compressed (pre-)training method, is accepted by NeurIPS'2024. This work was led by our former postdoc Dr. Zi Yang, in collaboration with Amazon and Meta.

  • [EMNLP'2024] 09/20/2024: Yifan Yang's paper (see the draft) about memory-efficient zeroth-order tensor-train adaptation method for LLM fine-tuning is accepted by EMNLP'2024. This is a collaborative work with Amazon Alexa AI.

  • [DeepOHeat Codes] 09/2024: The source code of DeepOHeat for 3D-IC thermal analysis has been released to the public (see the link). This is a collaborative work between our group and Cadence.

  • [NIST research grant] 09/2024: We received a 3-year research grant from NIST to investigate small-data and uncertainty-aware design optimization methods for analog/RF integrated circuits and systems.

  • [AI4Science Pre-training project] 09/2024: We will start a 3-year DOE research project to investigate the theory, algorithms, and HPC implementations for energy-efficient pre-training of AI4Science foundation models. We will collaborate closely with Argonne National Laboratory on this project. Besides research funding, DOE will offer access to hundreds to thousands of state-of-the-art GPUs for us to pre-train extreme-scale AI foundation models.

  • [ISIT Paper on coded tensor computation] 07/2024: Our collaborative paper with Prof. Haewon Jeong is presented at the IEEE International Symposium on Information Theory (ISIT) held in Athens, Greece. This work was led by Jin Lee (a PhD student of Prof. Jeong), and it investigated an interesting topic: how coded computing can be extended from matrices to tensors to help quantum circuit simulation.

  • [PhD defense] 07/03/2024: Zhuotong Chen finished his thesis defense, and he has joined Amazon to work on large language models (LLMs). Congratulations!

  • [Intel Research Project] 07/01/2024: We just started a new research project with Intel to investigate multi-physics modeling and optimization of 3D integrated circuits and systems. We also have ongoing collaboration with Intel in the direction of on-device AI training, and we are excited to expand our research collaboration.

  • [TQE Paper] 06/2024: Zichang He's paper about quantum circuit optimization under imperfect uncertainty description is published by IEEE Trans. Quantum Engineering.

  • [NAACL Oral Paper] 06/16/2024: Yifan Yang and Jiajun Zhou will present their LoRETTA paper at NAACL'2024 held in Mexico City, Mexico. This paper is selected as an oral paper (top 5%) of the whole conference. This paper results from our collaboration with Amazon.

  • [NSF Project with HP Research Labs] 06/13/2024: we will start a 3-year NSF project to collaborate with HP Research Labs on scalable photonic on-device training for scientific computing. We look forward to the research results from this academia-industry collaboration.

  • [TMLR paper] 03/2024: Zhuotong Chen's paper about self-healing methods for robust large language models (LLMs) is published by TMLR.

  • [PhD defense] 08/07/2023: Zichang He finished his thesis defense, and he has joined JP Morgan to work on quantum computing. Congratulations!

  • [Faculty job] 07/31/2023: Our postdoctoral associate Zi Yang has joined SUNY Albany (State University of New York at Albany) as an Assistant Professor of Mathematics and Data Science. Congratulations, Zi!

  • [JMLR paper] 10/04/2022: Zhuotong's journal paper "Self-healing robust neural networks via closed-loop control" is accepted by the Journal of Machine Learning Research.

 


SELECTED PUBLICATIONS

More publications...


ACADEMIC AWARDS

  • 2022: Meta Research Award.

  • 2021: ACM SIGDA Outstanding New Faculty Award (link); IEEE CEDA Ernest S. Kuh Early Career Award (link).

  • 2020: Best Paper Award of IEEE Trans. on Components, Packaging and Manufacturing Technology (link to paper); Facebook Research Award; Best Student Paper Award at EPEPS (by PhD advisee Zichang He, link to paper).

  • 2019: NSF CAREER Award; Rising Stars in Computational and Data Sciences (by my advisee Chunfeng Cui); Rising Stars in EECS (by my advisee Chunfeng Cui).

  • 2018: Best Paper Award of IEEE Transactions on Components, Packaging and Manufacturing Technology (link to paper); Best Conference Paper Award at IEEE EPEPS (link to paper).

  • 2016: ACM Outstanding PhD Dissertation Award in Electronic Design Automation (link); Best Paper Award at International Workshop on Signal and Power Integrity.

  • 2015: MIT Microsystems Technology Labs (MTL) Doctoral Dissertation Seminar Award (link).

  • 2014: Donald O. Pederson Best Paper Award of IEEE Transactions on CAD of Integrated Circuits and Systems (link); Best Paper Nomination at IEEE CICC.

  • 2011: Li Ka-Shing Prize (best M.Phil./Ph.D. thesis award) from the University of Hong Kong (link); Best Paper Nominations at ICCAD 2011 and ASP-DAC 2011.


ACADEMIC SERVICES

  • Associate Editor: ACM SIGDA Newsletters (2018-2019);

  • TPC Member: ICCAD (2016-2018), DAC (2017, 2018);

  • Award Committee: ACM SIGDA Best Dissertation Award Committee (2018), DAC Best Paper Award Committee (2018), ICCAD Best Paper Award Committee (2018)