Zheng Zhang's Homepage
Associate Professor [IEEE-Style Short Biography, Full CV (PDF)]
Department of Electrical & Computer Engineering, University of California, Santa Barbara (UCSB)
Department of Computer Science (joint appointment), UCSB
Department of Mathematics (joint appointment, effective 07/2019), UCSB
Education
Ph.D. in EECS, Massachusetts Institute of Technology
M.Phil. in EE, The University of Hong Kong
B.Eng. in EE, Huazhong University of Science & Technology
Contact
Email: zhengzhang [AT] ece [dot] ucsb [dot] edu
Phone: 805-893-7294
Address: 4109 Harold Frank Hall, University of California, Santa Barbara, CA 93106
Multiple Post-doc Openings: We have multiple openings
related to (1) efficient large language model (LLM) pre-training, (2)
energy-efficient on-device training, and (3) scientific machine learning
for EDA. We are collaborating with leading research groups from industry
(Intel, Amazon, Meta, HP Research Labs, Cadence) and government research
labs (ANL, NIST) on these topics. Currently, we are looking for
candidates with any of the following technical backgrounds:
-
Numerical optimization: stochastic optimization (e.g., theory of
SGD and its variants), derivative-free optimization (e.g.,
zeroth-order optimization; see the sketch after this list), and
distributed optimization. Note: we are looking for candidates with
strong theoretical and numerical backgrounds. Candidates who only
work on engineering applications of optimization methods do NOT
match this position.
-
High-performance computing (HPC) and GPU optimization: rich
experience in parallel/distributed computation for large-scale
training of deep learning models or for scientific computing on
large GPU clusters.
-
Edge AI hardware accelerators: hardware accelerator design with
integrated photonics or with FPGAs. We focus on training
accelerator design rather than inference engines.
-
Scientific machine learning for EDA: physics-informed neural
networks and/or operator learning for PDE simulation and
PDE-constrained optimization; small-data learning for design
modeling and optimization; uncertainty quantification and
uncertainty-aware optimization for chip design.
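For candidates unfamiliar with the term, below is a minimal Python
sketch of the two-point zeroth-order gradient estimator that underlies
much of derivative-free optimization. This is not code from our papers;
the function names, step sizes, and sample counts are all illustrative.

    import numpy as np

    def zo_grad(f, x, mu=1e-3, n_dirs=20):
        """Two-point zeroth-order gradient estimate of f at x.

        Averages (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random
        Gaussian directions u; only function evaluations are needed.
        """
        g = np.zeros_like(x)
        for _ in range(n_dirs):
            u = np.random.randn(*x.shape)
            g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
        return g / n_dirs

    # Zeroth-order gradient descent on a toy quadratic f(x) = ||x - 1||^2.
    f = lambda x: np.sum((x - 1.0) ** 2)
    x = np.zeros(5)
    for _ in range(300):
        x = x - 0.05 * zo_grad(f, x)
    print(np.round(x, 2))  # approaches the minimizer [1. 1. 1. 1. 1.]

This estimator is unbiased for the gradient of a Gaussian-smoothed
version of f, and its variance grows with the problem dimension; such
dimension-dependent trade-offs are central to the theory of these
methods.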
Interested candidates, please send me the following information via
email: your CV, representative publications, and the contact
information of two referees.
We also welcome PhD applicants with prior experience
in the above fields. Prior master's research experience is a plus but
not mandatory. PhD applicants should submit their applications via
our online graduate application system and mention my name in the
application.
Checklist for paper writing: I have prepared a
detailed checklist to help science/engineering graduate
students improve their paper writing.
To prospective PhD students:
Please read this document if you are thinking about
pursuing a PhD degree. The skill sets required for PhD research are very
different from those required for undergraduate study. An undergraduate
learns existing knowledge created by others (sometimes hundreds of
years ago), whereas a PhD student is expected to create new knowledge.
A student does not have to be super smart or have a perfect GPA to be
an excellent PhD student, but they do need to be self-motivated for
scientific research, curious about new and unknown fields, open-minded
to different opinions, and persistent when facing research challenges
(or even failures).
RESEARCH INTERESTS
We work at the intersection of computational data science (e.g.,
uncertainty quantification, tensor computation, scientific machine
learning) and hardware systems. Currently we focus on two broad directions:
-
Design automation: (1) uncertainty-aware design automation for
electronics, photonics, and quantum circuits; (2) small-data
and data-free scientific machine learning for multi-physics design of
3D ICs and chiplets.
-
Responsible AI systems: (1) tensor-compressed methods for
sustainable training of large AI models [e.g., foundation models (or
large language models)] and for resource-constrained on-device
learning (see the sketch after this list); (2) self-healing
machine learning systems.
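As a concrete illustration of the tensor-compressed direction above,
here is a minimal Python sketch of how a dense weight matrix can be
stored and applied in tensor-train (TT) format. The mode sizes and TT
ranks are illustrative assumptions, not settings from our papers.

    import numpy as np

    # A 1024x1024 dense layer (~1.05M parameters) is reshaped as a
    # 4-way tensor with mode sizes (32*32) x (32*32) and stored as two
    # small TT cores G_k of shape (r_{k-1}, in_k, out_k, r_k), r_0=r_2=1.
    in_modes, out_modes, rank = (32, 32), (32, 32), 8
    cores = [
        0.02 * np.random.randn(1, in_modes[0], out_modes[0], rank),
        0.02 * np.random.randn(rank, in_modes[1], out_modes[1], 1),
    ]

    def tt_matvec(cores, x):
        """Apply the TT-format weight to inputs x of shape (batch, 1024)."""
        t = x.reshape(-1, in_modes[0], in_modes[1])        # (b, i1, i2)
        t = np.einsum('bij,aiok->bjok', t, cores[0])       # contract i1
        t = np.einsum('bjok,kjpc->bopc', t, cores[1])      # contract i2, r1
        return t.reshape(-1, out_modes[0] * out_modes[1])  # (b, 1024)

    dense_params = (32 * 32) ** 2              # 1,048,576
    tt_params = sum(c.size for c in cores)     # 16,384, a ~64x reduction
    y = tt_matvec(cores, np.random.randn(4, 1024))

Training only the small TT cores, without ever forming the full dense
matrix, is what makes this style of parameterization attractive for
memory-limited pre-training and on-device learning.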
Our research is supported by both government funding agencies (e.g., NSF,
DOE, NIST) and industry (e.g., Meta, Intel, Amazon, and Samsung). We
actively collaborate with industrial research teams (e.g., Meta, Intel,
Cadence, HP, Amazon, and NVIDIA) to make practical impact.
RECENT NEWS:
-
[NeurIPS'2024] 09/25/2024: Our work CoMERA (see
the paper), a computing- and memory-efficient
rank-adaptive tensor-compressed (pre)-training method, has been
accepted by NeurIPS'2024. This work was led by our former postdoc
Dr. Zi Yang, in collaboration with Amazon and Meta.
-
[EMNLP'2024] 09/20/2024: Yifan Yang's paper (see
the draft) about a memory-efficient zeroth-order
tensor-train adaptation method for LLM fine-tuning has been accepted
by EMNLP'2024. This is collaborative work with Amazon Alexa AI.
-
[DeepOHeat Codes] 09/2024: the source code of
DeepOHeat for 3D-IC thermal analysis has been released to the public
(see the link). This is collaborative work between our group and
Cadence.
-
[NIST research grant] 09/2024: we received a 3-year research
grant from NIST to investigate small-data and uncertainty-aware
design optimization methods for analog/RF integrated circuits
and systems.
-
[AI4Science Pre-training project] 09/2024: we will start
a 3-year DOE research project to investigate the theory,
algorithms, and HPC implementations for energy-efficient
pre-training of AI4Science foundation models. We will
collaborate closely with Argonne National Laboratory on this project.
Besides research funding, DOE will offer access to hundreds
to thousands of state-of-the-art GPUs for us to pre-train
extreme-scale AI foundation models.
-
[ISIT Paper on coded tensor computation] 07/2024: our
collaborative paper with Prof. Haewon Jeong was presented at the IEEE
International Symposium on Information Theory (ISIT) held in
Athens, Greece. This work was led by Jin Lee (a PhD student of
Prof. Jeong), and it investigated an interesting topic: how
coded computing can be extended from matrices to tensors to help
quantum circuit simulation.
-
[PhD defense] 07/03/2024: Zhuotong Chen finished his
thesis defense, and he has joined Amazon to work on large
language models (LLMs). Congratulations!
-
[Intel Research Project] 07/01/2024: we just started a
new research project with Intel to investigate multi-physics
modeling and optimization of 3D integrated circuits and systems.
We also have ongoing collaboration with Intel in the direction
of on-device AI training, and we are excited to expand our
research collaboration.
-
[TQE Paper] 06/2024: Zichang He's
paper about
quantum circuit optimization under imperfect uncertainty
description has been published by IEEE Trans. Quantum Engineering.
-
[NAACL Oral Paper] 06/16/2024: Yifan Yang and Jiajun Zhou
will present their
LoRETTA paper at NAACL'2024
in Mexico City, Mexico. The paper was selected as an oral
presentation (top 5% of the whole conference) and results from
our collaboration with Amazon.
-
[NSF Project with HP Research Labs] 06/13/2024: we will
start a 3-year NSF project to collaborate with HP Research Labs
on scalable photonic on-device training for scientific
computing. We look forward to the research results from this
academia-industry collaboration.
-
[TMLR paper] 03/2024: Zhuotong Chen's
paper about self-healing methods for robust large language
models (LLMs) has been published by TMLR.
-
[PhD defense] 08/07/2023: Zichang He finished his thesis
defense, and he has joined JP Morgan to work on quantum
computing. Congratulations!
-
[Faculty job] 07/31/2023: Our postdoctoral associate Zi Yang
has joined SUNY Albany (State University of New York at Albany)
as an Assistant Professor of Mathematics and Data Science.
Congratulations, Zi!
-
[JMLR paper] 10/04/2022: Zhuotong's journal paper
"Self-healing
robust neural networks via closed-loop control" is
accepted by the Journal of Machine Learning Research.
SELECTED PUBLICATIONS
-
Z. Liu, Y. Li, J. Hu, X. Yu, X. Ai, Z. Zeng, and Z. Zhang, "DeepOHeat:
Operator learning-based ultra-fast thermal simulation in 3D-IC
design," ACM/IEEE Design Automation Conference (DAC),
pp. 1-6, San Francisco, CA, June 2023.
-
Y. Zhao, X. Yu, Z. Chen, Z. Liu, S. Liu and Z. Zhang,
"Tensor-compressed back-propagation-free training for
(physics-informed) neural networks," arXiv:2308.09858, Aug. 2023.
-
Z. Chen, Q. Li and Z. Zhang, "Self-healing
robust neural networks via closed-loop control,"
Journal of Machine Learning
Research, vol. 23, no. 319, pp. 1-54, 2022.
-
C. Hawkins, X. Liu and Z. Zhang, "Towards
compact neural networks via end-to-end training: a Bayesian tensor
approach with automatic rank determination,"
SIAM Journal on Mathematics of Data Science, vol. 4, no. 1,
pp. 46-71, Jan. 2022.
-
Z. He and Z. Zhang, "High-dimensional uncertainty
quantification via tensor regression with rank determination and
adaptive sampling," IEEE Trans. Components,
Packaging and Manufacturing Technology, vol. 11, no. 9, pp.
1317-1328, Sept. 2021. (invited paper, the conference version received
the best paper award at EPEPS'2020).
-
K. Zhang, C. Hawkins, X. Zhang, C. Hao and Z. Zhang, "On-FPGA
training with ultra memory reduction: A low-precision tensor method,"
ICLR Workshop on Hardware-Aware Efficient Training (HAET), May 2021.
-
Z. Chen*, Q. Li* and Z. Zhang, "Towards
robust neural networks via close-loop control,"
International Conference on Learning Representations (ICLR), 2021.
(*Equally contributing authors)
-
C. Cui and Z. Zhang, "Stochastic
collocation with non-Gaussian correlated process variations: Theory,
algorithms and applications,"
IEEE Trans. Components, Packaging and Manufacturing
Technology, vol. 9, no. 7, pp. 1362-1375, July 2019.
(arXiv:1808.09720),
Matlab codes,
Best Paper Award
-
Z. Zhang, T.-W. Weng and L. Daniel,
"Big-data
tensor recovery for high-dimensional uncertainty quantification of
process variations," IEEE Trans.
Components, Packaging and Manufacturing Technology, vol. 7, no.
5, pp. 687-697, May 2017. Best Paper Award
-
Z. Liu and Z. Zhang, "Quantum-inspired
Hamiltonian Monte Carlo for Bayesian sampling,"
submitted to Journal of Machine Learning Research (arXiv:1912.01937)
-
Z. Zhang, K. Batselier, H.
Liu, L. Daniel and N. Wong, "Tensor computation: A new framework for
high-dimensional problems in EDA," IEEE Trans.
Computer-Aided Design of Integrated Circuits and Systems, vol.
36, no. 4, pp. 521-536, April 2017.
Invited Keynote Paper
-
Z. Zhang, T. A. El-Moselhy, I. M. Elfadel and L. Daniel,
"Stochastic testing method for transistor-level uncertainty
quantification based on generalized polynomial chaos,"
IEEE
Trans. Computer-Aided Design of Integrated Circuits and Systems
(TCAD), vol. 32, no. 10, pp. 1533-1545, Oct. 2013.
Donald O. Pederson TCAD Best Paper Award
-
Z. Zhang, X. Yang, I. V. Oseledets, G. E. Karniadakis and
L. Daniel, "Enabling high-dimensional hierarchical uncertainty
quantification by ANOVA and tensor-train decomposition,"
IEEE Trans. Computer-Aided Design of Integrated
Circuits and Systems, vol. 34, no. 1, pp. 63-76, Jan. 2015.
More publications...
HONORS & AWARDS
-
2022: Meta Research Award.
-
2021: ACM SIGDA Outstanding New Faculty Award (link);
IEEE CEDA Ernest S. Kuh Early Career Award (link).
-
2020: Best Paper Award of IEEE Trans. on Components,
Packaging and Manufacturing Technology (link
to paper); Facebook Research Award;
Best Student Paper Award at EPEPS (by PhD advisee Zichang
He, link to
paper).
-
2019: NSF CAREER Award; Rising Stars in Computational and Data Sciences (by
my advisee Chunfeng Cui); Rising
Stars in EECS (by my advisee Chunfeng Cui).
-
2018: Best Paper Award of IEEE Transactions on
Components, Packaging and Manufacturing Technology (link
to paper); Best
Conference Paper
Award at IEEE EPEPS (link
to paper).
-
2016: ACM Outstanding PhD Dissertation Award in
Electronic Design Automation (link);
Best Paper Award at International Workshop on Signal and
Power Integrity.
-
2015: MIT Microsystems Technology Labs (MTL) Doctoral Dissertation Seminar Award (link).
-
2014: Donald O. Pederson Best Paper Award of IEEE
Transactions on CAD of Integrated Circuits and Systems (
link);
Best Paper Nomination at IEEE CICC.
-
2011:
Li Ka-Shing Prize (best M.Phil./Ph.D. thesis award) from the
University of Hong Kong (link);
best paper nominations at ICCAD 2011 and ASP-DAC 2011.
PROFESSIONAL SERVICE
-
Associate Editor: ACM SIGDA Newsletters
(2018-2019);
-
TPC Member: ICCAD (2016-2018), DAC
(2017, 2018);
-
Award Committee: ACM SIGDA Best
Dissertation Award Committee (2018), DAC Best Paper Award Committee
(2018), ICCAD Best Paper Award Committee (2018)