


Xiaohan Chen 0001
Person information
- affiliation: Alibaba Group, DAMO Academy, Decision Intelligence Lab, USA
- affiliation (2020 - 2022): University of Texas at Austin, Department of Electrical and Computer Engineering, Austin, TX, USA
- affiliation (PhD 2020): Texas A&M University, Department of Computer Science and Engineering, College Station, TX, USA
Other persons with the same name
- Xiaohan Chen — disambiguation page
- Xiaohan Chen 0002 — West Virginia University, Department of Computer Science and Electrical Engineering, Morgantown, WV, USA
- Xiaohan Chen 0003 — Xi'an Jiaotong-Liverpool University, School of Advanced Technology, Suzhou, China (and 1 more)
2020 – today
- 2025
[c30]Ziang Chen, Xiaohan Chen, Jialin Liu, Xinshang Wang, Wotao Yin:
Expressive Power of Graph Neural Networks for (Mixed-Integer) Quadratic Programs. ICML 2025
- 2024
[j4]Haoyu Peter Wang, Jialin Liu, Xiaohan Chen, Xinshang Wang, Pan Li, Wotao Yin:
DIG-MILP: a Deep Instance Generator for Mixed-Integer Linear Programming with Feasibility Guarantee. Trans. Mach. Learn. Res. 2024 (2024)
[c29]Ziang Chen, Jialin Liu, Xiaohan Chen, Xinshang Wang, Wotao Yin:
Rethinking the Capacity of Graph Neural Networks for Branching Strategy. NeurIPS 2024
[i27]Ziang Chen, Jialin Liu, Xiaohan Chen, Xinshang Wang, Wotao Yin:
Rethinking the Capacity of Graph Neural Networks for Branching Strategy. CoRR abs/2402.07099 (2024)
[i26]Xiaohan Chen, Jialin Liu, Wotao Yin:
Learning to optimize: A tutorial for continuous and mixed-integer optimization. CoRR abs/2405.15251 (2024)
[i25]Ziang Chen, Xiaohan Chen, Jialin Liu, Xinshang Wang, Wotao Yin:
Expressive Power of Graph Neural Networks for (Mixed-Integer) Quadratic Programs. CoRR abs/2406.05938 (2024)
[i24]Qiming Wu, Xiaohan Chen, Yifan Jiang, Zhangyang Wang:
Chasing Better Deep Image Priors between Over- and Under-parameterization. CoRR abs/2410.24187 (2024)
- 2023
[j3]Qiming Wu, Xiaohan Chen, Yifan Jiang, Zhangyang Wang:
Chasing Better Deep Image Priors between Over- and Under-parameterization. Trans. Mach. Learn. Res. 2023 (2023)
[j2]Xiaohan Chen, Yang Zhao, Yue Wang, Pengfei Xu, Haoran You, Chaojian Li, Yonggan Fu, Yingyan Lin, Zhangyang Wang:
SmartDeal: Remodeling Deep Network Weights for Efficient Inference and Training. IEEE Trans. Neural Networks Learn. Syst. 34(10): 7099-7113 (2023)
[c28]Howard Heaton, Xiaohan Chen, Zhangyang Wang, Wotao Yin:
Safeguarded Learned Convex Optimization. AAAI 2023: 7848-7855
[c27]Ruisi Cai, Xiaohan Chen, Shiwei Liu, Jayanth Srinivasa, Myungjin Lee, Ramana Kompella, Zhangyang Wang:
Many-Task Federated Learning: A New Problem Setting and A Simple Baseline. CVPR Workshops 2023: 5037-5045
[c26]Shiwei Liu, Tianlong Chen, Xiaohan Chen, Xuxi Chen, Qiao Xiao, Boqian Wu, Tommi Kärkkäinen, Mykola Pechenizkiy, Decebal Constantin Mocanu, Zhangyang Wang:
More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity. ICLR 2023
[c25]Jialin Liu, Xiaohan Chen, Zhangyang Wang, Wotao Yin, HanQin Cai:
Towards Constituting Mathematical Structures for Learning to Optimize. ICML 2023: 21426-21449
[i23]Jialin Liu, Xiaohan Chen, Zhangyang Wang, Wotao Yin, HanQin Cai:
Towards Constituting Mathematical Structures for Learning to Optimize. CoRR abs/2305.18577 (2023)
[i22]Haoyu Wang, Jialin Liu, Xiaohan Chen, Xinshang Wang, Pan Li, Wotao Yin:
DIG-MILP: a Deep Instance Generator for Mixed-Integer Linear Programming with Feasibility Guarantee. CoRR abs/2310.13261 (2023)
- 2022
[j1]Tianlong Chen, Xiaohan Chen, Wuyang Chen, Howard Heaton, Jialin Liu, Zhangyang Wang, Wotao Yin:
Learning to Optimize: A Primer and A Benchmark. J. Mach. Learn. Res. 23: 189:1-189:59 (2022)
[c24]Sameer Bibikar, Haris Vikalo, Zhangyang Wang, Xiaohan Chen:
Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better. AAAI 2022: 6080-6088
[c23]Allen-Jasmin Farcas, Xiaohan Chen, Zhangyang Wang, Radu Marculescu:
Model elasticity for hardware heterogeneity in federated learning systems. FedEdge@MobiCom 2022: 19-24
[c22]Xiaohan Chen, Jason Zhang, Zhangyang Wang:
Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently. ICLR 2022
[c21]Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu:
Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity. ICLR 2022
[c20]Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy:
The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training. ICLR 2022
[c19]Ruisi Cai, Zhenyu Zhang, Tianlong Chen, Xiaohan Chen, Zhangyang Wang:
Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets. NeurIPS 2022
[i21]Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy:
The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training. CoRR abs/2202.02643 (2022)
[i20]Shiwei Liu, Tianlong Chen, Xiaohan Chen, Xuxi Chen, Qiao Xiao, Boqian Wu, Mykola Pechenizkiy, Decebal Constantin Mocanu, Zhangyang Wang:
More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity. CoRR abs/2207.03620 (2022)
- 2021
[c18]Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Zhangyang Wang, Jingjing Liu:
EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets. ACL/IJCNLP (1) 2021: 2195-2207
[c17]Lida Zhang, Xiaohan Chen, Tianlong Chen, Zhangyang Wang, Bobak J. Mortazavi:
DynEHR: Dynamic adaptation of models with data heterogeneity in electronic health records. BHI 2021: 1-4
[c16]Tianjian Meng, Xiaohan Chen, Yifan Jiang, Zhangyang Wang:
A Design Space Study for LISTA and Beyond. ICLR 2021
[c15]Jiayi Shen, Xiaohan Chen, Howard Heaton, Tianlong Chen, Jialin Liu, Wotao Yin, Zhangyang Wang:
Learning A Minimax Optimizer: A Pilot Study. ICLR 2021
[c14]Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu:
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration. NeurIPS 2021: 9908-9922
[c13]Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin:
Hyperparameter Tuning is All You Need for LISTA. NeurIPS 2021: 11678-11689
[c12]Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, Xiaohan Chen, Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, Yanzhi Wang:
Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot? NeurIPS 2021: 12749-12760
[c11]Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Jingjing Liu, Zhangyang Wang:
The Elastic Lottery Ticket Hypothesis. NeurIPS 2021: 26609-26621
[i19]Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Zhangyang Wang, Jingjing Liu:
EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets. CoRR abs/2101.00063 (2021)
[i18]Xiaohan Chen, Yang Zhao, Yue Wang, Pengfei Xu, Haoran You, Chaojian Li, Yonggan Fu, Yingyan Lin, Zhangyang Wang:
SmartDeal: Re-Modeling Deep Network Weights for Efficient Inference and Training. CoRR abs/2101.01163 (2021)
[i17]Tianlong Chen, Xiaohan Chen, Wuyang Chen, Howard Heaton, Jialin Liu, Zhangyang Wang, Wotao Yin:
Learning to Optimize: A Primer and A Benchmark. CoRR abs/2103.12828 (2021)
[i16]Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Jingjing Liu, Zhangyang Wang:
The Elastic Lottery Ticket Hypothesis. CoRR abs/2103.16547 (2021)
[i15]Tianjian Meng, Xiaohan Chen, Yifan Jiang, Zhangyang Wang:
A Design Space Study for LISTA and Beyond. CoRR abs/2104.04110 (2021)
[i14]Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu:
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration. CoRR abs/2106.10404 (2021)
[i13]Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu:
FreeTickets: Accurate, Robust and Efficient Deep Ensemble by Training with Dynamic Sparsity. CoRR abs/2106.14568 (2021)
[i12]Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, Xiaohan Chen, Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, Yanzhi Wang:
Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot? CoRR abs/2107.00166 (2021)
[i11]Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin:
Hyperparameter Tuning is All You Need for LISTA. CoRR abs/2110.15900 (2021)
[i10]Sameer Bibikar, Haris Vikalo, Zhangyang Wang, Xiaohan Chen:
Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better. CoRR abs/2112.09824 (2021)
- 2020
[c10]Zepeng Huo, Arash Pakbin, Xiaohan Chen, Nathan C. Hurley, Ye Yuan, Xiaoning Qian, Zhangyang Wang, Shuai Huang, Bobak Mortazavi:
Uncertainty Quantification for Deep Context-Aware Mobile Activity Recognition and Unknown Context Discovery. AISTATS 2020: 3894-3904
[c9]Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Richard G. Baraniuk, Zhangyang Wang, Yingyan Lin:
Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks. ICLR 2020
[c8]Yang Zhao, Xiaohan Chen, Yue Wang, Chaojian Li, Haoran You, Yonggan Fu, Yuan Xie, Zhangyang Wang, Yingyan Lin:
SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation. ISCA 2020: 954-967
[c7]Xiaohan Chen, Zhangyang Wang, Siyu Tang, Krikamol Muandet:
MATE: Plugging in Model Awareness to Task Embedding for Meta Learning. NeurIPS 2020
[c6]Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin:
ShiftAddNet: A Hardware-Inspired Deep Network. NeurIPS 2020
[i9]Zepeng Huo, Arash Pakbin, Xiaohan Chen, Nathan C. Hurley, Ye Yuan, Xiaoning Qian, Zhangyang Wang, Shuai Huang, Bobak Mortazavi:
Uncertainty Quantification for Deep Context-Aware Mobile Activity Recognition and Unknown Context Discovery. CoRR abs/2003.01753 (2020)
[i8]Howard Heaton, Xiaohan Chen, Zhangyang Wang, Wotao Yin:
Safeguarded Learned Convex Optimization. CoRR abs/2003.01880 (2020)
[i7]Yang Zhao, Xiaohan Chen, Yue Wang, Chaojian Li, Haoran You, Yonggan Fu, Yuan Xie, Zhangyang Wang, Yingyan Lin:
SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation. CoRR abs/2005.03403 (2020)
[i6]Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin:
ShiftAddNet: A Hardware-Inspired Deep Network. CoRR abs/2010.12785 (2020)
2010 – 2019
- 2019
[c5]Jialin Liu, Xiaohan Chen, Zhangyang Wang, Wotao Yin:
ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA. ICLR (Poster) 2019
[c4]Ernest K. Ryu, Jialin Liu, Sicheng Wang, Xiaohan Chen, Zhangyang Wang, Wotao Yin:
Plug-and-Play Methods Provably Converge with Properly Trained Denoisers. ICML 2019: 5546-5557
[c3]Yue Wang, Ziyu Jiang, Xiaohan Chen, Pengfei Xu, Yang Zhao, Yingyan Lin, Zhangyang Wang:
E2-Train: Training State-of-the-art CNNs with Over 80% Energy Savings. NeurIPS 2019: 5139-5151
[i5]Ernest K. Ryu, Jialin Liu, Sicheng Wang, Xiaohan Chen, Zhangyang Wang, Wotao Yin:
Plug-and-Play Methods Provably Converge with Properly Trained Denoisers. CoRR abs/1905.05406 (2019)
[i4]Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Yingyan Lin, Zhangyang Wang, Richard G. Baraniuk:
Drawing early-bird tickets: Towards more efficient training of deep networks. CoRR abs/1909.11957 (2019)
[i3]Yue Wang, Ziyu Jiang, Xiaohan Chen, Pengfei Xu, Yang Zhao, Yingyan Lin, Zhangyang Wang:
E2-Train: Energy-Efficient Deep Network Training with Data-, Model-, and Algorithm-Level Saving. CoRR abs/1910.13349 (2019)
- 2018
[c2]Nitin Bansal, Xiaohan Chen, Zhangyang Wang:
Can We Gain More from Orthogonality Regularizations in Training Deep Networks? NeurIPS 2018: 4266-4276
[c1]Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin:
Theoretical Linear Convergence of Unfolded ISTA and Its Practical Weights and Thresholds. NeurIPS 2018: 9079-9089
[i2]Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin:
Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds. CoRR abs/1808.10038 (2018)
[i1]Nitin Bansal, Xiaohan Chen, Zhangyang Wang:
Can We Gain More from Orthogonality Regularizations in Training Deep CNNs? CoRR abs/1810.09102 (2018)