Yao-Xiang Ding 0001
Person information
- affiliation: Zhejiang University, State Key Lab of CAD & CG, China
- affiliation: Nanjing University, National Key Laboratory for Novel Software Technology, China
Other persons with the same name
- Yaoxiang Ding 0002 — Peking University, Speech and Hearing Research Center, Key Lab of Machine Perception, Beijing, China
2020 – today
- 2025
[j6]Zipei Chen, Yumeng Li, Zhong Ren, Yao-Xiang Ding, Kun Zhou:
Appearance as reliable evidence: Reconciling appearance and generative priors for monocular motion estimation. Comput. Graph. 132: 104404 (2025)
[j5]Jingming Liu, Yumeng Li, Boyuan Xiao, Yichang Jian, Ziang Qin, Tianjia Shao, Yao-Xiang Ding, Kun Zhou:
Autonomous Imagination: Closed-Loop Decomposition of Visual-to-Textual Conversion in Visual Reasoning for Multimodal Large Language Models. Trans. Mach. Learn. Res. 2025 (2025)
[j4]Yuting Tang, Xin-Qiang Cai, Yao-Xiang Ding, Qiyu Wu, Guoqing Liu, Masashi Sugiyama:
Reinforcement Learning from Bagged Reward. Trans. Mach. Learn. Res. 2025 (2025)
[j3]Lanjihong Ma, Yao-Xiang Ding, Peng Zhao, Zhi-Hua Zhou:
Learning Objective Adaptation by Correlation-Based Model Reuse. IEEE Trans. Neural Networks Learn. Syst. 36(8): 14440-14451 (2025)
[c14]Qing Chang, Yao-Xiang Ding, Kun Zhou:
Enhancing Identity-Deformation Disentanglement in StyleGAN for One-Shot Face Video Re-Enactment. AAAI 2025: 1247-1255
[c13]Lanjihong Ma, Yao-Xiang Ding, Zhen-Yu Zhang, Zhi-Hua Zhou:
Achieving Nearly-Optimal Regret and Sample Complexity in Dueling Bandits with Applications in Online Recommendations. KDD (1) 2025: 1008-1019
[c12]Yifei Peng, Zijie Zha, Yu Jin, Zhexu Luo, Wang-Zhou Dai, Zhong Ren, Yao-Xiang Ding, Kun Zhou:
Generating by Understanding: Neural Visual Generation with Logical Symbol Groundings. KDD (2) 2025: 2291-2302
[c11]Bohong Chen, Yumeng Li, Youyi Zheng, Yao-Xiang Ding, Kun Zhou:
Motion-example-controlled Co-speech Gesture Generation Leveraging Large Language Models. SIGGRAPH (Conference Paper Track) 2025: 55:1-55:12
[i15]Yu Jin, Jingming Liu, Zhexu Luo, Yifei Peng, Ziang Qin, Wang-Zhou Dai, Yao-Xiang Ding, Kun Zhou:
Pre-Training Meta-Rule Selection Policy for Visual Generative Abductive Learning. CoRR abs/2503.06427 (2025)
[i14]Jingming Liu, Yumeng Li, Wei Shi, Yao-Xiang Ding, Hui Su, Kun Zhou:
Harnessing the Power of Reinforcement Learning for Language-Model-Based Information Retriever via Query-Document Co-Augmentation. CoRR abs/2506.18670 (2025)
[i13]Bohong Chen, Yumeng Li, Youyi Zheng, Yao-Xiang Ding, Kun Zhou:
Motion-example-controlled Co-speech Gesture Generation Leveraging Large Language Models. CoRR abs/2507.20220 (2025)
[i12]Yifei Peng, Yaoli Liu, Enbo Xia, Yu Jin, Wang-Zhou Dai, Zhong Ren, Yao-Xiang Ding, Kun Zhou:
Abductive Logical Rule Induction by Bridging Inductive Logic Programming and Multimodal Large Language Models. CoRR abs/2509.21874 (2025)
[i11]Yaoli Liu, Yao-Xiang Ding, Kun Zhou:
FreeFuse: Multi-Subject LoRA Fusion via Auto Masking at Test Time. CoRR abs/2510.23515 (2025)
- 2024
[j2]Yumeng Li, Bohong Chen, Zhong Ren, Yao-Xiang Ding, Libin Liu, Tianjia Shao, Kun Zhou:
CPoser: An Optimization-after-Parsing Approach for Text-to-Pose Generation Using Large Language Models. ACM Trans. Graph. 43(6): 196:1-196:13 (2024)
[c10]Yu-Cheng He, Yao-Xiang Ding, Han-Jia Ye, Zhi-Hua Zhou:
Learning Only When It Matters: Cost-Aware Long-Tailed Classification. AAAI 2024: 12411-12420
[c9]Yu Jin, Jingming Liu, Zhexu Luo, Yifei Peng, Ziang Qin, Wang-Zhou Dai, Yao-Xiang Ding, Kun Zhou:
Pre-Training Meta-Rule Selection Policy for Visual Generative Abductive Learning. IJCLR 2024: 163-180
[c8]Lanjihong Ma, Zhen-Yu Zhang, Yao-Xiang Ding, Zhi-Hua Zhou:
Handling Varied Objectives by Online Decision Making. KDD 2024: 2130-2140
[c7]Bohong Chen, Yumeng Li, Yao-Xiang Ding, Tianjia Shao, Kun Zhou:
Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation. ACM Multimedia 2024: 6774-6783
[i10]Yuting Tang, Xin-Qiang Cai, Yao-Xiang Ding, Qiyu Wu, Guoqing Liu, Masashi Sugiyama:
Reinforcement Learning from Bagged Reward: A Transformer-based Approach for Instance-Level Reward Redistribution. CoRR abs/2402.03771 (2024)
[i9]Bohong Chen, Yumeng Li, Yao-Xiang Ding, Tianjia Shao, Kun Zhou:
Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation. CoRR abs/2410.00464 (2024)
[i8]Yuting Tang, Xin-Qiang Cai, Jing-Cheng Pang, Qiyu Wu, Yao-Xiang Ding, Masashi Sugiyama:
Beyond Simple Sum of Delayed Rewards: Non-Markovian Reward Modeling for Reinforcement Learning. CoRR abs/2410.20176 (2024)
[i7]Jingming Liu, Yumeng Li, Boyuan Xiao, Yichang Jian, Ziang Qin, Tianjia Shao, Yao-Xiang Ding, Kun Zhou:
Enhancing Visual Reasoning with Autonomous Imagination in Multimodal Large Language Models. CoRR abs/2411.18142 (2024)
- 2023
[c6]Xin-Qiang Cai, Yao-Xiang Ding, Zi-Xuan Chen, Yuan Jiang, Masashi Sugiyama, Zhi-Hua Zhou:
Seeing Differently, Acting Similarly: Heterogeneously Observable Imitation Learning. ICLR 2023
[c5]Yi-Kai Zhang, Ting-Ji Huang, Yao-Xiang Ding, De-Chuan Zhan, Han-Jia Ye:
Model Spider: Learning to Rank Pre-Trained Models Efficiently. NeurIPS 2023
[i6]Yi-Kai Zhang, Ting-Ji Huang, Yao-Xiang Ding, De-Chuan Zhan, Han-Jia Ye:
Model Spider: Learning to Rank Pre-Trained Models Efficiently. CoRR abs/2306.03900 (2023)
[i5]Yifei Peng, Yu Jin, Zhexu Luo, Yao-Xiang Ding, Wang-Zhou Dai, Zhong Ren, Kun Zhou:
Generating by Understanding: Neural Visual Generation with Logical Symbol Groundings. CoRR abs/2310.17451 (2023)
[i4]Yumeng Li, Yaoxiang Ding, Zhong Ren, Kun Zhou:
QPoser: Quantized Explicit Pose Prior Modeling for Controllable Pose Generation. CoRR abs/2312.01104 (2023)
- 2022
[c4]Yao-Xiang Ding, Xi-Zhu Wu, Kun Zhou, Zhi-Hua Zhou:
Pre-Trained Model Reusability Evaluation for Small-Data Transfer Learning. NeurIPS 2022
- 2021
[c3]Xin-Qiang Cai, Yao-Xiang Ding, Yuan Jiang, Zhi-Hua Zhou:
Imitation Learning from Pixel-Level Demonstrations by HashReward. AAMAS 2021: 279-287
[i3]Xin-Qiang Cai, Yao-Xiang Ding, Zi-Xuan Chen, Yuan Jiang, Masashi Sugiyama, Zhi-Hua Zhou:
Seeing Differently, Acting Similarly: Imitation Learning with Heterogeneous Observations. CoRR abs/2106.09256 (2021)
- 2020
[c2]Yao-Xiang Ding, Zhi-Hua Zhou:
Boosting-Based Reliable Model Reuse. ACML 2020: 145-160
2010 – 2019
- 2019
[i2]Xin-Qiang Cai, Yao-Xiang Ding, Yuan Jiang, Zhi-Hua Zhou:
Expert-Level Atari Imitation Learning from Demonstrations Only. CoRR abs/1909.03773 (2019)
- 2018
[j1]Yao-Xiang Ding, Zhi-Hua Zhou:
Crowdsourcing with unsure option. Mach. Learn. 107(4): 749-766 (2018)
[c1]Yao-Xiang Ding, Zhi-Hua Zhou:
Preference Based Adaptation for Learning Objectives. NeurIPS 2018: 7839-7848
- 2016
[i1]Yao-Xiang Ding, Zhi-Hua Zhou:
Crowdsourcing with Unsure Option. CoRR abs/1609.00292 (2016)
last updated on 2026-01-23 02:57 CET by the dblp team
all metadata released as open data under CC0 1.0 license