


Fangcheng Fu
2020 – today
- 2026
[c30]Xinyi Liu, Yujie Wang, Fangcheng Fu, Xuefeng Xiao, Huixia Li, Jiashi Li, Bin Cui:
LAER-MoE: Load-Adaptive Expert Re-layout for Efficient Mixture-of-Experts Training. ASPLOS (2) 2026: 1055-1072
[c29]Xuanyu Wang, Fangcheng Fu, Haoyang Li, Hao Ge, Sheng Lin, Jiawen Niu, Bin Cui:
Elastor: Elastic and Efficient Model Partitioning and Checkpointing for Fault-Tolerant Distributed Training. PPoPP 2026: 398-412
[i33]Haoyang Li, Sheng Lin, Fangcheng Fu, Yuming Zhou, Xiaodong Ji, Yanfeng Zhao, Lefeng Wang, Jie Jiang, Bin Cui:
Unleashing Efficient Asynchronous RL Post-Training via Staleness-Constrained Rollout Coordination. CoRR abs/2601.12784 (2026)
- 2025
[j14]Pinxue Zhao, Hailin Zhang, Fangcheng Fu, Xiaonan Nie, Qibin Liu, Fang Yang, Yuanbo Peng, Dian Jiao, Shuaipeng Li, Jinbao Xue, Yangyu Tao, Bin Cui:
MEMO: Fine-grained Tensor Management For Ultra-long Context LLM Training. Proc. ACM Manag. Data 3(1): 53:1-53:28 (2025)
[j13]Haoyang Li, Fangcheng Fu, Hao Ge, Sheng Lin, Xuanyu Wang, Jiawen Niu, Yujie Wang, Hailin Zhang, Xiaonan Nie, Bin Cui:
Malleus: Straggler-Resilient Hybrid Parallel Training of Large-scale Models via Malleable Data and Model Parallelization. Proc. ACM Manag. Data 3(3): 185:1-185:28 (2025)
[j12]Hailin Zhang, Xiaodong Ji, Yilin Chen, Fangcheng Fu, Xupeng Miao, Xiaonan Nie, Weipeng Chen, Bin Cui:
PQCache: Product Quantization-based KVCache for Long Context LLM Inference. Proc. ACM Manag. Data 3(3): 201:1-201:30 (2025)
[j11]Haoyang Li, Fangcheng Fu, Sheng Lin, Hao Ge, Xuanyu Wang, Jiawen Niu, Jinbao Xue, Yangyu Tao, Di Wang, Jie Jiang, Bin Cui:
Hydraulis: Balancing Large Transformer Model Training via Co-designing Parallel Strategies and Data Assignment. Proc. ACM Manag. Data 3(6): 1-30 (2025)
[j10]Sheng Lin, Fangcheng Fu, Haoyang Li, Hao Ge, Xuanyu Wang, Jiawen Niu, Yaofeng Tu, Bin Cui:
LobRA: Multi-tenant Fine-tuning over Heterogeneous Data. Proc. VLDB Endow. 18(8): 2616-2625 (2025)
[j9]Xiaokai Zhou, Xiao Yan, Fangcheng Fu, Ziwen Fu, Tieyun Qian, Yuanyuan Zhu, Qinbo Zhang, Bin Cui, Jiawei Jiang:
PS-MI: Accurate, Efficient, and Private Data Valuation in Vertical Federated Learning. Proc. VLDB Endow. 18(10): 3559-3572 (2025)
[j8]Jiawei Jiang, Hao Huang, Zhigao Zheng, Yi Wei, Fangcheng Fu, Xiaosen Li, Bin Cui:
Detecting and Analyzing Motifs in Large-Scale Online Transaction Networks. IEEE Trans. Knowl. Data Eng. 37(2): 584-596 (2025)
[c28]Qinbo Zhang, Xiao Yan, Yukai Ding, Fangcheng Fu, Quanqing Xu, Ziyi Li, Chuang Hu, Jiawei Jiang:
HaCore: Efficient Coreset Construction with Locality Sensitive Hashing for Vertical Federated Learning. AAAI 2025: 22515-22523
[c27]Peichao Lai, Zhengfeng Zhang, Wentao Zhang, Fangcheng Fu, Bin Cui:
Enhancing Unsupervised Sentence Embeddings via Knowledge-Driven Data Augmentation and Gaussian-Decayed Contrastive Learning. ACL (1) 2025: 4919-4940
[c26]Yujie Wang, Shiju Wang, Shenhan Zhu, Fangcheng Fu, Xinyi Liu, Xuefeng Xiao, Huixia Li, Jiashi Li, Faming Wu, Bin Cui:
FlexSP: Accelerating Large Language Model Training via Flexible Sequence Parallelism. ASPLOS (2) 2025: 421-436
[c25]Yujie Wang, Shenhan Zhu, Fangcheng Fu, Xupeng Miao, Jie Zhang, Juan Zhu, Fan Hong, Yong Li, Bin Cui:
Spindle: Efficient Distributed Training of Multi-Task Large Models via Wavefront Scheduling. ASPLOS (2) 2025: 1139-1155
[c24]Peichao Lai, Jiaxin Gan, Feiyang Ye, Wentao Zhang, Fangcheng Fu, Yilei Wang, Bin Cui:
Improving Low-Resource Sequence Labeling with Knowledge Fusion and Contextual Label Explanations. EMNLP 2025: 5655-5674
[c23]Siqi Shen, Wentao Zhang, Chengshuo Du, Chong Chen, Fangcheng Fu, Yingxia Shao, Bin Cui:
Towards Scalable and Efficient Graph Structure Learning. ICDE 2025: 1759-1772
[c22]Xiaokai Zhou, Xiao Yan, Fangcheng Fu, Xinyan Li, Hao Huang, Quanqing Xu, Chuanhui Yang, Bo Du, Tieyun Qian, Jiawei Jiang:
Hounding Data Diversity: Towards Participant Selection in Vertical Federated Learning. ICDE 2025: 2810-2823
[c21]Xinyi Liu, Yujie Wang, Fangcheng Fu, Xupeng Miao, Shenhan Zhu, Xiaonan Nie, Bin Cui:
NetMoE: Accelerating MoE Training through Dynamic Sample Placement. ICLR 2025
[c20]Youhe Jiang, Fangcheng Fu, Xiaozhe Yao, Guoliang He, Xupeng Miao, Ana Klimovic, Bin Cui, Binhang Yuan, Eiko Yoneki:
Demystifying Cost-Efficiency in LLM Serving over Heterogeneous GPUs. ICML 2025
[c19]Qinbo Zhang, Xiao Yan, Yanfeng Zhao, Fangcheng Fu, Quanqing Xu, Yukai Ding, Xiaokai Zhou, Chuang Hu, Jiawei Jiang:
Model Rake: A Defense Against Stealing Attacks in Split Learning. IJCAI 2025: 7002-7010
[c18]Youhe Jiang, Fangcheng Fu, Xiaozhe Yao, Taiyi Wang, Bin Cui, Ana Klimovic, Eiko Yoneki:
ThunderServe: High-performance and Cost-efficient LLM Serving in Cloud Environments. MLSys 2025
[c17]Hao Ge, Junda Feng, Qi Huang, Fangcheng Fu, Xiaonan Nie, Lei Zuo, Haibin Lin, Bin Cui, Xin Liu:
ByteScale: Communication-Efficient Scaling of LLM Training with a 2048K Context Length on 16384 GPUs. SIGCOMM 2025: 963-978
[i32]Youhe Jiang, Fangcheng Fu, Xiaozhe Yao, Guoliang He, Xupeng Miao, Ana Klimovic, Bin Cui, Binhang Yuan, Eiko Yoneki:
Demystifying Cost-Efficiency in LLM Serving over Heterogeneous GPUs. CoRR abs/2502.00722 (2025)
[i31]Youhe Jiang, Fangcheng Fu, Xiaozhe Yao, Taiyi Wang, Bin Cui, Ana Klimovic, Eiko Yoneki:
ThunderServe: High-performance and Cost-efficient LLM Serving in Cloud Environments. CoRR abs/2502.09334 (2025)
[i30]Yifei Xia, Suhan Ling, Fangcheng Fu, Yujie Wang, Huixia Li, Xuefeng Xiao, Bin Cui:
Training-free and Adaptive Sparse Attention for Efficient Long Video Generation. CoRR abs/2502.21079 (2025)
[i29]Hao Ge, Junda Feng, Qi Huang, Fangcheng Fu, Xiaonan Nie, Lei Zuo, Haibin Lin, Bin Cui, Xin Liu:
ByteScale: Efficient Scaling of LLM Training with a 2048K Context Length on More Than 12,000 GPUs. CoRR abs/2502.21231 (2025)
[i28]Haoyang Li, Fangcheng Fu, Hao Ge, Sheng Lin, Xuanyu Wang, Jiawen Niu, Xupeng Miao, Bin Cui:
Hetu v2: A General and Scalable Deep Learning System with Hierarchical and Heterogeneous Single Program Multiple Data Annotations. CoRR abs/2504.20490 (2025)
[i27]Xinyi Liu, Yujie Wang, Shenhan Zhu, Fangcheng Fu, Qingshuo Liu, Guangming Lin, Bin Cui:
Galvatron: An Automatic Distributed System for Efficient Foundation Model Training. CoRR abs/2504.21411 (2025)
[i26]Yuhang Wang, Youhe Jiang, Bin Cui, Fangcheng Fu:
Thinking Short and Right Over Thinking Long: Serving LLM Reasoning Efficiently and Accurately. CoRR abs/2505.13326 (2025)
[i25]Xiaodong Ji, Hailin Zhang, Fangcheng Fu, Bin Cui:
SALE : Low-bit Estimation for Efficient Sparse Attention in Long-context LLM Prefilling. CoRR abs/2505.24179 (2025)
[i24]Youhe Jiang, Fangcheng Fu, Wanru Zhao, Stephan Rabanser, Nicholas D. Lane, Binhang Yuan:
Cascadia: A Cascade Serving System for Large Language Models. CoRR abs/2506.04203 (2025)
[i23]Qiming Zeng, Xiao Yan, Hao Luo, Yuhao Lin, Yuxiang Wang, Fangcheng Fu, Bo Du, Quanqing Xu, Jiawei Jiang:
How Significant Are the Real Performance Gains? An Unbiased Evaluation Framework for GraphRAG. CoRR abs/2506.06331 (2025)
[i22]Li Zhang, Youhe Jiang, Guoliang He, Xin Chen, Han Lv, Qian Yao, Fangcheng Fu, Kai Chen:
Efficient Mixed-Precision Large Language Model Inference with TurboMind. CoRR abs/2508.15601 (2025)
[i21]Sheng Lin, Fangcheng Fu, Haoyang Li, Hao Ge, Xuanyu Wang, Jiawen Niu, Yaofeng Tu, Bin Cui:
LobRA: Multi-tenant Fine-tuning over Heterogeneous Data. CoRR abs/2509.01193 (2025)
[i20]Shiju Wang, Yujie Wang, Ao Sun, Fangcheng Fu, Zijian Zhu, Bin Cui, Xu Han, Kaisheng Ma:
Data-Centric Elastic Pipeline Parallelism for Efficient Long-Context LLM Training. CoRR abs/2509.21275 (2025)
[i19]Yifei Xia, Fangcheng Fu, Hao Yuan, Hanke Zhang, Xupeng Miao, Yijun Liu, Suhan Ling, Jie Jiang, Bin Cui:
TridentServe: A Stage-level Serving System for Diffusion Pipelines. CoRR abs/2510.02838 (2025)
- 2024
[j7]Fangcheng Fu, Xuanyu Wang, Jiawei Jiang, Huanran Xue, Bin Cui:
ProjPert: Projection-Based Perturbation for Label Protection in Split Learning Based Vertical Federated Learning. IEEE Trans. Knowl. Data Eng. 36(7): 3417-3428 (2024)
[j6]Yujie Wang, Youhe Jiang, Xupeng Miao, Fangcheng Fu, Shenhan Zhu, Xiaonan Nie, Yaofeng Tu, Bin Cui:
Improving Automatic Parallel Training via Balanced Memory Workload Optimization. IEEE Trans. Knowl. Data Eng. 36(8): 3906-3920 (2024)
[c16]Zihao Yu, Haoyang Li, Fangcheng Fu, Xupeng Miao, Bin Cui:
Accelerating Text-to-Image Editing via Cache-Enabled Sparse Diffusion Inference. AAAI 2024: 16605-16613
[c15]Yuxiang Wang, Xiao Yan, Chuang Hu, Quanqing Xu, Chuanhui Yang, Fangcheng Fu, Wentao Zhang, Hao Wang, Bo Du, Jiawei Jiang:
Generative and Contrastive Paradigms Are Complementary for Graph Self-Supervised Learning. ICDE 2024: 3364-3378
[c14]Xupeng Miao, Shenhan Zhu, Fangcheng Fu, Ziyu Guo, Zhi Yang, Yaofeng Tu, Zhihao Jia, Bin Cui:
X-former Elucidator: Reviving Efficient Attention for Long Context Language Modeling. IJCAI 2024: 8179-8187
[c13]Xiaonan Nie, Qibin Liu, Fangcheng Fu, Shenhan Zhu, Xupeng Miao, Xiaoyang Li, Yang Zhang, Shouda Liu, Bin Cui:
LSH-MoE: Communication-efficient MoE Training via Locality-Sensitive Hashing. NeurIPS 2024
[c12]Yifei Xia, Fangcheng Fu, Wentao Zhang, Jiawei Jiang, Bin Cui:
Efficient Multi-task LLM Quantization and Serving for Multiple LoRA Adapters. NeurIPS 2024
[c11]Hao Ge, Fangcheng Fu, Haoyang Li, Xuanyu Wang, Sheng Lin, Yujie Wang, Xiaonan Nie, Hailin Zhang, Xupeng Miao, Bin Cui:
Enabling Parallelism Hot Switching for Efficient Training of Large Language Models. SOSP 2024: 178-194
[i18]Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, Bin Cui:
Retrieval-Augmented Generation for AI-Generated Content: A Survey. CoRR abs/2402.19473 (2024)
[i17]Pinxue Zhao, Hailin Zhang, Fangcheng Fu, Xiaonan Nie, Qibin Liu, Fang Yang, Yuanbo Peng, Dian Jiao, Shuaipeng Li, Jinbao Xue, Yangyu Tao, Bin Cui:
Efficiently Training 7B LLM with 1 Million Sequence Length on 8 GPUs. CoRR abs/2407.12117 (2024)
[i16]Hailin Zhang, Xiaodong Ji, Yilin Chen, Fangcheng Fu, Xupeng Miao, Xiaonan Nie, Weipeng Chen, Bin Cui:
PQCache: Product Quantization-based KVCache for Long Context LLM Inference. CoRR abs/2407.12820 (2024)
[i15]Yujie Wang, Shenhan Zhu, Fangcheng Fu, Xupeng Miao, Jie Zhang, Juan Zhu, Fan Hong, Yong Li, Bin Cui:
Efficient Multi-Task Large Model Training via Data Heterogeneity-aware Model Management. CoRR abs/2409.03365 (2024)
[i14]Qiang Huang, Xiao Yan, Xin Wang, Susie Xi Rao, Zhichao Han, Fangcheng Fu, Wentao Zhang, Jiawei Jiang:
Retrofitting Temporal Graph Neural Networks with Transformer. CoRR abs/2409.05477 (2024)
[i13]Bozhou Li, Hao Liang, Yang Li, Fangcheng Fu, Hongzhi Yin, Conghui He, Wentao Zhang:
Gradual Learning: Optimizing Fine-Tuning with Partially Mastered Knowledge in Large Language Models. CoRR abs/2410.05802 (2024)
[i12]Haoyang Li, Fangcheng Fu, Hao Ge, Sheng Lin, Xuanyu Wang, Jiawen Niu, Yujie Wang, Hailin Zhang, Xiaonan Nie, Bin Cui:
Malleus: Straggler-Resilient Hybrid Parallel Training of Large-scale Models via Malleable Data and Model Parallelization. CoRR abs/2410.13333 (2024)
[i11]Xiaonan Nie, Qibin Liu, Fangcheng Fu, Shenhan Zhu, Xupeng Miao, Xiaoyang Li, Yang Zhang, Shouda Liu, Bin Cui:
LSH-MoE: Communication-efficient MoE Training via Locality-Sensitive Hashing. CoRR abs/2411.08446 (2024)
[i10]Yujie Wang, Shiju Wang, Shenhan Zhu, Fangcheng Fu, Xinyi Liu, Xuefeng Xiao, Huixia Li, Jiashi Li, Faming Wu, Bin Cui:
Data-Centric and Heterogeneity-Adaptive Sequence Parallelism for Efficient LLM Training. CoRR abs/2412.01523 (2024)
[i9]Haoyang Li, Fangcheng Fu, Sheng Lin, Hao Ge, Xuanyu Wang, Jiawen Niu, Jie Jiang, Bin Cui:
Demystifying Workload Imbalances in Large Transformer Model Training over Variable-length Sequences. CoRR abs/2412.07894 (2024)
- 2023
[j5]Xiaonan Nie, Yi Liu, Fangcheng Fu, Jinbao Xue, Dian Jiao, Xupeng Miao, Yangyu Tao, Bin Cui:
Angel-PTM: A Scalable and Economical Large-scale Pre-training System in Tencent. Proc. VLDB Endow. 16(12): 3781-3794 (2023)
[j4]Xupeng Miao, Wentao Zhang, Yuezihan Jiang, Fangcheng Fu, Yingxia Shao, Lei Chen, Yangyu Tao, Gang Cao, Bin Cui:
P2CG: a privacy preserving collaborative graph neural network training framework. VLDB J. 32(4): 717-736 (2023)
[c10]Yuhan Wu, Siyuan Dong, Yi Zhou, Yikai Zhao, Fangcheng Fu, Tong Yang, Chaoyue Niu, Fan Wu, Bin Cui:
KVSAgg: Secure Aggregation of Distributed Key-Value Sets. ICDE 2023: 1775-1789
[c9]Youhe Jiang, Fangcheng Fu, Xupeng Miao, Xiaonan Nie, Bin Cui:
OSDP: Optimal Sharded Data Parallel for Distributed Deep Learning. IJCAI 2023: 2142-2150
[i8]Xiaonan Nie, Yi Liu, Fangcheng Fu, Jinbao Xue, Dian Jiao, Xupeng Miao, Yangyu Tao, Bin Cui:
Angel-PTM: A Scalable and Economical Large-scale Pre-training System in Tencent. CoRR abs/2303.02868 (2023)
[i7]Zihao Yu, Haoyang Li, Fangcheng Fu, Xupeng Miao, Bin Cui:
FISEdit: Accelerating Text-to-image Editing via Cache-enabled Sparse Diffusion Inference. CoRR abs/2305.17423 (2023)
[i6]Yujie Wang, Youhe Jiang, Xupeng Miao, Fangcheng Fu, Xiaonan Nie, Bin Cui:
Improving Automatic Parallel Training via Balanced Memory Workload Optimization. CoRR abs/2307.02031 (2023)
[i5]Yuxiang Wang, Xiao Yan, Chuang Hu, Fangcheng Fu, Wentao Zhang, Hao Wang, Shuo Shang, Jiawei Jiang:
Generative and Contrastive Paradigms Are Complementary for Graph Self-Supervised Learning. CoRR abs/2310.15523 (2023)
- 2022
[j3]Fangcheng Fu, Xupeng Miao, Jiawei Jiang, Huanran Xue, Bin Cui:
Towards Communication-efficient Vertical Federated Learning Training via Cache-enabled Local Update. Proc. VLDB Endow. 15(10): 2111-2120 (2022)
[c8]Jiawei Jiang, Yusong Hu, Xiaosen Li, Wen Ouyang, Zhitao Wang, Fangcheng Fu, Bin Cui:
Analyzing Online Transaction Networks with Network Motifs. KDD 2022: 3098-3106
[c7]Jiawei Jiang, Lukas Burkhalter, Fangcheng Fu, Bolin Ding, Bo Du, Anwar Hithnawi, Bo Li, Ce Zhang:
VF-PS: How to Select Important Participants in Vertical Federated Learning, Efficiently and Securely? NeurIPS 2022
[c6]Shicheng Gao, Jie Xu, Xiaosen Li, Fangcheng Fu, Wentao Zhang, Wen Ouyang, Yangyu Tao, Bin Cui:
K-core decomposition on super large graphs with limited resources. SAC 2022: 413-422
[c5]Fangcheng Fu, Huanran Xue, Yong Cheng, Yangyu Tao, Bin Cui:
BlindFL: Vertical Federated Machine Learning without Peeking into Your Data. SIGMOD Conference 2022: 1316-1330
[i4]Fangcheng Fu, Huanran Xue, Yong Cheng, Yangyu Tao, Bin Cui:
BlindFL: Vertical Federated Machine Learning without Peeking into Your Data. CoRR abs/2206.07975 (2022)
[i3]Fangcheng Fu, Xupeng Miao, Jiawei Jiang, Huanran Xue, Bin Cui:
Towards Communication-efficient Vertical Federated Learning Training via Cache-enabled Local Updates. CoRR abs/2207.14628 (2022)
- 2021
[c4]Fangcheng Fu, Yingxia Shao, Lele Yu, Jiawei Jiang, Huanran Xue, Yangyu Tao, Bin Cui:
VF2Boost: Very Fast Vertical Federated Gradient Boosting for Cross-Enterprise Learning. SIGMOD Conference 2021: 563-576
[i2]Shicheng Gao, Jie Xu, Xiaosen Li, Fangcheng Fu, Wentao Zhang, Wen Ouyang, Yangyu Tao, Bin Cui:
K-Core Decomposition on Super Large Graphs with Limited Resources. CoRR abs/2112.14840 (2021)
- 2020
[j2]Jiawei Jiang, Fangcheng Fu, Tong Yang, Yingxia Shao, Bin Cui:
SKCompress: compressing sparse and nonuniform gradient in distributed machine learning. VLDB J. 29(5): 945-972 (2020)
[c3]Fangcheng Fu, Yuzheng Hu, Yihan He, Jiawei Jiang, Yingxia Shao, Ce Zhang, Bin Cui:
Don't Waste Your Bits! Squeeze Activations and Gradients for Deep Neural Networks via TinyScript. ICML 2020: 3304-3314
2010 – 2019
- 2019
[j1]Fangcheng Fu, Jiawei Jiang, Yingxia Shao, Bin Cui:
An Experimental Evaluation of Large Scale GBDT Systems. Proc. VLDB Endow. 12(11): 1357-1370 (2019)
[i1]Fangcheng Fu, Jiawei Jiang, Yingxia Shao, Bin Cui:
An Experimental Evaluation of Large Scale GBDT Systems. CoRR abs/1907.01882 (2019)
- 2018
[c2]Jiawei Jiang, Fangcheng Fu, Tong Yang, Bin Cui:
SketchML: Accelerating Distributed Machine Learning with Data Sketches. SIGMOD Conference 2018: 1269-1284
[c1]Jiawei Jiang, Bin Cui, Ce Zhang, Fangcheng Fu:
DimBoost: Boosting Gradient Boosting Decision Tree to Higher Dimensions. SIGMOD Conference 2018: 1363-1376
last updated on 2026-03-13 22:34 CET by the dblp team
all metadata released as open data under CC0 1.0 license