[Last update: 2025-12-29]
Hi, I’m Daiki Chijiwa

- Twitter: @dchiji_en
- E-mail: daiki.chijiwa [at] ntt.com
About
I have been a researcher at NTT (Computer and Data Science Laboratories, Japan) since 2019. Previously, I majored in mathematics and studied complex algebraic geometry and Hodge theory during my master's program.
Current research interests: neural networks, meta-learning, statistical learning theory
Brief CV
- April 2019 - present: Researcher at NTT Corporation
- April 2016 - March 2019: M.S. in Mathematical Sciences, The University of Tokyo
  - Master’s thesis: On certain algebraic cycles on abelian varieties of Fermat type
  - Advisor: Tomohide Terasoma
- April 2012 - March 2016: B.S. in Mathematics, Tokyo Institute of Technology
  - Advisor: Shouhei Ma
Publications
Preprints
- A. Ito, M. Yamada, D. Chijiwa, A. Kumagai, Do We Really Need Permutations? Impact of Width Expansion on Linear Mode Connectivity, arXiv:2510.08023
- D. Chijiwa, T. Hasegawa, K. Nishida, S. Yamaguchi, T. Ohba, T. Sakao, S. Takeuchi, Lossless Vocabulary Reduction for Auto-Regressive Language Models, arXiv:2510.08102
- S. Yamaguchi, K. Nishida, D. Chijiwa, Rationale-Enhanced Decoding for Multi-modal Chain-of-Thought, arXiv:2507.07685
- S. Yamaguchi, K. Nishida, D. Chijiwa, Y. Ida, Zero-shot Concept Bottleneck Models, arXiv:2502.09018
Journals / International Conferences
- H. Otsuka, D. Chijiwa, Y. Okoshi, D. Fujiki, S. Takeuchi, M. Motomura, The Strong Lottery Ticket Hypothesis for Multi-Head Attention Mechanisms, AAAI Conference on Artificial Intelligence (AAAI), to appear, 2025
- S. Yamaguchi, S. Kanai, A. Kumagai, D. Chijiwa, H. Kashima, Transfer learning with pre-trained conditional generative models, Machine Learning, 2025
- Y. Yamasaki, K. Niwa, D. Chijiwa, T. Fukami, T. Miura, Plausible Token Amplification for Improving Accuracy of Differentially Private In-Context Learning Based on Implicit Bayesian Inference, International Conference on Machine Learning (ICML), 2025
- D. Chijiwa, T. Hasegawa, K. Nishida, K. Saito, S. Takeuchi, Portable Reward Tuning: Towards Reusable Fine-Tuning across Different Pretrained Models, International Conference on Machine Learning (ICML), 2025
- S. Yamaguchi, D. Feng, S. Kanai, K. Adachi, D. Chijiwa, Post-pre-training for Modality Alignment in Vision-Language Foundation Models, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025
- H. Otsuka, D. Chijiwa, Á. L. García-Arias, Y. Okoshi, K. Kawamura, T. V. Chu, D. Fujiki, S. Takeuchi, M. Motomura, Partially Frozen Random Networks Contain Compact Strong Lottery Tickets, Transactions on Machine Learning Research (TMLR), 2025
- S. Yamaguchi, S. Kanai, K. Adachi, D. Chijiwa, Adaptive Random Feature Regularization on Fine-tuning Deep Neural Networks, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
- M. Yamada, T. Yamashita, S. Yamaguchi, D. Chijiwa, Toward Data Efficient Model Merging between Different Datasets without Performance Degradation, Asian Conference on Machine Learning (ACML), 2024
- D. Chijiwa, Transferring Learning Trajectories of Neural Networks, International Conference on Learning Representations (ICLR), 2024
- S. Yamaguchi, D. Chijiwa, S. Kanai, A. Kumagai, H. Kashima, Regularizing Neural Networks with Meta-Learning Generative Models, Advances in Neural Information Processing Systems (NeurIPS), 2023
- D. Chijiwa, S. Yamaguchi, A. Kumagai, Y. Ida, Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks, Advances in Neural Information Processing Systems (NeurIPS), 2022
- D. Chijiwa, S. Yamaguchi, Y. Ida, K. Umakoshi, T. Inoue, Pruning randomly initialized neural networks with iterative randomization, Advances in Neural Information Processing Systems (NeurIPS, selected as Spotlight), 2021
Master’s Thesis
- D. Chijiwa, On certain algebraic cycles on abelian varieties of Fermat type, 2019.