[Last update: 2025-12-29]

Hi, I’m Daiki Chijiwa

About

I have been a researcher at NTT (Computer and Data Science Laboratories, Japan) since 2019. Before that, I majored in mathematics and studied complex algebraic geometry and Hodge theory in my master's degree program.

Current research interests: neural networks, meta-learning, and statistical learning theory

Brief CV

Publications

Preprints

  1. A. Ito, M. Yamada, D. Chijiwa, A. Kumagai, Do We Really Need Permutations? Impact of Width Expansion on Linear Mode Connectivity, arXiv:2510.08023
  2. D. Chijiwa, T. Hasegawa, K. Nishida, S. Yamaguchi, T. Ohba, T. Sakao, S. Takeuchi, Lossless Vocabulary Reduction for Auto-Regressive Language Models, arXiv:2510.08102
  3. S. Yamaguchi, K. Nishida, D. Chijiwa, Rationale-Enhanced Decoding for Multi-modal Chain-of-Thought, arXiv:2507.07685
  4. S. Yamaguchi, K. Nishida, D. Chijiwa, Y. Ida, Zero-shot Concept Bottleneck Models, arXiv:2502.09018

Journals / International Conferences

  1. H. Otsuka, D. Chijiwa, Y. Okoshi, D. Fujiki, S. Takeuchi, M. Motomura, The Strong Lottery Ticket Hypothesis for Multi-Head Attention Mechanisms, to appear in the AAAI Conference on Artificial Intelligence (AAAI), 2025
  2. S. Yamaguchi, S. Kanai, A. Kumagai, D. Chijiwa, H. Kashima, Transfer learning with pre-trained conditional generative models, Machine Learning, 2025
  3. Y. Yamasaki, K. Niwa, D. Chijiwa, T. Fukami, T. Miura, Plausible Token Amplification for Improving Accuracy of Differentially Private In-Context Learning Based on Implicit Bayesian Inference, International Conference on Machine Learning (ICML), 2025
  4. D. Chijiwa, T. Hasegawa, K. Nishida, K. Saito, S. Takeuchi, Portable Reward Tuning: Towards Reusable Fine-Tuning across Different Pretrained Models, International Conference on Machine Learning (ICML), 2025
  5. S. Yamaguchi, D. Feng, S. Kanai, K. Adachi, D. Chijiwa, Post-pre-training for Modality Alignment in Vision-Language Foundation Models, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025
  6. H. Otsuka, D. Chijiwa, Á. L. García-Arias, Y. Okoshi, K. Kawamura, T. V. Chu, D. Fujiki, S. Takeuchi, M. Motomura, Partially Frozen Random Networks Contain Compact Strong Lottery Tickets, Transactions on Machine Learning Research (TMLR), 2025
  7. S. Yamaguchi, S. Kanai, K. Adachi, D. Chijiwa, Adaptive Random Feature Regularization on Fine-tuning Deep Neural Networks, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
  8. M. Yamada, T. Yamashita, S. Yamaguchi, D. Chijiwa, Toward Data Efficient Model Merging between Different Datasets without Performance Degradation, Asian Conference on Machine Learning (ACML), 2024
  9. D. Chijiwa, Transferring Learning Trajectories of Neural Networks, International Conference on Learning Representations (ICLR), 2024
  10. S. Yamaguchi, D. Chijiwa, S. Kanai, A. Kumagai, H. Kashima, Regularizing Neural Networks with Meta-Learning Generative Models, Advances in Neural Information Processing Systems (NeurIPS), 2023
  11. D. Chijiwa, S. Yamaguchi, A. Kumagai, Y. Ida, Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks, Advances in Neural Information Processing Systems (NeurIPS), 2022
  12. D. Chijiwa, S. Yamaguchi, Y. Ida, K. Umakoshi, T. Inoue, Pruning randomly initialized neural networks with iterative randomization, Advances in Neural Information Processing Systems (NeurIPS, Spotlight), 2021

Master’s Thesis

  1. D. Chijiwa, On certain algebraic cycles on abelian varieties of Fermat type, 2019