Research
For a complete list of my publications and patents, please see my profile.
Research Interests:
My research centers on Natural Language Processing and Machine Learning, with a primary focus on Large Language Models:
LLM Explainability: Rationale Generation, Uncertainty Estimation, Mechanistic Interpretability.
LLM Reasoning: Rationale Generation, Alignment, Inference-Time Reinforcement Learning.
LLM Applications: AI for Education, Agents, Multimodality, Role-Playing Systems.
LLM Engineering: End-to-end development of LLMs, including multi-GPU & multi-node distributed pre-training from scratch (e.g., 1.2B parameters on 100B tokens) and diverse post-training techniques.
Find me online: GitHub, Huggingface, LinkedIn, Twitter.
I am actively seeking internship and full-time research opportunities. My CV is available upon request.
News
Nov 2025: One main-conference paper (multi-agent verbal reinforcement learning) and two demo papers on student answer assessment accepted at EMNLP 2025.
Sep 2025: My work at Spotify on mechanistic interpretability (residual-stream activation-space auditing for CoT faithfulness) was accepted at the NeurIPS 2025 MechInterp Workshop.
Jul 2025: Three papers accepted at ACL 2025 (Rationale Faithfulness, Neuron Level Interpretation of Preference Optimisation, Theory-of-Mind Reasoning).
Jul 2025: My research on explainable student answer assessment was featured by The Telegraph.
Jul 2025: One position paper on Meta-Reasoning accepted at ICML 2025.
Jun 2025: My research project was selected for the second cohort of the KCL Spinout Accelerator. We are seeking investment; please contact me for a live demo, or feel free to reach out about work opportunities.
Jun 2025: Presented a series of my work on an explainable student answer assessment system with Cesare at the FLIP+ conference in Arnhem, the Netherlands.
May 2025: One paper on fact-checking accepted by IEEE TASLP.
May 2025: Gave a panel talk at the Industry Placements Student Panel at King's Doctoral College.
Feb 2025: Two papers related to rationale generation for education and medicine accepted at AAAI 2025.
Jan 2025: My research was featured in KCL News.
Dec 2024: One paper on assessment rationale alignment accepted for oral presentation at the NeurIPS 2024 FM-EduAssess workshop.
Nov 2024: Named an outstanding reviewer at EMNLP 2024.
Nov 2024: Three papers on preference optimisation and interpretability accepted at EMNLP 2024 (Explainable Student Assessment, Preference Optimisation, ICL Interpretability).
Aug 2024: One dataset paper on complex narrative relation understanding accepted at ACL 2024.
Jun 2024: Gave a talk at the TLDR Group, Imperial College London.
May 2024: Delivered a seminar on interpretable text classification at CGI.
Mar 2024: One paper on a role-playing demo system accepted at EACL 2024.
Mar 2024: Gave a talk on explainable student answer assessment at the ALTA group, University of Cambridge.
Mar 2024: One demo system accepted at AAAI 2024.
Dec 2023: One workshop paper on batched zero-shot querying of LLMs accepted at the NeurIPS 2023 R0-FoMo workshop.
Dec 2023: One paper on explainable student answer assessment presented at EMNLP 2023.
Dec 2023: Gave a talk at MLLM.
Sep 2023: Gave a presentation and panel talk on AI in assessment at BERA 2023.
Aug 2023: One paper on uncertainty interpretation accepted for a spotlight presentation at UAI 2023.
May 2023: Presented two posters at the UKAI Fellows Conference.