The 2026 Singapore Symposium on Natural Language Processing (SSNLP 2026)

in partnership with IMDA's Technical Sharing Session

👉🏻 Register Now!


Welcome

The Singapore Symposium on Natural Language Processing (SSNLP) returns on Wednesday, January 21, 2026, at SUTD as a full-day, in-person event. Since 2018, SSNLP has been the annual gathering of Singapore’s NLP community, bringing together students, faculty, and industry researchers to share ongoing work and spark collaboration. Held in conjunction with AAAI 2026, this year’s symposium will feature invited keynotes from leading researchers in the field, a panel discussion, and a poster-focused programme showcasing recent work by Singapore-based researchers published this year in top venues such as ACL, EMNLP, NeurIPS, ICLR, and AAAI. We look forward to welcoming you to SSNLP 2026 for a day of research exchange and community building!

🎉 News

  • 22 Dec 2025 Registration is now open! Please register by Jan 11, 2026 to join us.
  • 20 Nov 2025 Call for Presentations is now open! Fill in the submission form by Dec 7, 2025 to share your latest work with the community.
  • 18 Nov 2025 SSNLP 2026 website launched 🚀

Programme

Date: Jan 21, 2026 (Wed) | Venue: Albert Hong Lecture Theatre 1 & Campus Center @ SUTD

Programme details are still being confirmed and will be updated closer to the event date.

09:00–09:30 Registration
09:30–09:40 Welcome and Opening Remarks
09:40–10:20
Keynote 1: Making LLMs Listen, Speak, and Think: A journey to End-to-End Speech Language Models

Abstract: This presentation charts the evolution of Speech Language Models (SLMs) through the lens of our recent research. We describe the process of integrating speech encoders with text-based LLMs, emphasizing critical training strategies to prevent catastrophic forgetting. Next, we address the challenge of enabling LLMs to speak. To overcome the computational bottlenecks of conventional waveform-to-token autoregressive models, we introduce novel speech representation techniques that optimize sequence length and efficiency. The talk concludes with an approach to empowering SLMs with reasoning capabilities, advancing toward models that can "think while speaking."

Bio: Hung-yi Lee is a professor in the Department of Electrical Engineering at National Taiwan University (NTU), with a joint appointment in the university's Department of Computer Science & Information Engineering. His recent research focuses on developing technology that reduces the need for annotated data in speech processing (including voice conversion and speech recognition) and natural language processing (including abstractive summarization and question answering). He received the Salesforce Research Deep Learning Grant in 2019, the AWS ML Research Award in 2020, the Outstanding Young Engineer Award from the Chinese Institute of Electrical Engineering in 2018, the Young Scholar Innovation Award from the Foundation for the Advancement of Outstanding Scholarship in 2019, the Ta-You Wu Memorial Award from the Ministry of Science and Technology of Taiwan in 2019, and the 59th Ten Outstanding Young Person Award in Science and Technology Research & Development of Taiwan. He is a Fellow of the International Speech Communication Association (ISCA). He runs a YouTube channel teaching deep learning in Mandarin, which has more than 350,000 subscribers.

Hung-yi Lee
National Taiwan University
10:20–10:40 Invited Talk from a Singapore Government Agency
Invited Talk 1: Culturally-Grounded Benchmarking

Bio: Seok Min leads the technical research team at the Infocomm Media Development Authority (IMDA), driving initiatives in trust-related technologies such as AI testing and digital watermarking. She began her career in cybersecurity research and has more recently focused on AI governance testing (AI Verify) and the evaluation of generative models (Project Moonshot).

Seok Min Lim
Assistant Director, BizTech Group, IMDA
10:40–11:00 Invited Talk from a Singapore Government Agency
Invited Talk 2: SEA-LION: Southeast Asian Languages in One Network

Bio: Jian Gang is the Head of Applied Research at AI Singapore, where he leads the teams that develop the SEA-LION family of models, the SEA-HELM evaluation suite and leaderboard, and various research publication efforts.

Jian Gang Ngui
Head of Applied Research, AI Singapore
11:00–11:40
Keynote 2: Enough of Scaling Laws! Let's focus on downscaling

Abstract: Despite the superior performance demonstrated by Transformer-based LLMs across numerous applications involving natural languages, their high computational cost, energy consumption, and limited accessibility underscore the need for efficient, interpretable, and adaptable small language models (SLMs). This talk highlights methods to develop economical and interpretable SLMs that rival their larger counterparts in performance without significant computational requirements. Our research emphasizes three key dimensions: economical resource usage, adaptability to diverse and low-resource tasks, and enhanced interpretability. Techniques like competitive knowledge distillation, leveraging student-teacher dynamics, and activation sparsity in manifold-preserving transformers demonstrate significant efficiency gains without compromising performance. We formulate novel decomposer components for LLMs for modularizing problem decomposition and solution generation, allowing smaller models to excel in complex reasoning tasks. We also propose innovative prompt construction and alignment strategies that boost in-context knowledge adaptation in low-resource settings for SLMs. Our findings demonstrate that SLMs can achieve scalability, interpretability, and adaptability, paving the way for broader and sustainable AI accessibility.

Bio: Tanmoy Chakraborty is a Rajiv Khemani Young Faculty Chair Professor in AI and an Associate Professor in the Department of Electrical Engineering and the School of AI at IIT Delhi. He leads the Laboratory for Computational Social Systems (LCS2), a research group that primarily focuses on building economical, adaptable, and interpretable language models. He has served as a DAAD visiting professor at MPI Saarbrücken, a PECFAR visiting professor at TU Munich, and a Humboldt visiting professor at TU Darmstadt. Tanmoy has received numerous recognitions, including the Indian National Academy of Science Young Associate, the Ramanujan Fellowship, the ACL'23 Outstanding Paper Award, the IJCAI'23 AI for Social Good Award, and several faculty awards from companies including Microsoft, IBM, Google, LinkedIn, JP Morgan, and Adobe. He has authored two textbooks -- "Social Network Analysis" and "Introduction to Large Language Models". Tanmoy earned his PhD from IIT Kharagpur in 2015 as a Google PhD Scholar. He served as PC Chair of EMNLP'25 and of the Web4Good Track at WebConf'26, and as Local Organizing Chair of AACL'25. More details may be found at tanmoychak.com.

Tanmoy Chakraborty (Remote)
IIT Delhi
11:40–13:00 Lunch Break
13:00–14:30 Poster Session (see poster list below)
14:30–15:10
Keynote 3: What Does Simple Mean? Grounded Text Simplification

Abstract: Large language models can generate fluent texts on demand, yet fluency is not the same as accessibility: the same text may be trivial for one reader and incomprehensible for another. This keynote argues that text simplification and complexity control should be grounded in standardized proficiency definitions rather than ad-hoc heuristics. I introduce a CEFR-grounded framework that treats "simplicity" as a target proficiency level, enabling controlled adaptation of syntax and vocabulary while preserving meaning. Central to the framework is CEFR-SP, a corpus of 17k English sentences with expert CEFR annotations, which supports proficiency-aware evaluation and learning signals. Building on CEFR-SP, I present reinforcement-learning approaches that steer LLM rewriting toward a specified CEFR level via lexical- and sentence-level rewards, and I discuss error analyses that reveal where current LLM-based simplifiers still fail. I conclude with open challenges in document-level simplification, including disentangling linguistic difficulty from topical expertise and discourse structure, toward inclusive NLP.

Bio: Yuki Arase is a professor at the School of Computing, Institute of Science Tokyo (formerly Tokyo Institute of Technology), Japan. After obtaining her PhD in Information Science from Osaka University in 2010, she worked for Microsoft Research Asia, where she started the NLP research that continues to captivate her to this day. Her research interests focus on paraphrasing and NLP technology for language education and healthcare.

Yuki Arase
Tokyo Institute of Technology
15:10–15:50
Keynote 4: Can a Language Model Be Its Own Judge? From Task Evaluation to Better Reasoning

Abstract: Modern large language models (LLMs) are increasingly expected not only to generate responses but also to evaluate their own outputs. This talk presents a unified perspective on transforming LLMs into reliable evaluators and demonstrates how such judging capability can, in turn, strengthen their reasoning performance. We begin by exploring the evaluative knowledge inherently embedded within large models and introduce methods to elicit and apply this capacity in tasks such as translation evaluation and solution verification. We then illustrate how these evaluative abilities can be integrated into reinforcement learning pipelines, allowing models to critique their own chain‑of‑thought and guide optimization toward deeper and more consistent reasoning. Collectively, these insights outline a practical framework: elicit the model's latent evaluative skills, calibrate its reasoning, structure clear assessment criteria, rigorously verify outcomes, and ultimately leverage these self‑generated judgments to train models that reason, and learn, to be better judges of their own performance.

Bio: Derek F. Wong is a Full Professor at the University of Macau, where he leads the Natural Language Processing and Chinese–Portuguese Machine Translation Laboratory (NLP2CT Lab). He serves on the boards and committees of CIPS, CCF, and AFNLP, and holds editorial roles with IEEE/ACM TASLP, ACM TALLIP, TACL, and the ACL Rolling Review. His research has earned multiple honors, including the Macao Science and Technology Awards (2012, 2022), the FST Research Excellence Award, and both the Outstanding Academic Staff Incentive (2022) and Teaching Excellence Award (2024) from UM. He has also contributed to major NLP conferences such as ACL, NeurIPS, ICML, IJCAI, AAAI, EMNLP, NAACL, COLING, AACL, and IJCNLP in various program leadership roles.

Derek F. Wong
University of Macau
15:50–16:20 Coffee Break
16:20–17:20 Panel Discussion
17:20–17:30 Closing Remarks
Poster List

1. Youchao Zhou, Heyan Huang, Yicheng Liu, Rui Dai, Xinglin Wang, Xingchen Zhang, Shumin Shi, Yang Deng
   Do Retrieval Augmented Language Models Know When They Don't Know? (AAAI 2026)

2. Mengfan Li, Xuanhua Shi, Yang Deng
   RecToM: A Benchmark for Evaluating Machine Theory of Mind in LLM-based Conversational Recommender Systems (AAAI 2026)

3. Wenzheng Zeng, Mingyu Ouyang, Langyuan Cui, Hwee Tou Ng
   SlideTailor: Personalized Presentation Slide Generation for Scientific Papers (AAAI 2026)

4. Moxin Li, Yuantao Zhang, Wenjie Wang, Wentao Shi, Zhuo Liu, Fuli Feng, Tat-Seng Chua
   Self-improvement towards Pareto Optimality: Mitigating Preference Conflicts in Multi-objective Alignment (ACL 2025)

5. Mingzhe Du, Luu Anh Tuan, Yue Liu, Yuhao Qing, Dong Huang, Xinyi He, Qian Liu, Zejun Ma, See-Kiong Ng
   Afterburner: Reinforcement Learning Facilitates Self-Improving Code Efficiency Optimization (NeurIPS 2025, Preprint)

6. Miaoyu Li, Qin Chao, Boyang Li
   Two Causally Related Needles in a Video Haystack (NeurIPS 2025)

7. Yi Feng, Jiaqi Wang, Wenxuan Zhang, Zhuang Chen, Yutong Shen, Xiyao Xiao, Minlie Huang, Liping Jing, Jian Yu
   Reframe Your Life Story: Interactive Narrative Therapist and Innovative Moment Assessment with Large Language Models (EMNLP 2025)

8. Chengtao Lv, Bilang Zhang, Yang Yong, Ruihao Gong, Yushi Huang, Shiqiao Gu, Jiajun Wu, Yumeng Shi, Jinyang Guo, Wenya Wang
   LLMC+: Benchmarking Vision-Language Model Compression with a Plug-and-play Toolkit (AAAI 2026)

9. Kuicai Dong, Yujing Chang, Derrick Goh Xin Deik, Dexun Li, Ruiming Tang, Yong Liu
   MMDocIR: Benchmarking Multimodal Retrieval for Long Documents (EMNLP 2025)

10. Quanyu Long*, Jianda Chen*, Zhengyuan Liu, Nancy F. Chen, Wenya Wang, Sinno Jialin Pan
   Reinforcing Compositional Retrieval: Retrieving Step-by-Step for Composing Informative Contexts (ACL 2025)

11. Jianzhu Bao, Yuqi Huang, Yang Sun, Wenya Wang, Yice Zhang, Bojun Jin, Ruifeng Xu
   Exploring Quality and Diversity in Synthetic Data Generation for Argument Mining (EMNLP 2025)

12. Guizhen Chen, Weiwen Xu, Hao Zhang, Hou Pong Chan, Chaoqun Liu, Lidong Bing, Deli Zhao, Anh Tuan Luu, Yu Rong
   FineReason: Evaluating and Improving LLMs' Deliberate Reasoning through Reflective Puzzle Solving (ACL 2025)

13. Guizhen Chen, Weiwen Xu, Hao Zhang, Hou Pong Chan, Deli Zhao, Anh Tuan Luu, Yu Rong
   GeoPQA: Bridging the Visual Perception Gap in MLLMs for Geometric Reasoning (EMNLP 2025)

14. Qian Wang, Zhanzhi Lou, Zhenheng Tang, Nuo Chen, Xuandong Zhao, Wenxuan Zhang, Dawn Song, Bingsheng He
   Assessing Judging Bias in Large Reasoning Models: An Empirical Study (COLM 2025)

15. Tongyao Zhu, Qian Liu, Haonan Wang, Shiqi Chen, Xiangming Gu, Tianyu Pang, Min-Yen Kan
   SkyLadder: Better and Faster Pretraining via Context Window Scheduling (NeurIPS 2025)

16. Qiongqiong Wang, Hardik B. Sailor, Tianchi Liu, Wenyu Zhang, Muhammad Huzaifah, Nattadaporn Lertcheva, Shuo Sun, Nancy F. Chen, Jinyang Wu, Ai Ti Aw
   Benchmarking Contextual and Paralinguistic Reasoning in Speech-LLMs: A Case Study with In-the-Wild Data (EMNLP 2025)

17. Nicholas Sadjoli, Tim Siefken, Atin Ghosh, Yifan Mai, Daniel Dahlmeier
   Optimization before Evaluation: Evaluation with Unoptimized Prompts Can be Misleading (ACL 2025)

18. Fanxiao Li, Jiaying Wu, Tingchao Fu, Yunyun Dong, Bingbing Song, Wei Zhou
   Drifting Away from Truth: GenAI-Driven News Diversity Challenges LVLM-Based Misinformation Detection (AAAI 2026)

19. Burak Satar, Zhixin Ma, Patrick A. Irawan, Wilfried A. Mulyawan, Jing Jiang, Ee-Peng Lim, Chong-Wah Ngo
   Seeing Culture: A Benchmark for Visual Reasoning and Grounding (EMNLP 2025)

20. Yuxin Chen, Yiran Zhao, Yang Zhang, An Zhang, Kenji Kawaguchi, Shafiq Joty, Junnan Li, Tat-Seng Chua, Michael Qizhe Shieh, Wenxuan Zhang
   The Emergence of Abstract Thought in Large Language Models Beyond Any Language (NeurIPS 2025)

21. Yixuan Tang, Jincheng Wang, Anthony K.H. Tung
   The Missing Parts: Augmenting Fact Verification with Half-Truth Detection (EMNLP 2025)

22. Renjie Luo, Jiaxi Li, Chen Huang, Wei Lu
   Through the Valley: Path to Effective Long CoT Training for Small Language Models (EMNLP 2025)

23. Yumeng Shi, Quanyu Long, Wenya Wang
   Static or Dynamic: Towards Query-Adaptive Token Selection for Video Question Answering (EMNLP 2025)

24. Yumeng Shi, Quanyu Long, Yin Wu, Wenya Wang
   Causality Matters: How Temporal Information Emerges in Video Language Models (AAAI 2026)

25. Yisong Miao, Min-Yen Kan
   Discursive Circuits: How Do Language Models Understand Discourse Relations? (EMNLP 2025)

26. Weihua Zheng, Xin Huang, Zhengyuan Liu, Tarun Kumar Vangani, Bowei Zou, Xiyan Tao, Yuhao Wu, Ai Ti Aw, Nancy F. Chen, Roy Ka-Wei Lee
   AdaMCoT: Rethinking Cross-Lingual Factual Reasoning through Adaptive Multilingual Chain-of-Thought (AAAI 2026)

Invited Speakers

Hung-yi Lee
National Taiwan University

Yuki Arase
Tokyo Institute of Technology

Tanmoy Chakraborty
Indian Institute of Technology, Delhi

Derek F. Wong
University of Macau

Seok Min Lim
IMDA

Jian Gang Ngui
AI Singapore

Panel Discussion

Behind the Scenes of NLP Peer Review: Perspectives from Program Chairs

Wenxuan Zhang
SUTD
Moderator

Yuki Arase
Tokyo Tech
AACL'23 PC

Nancy F. Chen
A*STAR
NeurIPS'25 / ICLR'23 PC

Wei Lu
SUTD
Editor-in-Chief, CL

Derek F. Wong
UM
AACL'25 / NLPCC'24 PC

Organizers

General Chair
Wenxuan Zhang · Singapore University of Technology and Design

Program Chairs
Wenya Wang · Nanyang Technological University
Yang Deng · Singapore Management University

Local Chairs
Ryner Tan · Singapore University of Technology and Design
Qisheng Hu · Nanyang Technological University

Registration Chairs
Burak Satar · Singapore Management University
Quanyu Long · Nanyang Technological University

Web & Publicity Chair
Bobo Li · National University of Singapore

Industrial Relations Chairs
Qian Liu · TikTok/ByteDance AI Innovation Center, Singapore
Ming Shan Hee · MBZUAI Fundamental Models Research Center

📮 For any inquiries, feel free to reach out to Wenxuan Zhang, Wenya Wang, or Yang Deng.

Sponsors

Location

SSNLP 2026 will be held at Albert Hong Lecture Theatre 1, SUTD, 8 Somapah Rd, Singapore 487372. Directions will be provided closer to the event date.

Past Events