- 👋 Hi, I’m Cheng Wang, a third-year undergrad at National University of Singapore (NUS).
- 👀 My research interests broadly cover NLP, AI Safety, and Trustworthy LLMs.
- 🎓 I am looking for a PhD position starting in Fall 2026.
- 📩 Please reach out via email for any inquiries or collaboration opportunities. Collaborations are always welcome!
- 🔗 Homepage: https://wangcheng0116.github.io
Pinned Repositories
- yueliu1999/Awesome-Jailbreak-on-LLMs: A collection of state-of-the-art, novel, and exciting jailbreak methods on LLMs, including papers, code, datasets, evaluations, and analyses.
- Awesome-LRMs-Safety: Official repository for "Safety in Large Reasoning Models: A Survey", exploring safety risks, attacks, and defenses for Large Reasoning Models to enhance their security and reliability.
- CON-RECALL: [COLING 2025] Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding. (Python)
- hkust-nlp/model-task-align-rl: The official code repository for the paper "Mirage or Method? How Model–Task Alignment Induces Divergent RL Conclusions". (Python)



