
👋 You've reached the GitHub profile of Cheng!

  • 👋 Hi, I’m Cheng Wang, a third-year undergrad at National University of Singapore (NUS).
  • 👀 My research interests broadly cover NLP, AI Safety, and Trustworthy LLMs.
  • 🎓 I am looking for a PhD position starting in Fall 2026. Any collaboration is welcome!
  • 📩 Please contact me via email for any inquiries or collaboration opportunities.

Personal Website · Email · LinkedIn · Google Scholar

Pinned

  1. yueliu1999/Awesome-Jailbreak-on-LLMs (Public)

    Awesome-Jailbreak-on-LLMs is a collection of state-of-the-art, novel, and exciting jailbreak methods on LLMs. It contains papers, code, datasets, evaluations, and analyses.

    1.2k stars · 99 forks

  2. Awesome-LRMs-Safety (Public)

    Official repository for "Safety in Large Reasoning Models: A Survey", exploring safety risks, attacks, and defenses for Large Reasoning Models to enhance their security and reliability.

    83 stars · 3 forks

  3. CON-RECALL (Public)

    [COLING 2025] Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding

    Python · 8 stars

  4. hkust-nlp/model-task-align-rl (Public)

    The official code repository for the paper "Mirage or Method? How Model–Task Alignment Induces Divergent RL Conclusions".

    Python · 15 stars