Yue Huang (黄跃)

PhD Student in Computer Science and Engineering · University of Notre Dame

About Me

I’m a second-year PhD student in the MINE Lab of the Department of Computer Science and Engineering (CSE) at the University of Notre Dame, starting from Fall 2024, supervised by Leonard C. Bettex Collegiate Professor Xiangliang Zhang. I’m also a graduate student in the Foundation Models and Applications Lab (FMAL) at the Lucy Family Institute for Data & Society. I obtained my bachelor’s degree from Sichuan University in 2024. In Summer 2025, I worked with Prasanna Sattigeri and Pin-Yu Chen at the MIT-IBM Watson AI Lab and IBM Research AI. Previously, I was a visiting student under the guidance of Prof. Lichao Sun, with additional mentorship from Prof. Philip S. Yu. Earlier, I worked under Prof. Jie Tang and Dr. Xiao Liu at Tsinghua University.

  • I welcome the opportunity to connect with colleagues in my field as well as those from interdisciplinary areas.
  • My recent research centers on: (1) the science of foundation models, with a particular emphasis on their trustworthiness; (2) edge and long-tail alignment of foundation models; and (3) dynamic evaluation protocols tailored for generative models.
  • If you are interested in my research, please feel free to contact me via email or to arrange an in-person conversation.

Research Interests

My research is centered on three pivotal directions:

Trustworthy, Aligned, and Democratically Governed Generative Foundation Models. This line of inquiry seeks to develop robust frameworks for evaluating trustworthiness and to identify strategies for enhancing the trustworthiness of these models within specific application domains. This includes: ICML'24, NAACL'24, ACM CCS'24, WWW'24, EMNLP'24, NeurIPS'24, ICLR'25d, and NeurIPS'25a.
Data-Driven Scalable Alignment for General-Purpose AI Systems. This research emphasizes data-centric methods for scalable model alignment and evolution, ensuring that models adhere to human values and ethical norms throughout the development process. This includes: ACL'24, EMNLP'24, ICLR'25a, and ICLR'25b.
Scientific AI and Societal AI. This research area critically assesses the practical impact of generative models, with a particular focus on their applications in AI4Science, exploring their transformative potential and interdisciplinary contributions in fields such as agentic models, the social sciences, and beyond. This includes: ICLR'24, ICLR'25c, and NeurIPS'25b.

News

Sep. 2025  We have four papers accepted by NeurIPS 2025 (huge thanks to Xiangqi, Yanbo, and all other co-authors), and one paper (EmoNest) has been accepted by the NeurIPS 2025 Creative AI track (try our demo at the conference in December). See you in San Diego!
Aug. 2025  One paper has been accepted by EMNLP 2025 Findings and two papers have been accepted by CIKM 2025. We have four upcoming tutorials at CIKM 2025, ICDM 2025, and AAAI 2026.
Jul. 2025   Preference Leakage won the best paper award at DIG-BUGs@ICML 2025, and PsychometricBench won the best paper award at SciSocLLM@KDD'25. One paper has been accepted by COLM 2025.
May. 2025   Two papers are accepted by ACL 2025 (1 Main + 1 Findings).
Mar. 2025   TrustEval is accepted by NAACL 2025 Demo and UPME is accepted by CVPR 2025.
Jan. 2025   Four papers have been accepted by ICLR 2025! I was selected for the KAUST Rising Stars in AI Symposium 2025 (24/300+).
Dec. 2024   I will join IBM Research as a Research Scientist Intern in Summer 2025. See you in Cambridge, MA.
Sep. 2024   HonestLLM has been accepted by NeurIPS 2024. Congratulations to Chujie! Another paper has been accepted by the main conference of EMNLP 2024.
Aug. 2024   Attack LLM-as-a-Judge has been accepted by ACM CCS 2024.
Jul. 2024   Awarded OpenAI's Researcher Access Program.
May. 2024   TrustLLM has been accepted by ICML 2024. Another paper has been accepted by the main conference of ACL 2024.
Mar. 2024   One paper has been accepted by NAACL 2024. Another paper has been accepted as a short paper of WWW 2024.
Jan. 2024   MetaTool has been accepted by ICLR 2024!

Highlight

AAAI 2026 Tutorial
January 20-27, 2026 | Singapore
CIKM 2025 Tutorial
November 10–14, 2025 | Seoul, Korea

Selected Publications (Full Publications)

Disclaimer: This material is presented to ensure the timely dissemination of scholarly works. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms invoked by each author’s copyright.

*: Equal Contribution

ChemOrch: Empowering LLMs with Chemical Intelligence via Groundbreaking Synthetic Instructions

Yue Huang*, Zhengzhe Jiang*, Xiaonan Luo, Kehan Guo, Haomin Zhuang, Yujun Zhou, Zhengqing Yuan, Xiaoqi Sun, Jules Schleinitz, Yanbo Wang, Shuhao Zhang, Mihir Surve, Nitesh V Chawla, Olaf Wiest, Xiangliang Zhang

The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025)

Exposing and Patching the Flaws of Large Language Models in Social Character Simulation

Yue Huang*, Zhengqing Yuan*, Yujun Zhou, Kehan Guo, Xiangqi Wang, Haomin Zhuang, Weixiang Sun, Lichao Sun, Jindong Wang, Yanfang Ye, Xiangliang Zhang

Second Conference on Language Modeling (COLM 2025)

DataGen: Unified Synthetic Dataset Generation via Large Language Models

Yue Huang*, Siyuan Wu*, Chujie Gao, Dongping Chen, Qihui Zhang, Yao Wan, Tianyi Zhou, Chaowei Xiao, Jianfeng Gao, et al.

The Thirteenth International Conference on Learning Representations (ICLR 2025)

Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge

Jiayi Ye*, Yanbo Wang*, Yue Huang*, Dongping Chen, Qihui Zhang, Nuno Moniz, Tian Gao, Werner Geyer, Chao Huang, Pin-Yu Chen, Nitesh V Chawla, Xiangliang Zhang

The Thirteenth International Conference on Learning Representations (ICLR 2025)

GUI-World: A GUI-oriented Dataset for Multimodal LLM-based Agents

Dongping Chen*, Yue Huang*, Siyuan Wu, Jingyu Tang, Huichi Zhou, Qihui Zhang, Zhigang He, et al.

The Thirteenth International Conference on Learning Representations (ICLR 2025)

Cross-Lingual Pitfalls: Automatic Probing Cross-Lingual Weakness of Multilingual Large Language Models

Zixiang Xu*, Yanbo Wang*, Yue Huang*, Xiuying Chen, Jieyu Zhao, Meng Jiang, Xiangliang Zhang

The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025)

TRUSTEVAL: A Dynamic Evaluation Toolkit on Trustworthiness of Generative Foundation Models

Yanbo Wang*, Jiayi Ye*, Siyuan Wu*, Chujie Gao, Yue Huang, Xiuying Chen, Yue Zhao, Xiangliang Zhang

2025 Annual Conference of the North American Chapter of the Association for Computational Linguistics -- System Demonstration (NAACL 2025 Demo)

TrustLLM: Trustworthiness in Large Language Models

Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, et al.

2024 International Conference on Machine Learning (ICML 2024)

(Highlighted by the United States Department of Homeland Security (DHS) and the International AI Safety Report; Invited Talk at IBM Research)

MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use

Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, et al.

The Twelfth International Conference on Learning Representations (ICLR 2024)

1+1>2: Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators?

Yue Huang*, Chenrui Fan*, Yuan Li, Siyuan Wu, et al.

The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024)

HonestLLM: Toward an Honest and Helpful Large Language Model

Chujie Gao*, Siyuan Wu*, Yue Huang*, Dongping Chen*, Qihui Zhang*, et al.

Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024)

Optimization-based Prompt Injection Attack to LLM-as-a-Judge

Jiawen Shi, Zenghui Yuan, Yinuo Liu, Yue Huang, Pan Zhou, Lichao Sun, Neil Zhenqiang Gong

The ACM Conference on Computer and Communications Security (ACM CCS 2024)

Talks

May. 2025   Toward Socially Impactful and Trustworthy Generative Foundation Models @ University of Illinois Urbana-Champaign [Slides]
Apr. 2025   On the Trustworthiness of Generative Foundation Models @ KAUST Rising Stars in AI Symposium 2025
Mar. 2025   Trustworthiness in Large Language Models @ University of Virginia
Feb. 2025   Toward Socially Impactful and Trustworthy Generative Foundation Models @ University of Southern California [Slides]
Jul. 2024   Bias of Large Language Models @ Technical University of Munich
Feb. 2024   Trustworthiness in Large Language Models @ IBM Research

Honors and Awards

Aug. 2025   NSF Discover ACCESS Project
Aug. 2025   NSF POSE Training Award (Role: Industry Mentor)
Jul. 2025   Best Paper Award of SciSocLLM@KDD’25
Jul. 2025   Best Paper Award of DIG-BUG@ICML’25
Jan. 2025   KAUST AI Rising Star
Jul. 2024   OpenAI's Researcher Access Program
Jun. 2024   Elite Student of School of Cyber Science and Engineering, Sichuan University (网安菁英)
Jan. 2024   Microsoft Accelerate Foundation Models Research

Academic Participation

  • Journal Reviewer: Nature Communications, IEEE Transactions on Artificial Intelligence (TAI), IEEE Transactions on Dependable and Secure Computing (TDSC), IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), ACM Transactions on Intelligent Systems and Technology (ACM TIST)

  • Conference Reviewer: NeurIPS (2025-), ICLR (2024-), ICML (2025-), AAAI (2025-), KDD (2025-), ICDM (2024-), WWW (2024-), COLM (2025-), ACL Rolling Review (2024-), EMNLP Demo Track (2024-), NAACL Demo Track (2025-), ACL Demo Track (2025-)

  • Technical Committee Member of 2024 IEEE Computer Society North America Student Challenge

Education

Sep. 2024 – Present   Ph.D. in Computer Science and Engineering, University of Notre Dame
Sep. 2020 – Jun. 2024   B.Eng. in Cybersecurity, Sichuan University

Internships

May. 2025 – Aug. 2025   Research Intern, MIT-IBM Watson AI Lab & IBM Research AI
Sep. 2023 – Jan. 2024   Research Intern

Acknowledgment

I am honored that my research is funded, supported, or recognized by:

Misc

  • I spent 18 years in my hometown, Fujian 🇨🇳, and had 4 wonderful years of university life in Sichuan 🌶️ (I can handle spicy food!).

  • I love exchanging ideas with people from different fields 🌍—it helps me see the world more broadly.

  • My favorite singers are Eason Chan and Steve Chou (小剛) 🎤. Lately, I’ve been listening to Patti Tsai.

  • My favorite sports are swimming 🏊 and badminton 🏸. I also enjoy capturing scenic moments with my camera 📷.

  • I’m deeply grateful to those who’ve helped me along the way 🙏—thank you for helping me go further!