Yue Huang (黄跃)


PhD Student
Computer Science and Engineering
University of Notre Dame

Fitzpatrick Hall of Engineering, Notre Dame, IN 46556 USA
Email: yhuang37 (at) nd.edu

Google Scholar · GitHub · LinkedIn · X

About Me

I’m a first-year PhD student in the MINE Lab of the Department of Computer Science and Engineering (CSE) at the University of Notre Dame, which I joined in Fall 2024, supervised by Prof. Xiangliang Zhang. I’m also a graduate student in the Foundation Models and Applications Lab (FMAL) at the Lucy Family Institute for Data & Society. I obtained my bachelor’s degree from Sichuan University in 2024. This summer, I am working with Prasanna Sattigeri at the MIT-IBM Watson AI Lab.

Previously, I was a visiting student under the guidance of Prof. Lichao Sun, with additional mentorship from Prof. Philip S. Yu. Before that, I worked with Prof. Jie Tang and Dr. Xiao Liu at Tsinghua University.

I welcome opportunities to connect with colleagues in my field as well as in interdisciplinary areas, as I believe collaboration is immensely valuable.
My recent research centers on scientific foundation models, with a particular emphasis on their trustworthiness. Concurrently, I am developing dynamic evaluation protocols tailored to generative models.
If you are interested in my research, please feel free to reach out via email or in person.

Research Interests

My research centers on three pivotal directions:

  • Trustworthy, Aligned, and Democratically Governed Generative Foundation Models: This line of inquiry seeks to develop robust frameworks for evaluating trustworthiness and to identify strategies for enhancing the trustworthiness of these models within specific application domains. This includes: ICML'24, NAACL'24, ACM CCS'24, WWW'24, EMNLP'24, NeurIPS'24, and ICLR'25d.
  • Data-Driven Scalable Alignment for General-Purpose AI Systems: This research emphasizes data-centric methods that enable scalable model alignment and evolution, ensuring that models adhere to human values and ethical paradigms throughout development. This includes: ACL'24, EMNLP'24, ICLR'25a, and ICLR'25b.
  • Scientific AI and Societal AI: This research area critically assesses the practical impact of generative models, with a particular focus on their applications in AI4Science, exploring their transformative potential and interdisciplinary contributions in areas such as agentic models, the social sciences, and beyond. This includes: ICLR'24, Preprint, and ICLR'25c.

News

May. 2025   Two papers were accepted to ACL 2025 (1 Main + 1 Findings).
Mar. 2025   TrustEval was accepted to the NAACL 2025 Demo track, and UPME was accepted to CVPR 2025.
Jan. 2025   Four papers were accepted to ICLR 2025! I was selected for the KAUST Rising Stars in AI Symposium 2025 (24 of 300+ applicants).
Dec. 2024   I will join IBM Research as a Research Scientist Intern in Summer 2025. See you in Cambridge, MA.
Sep. 2024   HonestLLM was accepted to NeurIPS 2024. Congratulations to Chujie! Another paper was accepted to the main conference of EMNLP 2024.
Aug. 2024   Our work on attacking LLM-as-a-Judge was accepted to ACM CCS 2024.
Jul. 2024   Awarded the OpenAI Researcher Access Program.
May. 2024   TrustLLM was accepted to ICML 2024. Another paper was accepted to the main conference of ACL 2024.
Mar. 2024   One paper was accepted to NAACL 2024, and another was accepted as a short paper at WWW 2024.
Jan. 2024   MetaTool was accepted to ICLR 2024!

Selected Publications

Disclaimer: This material is presented to ensure the timely dissemination of scholarly works. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms invoked by each author’s copyright.

*: Equal Contribution

DataGen: Unified Synthetic Dataset Generation via Large Language Models

Yue Huang*, Siyuan Wu*, Chujie Gao, Dongping Chen, Qihui Zhang, Yao Wan, Tianyi Zhou, Chaowei Xiao, Jianfeng Gao, et al.

The Thirteenth International Conference on Learning Representations (ICLR 2025)

Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge

Jiayi Ye*, Yanbo Wang*, Yue Huang*, Dongping Chen, Qihui Zhang, Nuno Moniz, Tian Gao, Werner Geyer, Chao Huang, Pin-Yu Chen, Nitesh V Chawla, Xiangliang Zhang

The Thirteenth International Conference on Learning Representations (ICLR 2025)

GUI-World: A GUI-oriented Dataset for Multimodal LLM-based Agents

Dongping Chen*, Yue Huang*, Siyuan Wu, Jingyu Tang, Huichi Zhou, Qihui Zhang, Zhigang He, et al.

The Thirteenth International Conference on Learning Representations (ICLR 2025)

Interleaved Scene Graph for Interleaved Text-and-Image Generation Assessment

Dongping Chen, Ruoxi Chen, Shu Pu, Zhaoyi Liu, Yanru Wu, Caixi Chen, Benlin Liu, Yue Huang, Yao Wan, Pan Zhou, Ranjay Krishna

The Thirteenth International Conference on Learning Representations (ICLR 2025 Spotlight)

Cross-Lingual Pitfalls: Automatic Probing Cross-Lingual Weakness of Multilingual Large Language Models

Zixiang Xu*, Yanbo Wang*, Yue Huang*, Xiuying Chen, Jieyu Zhao, Meng Jiang, Xiangliang Zhang

The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025)

Beyond Single-Value Metrics: Evaluating and Enhancing LLM Unlearning with Cognitive Diagnosis

Yicheng Lang*, Kehan Guo*, Yue Huang, Yujun Zhou, Haomin Zhuang, Tianyu Yang, Yao Su, Xiangliang Zhang

Findings of The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025 Findings)

UPME: An Unsupervised Peer Review Framework for Multimodal Large Language Model Evaluation

Qihui Zhang, Munan Ning, Zheyuan Liu, Yanbo Wang, Jiayi Ye, Yue Huang, Shuo Yang, Xiao Chen, Yibing Song, Li Yuan

The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2025 (CVPR 2025)

TRUSTEVAL: A Dynamic Evaluation Toolkit on Trustworthiness of Generative Foundation Models

Yanbo Wang*, Jiayi Ye*, Siyuan Wu*, Chujie Gao, Yue Huang, Xiuying Chen, Yue Zhao, Xiangliang Zhang

2025 Annual Conference of the North American Chapter of the Association for Computational Linguistics -- System Demonstration (NAACL 2025 Demo)

TrustLLM: Trustworthiness in Large Language Models

Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, et al.

2024 International Conference on Machine Learning (ICML 2024)

(Highlighted by the United States Department of Homeland Security (DHS) and the International AI Safety Report; invited talk at IBM Research)

MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use

Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, et al.

The Twelfth International Conference on Learning Representations (ICLR 2024)

1+1>2: Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators?

Yue Huang*, Chenrui Fan*, Yuan Li, Siyuan Wu, et al.

The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024)

HonestLLM: Toward an Honest and Helpful Large Language Model

Chujie Gao*, Siyuan Wu*, Yue Huang*, Dongping Chen*, Qihui Zhang*, et al.

Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024)

Optimization-based Prompt Injection Attack to LLM-as-a-Judge

Jiawen Shi, Zenghui Yuan, Yinuo Liu, Yue Huang, Pan Zhou, Lichao Sun, Neil Zhenqiang Gong

The ACM Conference on Computer and Communications Security (ACM CCS 2024)

LLM-as-a-Coauthor: Can Mixed Human-Written and Machine-Generated Text Be Detected?

Qihui Zhang*, Chujie Gao*, Dongping Chen*, Yue Huang, et al.

2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (Findings of NAACL 2024)

AlignBench: Benchmarking Chinese Alignment of Large Language Models

Xiao Liu*, Xuanyu Lei*, Shengyuan Wang, Yue Huang, Zhuoer Feng, et al.

The 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024)

From Creation to Clarification: ChatGPT's Journey Through the Fake News Quagmire

Yue Huang, Kai Shu, Philip S. Yu, Lichao Sun

2024 ACM Web Conference (WWW 2024)

Talks

May. 2025   Toward Socially Impactful and Trustworthy Generative Foundation Models @ University of Illinois Urbana-Champaign [Slides]
Apr. 2025   On the Trustworthiness of Generative Foundation Models @ KAUST Rising Stars in AI Symposium 2025
Mar. 2025   Trustworthiness in Large Language Models @ University of Virginia
Feb. 2025   Toward Socially Impactful and Trustworthy Generative Foundation Models @ University of Southern California [Slides]
Jul. 2024   Bias of Large Language Models @ Technical University of Munich
Feb. 2024   Trustworthiness in Large Language Models @ IBM Research

Honors and Awards

Jan. 2025   KAUST AI Rising Star
Jul. 2024   OpenAI Researcher Access Program
Jan. 2024   Microsoft Accelerate Foundation Models Research

Academic Participation

  • Journal Reviewer: IEEE Transactions on Artificial Intelligence (TAI), IEEE Transactions on Dependable and Secure Computing (TDSC), IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), ACM Transactions on Intelligent Systems and Technology (ACM TIST)

  • Conference Reviewer: NeurIPS, ICLR, ICML, ICDM, WWW, COLM, ACL Rolling Review, EMNLP Demo Track (2024), NAACL Demo Track (2025), ACL Demo Track (2025)

  • Technical Committee Member of 2024 IEEE Computer Society North America Student Challenge

Education

Sep. 2024 – Present   Ph.D. in Computer Science and Engineering, University of Notre Dame
Sep. 2020 – Jun. 2024   B.Eng. in Cybersecurity, Sichuan University

Internships

May. 2025 – Present   Research Intern, MIT-IBM Watson AI Lab
Sep. 2023 – Jan. 2024   Research Intern

Misc

  • I spent 18 years in my hometown, Fujian 🇨🇳, and had 4 wonderful years of university life in Sichuan 🌶️ (I can handle spicy food!).

  • I love exchanging ideas with people from different fields 🌍—it helps me see the world more broadly.

  • My favorite singers are Eason Chan and Steve Chou (小剛) 🎤. Lately, I’ve been listening to Evangeline Wong (王艷薇) 🎶.

  • My favorite sports are swimming 🏊 and badminton 🏸. I also enjoy capturing scenic moments with my camera 📷.

  • I’m deeply grateful to those who’ve helped me along the way 🙏—thank you for helping me go further!