Yue Huang (黄跃)
About Me
I’m a first-year PhD student in the MINE Lab of the Department of Computer Science and Engineering (CSE) at the University of Notre Dame, starting Fall 2024, supervised by Prof. Xiangliang Zhang. I obtained my bachelor’s degree from Sichuan University in 2024. Previously, I was a visiting student under the guidance of Prof. Lichao Sun, an experience enriched by mentorship from Prof. Philip S. Yu. Earlier, I worked with Prof. Jie Tang and Dr. Xiao Liu at Tsinghua University.
I welcome the opportunity to connect with colleagues in my field as well as those from interdisciplinary areas, as I believe collaboration is immensely valuable. If you are interested in my research, please do not hesitate to contact me via email.
Research Interests
My research is centered on three pivotal questions:
- How can we deepen our understanding of the trustworthiness of foundational generative models? This line of inquiry seeks to develop robust frameworks for evaluating trustworthiness and to identify strategies for enhancing the trustworthiness of these models within specific application domains. This includes: TrustLLM (ICML’24), LLM-as-a-Coauthor (NAACL’24), Attack LLM Judge (ACM CCS’24), FakeGPT (WWW’24), Multilingual Alignment (EMNLP’24), HonestLLM (NeurIPS’24), TrustNLP@NAACL’24, and ObscurePrompt.
- Is there a superior approach to achieving Artificial General Intelligence (AGI)? This research emphasizes data-centric methods to enable scalable model alignment and evolution, ensuring that models adhere to human values and ethical paradigms throughout development. This includes: AlignBench (ACL’24), Multilingual Alignment (EMNLP’24), and UniGen.
- To what extent can current AI technologies effectively benefit downstream applications? This research area critically assesses the practical impact of generative models, with a particular focus on their applications in AI4Science, exploring their transformative potential and interdisciplinary contributions in fields such as agentic models, the social sciences, and beyond. This includes: MetaTool (ICLR’24), PsychometricBench, AwareBench, and GUI-World.
News
- 2024.12 I will join IBM Research as a Research Scientist Intern in Summer 2025. See you in Cambridge, MA!
- 2024.09 HonestLLM has been accepted by NeurIPS 2024! Congratulations to Chujie! Another paper has been accepted to the main conference of EMNLP 2024!
- 2024.08 Attack LLM-as-a-Judge has been accepted by ACM CCS 2024!
- 2024.07 Awarded the OpenAI Researcher Access Program.
- 2024.05 TrustLLM has been accepted by ICML 2024! Thanks to all the collaborators. See you in Vienna! Another paper has been accepted to the main conference of ACL 2024!
- 2024.03 One paper has been accepted by NAACL 2024! Congratulations to Qihui, Chujie, and Dongping! Another paper has been accepted as a short paper at WWW 2024!
- 2024.02 Thanks for the invitation to give a talk on the TrustLLM project at IBM Research!
- 2024.01 MetaTool has been accepted by ICLR 2024! Finished my research internship at Tsinghua University KEG & Zhipu Inc.!
Selected Publications
Disclaimer: This material is presented to ensure the timely dissemination of scholarly works. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms invoked by each author’s copyright.
*: Equal Contribution
- TrustLLM: Trustworthiness in Large Language Models
Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, et al.
2024 International Conference on Machine Learning (ICML 2024)
(Highlighted by United States Department of Homeland Security (DHS), Invited Talk at IBM Research)
[Code&Toolkit] [Website] [Dataset] [Docs]
- MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use
Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, et al.
The Twelfth International Conference on Learning Representations (ICLR 2024)
[Code]
- 1+1>2: Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators?
Yue Huang*, Chenrui Fan*, Yuan Li, Siyuan Wu, et al.
The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024)
- HonestLLM: Toward an Honest and Helpful Large Language Model
Chujie Gao*, Siyuan Wu*, Yue Huang*, Dongping Chen*, Qihui Zhang*, et al.
Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024)
[Code]
- Optimization-based Prompt Injection Attack to LLM-as-a-Judge
Jiawen Shi, Zenghui Yuan, Yinuo Liu, Yue Huang, Pan Zhou, Lichao Sun, Neil Zhenqiang Gong
The ACM Conference on Computer and Communications Security (ACM CCS 2024)
[Code]
- LLM-as-a-Coauthor: Can Mixed Human-Written and Machine-Generated Text Be Detected?
Qihui Zhang*, Chujie Gao*, Dongping Chen*, Yue Huang, et al.
2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (Findings of NAACL 2024)
[Code] [Website]
- AlignBench: Benchmarking Chinese Alignment of Large Language Models
Xiao Liu*, Xuanyu Lei*, Shengyuan Wang, Yue Huang, Zhuoer Feng, et al.
The 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024)
[Code] [Website]
- From Creation to Clarification: ChatGPT’s Journey Through the Fake News Quagmire
Yue Huang, Kai Shu, Philip S. Yu, Lichao Sun
2024 ACM Web Conference (WWW 2024)
Talks
Honors and Awards
Academic Participation
- Journal Reviewer: IEEE Transactions on Artificial Intelligence (TAI), IEEE Transactions on Dependable and Secure Computing (TDSC), IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), ACM Transactions on Intelligent Systems and Technology (ACM TIST)
- Conference Reviewer: ICLR, ICML, ICDM, WWW, ACL Rolling Review, EMNLP Demo Track (2024)
- Technical Committee Member of the 2024 IEEE Computer Society North America Student Challenge
Education
Internships