Yue Huang (黄跃; "Yue" is pronounced like "your") is a PhD student in Computer Science and Engineering (CSE) at the University of Notre Dame, starting Fall 2024, supervised by Prof. Xiangliang Zhang. Yue obtained a bachelor's degree from Sichuan University in 2024.

Previously, Yue was a visiting student advised by Prof. Lichao Sun, with additional mentorship from Prof. Philip S. Yu. Before that, Yue worked with Prof. Jie Tang and Dr. Xiao Liu at Tsinghua University.

I am seeking research collaborations and industry research internship positions. If you are interested, please contact me.

We are hosting the IEEE CS North America Student Challenge 2024 from Sep 30 to Oct 21. You are welcome to participate in the competition on Inferring User Latent Preference from Conversations with LLM and compete for prizes: First Prize, 2,500 USD; Second Prize, 1,500 USD; Third Prize, 500 USD.

💡 Research

My research is centered on three pivotal questions:

  1. How can we deepen our understanding of the trustworthiness of foundational generative models? This line of inquiry seeks to develop robust frameworks for evaluating trustworthiness and to identify strategies for enhancing the trustworthiness of these models within specific application domains. This includes: TrustLLM (ICML’24), LLM-as-a-Coauthor (NAACL’24), Attack LLM Judge (ACM CCS’24), FakeGPT (WWW’24), Multilingual Alignment (EMNLP’24), HonestLLM (NeurIPS’24), TrustNLP@NAACL’24, and ObscurePrompt.

  2. Is there a superior approach to achieving Artificial General Intelligence (AGI)? This research emphasizes data-centric methods to enable scalable model alignment and evolution, ensuring that models adhere to human values and ethical norms throughout the development process. This includes: AlignBench (ACL’24) and UniGen.

  3. To what extent can current AI technologies effectively benefit downstream applications? This research area critically assesses the practical impact of generative models, with a particular focus on their applications, including AI4Science, and explores their transformative potential and interdisciplinary contributions in areas such as agentic models, the social sciences, and beyond. This includes: MetaTool (ICLR’24), PsychometricBench, AwareBench, and GUI-World.

🔥 News

  • 2024.09.25  🎉🎉 HonestLLM has been accepted by NeurIPS 2024! Congratulations to Chujie!
  • 2024.09.19  🎉🎉 One paper has been accepted to the main conference of EMNLP 2024!
  • 2024.08.19  🎉🎉 Attack LLM-as-a-Judge has been accepted by ACM CCS 2024!
  • 2024.07.28  🎉🎉 Awarded OpenAI’s Researcher Access Program!
  • 2024.05.16  🎉🎉 One paper has been accepted to the main conference of ACL 2024!
  • 2024.05.02  🎉🎉 TrustLLM has been accepted by ICML 2024! Thanks to all the collaborators. See you in Vienna!
  • 2024.03.14  🎉🎉 One paper has been accepted by NAACL 2024! Congratulations to Qihui, Chujie, and Dongping!
  • 2024.03.05  🎉🎉 One paper has been accepted as a short paper of WWW 2024!
  • 2024.02.08  🎉🎉 Gave an invited talk on the TrustLLM project @ IBM Research!
  • 2024.01.15  🎉🎉 MetaTool has been accepted by ICLR 2024!
  • 2024.01.13  🎉🎉 Finished research internship at Tsinghua University KEG & Zhipu AI Inc.!

📝 Selected Publications

See more publications on my Google Scholar profile.


ICLR 2024 MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use

Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, Yao Wan, Neil Zhenqiang Gong, Lichao Sun

Code


ICML 2024 TrustLLM: Trustworthiness in Large Language Models

Yue Huang*, Lichao Sun*, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, et al. (*: co-corresponding authors)

Toolkit & Code | Website | Dataset | Data Map | Leaderboard | Document | Downloads

🎤 Talk

  • 2024.07 Invited Talk: Bias of Large Language Models @ TUM
  • 2024.02 Invited Talk: Trustworthiness in Large Language Models @ IBM Research

🎖 Honors and Awards

  • 2024.07 OpenAI’s Researcher Access Program and API
  • 2024.01 Awarded a Microsoft Accelerate Foundation Models Research grant (Project: TrustLLM; Lead PI: Lichao Sun)

📖 Education

💻 Internships

  • 2023.09 - 2024.01, Research Intern at Tsinghua University KEG & Zhipu AI Inc.