MohammadHossein Rezaei

Email: mhrezaei@arizona.edu

I am a final-year undergraduate student at the University of Arizona majoring in Computer Science with a 4.0 GPA. I am a member of the Computational Language Understanding (CLU) Lab, advised by Eduardo Blanco, where I have worked on making SLMs more robust against negation through further pre-training and paraphrasing in affirmative terms.

Previously, I was a research intern at Stanford University in the SALT Lab, advised by Diyi Yang. There, I co-created EgoNormia, a benchmark for evaluating physical-social norm understanding in vision-language models.

In summer 2025, I was a post-training research intern at Scale AI, where I worked on OnlineRubrics, an approach for post-training LLMs with evolving rubrics to improve alignment on tasks without verifiable ground truth.

news

Oct 09, 2025 Check out my internship project at Scale AI: Online Rubrics Elicitation from Pairwise Comparisons.
May 27, 2025 I joined Scale AI as a Research Intern, Post-training.
May 15, 2025 EgoNormia: Benchmarking Physical Social Norm Understanding has been accepted to ACL 2025 Findings.
Jan 22, 2025 My paper, Making Language Models Robust Against Negation, has been accepted to NAACL 2025. See you in Albuquerque!
Aug 20, 2024 I participated in the LINXS Summer Research Program at Stanford University in the summer of 2024 as an undergraduate visiting research intern. I was advised by Diyi Yang in the SALT Lab.
May 16, 2024 My paper, Paraphrasing in Affirmative Terms Improves Negation Understanding, has been accepted to ACL 2024.
Dec 06, 2023 Our paper on Interpreting Indirect Answers to Yes-No Questions in Multiple Languages has been accepted to EMNLP Findings 2023.
Jul 01, 2023 I participated in the SoNIC Summer Research Workshop 2023 at Cornell University.

selected publications

  1. Online Rubrics Elicitation from Pairwise Comparisons
    2025
  2. EgoNormia: Benchmarking Physical-Social Norm Understanding
    MohammadHossein Rezaei*, Yicheng Fu*, Phil Cuvin*, Caleb Ziems, Yanzhe Zhang, Hao Zhu, and Diyi Yang
    In Findings of the Association for Computational Linguistics: ACL 2025, Jul 2025
  3. Making Language Models Robust Against Negation
    MohammadHossein Rezaei, and Eduardo Blanco
    In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Apr 2025
  4. Paraphrasing in Affirmative Terms Improves Negation Understanding
    MohammadHossein Rezaei, and Eduardo Blanco
    In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Aug 2024