MohammadHossein Rezaei

Email: mhrezaei@arizona.edu

I am a Machine Learning Research Engineer at Scale AI, where I work on post-training and evaluation of LLMs. I worked on OnlineRubrics, an approach for post-training LLMs with evolving rubrics to improve alignment on tasks without verifiable ground truth.

I earned a B.S. in Computer Science from the University of Arizona, where I was a member of the Computational Language Understanding (CLU) Lab, advised by Eduardo Blanco. There, I worked on making SLMs more robust against negation through further pre-training and paraphrasing in affirmative terms.

Previously, I was a research intern at Stanford University in the SALT Lab, advised by Diyi Yang. There, I co-created EgoNormia, a benchmark for evaluating physical-social norm understanding in vision-language models.

news

Jan 05, 2026 I moved to New York City to join Scale AI as a Machine Learning Research Engineer, Post-training.
Dec 19, 2025 I graduated summa cum laude with a B.S. in Computer Science and a Minor in Mathematics. I delivered the keynote address at the College of Science Convocation Ceremony.
Dec 17, 2025 I was selected as the Overall Outstanding Senior for both the Computer Science Department and the College of Science at the University of Arizona.
Oct 09, 2025 Check out my internship project at Scale AI: Online Rubrics Elicitation from Pairwise Comparisons.
May 27, 2025 I joined Scale AI as a Research Intern, Post-training.
May 15, 2025 EgoNormia: Benchmarking Physical Social Norm Understanding has been accepted to ACL 2025 Findings.
Jan 22, 2025 My paper, Making Language Models Robust Against Negation, has been accepted to NAACL 2025. See you in Albuquerque!

selected publications

  1. Online Rubrics Elicitation from Pairwise Comparisons
    2025
  2. EgoNormia: Benchmarking Physical-Social Norm Understanding
    MohammadHossein Rezaei*, Yicheng Fu*, Phil Cuvin*, Caleb Ziems, Yanzhe Zhang, Hao Zhu, and Diyi Yang
    In Findings of the Association for Computational Linguistics: ACL 2025, Jul 2025
  3. Making Language Models Robust Against Negation
    MohammadHossein Rezaei, and Eduardo Blanco
    In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Apr 2025
  4. Paraphrasing in Affirmative Terms Improves Negation Understanding
    MohammadHossein Rezaei, and Eduardo Blanco
    In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Aug 2024