University of California, Berkeley | Fall 2025
Instructor: Serina Chang (serinac@berkeley.edu)
Time: TuTh 2:00-3:30pm
Location: Wheeler 200
Office Hours: By appointment
This course will explore the intersection of machine learning (ML) and human behavior. The course will combine paper presentations, lectures, and a final project. We will cover three units:
Unit I: Modeling Human Behavior with ML. Predicting or simulating human behaviors is useful when behaviors are difficult to observe (e.g., for cost or privacy reasons) or cannot be observed (e.g., future or counterfactual behaviors). In this unit, we'll begin with recent efforts to simulate behaviors with LLMs, including survey responses, experimental results, and social interactions. We will discuss challenges in this domain, such as bias, diversity, validation, generalization, and scalability, along with approaches to address them. We will start at the individual level and scale up to entire networks and societies, exploring both LLM-based and more traditional approaches to modeling human societies.
Unit II: Algorithmically Infused Societies. In this unit, we'll discuss the interplay between algorithms and social systems: algorithms shape human behaviors, and those behaviors in turn feed back into algorithms, creating feedback loops. We will study these loops in different contexts, such as recommender systems and social media feeds, and explore human awareness of algorithms and strategic shifts in behavior. We will end with a discussion of automating human decision-making, including how to compare human vs. algorithmic decisions under selective labels, and the validity and risks of such automation.
Unit III: Adapting AI to Human Behavior. In this final unit, we will focus on how humans behave when interacting with AI systems and how we should adapt generative AI systems to work more effectively with humans. We will begin with analyses of human-AI interactions in the wild and discuss how to evaluate them. We will then explore various ways in which AI needs to adapt to humans in order to improve human-AI outcomes: understanding human intents (e.g., given ambiguous user queries), learning individual preferences and personalizing models, mitigating overreliance and providing explanations, and striving for complementarity.
Course topics were inspired in part by Johan Ugander's course on Social Algorithms and Joon Sung Park's course on AI Agents and Simulations. The course form was adapted from Sewon Min's course on Data-Centric Large Language Models.