October 19, 2022
Dr. Anca Dragan to Present the Lytle and Colloquium Lectures
Dr. Anca Dragan, Professor of Electrical Engineering and Computer Sciences at UC Berkeley, will present two exciting lectures at the University of Washington on November 14th and 15th! Both lectures are open to the public, and we encourage everyone interested to attend. Information about the lectures is below.
Lytle Lecture “Robotics Algorithms that Take People into Account” | November 14th 3:15pm–4:30pm | UW HUB Lyceum | Register for the Lytle Lecture Here
Abstract: I discovered AI by reading “Artificial Intelligence: A Modern Approach”. What drew me in was the concept that you could specify a goal or objective for a robot, and it would be able to figure out on its own how to sequence actions in order to achieve it. In other words, we don’t have to hand-engineer the robot’s behavior — it emerges from optimal decision making. Throughout my career in robotics and AI, it has always felt satisfying when the robot would autonomously generate a strategy that I felt was the right way to solve the task, and it was even better when the optimal solution would take me a bit by surprise. In “Intro to AI” I share with students an example of this, where a mobile robot figures out it can avoid getting stuck in a pit by moving along the edge. In my group’s research, we tackle the problem of enabling robots to coordinate with and assist people: for example, autonomous cars driving among pedestrians and human-driven vehicles, or robot arms helping people with motor impairments (together with UCSF Neurology). And time and time again, what has sparked the most joy for me is when robots figure out their own strategies that lead to good interaction — when we don’t have to hand-engineer that an autonomous car should inch forward at a 4-way stop to assert its turn, for instance, but instead, the behavior emerges from optimal decision making. In this talk, I want to share how we’ve set up optimal decision making problems that require the robot to account for the people it is interacting with, and the surprising strategies that have emerged from that along the way. And I am very proud to say that you can also read a bit about these aspects now in the 4th edition of “Artificial Intelligence: A Modern Approach”, where I had the opportunity to edit the robotics chapter to include optimization and interaction.
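One way to picture the kind of optimal decision making the abstract describes is value iteration on a toy Markov decision process. The sketch below is purely illustrative: the grid layout, rewards, and slip probability are assumptions, not details from the lecture. Because movement is noisy, the computed policy avoids stepping toward the pit even when that would be the shortest path.

```python
# Illustrative value-iteration sketch on a toy grid world with a pit.
# All parameters (grid size, rewards, slip probability) are assumptions
# for illustration, not taken from the lecture.

ROWS, COLS = 3, 4
GOAL = (0, 3)        # terminal state, reward +1
PIT = (1, 3)         # terminal state, reward -1
GAMMA = 0.95         # discount factor
SLIP = 0.1           # probability of veering to each perpendicular direction

ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
PERP = {"up": ("left", "right"), "down": ("left", "right"),
        "left": ("up", "down"), "right": ("up", "down")}

def step(state, action):
    # Deterministic transition; bumping into a wall leaves the state unchanged.
    r, c = state
    dr, dc = ACTIONS[action]
    nr, nc = r + dr, c + dc
    return (nr, nc) if 0 <= nr < ROWS and 0 <= nc < COLS else state

def reward(state):
    if state == GOAL:
        return 1.0
    if state == PIT:
        return -1.0
    return -0.04  # small living cost encourages making progress

def q_value(s, a, V):
    # Intended move succeeds with prob 1 - 2*SLIP; otherwise the agent
    # slips to one of the two perpendicular directions.
    left, right = PERP[a]
    return ((1 - 2 * SLIP) * V[step(s, a)]
            + SLIP * V[step(s, left)]
            + SLIP * V[step(s, right)])

def value_iteration(tol=1e-6):
    V = {(r, c): 0.0 for r in range(ROWS) for c in range(COLS)}
    while True:
        delta = 0.0
        for s in V:
            if s in (GOAL, PIT):
                new_v = reward(s)  # terminal states keep their reward
            else:
                new_v = reward(s) + GAMMA * max(q_value(s, a, V) for a in ACTIONS)
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < tol:
            return V

def greedy_policy(V):
    return {s: max(ACTIONS, key=lambda a: q_value(s, a, V))
            for s in V if s not in (GOAL, PIT)}

V = value_iteration()
policy = greedy_policy(V)
```

Nothing in the code says "stay away from the pit"; the cautious route emerges from the optimization, which is the point the abstract makes about not hand-engineering behavior.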
Colloquium Lecture (Technical Talk) “On Human Models for Human-Robot Interaction” | November 15th 10:30am–11:30am | UW ECE 123
Abstract: Much of my work has dealt with human-robot interaction by pretending that people are like robots: assuming they optimize for utility, and run Bayes filters to maintain estimates over what they can’t directly observe. It’s somewhat surprising that this approach works at all, given that behavioral economics has long warned us that people are a bag of heuristics and cognitive biases, which is a far cry from “rational” robot behavior. On the other hand, treating people as black boxes and throwing a lot of data at the problem leads to models that are themselves a bag of spurious correlations that produce amazingly accurate predictions in distribution, but fail spectacularly outside of that context. This has left me with the question: how do we get accurate, yet robust, human models? One idea I want to share in this talk is that perhaps many of the aspects of human behavior that seem arbitrary, inconsistent, and time-varying, might actually be explained by acknowledging that people make decisions using inaccurate estimates that evolve over time. This is far away from a perfect model, but it greatly expands the space of useful models for robots and AI agents more broadly.
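For readers unfamiliar with the Bayes filters the abstract mentions, here is a minimal sketch of the idea: a robot maintains a probability distribution over something it cannot observe directly (here, which exit a person is heading toward) and updates it from noisy observations. The goals, observations, and likelihood values below are illustrative assumptions, not details from the talk.

```python
# Minimal discrete Bayes filter sketch. The goal set and the 0.8/0.2
# likelihood model are illustrative assumptions, not from the talk.
# The hidden variable (the person's intended goal) is assumed static,
# so there is no dynamics/prediction step, only measurement updates.

GOALS = ["left_exit", "right_exit"]

def normalize(belief):
    total = sum(belief.values())
    return {g: p / total for g, p in belief.items()}

def likelihood(observation, goal):
    # Assumed sensor model: an observed step toward a goal is more
    # likely if that goal is the person's true target.
    return 0.8 if observation == goal else 0.2

def bayes_update(belief, observation):
    # Posterior is proportional to likelihood times prior.
    return normalize({g: likelihood(observation, g) * belief[g]
                      for g in GOALS})

belief = {g: 1.0 / len(GOALS) for g in GOALS}  # uniform prior
for obs in ["right_exit", "right_exit", "left_exit", "right_exit"]:
    belief = bayes_update(belief, obs)
```

After mostly rightward observations the belief concentrates on the right exit, while the single leftward observation keeps some probability on the alternative; the abstract's question is what happens when the person's own internal estimates, unlike this tidy model, are inaccurate and shifting over time.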