Learning to Engage with Interactive Systems: A Field Study on Deep Reinforcement Learning in a Public Museum
ACM Transactions on Human-Robot Interaction, Volume 10, Issue 1
By Lingheng Meng, Daiwei Lin, Adam Francey, Rob Gorbet, Philip Beesley, Dana Kulić
Abstract—Physical agents that can autonomously generate engaging, life-like behavior will lead to more responsive and user-friendly robots and other autonomous systems. Although much progress has been made on one-to-one interaction in well-controlled settings, physical agents should also be capable of interacting with humans in natural settings, including group interaction. To generate engaging behaviors, an autonomous system must first be able to estimate its human partners' engagement level. In this article, we propose an approach for estimating engagement during group interaction that simultaneously takes into account active and passive interaction, and we use this estimate as the reward signal within a reinforcement learning framework to learn engaging interactive behaviors. The proposed approach is implemented in an interactive sculptural system deployed in a museum setting. We compare the learning system to a baseline that uses pre-scripted interactive behaviors. Analysis of sensor and survey data shows that adaptable behaviors within an expert-designed action space can achieve higher engagement and likeability.
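To make the core idea of the abstract concrete, the following is a minimal, illustrative sketch rather than the authors' implementation: a hypothetical engagement estimator combines active and passive interaction cues into a scalar, and that scalar is fed back as the reward in a generic reinforcement-learning interaction loop. The class names, the linear weighting, the toy environment, and the placeholder agent are all assumptions made for illustration only.

```python
"""Minimal sketch (not the paper's implementation): an engagement estimate,
combining hypothetical active and passive interaction cues, used as the
reward signal in a generic RL interaction loop."""
import random


class EngagementEstimator:
    """Combines active (direct triggering) and passive (presence/attention)
    interaction signals into a scalar reward. Weights are illustrative."""

    def __init__(self, w_active: float = 0.7, w_passive: float = 0.3):
        self.w_active = w_active
        self.w_passive = w_passive

    def reward(self, active: float, passive: float) -> float:
        # Both cues are assumed to be normalized to [0, 1].
        return self.w_active * active + self.w_passive * passive


class ToyInteractiveEnv:
    """Stand-in environment: emits random interaction cues each step."""

    def reset(self) -> float:
        return 0.0

    def step(self, action: float):
        active = random.random() * action   # toy assumption: engagement loosely tracks action intensity
        passive = random.random()
        next_obs = active + passive
        done = random.random() < 0.05
        return next_obs, active, passive, done


class RandomAgent:
    """Placeholder for a deep RL agent (e.g., an actor-critic learner)."""

    def act(self, obs: float) -> float:
        return random.random()

    def observe(self, obs, action, reward, next_obs, done):
        pass  # a real agent would store the transition and update its networks


def run_episode(env, agent, estimator, horizon: int = 200) -> float:
    """One interaction episode; the engagement estimate serves as the reward."""
    obs, total = env.reset(), 0.0
    for _ in range(horizon):
        action = agent.act(obs)
        next_obs, active, passive, done = env.step(action)
        r = estimator.reward(active, passive)
        agent.observe(obs, action, r, next_obs, done)
        total += r
        obs = next_obs
        if done:
            break
    return total


if __name__ == "__main__":
    total_reward = run_episode(ToyInteractiveEnv(), RandomAgent(), EngagementEstimator())
    print(f"episode return (cumulative engagement estimate): {total_reward:.2f}")
```

In the actual system described in the article, the environment would be the sensed museum space, the agent a deep RL learner selecting actions from an expert-designed action space, and the active/passive cues derived from the sculpture's sensors; the sketch above only shows how such an engagement estimate plugs into the learning loop as a reward.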