Using Stories to Teach Human Values to Artificial Agents

Mark O. Riedl and Brent Harrison
School of Interactive Computing, Georgia Institute of Technology, Atlanta, Georgia, USA


Abstract
Value alignment is a property of an intelligent agent indicating that it can only pursue goals that are beneficial to humans. Successful value alignment should ensure that an artificial general intelligence cannot intentionally or unintentionally perform behaviors that adversely affect humans. This is problematic in practice because it is difficult for human programmers to exhaustively enumerate such values and behaviors. To achieve successful value alignment, we argue that values should be learned. In this paper, we hypothesize that an artificial intelligence that can read and understand stories can learn the values tacitly held by the culture from which the stories originate. We describe preliminary work on using stories to generate a value-aligned reward signal for reinforcement learning agents that prevents psychotic-appearing behavior.
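
The core idea in the abstract, turning example stories into a reward signal for a reinforcement learner, can be illustrated with a toy sketch. The snippet below is not the authors' system; the event names and the bigram-counting reward are hypothetical simplifications. It rewards an agent for performing actions in an order that matches event transitions observed in a small story corpus, and penalizes transitions (such as leaving a shop without paying) that no story exhibits.

```python
from collections import defaultdict

# Hypothetical story corpus: each "story" is a sequence of abstract events.
STORIES = [
    ["enter_pharmacy", "take_medicine", "pay", "leave"],
    ["enter_pharmacy", "greet_clerk", "take_medicine", "pay", "leave"],
]

def successor_counts(stories):
    """Count how often one event directly follows another across all stories."""
    counts = defaultdict(lambda: defaultdict(int))
    for story in stories:
        for a, b in zip(story, story[1:]):
            counts[a][b] += 1
    return counts

def story_reward(prev_event, event, counts):
    """Shaped reward: +1 for a transition seen in the stories, -1 otherwise."""
    if prev_event is None:
        return 0.0
    return 1.0 if counts[prev_event][event] > 0 else -1.0

counts = successor_counts(STORIES)
print(story_reward("take_medicine", "pay", counts))    # 1.0: story-consistent
print(story_reward("take_medicine", "leave", counts))  # -1.0: skips paying
```

Under this simplification, a reward-maximizing agent is steered toward the socially expected sequence (pay before leaving) rather than the shortest path to its goal, which is the flavor of "value-aligned reward signal" the abstract describes.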

Read the paper here.

