(Sorry, this shouldn’t be directed just at you. I’m annoyed at how often I have to argue against this perception, and this paper happened to prompt me to actually write something.)
Seems like, aside from Stuart Russell, Max Tegmark (or whoever gave him the information below) is another person to blame for this. I just ran across this quote in his book Life 3.0 (while looking for something else):
A currently popular approach to the second challenge is known in geek-speak as inverse reinforcement learning, which is the main focus of a new Berkeley research center that Stuart Russell has launched. [...] However, a key idea underlying inverse reinforcement learning is that we make decisions all the time, and that every decision we make reveals something about our goals. The hope is therefore that by observing lots of people in lots of situations (either for real or in movies and books), the AI can eventually build an accurate model of all our preferences.
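To make the quoted idea concrete, here is a minimal sketch of the inference step in inverse reinforcement learning, under the common Boltzmann-rationality assumption (the toy setup, parameter values, and names are mine for illustration; they are not from Life 3.0 or from Russell's group's actual methods):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

n_features = 3
true_weights = np.array([1.0, -0.5, 2.0])  # the agent's hidden preferences (assumed for the toy)
BETA = 5.0  # rationality parameter: higher means closer to perfectly rational

def choice_logprobs(options, weights):
    """Log-probabilities of a Boltzmann-rational agent picking each option."""
    u = BETA * (options @ weights)
    return u - (u.max() + np.log(np.exp(u - u.max()).sum()))  # log-softmax

# Simulate observed decisions: each "situation" offers 4 options,
# each described by a feature vector.
situations = [rng.normal(size=(4, n_features)) for _ in range(200)]
choices = [rng.choice(4, p=np.exp(choice_logprobs(s, true_weights)))
           for s in situations]

def neg_log_likelihood(weights):
    """How poorly candidate reward weights explain the observed choices."""
    return -sum(choice_logprobs(s, weights)[c]
                for s, c in zip(situations, choices))

# Maximum-likelihood recovery of the preferences from behavior alone.
result = minimize(neg_log_likelihood, x0=np.zeros(n_features))
print("true weights:     ", true_weights)
print("recovered weights:", np.round(result.x, 2))
```

Each observed choice constrains which reward weights are plausible, and with enough observations the maximum-likelihood estimate converges on the simulated agent's true weights. Real human behavior is of course far messier than this toy's assumption of fixed preferences plus Boltzmann noise.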