It’s been submitted, but I haven’t gotten any word on whether it’s accepted yet.
EDIT: Accepted!
How was it received at the conference?
Well, overall.
I think most people understood the basic argument: powerful reinforcement learners would behave badly, so we need to look for other frameworks. Getting that idea across was my biggest goal at the conference, but I didn’t get much further than that with most of the people I talked to.
Unfortunately, almost nobody seemed convinced that it was an urgent issue, or one that could be solved, so I don’t expect many people to start working on FAI because of me. Hopefully repeated exposure to SI’s ideas will win people over gradually.
Common responses I got when I failed to convince someone included:
“I don’t care what AGIs do, I just want to solve the riddle of intelligence.”
“Why would you want to control an AGI, instead of letting it do what it wants to?”
“Our system has many different reward signals, not just one. It has hunger, boredom, loneliness, etc.”
Ouch.
But keep up the good work regardless! Hopefully we’ll still convince them.
Thanks!