I feel very excited by the AI alignment discussion group I’m running at Oregon State University. Three weeks ago, most attendees didn’t know much about “AI security mindset”-ish considerations. This week, I asked the question “what, if anything, could go wrong with a superhuman reward maximizer which is rewarded for pictures of smiling people? Don’t just fit a bad story to the reward function. Think carefully.”
There was some discussion and initial optimism, after which someone said, “Wait, those optimistic solutions are just the ones you’d prioritize! What’s that called, again?” (It’s called anthropomorphic optimism.)
I’m so proud.