It’s an open question whether we could construct a utility function that is, in the ultimate analysis, Safe without being Fun.
Personally, I’m almost hoping the answer is no. I’d love to see the faces of all the world’s Very Serious People as we ever-so-seriously explain that if they don’t want to be killed to the last human being by a horrible superintelligent monster, they’re going to need to accept Fun as their lord and savior ;-).
Almost everything about FAI is an open question. What do you get if you multiply a bunch of open questions together?