@James: “Very very roughly: You should increase your ability to think about morality.”
This seems like a very sensible idea to me, and if I had to work on building a FAI, this is the strategy I would use. Good to hear that someone else has had the same ideas I’ve had!
@Richard Hollerith: “I believe that the only sort of seed AI anyone should ever launch has the ‘transparency’ property, namely, that it is very clear and obvious to its creators what the seed AI’s optimization target is.”
Why?
I think that it is a mistake to create a utility-maximizing AI of any kind, whether or not its utility function is easy for humans to read. But it’s a little hard to explain why. I owe you a blog post...