“You’ve already said the friendly AI problem is terribly hard, and there’s a large chance we’ll fail to solve it in time. Why then do you keep adding these extra minor conditions on what it means to be ‘friendly’, making your design task all the harder?”
I think Eliezer regards these as sub-problems, necessary to the creation of a Friendly AI.