Relatively new to the forum and just watched the 2 1⁄2 hour Yudkowsky video on Google. Excellent talk that really helped frame some of the posts here for me, though the audience questions were generally a distraction.
My biggest disappointment was that the one question which popped into my mind while watching, and which was actually posed, went unanswered because it would have taken about five minutes. The man who asked was told to raise it again at the end of the talk, but he never did.
It was the question about the Friendly AI: "Why are you assuming it knows the outcome of its modifications?"
Any pointer to the answer would be much appreciated.