It seems that as long as you don’t solve those problems, a rational agent might have a nearly infinite incentive to expend all available resources on attempting to leave this universe, hack the matrix, or undertake other crazy-seeming stunts.
I don’t think this is a significant practical problem.
We have built lots of narrow intelligences. They work fine, and this just doesn’t seem to be much of an issue.