On the other hand, the basics of decision theory do not look that complicated—you just maximise expected utility. The problems seem to be mostly down to exactly how to do that efficiently.
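For concreteness, the textbook rule really is a one-liner: pick the action with the highest probability-weighted payoff. Here is a minimal sketch of "just maximise expected utility", assuming a toy setting with a finite action set and known outcome probabilities (both assumptions here, and nothing like AIXI's actual machinery):

```python
# Minimal expected-utility maximiser: a toy sketch, not AIXI.
# Assumes a finite action set and a known outcome model (hypothetical).

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(model):
    """model: dict mapping each action to its (probability, utility) pairs."""
    return max(model, key=lambda a: expected_utility(model[a]))

# Toy example: two actions with different risk profiles.
model = {
    "safe":   [(1.0, 1.0)],               # certain payoff of 1.0
    "gamble": [(0.5, 3.0), (0.5, -2.0)],  # expected payoff of 0.5
}
print(best_action(model))  # -> "safe"
```

The hard part, of course, is everything the toy model assumes away: where the outcome probabilities and the utility function come from, and how to compute the maximisation tractably.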
Isn’t AIXI a counter-example to that? We could give it unlimited computing power, and it would still screw up badly, in large part due to a broken decision theory, right?
Kinda, yes. Any problem is a decision theory problem—in a sense. However, we can get a long way without the wirehead problem, utility counterfeiting, and machines mining their own brains causing problems.
From the perspective of ordinary development these don't look like urgent issues: we can work on them once we have smarter minds. Nor need we worry much about failing to solve them, since if we can't solve these problems our machines won't work and nobody will buy them. It would take security considerations to prioritise these problems at this stage.