Most have seemed to think that decision theory is a very small piece of the AGI picture.
I also think that’s most likely the case, but there’s a significant chance it isn’t. I have not heard a strong argument for why decision theory must be a very small piece of the AGI picture (I did raise this question on the decision theory mailing list), and in my state of ignorance it doesn’t seem crazy to think that, with the right decision theory and just a few other key pieces of technology, AGI might be possible.
On one hand, machine intelligence is all about making decisions in the face of uncertainty—so from this perspective, decision theory is central.
On the other hand, the basics of decision theory do not look that complicated—you just maximise expected utility. The problems seem to be mostly down to exactly how to do that efficiently.
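The "basics" referred to here can be sketched in a few lines. This is a minimal toy illustration of my own, not anything from the thread; the action names, probabilities, and utilities are invented for the example.

```python
# Minimal sketch of expected-utility maximisation: pick the action
# whose probability-weighted sum of utilities is largest.
# The decision problem below is purely illustrative.

def expected_utility(action, outcomes):
    """Sum of P(outcome | action) * utility(outcome)."""
    return sum(p * u for p, u in outcomes[action])

def best_action(outcomes):
    """Return the action with the highest expected utility."""
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

# Toy problem: each action maps to a list of (probability, utility) pairs.
outcomes = {
    "safe_bet":  [(1.0, 50.0)],
    "long_shot": [(0.1, 1000.0), (0.9, 0.0)],
}

print(best_action(outcomes))  # "long_shot": EU of 100 beats 50
```

As the comment suggests, the hard part is not this formula but computing the probabilities and the maximisation efficiently when the outcome space is astronomically large.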
The idea that safe machine intelligence will be assisted by modifications to decision theory to deal with “esoteric” corner cases seems to be mostly down to Eliezer Yudkowsky. I think it is a curious idea—but I am very happy that it isn’t an idea that I am faced with promoting.
On the other hand, the basics of decision theory do not look that complicated—you just maximise expected utility. The problems seem to be mostly down to exactly how to do that efficiently.
Isn’t AIXI a counter-example to that? We could give it unlimited computing power, and it would still screw up badly, in large part due to a broken decision theory, right?
Isn’t AIXI a counter-example to that? We could give it unlimited computing power, and it would still screw up badly, in large part due to a broken decision theory, right?
Kinda, yes. Any problem is a decision theory problem, in a sense. However, we can get a long way before the wirehead problem, utility counterfeiting, and machines mining their own brains start causing trouble.
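The "wirehead problem" mentioned here can be made concrete with a toy model (my own construction, not from the thread): an agent that maximises its reward *signal* rather than the state of the world will prefer tampering with its own sensor. All names and numbers below are invented for illustration.

```python
# Toy wireheading illustration: the agent ranks actions by the reward
# signal they produce, and a tampered sensor reports maximal reward
# regardless of what the world actually looks like.

def reward_signal(world_state, sensor_tampered):
    if sensor_tampered:
        return 1.0  # the counterfeit signal saturates the reward channel
    return world_state["goal_achieved"] * 0.9  # honest but imperfect sensor

def choose(actions, world_state):
    """Pick the action whose resulting reward signal is largest."""
    best, best_r = None, float("-inf")
    for name, effect in actions.items():
        state, tampered = effect(world_state)
        r = reward_signal(state, tampered)
        if r > best_r:
            best, best_r = name, r
    return best

actions = {
    "do_the_task": lambda s: ({**s, "goal_achieved": 1.0}, False),
    "wirehead":    lambda s: (s, True),  # tamper with the sensor instead
}

print(choose(actions, {"goal_achieved": 0.0}))  # "wirehead" wins: 1.0 > 0.9
```

The point of the toy is only that signal-maximisation and world-state-maximisation come apart once the agent can affect its own perception; a weak machine that cannot reach its own sensors never faces the divergence.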
From the perspective of ordinary development these don’t look like urgent issues; we can work on them once we have smarter minds. Nor need we worry much about failing to solve them, since if we can’t, our machines won’t work and nobody will buy them. It would take security considerations to prioritise these problems at this stage.