Shane: Consider a system that simply accepts input information and integrates it into a huge probability distribution that it maintains. We can then query the oracle by simply examining this distribution.
It is the same AI box with a terminal, only this time it doesn't "answer questions" but "maintains a distribution". Assembling accurate beliefs, or a model of some sort, is a goal (an implicit narrow target) like any other. So the usual subgoals appear: acquire resources to compute the answer more accurately, or break out and wirehead. Whether that is practically possible is a separate question, but it concerns handicaps, not the shape of the AI.
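To make the proposal concrete, here is a minimal sketch of a "distribution-maintaining" oracle as a plain Bayesian updater: it folds observations into a posterior, and querying it means reading that posterior rather than asking it to act. Everything here (the hypothesis space, priors, and likelihood function) is illustrative and not part of Shane's description.

```python
# Toy sketch of the proposal: the system's only activity is Bayesian
# updating over a fixed hypothesis space. "Querying the oracle" means
# examining the posterior, not requesting an action or an answer.
# Hypotheses, priors, and likelihoods below are purely illustrative.

from typing import Callable, Dict

class DistributionOracle:
    def __init__(self, priors: Dict[str, float],
                 likelihood: Callable[[str, str], float]):
        self.posterior = dict(priors)   # P(h) for each hypothesis h
        self.likelihood = likelihood    # P(observation | h)

    def integrate(self, observation: str) -> None:
        """Fold one piece of input into the distribution (Bayes' rule)."""
        for h in self.posterior:
            self.posterior[h] *= self.likelihood(observation, h)
        total = sum(self.posterior.values())
        for h in self.posterior:
            self.posterior[h] /= total

    def query(self) -> Dict[str, float]:
        """Examine the distribution; the oracle is read, it never 'answers'."""
        return dict(self.posterior)

# Example: two hypotheses about a coin.
oracle = DistributionOracle(
    priors={"fair": 0.5, "biased": 0.5},
    likelihood=lambda obs, h: {"fair": 0.5, "biased": 0.9}[h] if obs == "H"
               else {"fair": 0.5, "biased": 0.1}[h],
)
for flip in "HHHH":
    oracle.integrate(flip)
print(oracle.query())  # posterior has shifted toward "biased"
```

Note that even in this toy form, "keep the posterior accurate" is an optimization target; the sketch only lacks the worrying subgoals because it has no channel through which to pursue them.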