This means that equation (20) in Hutter is written as a utility function over sense data, where the reward channel is just a special case of sense data. We can easily adapt this equation to talk about any function computed directly over sense data; we can get AIXI to optimize any aspect of its sense data that we please. We can't get it to optimize a quality of the external universe. One of the challenges I listed in my FAI Open Problems talk, and one of the problems I intend to talk about in my FAI Open Problems sequence, is to take the first nontrivial steps toward adapting this formalism. For example, take an equivalent of AIXI in a really simple universe with a really simple goal, something along the lines of a Life universe and a goal of making gliders, and specify something, given unlimited computing power, which would behave like it had that goal without pre-fixing the ontology of the causal representation to that of the real universe. That is, you want something that can range freely over ontologies in its predictive algorithms, but which still behaves like it's maximizing an outside thing like gliders instead of a sensory channel like the reward channel. This is an unsolved problem!
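To make the contrast concrete, here is a minimal, hypothetical Python sketch (the names and structure are mine, not Hutter's): the first function is a utility over the agent's own percept stream, the special case AIXI-style agents can optimize, while the second is a utility over an external Life-world state, the kind of thing the glider problem asks for.

```python
# Hypothetical sketch; illustrative names, not Hutter's formalism.

def sense_data_utility(percepts):
    """Utility over the agent's own observation/reward stream.
    Each percept is assumed to be an (observation, reward) pair; utility is
    just summed reward, the special case the AIXI equation optimizes."""
    return sum(reward for (_observation, reward) in percepts)

def count_gliders(life_grid):
    """Utility over the world itself: counts positions in a Game of Life grid
    where one phase/orientation of the glider appears as a subset of the live
    cells (deliberately crude; other phases and surrounding context ignored)."""
    GLIDER = [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
    live = {(r, c) for r, row in enumerate(life_grid)
            for c, cell in enumerate(row) if cell}
    rows, cols = len(life_grid), len(life_grid[0])
    return sum(1 for r0 in range(rows) for c0 in range(cols)
               if all((r0 + dr, c0 + dc) in live for (dr, dc) in GLIDER))
```

The first function can be wired into an expectimax over percept sequences because percepts are what the agent actually receives; the second takes the true grid as an argument, which the agent never has direct access to, so using it as the objective requires exactly the ontology-free "care about gliders" machinery described above.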
It gets more interesting if the computing power is not unlimited but strictly smaller than that of the universe in which the agent is living (excluding the ridiculous 'run a sim from the Big Bang and find yourself in it' non-solution). Also, it is not only an open problem for FAI, but also an open problem for a dangerous uFAI.
edit: actually, I would search for general impossibility proofs at that point. Also, keep in mind that having 'all possible models, weighted' is the ideal Bayesian approach, so it may be the case that simply striving for the most correct way of acting under uncertainty makes it impossible to care about any real-world goals.
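A toy version of 'all possible models, weighted', as a hedged sketch under my own assumptions (a finite set of models, with a 2^(-complexity) prior standing in for a Solomonoff-style prior):

```python
# Hypothetical Bayesian mixture over a finite set of predictive models.
from math import prod

def mixture_predict(models, history, next_symbol):
    """models: list of (complexity_in_bits, predict_fn) pairs, where
    predict_fn(history, symbol) returns P(symbol | history) under that model.
    Returns the mixture's probability of next_symbol given history."""
    def posterior_weight(complexity, predict_fn):
        prior = 2.0 ** (-complexity)
        likelihood = prod(predict_fn(history[:i], history[i])
                          for i in range(len(history)))
        return prior * likelihood

    weights = [posterior_weight(c, f) for (c, f) in models]
    total = sum(weights)
    return sum(w * f(history, next_symbol)
               for w, (_c, f) in zip(weights, models)) / total
```

Note that every model in such a mixture is a predictor of the observation stream; nothing in the machinery points at objects in the world, which is the worry: the most correct treatment of uncertainty may leave no handle for real-world goals.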
Also, it is rather interesting how one sample from the 'unethical AI design space' (AIXI) yielded something which, most likely, is fundamentally incapable of caring about a real-world goal, but is still an incredibly powerful optimization process if given enough computing power (edit: i.e. AIXI doesn't care if you live or die, but in a way quite different from a paperclip maximizer). Insofar as one previously had an argument that such a thing is incredibly unlikely, one ought to update and severely lower the probability that the methods employed for generating that argument are correct.
The ontology problem has nothing to do with computing power, except that limited computing power means you use fewer ontologies. The number might still be large, and for a smart AI not fixable in advance; until recently we didn't know about quantum fields, and new approximations and models are being invented all the time. If your last paragraph isn't talking about evolution, I don't know what it's talking about.
Downvoting the whole thing as probable nonsense, though my judgment here is influenced by numerous downvoted troll comments that the poster has made previously.
Limited computing power means that the ontologies have to be processed approximately (you can't simulate everything at the level of quarks all the way from the Big Bang), likely in some sort of multi-level model which can go down to the level of quarks but also has to be able to go up to the level of paperclips, i.e. it would have to be able to establish relations between ontologies at different levels of detail. It is not inconceivable that e.g. Newtonian mechanics would be part of any multi-level ontology, no matter what it has at the microscopic level. Note that while I am very skeptical about AI risk, this is an argument slightly in favour of the risk.
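A minimal sketch of the multi-level idea, under assumptions of my own (the microstate, the block-averaging map, and the "paperclip" detector are all made-up placeholders): the point is only that the goal is stated over the outputs of a map relating a fine-grained ontology to a coarser one.

```python
# Hypothetical two-level model: a detailed microstate, a coarse-graining map
# up to macro-level variables, and a goal evaluated at the macro level.
from dataclasses import dataclass
from typing import List

@dataclass
class MicroState:
    lattice: List[float]  # stand-in for the quark/field-level description

def coarse_grain(micro: MicroState, block: int = 4) -> List[float]:
    """Relate the fine ontology to a coarser one by block-averaging,
    standing in for 'quarks -> Newtonian-level objects'."""
    cells = micro.lattice
    return [sum(cells[i:i + block]) / block
            for i in range(0, len(cells), block)]

def count_paperclips(macro_cells: List[float]) -> int:
    """Goal evaluated at the macro level (placeholder pattern detector)."""
    return sum(1 for v in macro_cells if v > 0.9)

# The relation between levels is the coarse_grain map itself: the goal is
# stated over its outputs, while prediction can run at whatever level of
# detail the available computing power allows.
```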