The way I think about it, if we can reduce one FAI problem to another FAI or AGI problem, which we know has to be solved anyway, that counts as solving the former problem (modulo the possibility of being wrong about the reduction, or being wrong about the necessity of solving the latter problem).
This is not how I use the term “solved”; also, the gist of my reply was that possibly one aspect of one aspect of a large problem had been reduced to an unsolved problem in UDT.
multilevel reasoning about physical laws and high-level objects
Also agreed, but I think it’s plausible that the solution to this could just fall out of a principled approach to the problem of logical uncertainty.
Thaaat sounds slightly suspicious to me. I mean it sounds a bit like expecting a solution to the One True Prior to fall out of the development of a principled probability theory, or like expecting a solution to AGI to fall out of a principled approach to causal models. I would expect a principled approach to logical uncertainty to look like the core of probability theory itself, with a lot left to be filled in to make an actual epistemic model. I would also think it plausible that a principled version of logical uncertainty would resemble probability theory in that it would still be too expensive to compute, and that an additional principled version of bounded logical uncertainty would be needed on top, and then a further innovation akin to causal models or a particular prior to yield bounded logical uncertainty that looks like multi-level maps of a single-level territory.
the self-referential aspects of the reasoning
Same with this one.
Same reply, plus specific mild skepticism relating to how current work on the Lobian obstacle hasn’t yet taken a shape that looks like it fills the logical-counterfactual symbol in UDT, plus specific stronger skepticism that it would be work on UDT qua UDT that burped out a solution to tiling agents rather than the other way around!
updating in cases where there’s no predetermined Cartesian boundary of what constitutes the senses
I don’t understand why you think it’s a problem in UDT. A UDT-agent would have some sort of sensory pre-processor which encodes its sensory data into an arbitrary digital format and then feeds that into UDT. UDT would compute an optimal input/output map, apply that map to its current input, and then send the output to its actuators. Does this count as having a “predetermined Cartesian boundary of what constitutes the senses”? Why do we need to handle cases where there is no such boundary?
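For concreteness, here is a minimal brute-force sketch in Python of the loop described above: enumerate every input/output map, score each one by prior-weighted expected utility, pick the best map, and only then apply it to the input actually received. The names (INPUTS, OUTPUTS, udt_decide, the toy prior and utility) are illustrative placeholders of my own, not part of anyone’s formalism.

```python
from itertools import product

# Illustrative sketch of the "compute an optimal input/output map, then apply it"
# loop; all names here are made up for the example.

INPUTS = ["i0", "i1"]    # possible encoded sense data from the pre-processor
OUTPUTS = ["a0", "a1"]   # possible actuator commands

def expected_utility(io_map, prior, utility):
    """Prior-weighted utility of committing to io_map across all worlds."""
    return sum(p * utility(world, io_map) for world, p in prior.items())

def udt_decide(current_input, prior, utility):
    # Enumerate every function from inputs to outputs, represented as a dict.
    all_maps = [dict(zip(INPUTS, choice))
                for choice in product(OUTPUTS, repeat=len(INPUTS))]
    best_map = max(all_maps, key=lambda m: expected_utility(m, prior, utility))
    return best_map[current_input]   # only now look at what we actually saw

# Toy usage: a single "world" that rewards answering a0 to i0 and a1 to i1.
toy_prior = {"w": 1.0}
toy_utility = lambda world, io_map: int(io_map["i0"] == "a0") + int(io_map["i1"] == "a1")
assert udt_decide("i1", toy_prior, toy_utility) == "a1"
```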
Let’s say you add a new sensor. How do you remap? We could maybe try to reframe this as a tiling problem where agents create successor agents which then have new sensors… whereupon we run into all the usual current tiling issues and Lobian obstacles. Thinking about this in a natively naturalized mode, it doesn’t seem too unnatural to me to try to adopt a bridge hypothesis to an AI that can choose to treat arbitrary events in RAM as sensory observations and condition on them. This does not seem to me to mesh as well with native thinking in UDT, the way I wrote out the equation. Again, it’s possible that we could make the two mesh via tiling, assuming that tiling with UDT agents optimizing over a map where actions included building further UDT agents introduced no further open problems or free variables or anomalies into UDT. But that’s a big assumption.
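To make the contrast a bit more concrete, here is a rough sketch (hypothetical names, my own illustration rather than anyone’s proposal): with a fixed Cartesian boundary, the optimal input/output map is keyed on a fixed sensory type, so a new sensor changes the map’s domain and forces a remapping; treating arbitrary events in RAM as observations instead means conditioning on a predicate over the agent’s own memory state, so a new sensor only adds entries to that state.

```python
# Hypothetical sketch: "observations" as arbitrary predicates over the agent's RAM.

def ram_event(memory: dict, predicate) -> bool:
    """Treat an arbitrary event in the agent's memory as a sensory observation."""
    return predicate(memory)

memory = {"sonar": 3.2}                        # original sensor
memory["camera_frame"] = [[0, 255], [7, 9]]    # new sensor plugged in later; nothing to remap
saw_bright_pixel = ram_event(
    memory, lambda m: any(255 in row for row in m.get("camera_frame", []))
)
```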
And then all this is just one small aspect of building an AGI, not most of the way AFAICT.
...mild skepticism relating to how current work on the Lobian obstacle hasn’t yet taken a shape that looks like it fills the logical-counterfactual symbol in UDT...
Please take a look at my adaptation of parametric polymorphism to the updateless intelligence formalism.
...I mean it sounds a bit like expecting a solution to the One True Prior to fall out of the development of a principled probability theory...
I believe my new formalism circumvents the problem by avoiding strong prior sensitivity.
Same reply, plus specific mild skepticism relating to how current work on the Lobian obstacle hasn’t yet taken a shape that looks like it fills the logical-counterfactual symbol in UDT...
My proposal does look that way. I hope to publish an improved version soon which also admits logical uncertainty in the sense of being unable to know the zillionth digit of pi.
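As a toy illustration of that last sense of logical uncertainty (my own example, not the proposal itself): before the digit is computed, a reasoner can spread credence evenly over the ten possible values, and the uncertainty collapses only once the computation is actually carried out.

```python
from fractions import Fraction

# Credence over the (uncomputed) zillionth digit of pi: uniform before computing.
credence = {d: Fraction(1, 10) for d in range(10)}
assert sum(credence.values()) == 1

def after_computing(actual_digit: int) -> dict:
    """Once the digit has actually been computed, the logical uncertainty is gone."""
    return {d: Fraction(1 if d == actual_digit else 0) for d in range(10)}
```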
Thinking about this in a natively naturalized mode, it doesn’t seem too unnatural to me to try to adopt a bridge hypothesis to an AI that can choose to treat arbitrary events in RAM as sensory observations and condition on them.
In my formalism, input channels and arbitrary events in RAM have similar status.
Minor formal note: I have a mildly negative knee-jerk when someone repeatedly links to/promotes something referred to only as “my ___”. Giving your formalism a proper name might make you sound less gratuitously self-promotional (which I don’t think you are).
Hi Vulture, thanks for your comment!
Actually, I already have a name for the formalism: I call it the “updateless intelligence metric”. My intuition was that referring to my own invention by the serious-sounding name I gave it myself would sound more pompous / self-promotional than referring to it as just “my formalism”. Maybe I was wrong.