(cross-posted as a top-level post on my blog)
QACI, and plausibly PreDCA, rely on a true name for phenomena in the real world obtained via solomonoff induction, and thus talk about locating those phenomena in a theoretical giant computation of the universe, run from the beginning. it’s reasonable to be concerned that there isn’t enough compute for an aligned AI to actually do this. however, i have two responses:
isn’t there enough compute? supposedly, our past lightcone is a lot smaller than our future lightcone, and quantum computers seem to work. this is evidence that we can, at least in theory, build within our future lightcone a quantum computer simulating our past lightcone. the major hurdle here would be “finding out” a fully explanatory “initial seed” of the universe, which could take exponential time, but also could maybe not.
we don’t need to simulate the past lightcone. if you ask me what my neighbor was thinking yesterday at noon, the answer is that i don’t know! the world might be way too complex to figure that out without simulating it and scanning his brain. however, i have a reasonable distribution over guesses: he was more likely to be thinking about french things than korean things, and more likely to be thinking about his family than my family, et cetera. an aligned superintelligence can hold an increasingly refined distribution of guesses, and then maximize expected utility over the utility functions corresponding to each guess.
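this “distribution over guesses” move can be sketched as a toy expected-utility calculation; all the probabilities, actions, and utility numbers below are made up for illustration:

```python
# toy sketch: act well under uncertainty about an unobservable fact,
# by maximizing expected utility over a distribution of guesses.
# (all numbers and utility functions here are invented for illustration.)

# hypothetical distribution over what the neighbor was thinking about.
guesses = {
    "french things": 0.50,
    "his family":    0.30,
    "korean things": 0.15,
    "my family":     0.05,
}

# one hypothetical utility function per guess: how good each candidate
# action would be if that guess turned out to be the true one.
utilities = {
    "french things": {"act_a": 1.0, "act_b": 0.2},
    "his family":    {"act_a": 0.1, "act_b": 0.9},
    "korean things": {"act_a": 0.7, "act_b": 0.3},
    "my family":     {"act_a": 0.0, "act_b": 1.0},
}

def expected_utility(action):
    # weight each guess's utility by its probability.
    return sum(p * utilities[g][action] for g, p in guesses.items())

actions = ["act_a", "act_b"]
best = max(actions, key=expected_utility)
```

the point is that the agent never needs the exact answer to act reasonably: refining the distribution just shifts the expected utilities smoothly.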
you’ve thoroughly convinced me that your formalisms do not break in the way I thought they did because of the limitation I’m referencing in this post.
That said, I still think point 1 is invalid: no, I don’t think any computational system until the end of time will know to 5 decimal places what the temperature was 500 feet in the air above the instantaneous surface of the ocean [edit: at gps 0,0] at the exact moment the apple made contact with newton’s scalp. it’s just too chaotic.
Maybe you can manage three, or something. Or maybe I’m wrong and you can actually get more than 5. But then I can go further and say you definitely can’t get ten decimal places, or fifteen. There’s just no way you can collect all of the thermal noise from that moment in time and run it backwards. I think. Well, I guess I can imagine that this might not be true; there are ways the past might be unique and possible to exactly infer, and I definitely believe the past is quite unique at the level of humans who had any significant impact on history at all, so ancestor sims are in fact probably possible.
But as you say—gotta use simplifying abstraction for that to work.
“just too chaotic” is covered by “entire past lightcone”. no matter how chaotic, the past lightcone is one whole computation which you can just run naively, if you’ve got room. you get all decimal places, just like an agent in conway’s game of life can build a complete exact simulation of its past if it’s got room in the future.
(yes, maybe quantum breaks this)
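the game of life point can be made concrete: given the exact initial seed, naive forward simulation reproduces every intermediate state bit-for-bit, no matter how “chaotic” the rule is. a minimal sketch, on a tiny wrapped grid with standard life rules:

```python
# minimal Conway's Game of Life on a small wrapped (toroidal) grid:
# given the exact initial seed, re-running the computation reproduces
# the full history exactly -- the "run the whole past lightcone
# naively, if you've got room" point.

def step(grid):
    n = len(grid)
    out = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # count the eight neighbours, wrapping around the edges.
            live = sum(
                grid[(i + di) % n][(j + dj) % n]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
            )
            # a cell is alive next step with exactly 3 neighbours,
            # or 2 neighbours if it is already alive.
            out[i][j] = 1 if live == 3 or (grid[i][j] and live == 2) else 0
    return out

def history(seed, steps):
    # the complete, exact trajectory from the seed.
    states = [seed]
    for _ in range(steps):
        states.append(step(states[-1]))
    return states

# a horizontal blinker on a 5x5 torus.
seed = [[0] * 5 for _ in range(5)]
for j in (1, 2, 3):
    seed[2][j] = 1
```

two independent replays from the same seed agree at every step in every cell; the only cost is having the room (and time) to run it, which is exactly the point of contention.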
I guess core to my claim is I don’t think you ever have room in the universe. it would take astronomically huge amounts of waste.
Yeah, this is the part of the proposal that’s hardest for me to buy. Chaos theory means that small variations in initial conditions lead to massive differences pretty rapidly; and we can’t even measure an approximation of initial conditions. The whole “let’s calculate the universe from the start” approach seems to leave way too much scope to end up with something completely unexpected.
It’s not actually calculating the universe from the start. The formalism is intended to be specified such that identifying the universe to arbitrarily high precision ought to still converge. I’m still skeptical, but it does work to simply infer backwards in time, which ought to be a lot more tractable than forward in time (I think? maybe.), though still not friendly; and see above about the apple making contact with newton’s scalp. It’s definitely a key weak point; I have some ideas for how to fix it and need to talk them over in a lot more depth with @carado.
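One crude picture of the difference: instead of guessing an initial seed and running forward, enumerate candidate past states and keep only those whose forward image matches the present observation. This is a brute-force toy under a made-up 4-bit dynamics, not the actual formalism:

```python
# toy backward inference: recover the set of possible predecessors of an
# observed state, given known dynamics. brute-force enumeration -- a
# picture of the idea, not a tractable method. the dynamics below is a
# made-up 4-bit rule, chosen because it is lossy (many-to-one).

from itertools import product

def forward(bits):
    # each bit becomes the XOR of itself and its right neighbour
    # (wrapping). on an even number of bits this map is not invertible,
    # so inversion yields a *set* of candidate pasts.
    n = len(bits)
    return tuple(bits[i] ^ bits[(i + 1) % n] for i in range(n))

def infer_backwards(observed):
    # enumerate every candidate past state; keep those consistent
    # with the present observation.
    return [s for s in product((0, 1), repeat=len(observed))
            if forward(s) == tuple(observed)]

past = infer_backwards((1, 0, 1, 0))
```

Because the toy dynamics is many-to-one, inversion yields a set of candidate pasts rather than a single state; “identifying the universe to arbitrarily high precision” would correspond to narrowing that set as more observations come in.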
Inferring backwards would significantly reduce my concern, since you’re starting from a point we have information about.
I suppose that maybe we could calculate the Kolmogorov score of worlds close to us by backchaining, although that doesn’t really seem to be compatible with the calculation at each step being a formal mathematical expression.