We have to take over the universe to save it by making the seed of an artificial general intelligence, that is undergoing explosive recursive self-improvement, extrapolate the coherent volition of humanity, while acausally trading with other superhuman intelligences across the multiverse.
...
Let’s take a closer look at the top-level presuppositions necessary to take the above quote seriously:
1. The many-worlds interpretation
2. Belief in the Implied Invisible
3. Timeless Decision Theory
4. Intelligence explosion
To take the above quote seriously, you have to assign a non-negligible probability to the truth of the conjunction of all four presuppositions, 1∧2∧3∧4.
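To see why the conjunction matters, here is a minimal sketch in Python (the individual probabilities are invented for illustration, not anyone’s actual estimates): even when every presupposition is at least as likely as not, the conjunction can fall well below 15%, assuming the claims are independent.

```python
# Probability of the conjunction 1∧2∧3∧4, assuming the four
# presuppositions are independent. The numbers are purely
# illustrative assumptions.
p = {
    "many-worlds interpretation": 0.6,
    "belief in the Implied Invisible": 0.9,
    "timeless decision theory": 0.5,
    "intelligence explosion": 0.5,
}

conjunction = 1.0
for claim, prob in p.items():
    conjunction *= prob

# Each claim is at least 50% likely, yet the conjunction is only 13.5%.
print(f"P(1∧2∧3∧4) = {conjunction:.3f}")  # -> 0.135
```

If the claims are positively correlated, the conjunction is higher, up to the minimum of the individual probabilities, so independence is the pessimistic end of the range.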
I think you’re being unfair here. Presumably you think we need Many-Worlds for acausal trade, but this is far from obvious. Possible Worlds would do it too, and there are various decision-theoretic ideas that make sense of it in a single world. Even beyond that, though, it’s not obvious that acausal trade is particularly important to SIAI’s main thesis. SIAI wants to (maybe) build a seed AI, not do acausal trade.
Belief in the Implied Invisible seems trivially true to me. While we could wrap it up in measure theory, it seems about as obvious as any piece of mathematics.
And lots of work has gone into non-TDT theories: other UDT-style theories, like the one Stuart has recently been discussing. Even then, I don’t see why ¬UDT → ¬Intelligence Explosion.
am I going to tell everyone to stop emitting CO2 because of that?
This is a bad example; it’s equally possible that we might be emitting too little CO2. There’s a symmetry here that isn’t obviously present in the AI case.
So far, nobody has been able to defeat arguments that resemble Pascal’s Mugging… One can only reject them based on a strong gut feeling that something is wrong.
This is untrue: bounded utility functions defeat the mugging. Maybe they’re a bad idea for other reasons, but there are systems that refuse the mugger for better reasons than a gut feeling.
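To make the bounded-utility point concrete, here is a minimal sketch (the utility function, probability, and stakes are all illustrative assumptions, not a canonical anti-mugging argument): an unbounded expected-utility maximizer can always be moved by a large enough claimed payoff, while a bounded one caps the expected avoided loss at P(mugger is honest) × U_MAX, which a small enough probability drives below the cost of paying.

```python
import math

# Mugger: "Give me $5 or I will destroy N units of value."
# All numbers below are illustrative assumptions.
P_MUGGER = 1e-10   # assumed probability that the mugger's threat is real
COST = 5.0         # utility cost of handing over the $5
U_MAX = 1e6        # cap of the bounded utility function

def bounded_utility(x: float) -> float:
    """Bounded utility: increasing in x, but never exceeds U_MAX."""
    return U_MAX * (1 - math.exp(-x / U_MAX))

for stakes in (1e9, 1e30, 1e100):
    # Unbounded agent: P_MUGGER * stakes grows without limit, so some
    # claimed stakes always makes paying look worthwhile.
    ev_unbounded = P_MUGGER * stakes - COST
    # Bounded agent: the expected avoided loss is at most
    # P_MUGGER * U_MAX = 1e-4, so paying $5 never looks worthwhile.
    ev_bounded = P_MUGGER * bounded_utility(stakes) - COST
    print(f"stakes={stakes:.0e}: unbounded EV={ev_unbounded:.3g}, "
          f"bounded EV={ev_bounded:.3g}")
```

Whether a bounded utility function is defensible on other grounds is a separate question, as noted above; the point is only that such an agent rejects the mugger by calculation rather than by gut feeling.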