Your proof fails to fully account for the fact that any ASI must actually exist in the world. It would affect the world through more than just its outputs—e.g. if its computation produces heat, that heat would also affect the world. Your proof does not show that the sum of all effects of the ASI on the world (both intentional effects and side-effects of it performing its computation) could be aligned. Further, real computation takes time—it's not enough for the aligned ASI to produce the right output, it also needs to produce it at the right time. You did not prove this to be possible.
it’s not enough for the aligned ASI to produce the right output, it also needs to produce it at the right time
Yes, but again this is a mathematical object, so it has effectively infinitely fast compute. But I can also prove that FA:BGROW—FA for “functional approximation”—will require less thinking time than human brains.
fact that any ASI must actually exist in the world
It’s a mathematical existence proof that the ASI exists as a mathematical object, so this part is not necessary. However, I can also argue quite convincingly that an ASI similar to LT:BGROW (let’s call it FA:BGROW—FA for “functional approximation”) must easily fit in the world and also emit less waste heat than a team of human advisors.
Perhaps you are somewhat missing the point of what I am saying here? The issue is not the scale of the side-effect of a computation, it’s the fact that the side-effect exists at all, so any accurate mathematical abstraction of an actual real-world ASI must be prepared to deal with solving a self-referential equation.
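To sketch what I mean (the notation here is mine and purely illustrative, not taken from your proof): write the ASI’s policy as $\pi$, its observation of the world as $o(W)$, and the physical footprint of actually running it (heat, power draw, elapsed time) as $\epsilon(\pi)$. Then the world evolves as

$$W' = T\bigl(W,\ \pi(o(W)),\ \epsilon(\pi)\bigr),$$

and if $\pi$ is supposed to be chosen as aligned or optimal under some world model, an accurate world model must itself include $\epsilon(\pi)$, so the condition defining $\pi$ mentions $\pi$ on both sides. Alignment then has to be a property of the solution of that self-referential equation, not just of the output $\pi(o(W))$ considered in isolation.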
But it’s not that: it’s a mathematical abstraction of a disembodied ASI that lacks any physical footprint.