Close, but the tricky part is that the universe can expand at greater than the speed of light. Nothing that can influence cause and effect (like photons) can travel faster than c, but the fabric of spacetime itself can expand faster than the speed of light. Looking at the (models of the) first 10^-30 seconds highlights this to an extreme degree. Even now, some of the galaxies that are visible to us are receding from us at more than a light year per year. That means that the light they are currently emitting (if any) will never reach us.
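For a rough sense of scale, here is a back-of-the-envelope sketch using Hubble's law, v = H0 · d. The Hubble constant value (~70 km/s/Mpc) is an assumed round figure, not something stated above:

```python
# Back-of-the-envelope check of the "receding faster than light" claim,
# assuming a Hubble constant of roughly 70 km/s/Mpc (an assumed round value).
C_KM_S = 299_792.458      # speed of light in km/s
H0 = 70.0                 # Hubble constant in km/s per megaparsec (assumption)
LY_PER_MPC = 3.2616e6     # light years per megaparsec

# Hubble's law: recession velocity v = H0 * d.
# Setting v = c gives the distance beyond which a galaxy recedes
# by more than one light year per year.
d_mpc = C_KM_S / H0
d_gly = d_mpc * LY_PER_MPC / 1e9

print(f"Recession exceeds c beyond roughly {d_gly:.0f} billion light years")
```

That works out to roughly 14 billion light years, well inside the ~46 billion light year radius of the observable universe, which is why some galaxies we can currently see are already receding faster than light.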
To launch an AI out of our future light cone, you must send it past a point that the expansion of the universe is carrying away from us at c. At that moment, that point lies on the edge of our future light cone, and beyond it the AI can never touch us.
So you’re positing a technique that takes advantage of inflationary theory to permanently get rid of an AI. Thermite—very practical. Launching the little AI box across the universe at near light-speed for a few billion years until inflation takes it beyond our horizon—not practical.
To bring this thread back onto the LW Highway...
It looks like you fell into a failure mode of defending what would otherwise have been a throwaway statement, probably out of a desire to preserve the appearance of consistency, to avoid being wrong in public, etc. (Do we have a list of these somewhere? I couldn’t find examples in the LW wiki.) A better response to “I hope that was a joke...” than “You are mistaken” would be “Yeah. It was hyperbole for effect.” or something along those lines.
A better initial comment from me would have been to phrase it as an actual question, because I thought you might have had a genuine misunderstanding about light cones and world lines. Instead, it came off as hostile, which wasn’t what I intended.
I don’t think that wedrifid made those remarks to save face or the like, since wedrifid is the individual who proposed both thermite and the light cone option. The light cone option was clearly humorous, and wedrifid then explained how it would work (for some value of work). If I am reading this correctly, there was no serious intent in that proposal at all, other than to emphasize that wedrifid sees destruction as the only viable response.
Thank you, Joshua. I was going to let myself have too much fun with my reply, so it is good that you beat me to it. I’ll allow myself to add two responses, however.
It looks like you fell into a failure mode of defending what would otherwise have been a throwaway statement, probably out of a desire to preserve the appearance of consistency, to avoid being wrong in public, etc. (Do we have a list of these somewhere?
The relevant failure mode here is “other optimising”.
A better response to “I hope that was a joke...” than “You are mistaken” would be “Yeah. It was hyperbole for effect.” or something along those lines.
No, no, no. That would be wrong, inasmuch as it is accepting a false claim about physics! Direct contradiction is exactly what I want to convey. This is wrong on a far more basic level than the belief that we could control, or survive, an unfriendly GAI. There are even respected experts (who believe their expertise is relevant) who share that particular delusion, Robin Hanson for example.