Another case I’d like to be considered more is “if we can’t/shouldn’t control the AIs, what can we do to still have influence over them?”
Thermite. Destroying or preventing them is the ONLY option in that situation. (Well, I suppose you could launch them out of our future light cone.)
No; I said I’d like the case to be considered. What you are doing is NOT considering it.
Considering all the other alternatives, it’s rather fortunate that we have thermite, an expanding cosmos and special relativity at our disposal for influencing cause and effect. Without those we’d be screwed!
Thermite. Destroying or preventing them is the ONLY option in that situation. (Well, I suppose you could launch them out of our future light cone.)
I hope that was a joke because that doesn’t square with our current understanding of how physics works...
You are mistaken.
I’m pretty sure I’m not mistaken. At the risk of driving this sidetrack off a cliff...
Once an object (in this case, a potentially dangerous AI) is in our past light cone, the only way for its world line to stay outside of our future light cone forever (besides terminating it through thermite destruction as mentioned above) is for it to travel at the speed of light or faster. That was the physics nitpick I was making. In short, destroy it because you cannot send it far enough away fast enough to keep it from coming back and eating us.
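A minimal sketch of the flat-spacetime version of this claim (my own illustration, assuming special relativity only and ignoring cosmic expansion, which the reply below brings in): anything launched at less than light speed stays inside our future light cone forever, so it remains reachable, and can always come back.

```python
# Sketch in flat Minkowski spacetime, units with c = 1 (an assumption for illustration):
# an object launched from x = 0 at t = 0 with speed v < 1 follows x = v * t,
# while our future light cone at time t extends out to x = t.
def inside_future_light_cone(v: float, t: float) -> bool:
    """True if an object moving at constant speed v (as a fraction of c)
    is still inside our future light cone at time t (our frame)."""
    return v * t < t  # the gap (1 - v) * t only grows with t

for v in (0.5, 0.9, 0.999):
    print(v, all(inside_future_light_cone(v, t) for t in (1.0, 1e6, 1e12)))
# Every sub-light speed prints True: without expansion you cannot send it
# far enough away, fast enough, to keep it out of reach.
```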
Close, but the tricky part is that the universe can expand at greater than the speed of light. Nothing that can influence cause and effect (photons, for example) can travel faster than c, but the fabric of spacetime itself can expand faster than the speed of light. Looking at (models of) the first 10^-30 seconds highlights this to an extreme degree. Even now, some of the galaxies that are visible to us are receding from us at more than a light year per year. That means that the light they are currently emitting (if any) will never reach us.
To launch an AI out of our future light cone, you must send it past a point that the expansion of the universe is carrying away from us at c. That point lies on the edge of our future light cone, and beyond it the AI can never touch us.
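For rough scale (a back-of-envelope sketch with numbers I am assuming, not taken from the thread): Hubble’s law v = H0 · d says recession speed grows with distance, so the point described above sits at roughly the Hubble radius c / H0.

```python
# Back-of-envelope estimate, assuming a Hubble constant of ~70 km/s/Mpc:
# at what distance does the expansion carry a point away from us at c?
C_KM_S = 299_792.458      # speed of light, km/s
H0 = 70.0                 # assumed Hubble constant, km/s per megaparsec
LY_PER_MPC = 3.2616e6     # light-years per megaparsec

hubble_radius_mpc = C_KM_S / H0                    # d such that H0 * d = c
hubble_radius_gly = hubble_radius_mpc * LY_PER_MPC / 1e9

print(f"Recession reaches c at roughly {hubble_radius_gly:.0f} billion light-years")
# ~14 billion light-years, which is why the reply below calls launching the AI
# box across the universe for a few billion years the impractical option.
```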
So you’re positing a technique that takes advantage of inflationary theory to permanently get rid of an AI. Thermite—very practical. Launching the little AI box across the universe at near light-speed for a few billion years until inflation takes it beyond our horizon—not practical.
To bring this thread back onto the LW Highway...
It looks like you fell into a failure mode of defending what would otherwise have been a throwaway statement—probably to preserve the appearance of consistency, the desire not to be wrong in public, etc. (Do we have a list of these somewhere? I couldn’t find examples in the LW wiki.) A better response to “I hope that was a joke...” than “You are mistaken” would be “Yeah. It was hyperbole for effect.” or something along those lines.
A better initial comment from me would have been to phrase it as an actual question, since I thought you might have had a genuine misunderstanding about light cones and world lines. Instead it came off as hostile, which wasn’t what I intended.
I don’t think that wedrifid made those remarks to save face or the like, since wedrifid is the individual who proposed both thermite and the light cone option. The light cone option was clearly humorous, and wedrifid then explained how it would work (for some value of work). If I am reading this correctly, there was no serious intent in that proposal at all beyond emphasizing that wedrifid sees destruction as the only viable response.
Thank you, Joshua. I was going to let myself have too much fun with my reply, so it is good that you beat me to it. I’ll allow myself to add two responses, however.
It looks like you fell into a failure mode of defending what would otherwise have been a throwaway statement—probably to preserve the appearance of consistency, the desire not to be wrong in public, etc. (Do we have a list of these somewhere?
The relevant failure mode here is “other optimising”.
A better response to “I hope that was a joke...” than “You are mistaken” would be “Yeah. It was hyperbole for effect.” or something along those lines.
No, no, no. That would be wrong, inasmuch as it accepts a false claim about physics! Direct contradiction is exactly what I want to convey. This is wrong on a far more basic level than the belief that we could control, or survive, an unfriendly GAI. There are even respected experts (who believe their expertise is relevant) who share that particular delusion—Robin Hanson, for example.
Woah, that is a lot of divs I had to count!