Ways for the AI to do other stuff:

Shoot the laser differently. For example, only make the comet smaller and change its orbit so that some arbitrary city on Earth gets blown up, with knock-on effects.
Hide a message that is far too difficult for P to detect, but which can be used to communicate with a much later AI.
Circumvent physics entirely using advanced decision theories (counterfactual trade, etc.).
Things that can’t be mentioned here because they are cognitive hazards.
The AI exploits a loophole to send a physical femtobot “through the output channel” and escape entirely.
Shoot the laser differently. For example, only make the comet smaller and change its orbit so that some arbitrary city on Earth gets blown up, with knock-on effects.
If the AI has a naive “save humans” utility function, I don’t see how this advantages it.
Hide a message that is far too difficult for P to detect, but which can be used to communicate with a much later AI.
That kind of trade is indeed a problem, but you’d need to have a dangerous “much later AI” in the first place, which is a very bad thing anyway...
Circumvent physics entirely using advanced decision theories (counterfactual trade, etc.).
That’s a general risk—I’ll analyse that later, if this seems to work.
The AI exploits a loophole to send a physical femtobot “through the output channel” and escape entirely.
The output channel is indeed dangerous—it is not constrained through this method, and needs to be controlled in other ways.
Normally I wouldn’t make a post this contentless, but I just HAVE to commend the general rationalist virtue with which this response was handled. I have no further responses.
If the AI has a naive “save humans” utility function, I don’t see how this advantages it.
I’ve met people who can lucidly argue that nuking a particular city or small region would produce many benefits for humanity as a whole, including reduced risk of politically-motivated extinction events down the line.
Also… you’re going to an awful lot of trouble, here, to calculate a firing solution for a beam of light to hit a non-accelerating object in space. Realistically, if we know where the comet is well enough to realize it’s headed for Earth, aiming a laser at it with non-sapient hardware is almost trivial. Why not an NP-complete problem?
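To make “almost trivial” concrete: for a non-accelerating target, the only subtlety is leading the shot by the light travel time, which a short fixed-point iteration handles on dumb hardware. A minimal sketch, with hypothetical positions and velocities (none of these numbers come from the original problem):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def firing_solution(p0, v, max_iter=50, tol=1e-9):
    """Aim direction for a light beam at a non-accelerating target.

    p0 : target position at fire time (m); v : target velocity (m/s).
    Solves t = |p0 + v*t| / C by fixed-point iteration, which converges
    almost immediately because |v| << C for any comet.
    """
    p0, v = np.asarray(p0, float), np.asarray(v, float)
    t = np.linalg.norm(p0) / C  # first guess: ignore target motion
    for _ in range(max_iter):
        t_next = np.linalg.norm(p0 + v * t) / C
        if abs(t_next - t) < tol:
            break
        t = t_next
    aim = p0 + v * t  # where the target will be when the light arrives
    return aim / np.linalg.norm(aim), t

# Hypothetical comet: 0.1 AU away, moving at roughly 37 km/s.
direction, travel_time = firing_solution([1.5e10, 0.0, 0.0],
                                         [-3e4, 2e4, 1e4])
print(direction, travel_time)  # converges in a few iterations
```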
Things that can’t be mentioned here because they are cognitive hazards.
Is this a cognitive hazard you came up with yourself, or a “standard” one?
Both, but the obvious one in the context of this site was the one I mostly had in mind.
Why not an NP-complete problem?
Why would an intelligent agent do better at an NP-complete problem than an unintelligent algorithm?
The laser problem is an illustration, a proof of concept of a developing idea. If that is deemed to work, I’ll see how general we can make it.
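For what it’s worth, the usual appeal of an NP-complete task in this kind of setup is the verification asymmetry: a solution may be hard to find, but a claimed solution is cheap to check, so the gatekeeper never has to trust the solver. A toy SAT verifier as a sketch (the instance and names are illustrative, not from the original setup):

```python
def verify_sat(clauses, assignment):
    """Check a claimed SAT solution in time linear in the formula size.

    clauses    : list of clauses; each clause is a list of int literals,
                 where literal k asserts variable |k| is True if k > 0,
                 False if k < 0.
    assignment : dict mapping variable number -> bool.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
print(verify_sat(clauses, {1: True, 2: True, 3: False}))   # True
print(verify_sat(clauses, {1: True, 2: False, 3: True}))   # False
```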