Even granting that there are grabby aliens in your cosmic neighborhood (click here to chat with them*), I find the case for SETI-risk entirely unpersuasive (as in, trillionths-of-a-percent plausible, or indistinguishable from cosmic background uncertainty), and will summarize some of the arguments others have already made against it, along with a few of my own. I think it is so implausible that I don’t see any need to urge SETI to change their policy. [Throwing in a bunch of completely spitballed, mostly-meaningless, felt-sense order-of-magnitude probability estimates.]
Parsability. As Ben points out, conveying meaning is hard. Language is highly arbitrary; are the aliens really going to know enough about human languages to have a crack at composing bytestrings that parse as executable code? No chance if the transmission is undirected, 1% if it’s directed and intended to exploit my civilization in particular.
System complexity. Dweomite is correct that conveying meaning to computers is even harder. There is far too much flexibility, and there are far too many arbitrary, idiosyncratic choices baked into computer architectures and programming languages. No chance if undirected, 10% if directed, conditioning on all above conditions being fulfilled.
Transmission fidelity. If you want to transmit encrypted messages or program code, you can’t be dropping bits. Do you know what frequency I’m listening on, and what my sample depth is? The orbital period of my planet and the location of my telescope? What the interplanetary and terrestrial weather will be that day, given that you’re presumably light-years away (or you’d have chosen a different attack vector)? You want to mail me a bomb, but you’re shipping it in parts, expecting every piece to arrive, and asking me to assemble it myself as well? 0.01% chance if undirected, 1% if directed, conditioning on all above conditions being fulfilled.
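To make the fidelity problem concrete, here’s a toy back-of-the-envelope sketch (the payload sizes and bit-error rates are made-up illustrative numbers, not properties of any real SETI hardware): without an error-correcting scheme the receiver already knows how to undo, the chance that a sizable executable arrives with every bit intact collapses fast.

```python
# Toy sketch: probability that an uncorrected payload arrives with zero bit errors,
# assuming independent errors. All numbers below are illustrative assumptions.

def p_intact(payload_bytes: int, bit_error_rate: float) -> float:
    """Chance that every bit of the payload survives transmission."""
    n_bits = payload_bytes * 8
    return (1.0 - bit_error_rate) ** n_bits

for size in (1_000, 1_000_000, 100_000_000):   # 1 kB, 1 MB, 100 MB payloads
    for ber in (1e-9, 1e-6):                   # hypothetical bit-error rates
        print(f"{size:>11,} bytes @ BER {ber:g}: {p_intact(size, ber):.3e}")
```

You can of course pile on redundancy and error-correcting codes, but then I have to recognize and apply your coding scheme, which circles right back to the parsability objection.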
Compute. As MichaelStJules’s comment suggests, if the compute needed to reproduce powerful AI is anything like Ajeya’s estimates, who cares if some random asshole runs the thing on their PC? A desktop is nowhere near enough hardware for it to matter. No chance if undirected, 1% if directed, conditioning on all above conditions being fulfilled.
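To gesture at the scale mismatch (with placeholder numbers I’m making up purely for illustration, not Ajeya’s actual figures): even a generously-specced desktop is off by so many orders of magnitude that running the thing locally is a non-event.

```python
# Toy sketch with placeholder numbers (NOT Ajeya's actual estimates): how long a
# single desktop would take to deliver some large amount of training compute.

SECONDS_PER_YEAR = 3.15e7
desktop_flops = 1e13            # ~10 TFLOP/s, a generous guess for a gaming PC
training_flop_needed = 1e30     # hypothetical placeholder for "a lot"

years = training_flop_needed / desktop_flops / SECONDS_PER_YEAR
print(f"{years:.1e} years of desktop compute")   # ~3e9 years
```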
Information density. Sorry, how much training is your AI going to have to do before it’s functional? Do you have a model that can bootstrap itself up from however much data you can fit into one unbroken transmission? Are you going to be able to access the hardware needed to obtain more information? See the objections above. There are terabytes of SETI recordings, but probably megabytes at most of meaningful data in there. 1% chance if undirected, 100% if directed, conditioning on all above conditions being fulfilled.
Inflexible policy in the case of observed risk. If the first three lines look like an exploit, I’m not posting it on the internet. Likewise, if an alien virus I accidentally posted somehow does manage to infect a whole bunch of people’s computers, I’m shutting off the radio telescope before you can start beaming down an entire AI, etc., etc. (I don’t think you’d manage to target all architectures with a single transmission without being detected; even if your entire program were encrypted to the point of being indistinguishable from noise, the escape code and the decrypter would still have to look like legible, structured information to anyone doing any amount of analysis.) Good luck social-engineering me out of pragmatism, even if I weren’t listening to x-risk concerns before now. 1% chance if undirected, 10% if directed, conditioning on all above conditions being fulfilled.
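As a small illustration of that parenthetical (a sketch over fake data I made up, not a real detector): a sliding-window byte-entropy scan is about the crudest analysis imaginable, and it’s already enough to make a structured bootstrap/decrypter region stand out against a noise-like encrypted payload.

```python
# Toy sketch: sliding-window Shannon entropy over a fake "transmission".
# A structured decrypter/bootstrap region scores well below the ~8 bits/byte
# that a noise-like encrypted payload approaches.
import math
import os
from collections import Counter

def shannon_entropy(chunk: bytes) -> float:
    """Entropy of a chunk, in bits per byte."""
    counts = Counter(chunk)
    total = len(chunk)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Fake signal: repetitive, structured header followed by random (encrypted-looking) bytes.
structured = b"BOOTSTRAP SEGMENT: decode table follows. " * 100
noise_like = os.urandom(4096)
stream = structured + noise_like

window = 512
for offset in range(0, len(stream) - window + 1, window):
    h = shannon_entropy(stream[offset:offset + window])
    print(f"offset {offset:5d}: {h:4.2f} bits/byte")
```

Anyone who bothers to run something like this over a suspicious recording sees exactly where the “please execute me” part lives.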
So if you were an extraterrestrial civilization trying this strategy, most of the time you’d accomplish nothing, and when you did get close to accomplishing something, you’d more often be alerting neighboring civilizations to your hostile intentions than succeeding. Maybe you’d get a couple of lucky successes. I hope you’re traveling at a reasonable fraction of c, because if not, you’ve just given your targets a lot of advance warning about any planned invasion.
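For what it’s worth, just compounding the spitballed directed-case percentages above (my made-up felt-sense numbers, nothing more) already lands around one in a hundred million:

```python
# Toy sketch: multiplying out the spitballed directed-case conditionals from above.
import math

directed_case = {
    "parsability":           0.01,  # 1%
    "system complexity":     0.10,  # 10%
    "transmission fidelity": 0.01,  # 1%
    "compute":               0.01,  # 1%
    "information density":   1.00,  # 100%
    "inflexible policy":     0.10,  # 10%
}

print(f"{math.prod(directed_case.values()):.0e}")  # -> 1e-08
```

The gap between that and my flippant “trillionths of a percent” is all the stuff I didn’t itemize (that they pick this strategy at all, that the transmission is aimed at us in particular, and so on), so don’t take any of these numbers too literally.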
I just don’t think this one is worth anyone’s time, sorry. I’d expect any extraterrestrial communication we receive to be at least superficially friendly, and intended to be clearly understood rather than accidentally executed, and I’d expect the first sign of hostility to be something like a lethal gamma-ray burst. If I did observe an attempt at this strategy, I’d be strongly inclined to believe the aliens already had us completely owned and were just trolling us for lolz.
*Why exactly did you click on a spammy-looking link in a comment on the topic of arbitrary code execution?