While these ideas are interesting, I think there are two reasons not to worry about SETI. The first is that I find the “malicious signal” attack very implausible to begin with. Even if the simple plain-text message “There is no God” would be enough to wipe out a typical civilization, I still think the aliens don’t stand much of a chance. How could they create a radio signal that carries that exact meaning to a majority of all possible civilizations that could find the broadcast? And this is a scenario where the cards are stacked in the aliens’ favor by assuming such a low-data packet can wipe us out. A powerful AI would be a much larger piece of data, which multiplies all of the difficulties of sending it to an unknown civilization.
My second reason is that I think singling out SETI specifically is unfair. We are looking at all kinds of space data all the time: radio telescopes, optical telescopes, and now even gravitational-wave detectors. Almost all of these instruments are aimed at understanding natural processes. If you were aliens who DID have the ability to send a malicious death-message, then your message might be detected by SETI, but it’s just as likely to be detected by someone else first. Someone notices something odd, maybe “gamma ray bursts” from the galactic center. They investigate what (presumably natural) mechanism might cause them, and then, oh no! Someone put the spectrum of a gamma ray burst into the computer, but its Fourier series contained the source code of an AI, which spontaneously started running on the office computer before escaping onto the internet to start WW3.
Your second paragraph seems unpersuasive to me. I would think that designing a program that can wipe out a civilization conditional on that civilization intentionally running it would be many orders of magnitude easier than designing a program that can wipe out a civilization when that civilization tries to analyze it as pure data.
Both things would require that you somehow make your concepts compatible with alien information systems (your first counter-argument), but the second thing additionally requires that you exploit some programming bug (such as a buffer overflow) in a system you have never examined. That seems to me like it would require modeling the unmet aliens to a much higher degree of detail.
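To make the asymmetry concrete, here is a minimal C sketch of the kind of bug meant by “buffer overflow” (the function and variable names are hypothetical, purely for illustration):

    #include <stdio.h>
    #include <string.h>

    /* The textbook buffer overflow: copy caller-supplied data into a
     * fixed-size stack buffer with no length check. */
    static void parse_record(const char *signal_data) {
        char buf[64];
        strcpy(buf, signal_data); /* writes past buf if the input exceeds
                                     63 bytes, corrupting adjacent stack
                                     memory such as the saved return address */
        printf("parsed: %s\n", buf);
    }

    int main(void) {
        parse_record("short inputs are fine");
        /* An input longer than 63 bytes corrupts the stack. Turning that
         * corruption into code execution requires knowing the target's
         * buffer size, stack layout, calling convention, and instruction
         * set: exactly the details unavailable for an unexamined alien
         * machine. */
        return 0;
    }

The point is that exploiting even this simplest of bugs depends on intimate facts about the victim’s hardware and software, whereas a program the victim runs voluntarily only needs to be understood by the victim.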
Now, you could argue that an astronomer who accidentally discovers an alien signal while attempting to analyze “gamma ray bursts” is just as likely as SETI to immediately post it to the Internet. But suggesting they would accidentally execute alien code, without realizing that it’s code, seems like a pretty large burdensome detail.
(Contrariwise, conditional on known-alien-source-code being posted to the Internet, I would say the probability of someone trying to run it is close to 1.)