For already existing viral strains, that’s to be expected. I don’t know if you’ve ever had discussions with synthetic biology students but… as Hugh Hixon at Alcor once said to me, “that is the stuff of nightmares, assuming you can even sleep afterwards.” Fully-novel genetic constructs, hybridization of various unlike genomes, or even more potentially exotic constructs such as fungal spores that upon contact with human secretions re-express (medusa-like) into something akin to Toxoplasma gondii, only inducing schizophrenia, hyperaggression, and so on. (Why kill a population with a disease when you can make a nearly-unkillable environmental ‘bloom’ toxin that passes gene screening, has no observable external symptoms, and causes an entire society to turn into batshit-crazy homicidal axe-killers?)
Human “swine” flu is some scary shit. But compared to what synth-bio could achieve, I’m less worried about it. Especially considering we’re already in the range of introducing, say, the biotoxin of the Irukandji to airborne molds. That right there would be capable of killing just about all animal life within the blooming pattern of the organism.
We are, I believe, quite literally less than three or four (five at the most) 20-year generations away from synthetic biology students being capable of creating bioweapons that could wipe out the human species, if not the entirety of mammalian life.
I agree that synbio has some very nasty and rapidly emerging capabilities. However, with respect to your last paragraph, are you also assuming that defenses don’t improve? Fancy biotech enables better detectors and rapid creation of tailored countermeasures (including counter-organisms). Surveillance tech restricts what students can get away with, sterilization and isolation of environments become easier, etc.
However, with respect to your last paragraph, are you also assuming that defenses don’t improve?
The statements I made were agnostic as to the likelihood of a given event, as opposed to the raw capability of the devices; that is, beyond saying that the chance would become non-zero, I made no claim about probability. Furthermore, it is generally true that defense is “harder” than offense when it comes to weapons tech.
Even if better technology means defenses can improve, does that mean they will improve at a fast enough pace? I don’t understand why your same logic wouldn’t also imply the belief that it will be easier to make AI friendly when we understand more about AGI.
I don’t understand why your same logic wouldn’t also imply the belief that it will be easier to make AI friendly when we understand more about AGI.
Ceteris paribus, that argument does go through: for any given project, success is easier with more AGI understanding. That doesn’t mean that we should expect AI to be safe, or that interventions to shift the curves don’t matter. Likewise, the considerations I mentioned with respect to synbio make us safer to some extent, and I was curious as to Logos’ evaluation of their magnitudes.
Okay, thanks for the clarification. If we would expect the magnitudes for synbio to be significantly higher (or lower) than for AGI, then I would be curious as to what differentiates the two situations (I could easily imagine that there is a difference, I just think it would be a good exercise to characterize it as precisely as possible).
ETA: Actually, I think there are some plausible arguments as to why AGI progress would be less relevant to AGI safety than one would naively expect (due to the decoupling of beliefs and utility functions in Bayesian decision theory: being an AGI hinges mostly on the belief part, whereas being an FAI hinges mostly on the utility function part). But I currently have a non-trivial degree of uncertainty over how correct these arguments are.
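To make that decoupling concrete, here is a minimal sketch in Python (the scenario, names, and numbers are entirely my own hypothetical illustration, not anything established above): the same belief model, i.e. the “capability” part, is reused unchanged, and only the utility function is swapped out. The chosen action flips even though the agent’s world-model never changes, which is the sense in which getting the beliefs right doesn’t by itself get the values right.

```python
# Sketch of belief/utility decoupling in Bayesian decision theory.
# The agent picks the action maximizing sum over outcomes of P(o | a) * U(o).

def expected_utility_action(actions, outcomes, belief, utility):
    """Return the action with the highest expected utility."""
    return max(
        actions,
        key=lambda a: sum(belief(o, a) * utility(o) for o in outcomes),
    )

# Toy world: two actions, three outcomes, one shared belief model.
actions = ["deploy", "wait"]
outcomes = ["big_win", "small_win", "catastrophe"]

def belief(outcome, action):
    # Hypothetical probabilities; stands in for arbitrarily good world-modeling.
    table = {
        ("deploy", "big_win"): 0.5, ("deploy", "small_win"): 0.1, ("deploy", "catastrophe"): 0.4,
        ("wait", "big_win"): 0.0, ("wait", "small_win"): 0.9, ("wait", "catastrophe"): 0.1,
    }
    return table[(action, outcome)]

# Two different utility functions plugged into the *same* belief model.
friendly_utility = {"big_win": 1.0, "small_win": 0.5, "catastrophe": -100.0}.get
indifferent_utility = {"big_win": 1.0, "small_win": 0.1, "catastrophe": 0.0}.get

print(expected_utility_action(actions, outcomes, belief, friendly_utility))     # "wait"
print(expected_utility_action(actions, outcomes, belief, indifferent_utility))  # "deploy"
```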