If someone stabs you with a knife, there is a chance that no large blood vessels or organs are damaged, and you survive. But when you are at risk of being stabbed, you don't think "I won't treat dying from a knife wound as an inevitability"; you think "I should avoid being stabbed, because otherwise I could die."
Yes. But you don't worry about him killing everyone in Washington DC, taking control of the White House, and enslaving the human race. That's my criticism: he moves too easily from "a very intelligent machine can be built" to "this machine will inevitably be magically powerful and kill everyone." I'm perfectly aware of instrumental convergence and the orthogonality principle, by the way, and still consider this view just wrong.
You don't need to be magically powerful to kill everyone! I think that, at the current level of biotech, a medium-sized lab with no ethical constraints and median computational resources could develop a humanity-wiping virus in 10 years, and the only thing that saves us is that bioweapons are out of fashion. If we enter a new Cold War with the mentality "If you refuse to make bioweapons for Our Country, then you are Their spy!", we are pretty doomed even without any AI.
Sorry, I don't think that's possible! To be specific, the bit we are disagreeing about is the "everyone". Yes, it is possible to cause A LOT of damage like this.
I can extend my timeline from 10 years to 20 to get "kill everyone, including the entire eukaryotic biosphere", using some prokaryotic intracellular parasite with incredible metabolic efficiency and a sufficiently alternative biochemistry to be inedible to modern organisms.
I work on prokaryotic evolution. Happy to do a Zoom call where you explain to me how that works. If you are interested, just send me a DM! Otherwise, just ignore :)