Should AI safety people/funds focus more on boring old human problems, like (especially cyber- and bio-) security, instead of flashy ideas like alignment and decision theory? The potential impact of such vulnerabilities will only grow with technological progress of all kinds, with or without a sudden AI takeoff, and they are much of what makes AGI dangerous in the first place. Security has clear benefits regardless, and people already have a good idea of how to do it, unlike with AGI or alignment.
If any actor, with or without AGI, can quickly gain large amounts of money and resources without alarming anyone, can take over infrastructure and weaponry, or can occupy land and build independent industrial systems while other countries are unable to stop it, then our destiny is already not in our hands, and it would be suicidal to think we don't need to fix these problems first because we expect to create an aligned AGI to save us.
If we grow complacent about the fragility of our biology and ecosystems, continue to allow the possibility of any actor releasing pandemics, arbitrary malware, deadly radiation, etc. (for example, by allowing global transport without reliable pathogen removal, or by using operating systems and open-source libraries that have not been formally proven safe), and keep assuming the universe will keep our environment safe and convenient by default, then it would be naive to complain when these things happen and to hope that AGI would somehow preserve human lives and values without us having to change our lifestyle or biology to adapt to new risks.
Yes, fixing the vulnerabilities of our biology and society is hard and inconvenient, and not as glamorous as creating a friendly god to do whatever you want. But we shouldn't let motivated reasoning and groupthink lead us into thinking the latter is feasible when we don't have a good idea of how to do it, just because the former requires sacrifices and investments we'd prefer weren't needed. After all, it is a fact that there exist small configurations of matter and information that can completely devastate our world, and wishing that weren't true is not going to make it go away.
I personally agree with you on the importance of these problems. But I might be more of a general responsible/trustworthy AI person, and I care about other issues outside of AI too, so I'm not sure I can speak for a more specific community, or what the definition of "AI safety" people is.
As for funding, I'm not very familiar with it and would like some clarification: by "(especially cyber- and bio-) security", do you mean security in general, or security risks caused by AI specifically?