I’ve gone back and forth in my mind on examples and find the biggest challenge is coming up with a plausible one, rather than one that is merely imaginable/possible. I think the best way to frame the concern (not quite an example, but close) is gain-of-function type research.
Currently I think the vast majority of that work is conducted in expensive labs (probably BSL-3 or 4, though I might be wrong on that) by people with a lot of time, effort, and money invested in their educations. It’s a complicated enough area that, without that education, even knowing where to start is a challenge. However, I don’t think the basic work requires all that much in the way of specialized lab equipment. Most of the equipment is probably more about productivity in producing results and findings than about the actual attempted modification.
On top of that, we also have some legal/regulatory limitations on access to certain materials. But I think those are aimed at specific agents, e.g. anthrax, and are not really a barrier to conducting gain-of-function type research. Everyone has access to plenty of bacteria and viruses; most people just lack knowledge of isolation and identification techniques.
Smart tools that embody that knowledge and experience, and that include good ML functions, really would open the door to home hobbyists who get interested in just playing around with some “harmless” gain-of-function or other genetic engineering. But if they don’t understand how to properly contain their experiments, or don’t understand that robust testing means testing not just for a successful result (however that might be defined) but also for harmful outcomes, then risk has definitely increased, assuming we actually do see an increase in such activity by laypeople.
I’m coming to the conclusion, though, that perhaps the way to address these types of risk lies outside the AI alignment focus, since a fair amount of the mitigation is probably about how we apply existing controls as smart tool use evolves. Just as now, some things cannot be ordered simply by placing the order and making payment. So maybe the solution here is more about qualifying access to smart tools once they reach a certain capability (though I think this is also a problematic solution), so that not just any unqualified person can play with them merely because they have an interest.
I also think a better way to frame the question might be: “Given existing unintentional existential risk from human actions, what is the relationship between AI and the probability of such an outcome?” Or perhaps more specifically: “Given that gain-of-function research will produce a civilization-ending pandemic with probability X, is X a function of AI advancement, and if so, what direction and shape does that relationship take?”
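To make that second framing slightly more explicit (just a sketch; “A” here is my own shorthand for some unspecified measure of AI capability, which the question above doesn’t pin down): write X(A) = P(civilization-ending pandemic from gain-of-function work | AI capability A). The question is then whether dX/dA > 0, and whether X(A) grows roughly linearly, levels off, or accelerates as A increases.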
Given Connor’s comment on the possibility of a misleading AI, one might think that X increases with the advancement of AI. But whether that would make humans the greater threat prior to the emergence of a malicious AGI (or just an uncaring one that merely needs humans out of the way), I don’t know.
Yeah, I agree that one is fairly plausible. But I’d still put it as less likely than “classic” AGI takeover risk, because classic AGI takeover risk is so large and so soon. I think if I had 20-year timelines, I’d be much more concerned about gain-of-function type stuff than I currently am.