Reflects my engineering-slanted opinion on the future of biology
Interesting!
My preferences for the future of biology are decidedly non-engineering-slanted; this is all quite scary. Given the direction biology is going, dangerous things will soon be widely accessible. FAI is hard, but at least AI is too. As a member of the field, what are your opinions on this?
If you mean my opinion on whether it’s worth being afraid of: I don’t think it is. Any powerful new technology or capability should be implemented with caution and an eye to anticipating risk, but in terms of risk I don’t view bioengineering any differently than I view any other scientific frontier.
On a practical level, the oversight on manipulation of organisms beyond your run-of-the-mill, single-celled lab workhorses (bacteria, yeast) is massive. In the not-too-distant past, it was an uphill climb just to be able to do genetic engineering research at all.
I got a lot of questions about ‘bacteria FOOM,’ if you will, around the time the synthetic bacterium paper came out. The short version of my answer then is worth repeating: if we want to make super-germs or other nasty things, nature/Azathoth does it quite well already (Ebola, smallpox, plague, HIV...). Beyond that, this sort of research is exceptionally time- and resource-consuming; the funding bottleneck reduces the chances of a lone mad scientist creating a monster essentially to nil. Beyond even that, putting some DNA in a cell is not hard, but designing an idealized, intelligent organism on the level of strong AI is at least as hard as just designing the AI.
So my stance is one of… let’s call it exuberant caution. Or possibly cautious exuberance. Probably both.
I have updated based on this evidence.
One follow-up question, about your point that ‘on a practical level, the oversight on manipulation of organisms beyond your run-of-the-mill, single-celled lab workhorses (bacteria, yeast) is massive’:
Is this sort of thing not changing?
To the best of my knowledge (and that deserves a disclaimer, since I’m a grad student in science and not yet completely versed in the legal gymnastics), it is changing, but any loosening of policy restrictions comes only with exceptional evidence that current norms are grossly unnecessary. In a general sense, bioengineering and biotech started out immersed in a climate of fear and overblown, Crichton-esque ‘what-if’ scenarios with little or no basis in fact, and that climate is slowly receding to more informed levels of caution.
Policy also assuredly changes in the other direction as new frontiers are reached, to account for researchers’ increased ability to manipulate these systems.
Thanks for the reply.