Imagine that the Morris worm never happened, nor Blaster, nor Samy. A few people independently discovered SQL injection but kept it to themselves. [...]
That hypothetical world is almost impossible, because it’s unstable. As soon as certain people noticed that they could get an advantage, or even a laugh, out of finding and exploiting bugs, they’d do it. They’d also start building on the art, and they’ll even find ways to organize. And finding out that somebody had done it would inspire more people to do it.
You could probably have a world without the disclosure norm, but I don’t see how you could have a world without the actual exploitation.
We have driverless cars, robosurgeons, and simple automated agents acting for us, all with the security of original Sendmail.
None of those things are exactly bulletproof as it is.
But having the whole world at the level you describe basically sounds like you’ve somehow managed to climb impossibly high up an incredibly rickety pile of junk, to the point where instead of getting bruised when you inevitably do fall, you’re probably going to die.
Introducing the current norms into that would be painful, but not doing so would just let it get keeping worse, at least toward an asymptote.
and the level of caution I see in biorisk seems about right given these constraints.
If that’s how you need to approach it, then shouldn’t you shut down ALL biology research, and dismantle the infrastructure? Once you understand how something works, it’s relatively easy to turn around and hack it, even if that’s not how you originally got your understanding.
Of course there’d be defectors, but maybe only for relatively well understood and controlled purposes like military use, and the cost of entry could be pretty high. If you have generally available infrastructure, anybody can run amok.
I don’t think it’s a world we could have ended up in, no. It’s an example to get people thinking about how norms they currently view as really positive could be a very bad fit in a different situation.
We have driverless cars, robosurgeons, and simple automated agents acting for us, all with the security of original Sendmail.
None of those things are exactly bulletproof as it is.
I’d say if you wanted to exploit these in practice today, as a random bystander, automated agents are by far the easiest, via prompt injection. And we’ve responded by talking a lot about the issue so people don’t over rely on them, not deploying them much yet, working hard to exploit them, and working hard to figure out how to make LLMs more robust against prompt injection. This is computer security norms working well: scrutinize technology heavily, starting as soon as it comes out before we have a heavy dependency on it (which wasn’t an option with the emergence of biological life).
But having the whole world at the level you describe basically sounds like you’ve somehow managed to climb impossibly high up an incredibly rickety pile of junk, to the point where instead of getting bruised when you inevitably do fall, you’re probably going to die.
Introducing the current norms into that would be painful, but not doing so would just let it get keeping worse, at least toward an asymptote.
Introducing the current computer security norms into biology without adjustment for the different circumstances means we, very likely, all die. I think you’re assuming that those norms are the only option to improve the situation, though, which is why you’d take them over nothing? But instead there are other ways we can make good progress on biosecurity. I think Delay, Detect, Defend (disclosure: I work for Kevin) is a good intro.
If that’s how you need to approach it, then shouldn’t you shut down ALL biology research, and dismantle the infrastructure?
That sounds like the equivalent of shutting down all computing because you’re concerned about AI safety? Shutting down some areas, however, is something I think ranges from “clearly right” to “clearly wrong” depending on the risk/reward of the area. Stop researching how to predict whether a pathogen would be pandemic-class? Stop researching pest resistant food crops?
If the only choices were shutting it all down and doing nothing then I would lean towards the former, but not only aren’t those the choices, “shut it all down everywhere” would be (mostly rightly!) an incredibly unpopular approach we wouldn’t be able to make progress on.
Introducing the current computer security norms into biology without adjustment for the different circumstances means we, very likely, all die.
Only because of the issue of a mere catastrophe potentially leading us never to grow back our population. If you discount this effect, we’re not even sure if it’s possible at all for a biological infection to kill us all, and even if it is, I expect it to require way more implementation effort than people think.
I feel like this is either misinformation or very close to it.
That hypothetical world is almost impossible, because it’s unstable. As soon as certain people noticed that they could get an advantage, or even a laugh, out of finding and exploiting bugs, they’d do it. They’d also start building on the art, and they’ll even find ways to organize. And finding out that somebody had done it would inspire more people to do it.
You could probably have a world without the disclosure norm, but I don’t see how you could have a world without the actual exploitation.
None of those things are exactly bulletproof as it is.
But having the whole world at the level you describe basically sounds like you’ve somehow managed to climb impossibly high up an incredibly rickety pile of junk, to the point where instead of getting bruised when you inevitably do fall, you’re probably going to die.
Introducing the current norms into that would be painful, but not doing so would just let it get keeping worse, at least toward an asymptote.
If that’s how you need to approach it, then shouldn’t you shut down ALL biology research, and dismantle the infrastructure? Once you understand how something works, it’s relatively easy to turn around and hack it, even if that’s not how you originally got your understanding.
Of course there’d be defectors, but maybe only for relatively well understood and controlled purposes like military use, and the cost of entry could be pretty high. If you have generally available infrastructure, anybody can run amok.
I don’t think it’s a world we could have ended up in, no. It’s an example to get people thinking about how norms they currently view as really positive could be a very bad fit in a different situation.
I’d say if you wanted to exploit these in practice today, as a random bystander, automated agents are by far the easiest, via prompt injection. And we’ve responded by talking a lot about the issue so people don’t over rely on them, not deploying them much yet, working hard to exploit them, and working hard to figure out how to make LLMs more robust against prompt injection. This is computer security norms working well: scrutinize technology heavily, starting as soon as it comes out before we have a heavy dependency on it (which wasn’t an option with the emergence of biological life).
Introducing the current computer security norms into biology without adjustment for the different circumstances means we, very likely, all die. I think you’re assuming that those norms are the only option to improve the situation, though, which is why you’d take them over nothing? But instead there are other ways we can make good progress on biosecurity. I think Delay, Detect, Defend (disclosure: I work for Kevin) is a good intro.
That sounds like the equivalent of shutting down all computing because you’re concerned about AI safety? Shutting down some areas, however, is something I think ranges from “clearly right” to “clearly wrong” depending on the risk/reward of the area. Stop researching how to predict whether a pathogen would be pandemic-class? Stop researching pest resistant food crops?
If the only choices were shutting it all down and doing nothing then I would lean towards the former, but not only aren’t those the choices, “shut it all down everywhere” would be (mostly rightly!) an incredibly unpopular approach we wouldn’t be able to make progress on.
Only because of the issue of a mere catastrophe potentially leading us never to grow back our population. If you discount this effect, we’re not even sure if it’s possible at all for a biological infection to kill us all, and even if it is, I expect it to require way more implementation effort than people think.
I feel like this is either misinformation or very close to it.
https://www.lesswrong.com/posts/8NPFtzPhkeYZXRoh3/perpetually-declining-population
Here is a now-public example of how a biological infection could kill us all: Biological Risk from the Mirror World.
Flagging that I think this is false, but probably can’t get into why.