Some additional notes from chatting with some bio people, about bio safety practices.
One thing going on with bio safety is that the rules are, in fact, kinda overly stringent (e.g. if you get stuff on your hands, you're supposed to wash them for a full 15 minutes). People interpret the rules as coming from a place of ass-covering. People working in a biolab have a pretty good idea of what they're working with and how safe it is. So the rules feel annoying, like they're slowing down work for dumb reasons.
If AI were about-as-dangerous-as-bio, I'd probably think "Well, obviously part of the job here is to come up with a safety protocol that's actually 'correct'", such that people don't learn to tune it out. Maybe separate out "the definitely important rules" from "the ass-covering rules".
With AI, there is the awkward thing of "well, but I do just really want AI to be developed more slowly across the board." So "just impose a bunch of restrictions" isn't an obviously crazy idea. But:
a) I think that can only work if imposed from outside as a regulation – it won’t work for the sort of internal-culture-building that this post is aimed at,
b) even for an externally imposed regulation, I think regulations that don't make sense, and are just red-tape-for-the-sake-of-red-tape, are going to produce more backlash and immune response, and
c) when I imagine the most optimistically useful regulations I can think of, implemented by real human bureaucracies, I think they still don't get us all the way to safe AGI development practices. Anyone whose plan actually routes through "eventually start working on AGI" needs an org that is significantly better than the types of regulations I can imagine actually existing.
My wife was working in a BSL-3 facility with COVID and other viruses that cause serious illness in humans and spread relatively easily. This is the type of lab where you wear positive pressure suits.
To get access to such a facility, you need to complete safety training, which takes about a month, and pass an exam—only after that can you enter. The people working there, of course, were intelligent and had master's or doctoral degrees in some field related to biology or virology.
So, in essence, we have highly intelligent people who know they are working with very dangerous stuff and have passed the safety training and exam. The atmosphere itself motivates you to be careful—you're wearing a positive pressure suit in a COVID lab.
What it was like in reality:
The suit indicates that the filter or battery needs replacing—oh, it's okay, it can wait. The same with replacing the UV lamps in the lab.
Staying in the lab all night without proper sleep—yeah, a regular occurrence when someone is trying to finish their experiments.
There were rumors that once someone even took a mobile phone with them. A mobile phone. In BSL-3.
It seems to me that after working with dangerous stuff for a while, people just become overconfident, because their observation is something like: "nothing bad has happened so far, so it's okay to relax a bit and be less careful about safety measures."