Sure, but the *specific* type of error I’m imagining would surely be easier to pick up than most other errors. I have no idea what sort of sanity checking was done with GPT-2, but the fact that the developers were asleep when it trained is telling: they weren’t being as careful as they could’ve been.
For this type of bug (a sign error in the utility function) to occur *before* the system is deployed and somehow persist, it’d have to make it past all sanity-checking tools (which I imagine would be used extensively with an AGI), *and* somehow not be noticed at all while the model trains, *and* so on. Yes, these sorts of conjunctions do occur in the real world, but the error is generally more subtle than “system does the complete opposite of what it was meant to do”.
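To make the “sanity-checking tools” bit concrete, here’s a minimal sketch in Python of the kind of check I have in mind (the reward model and reference examples are hypothetical stand-ins): score a handful of outcomes whose ordering we already know, and fail loudly if that ordering comes out inverted. A flipped sign is about the easiest thing such a check can catch.

```python
# Minimal sketch (hypothetical names): fail loudly if the reward model
# scores known-bad reference outcomes above known-good ones.

def check_reward_ordering(reward_model, good_examples, bad_examples):
    """Raise if the reward ordering on reference examples is inverted."""
    good_scores = [reward_model(x) for x in good_examples]
    bad_scores = [reward_model(x) for x in bad_examples]
    if min(good_scores) <= max(bad_scores):
        raise AssertionError(
            "Reward ordering inverted on reference examples -- possible sign flip."
        )

# A deliberately sign-flipped stand-in fails immediately:
flipped_reward = lambda x: -x
check_reward_ordering(flipped_reward, good_examples=[1.0, 2.0], bad_examples=[-1.0, -2.0])
# -> AssertionError
```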
I made a question post a while ago about this specific type of bug occurring before deployment, and my views have since shifted significantly: a bug as obvious as one that flips the sign of the utility function is unlikely to go unnoticed before deployment. Now I’m more worried about something like this happening *after* the system has been deployed.
I think a more robust solution to all of these sorts of errors would be something like the approach in the separation from hyperexistential risk article that I linked in my previous response. I optimistically hope that we’re able to come up with a utility function that doesn’t do anything worse than death when minimised, just in case.
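To gesture at what I mean (a toy sketch only, not a claim that it actually achieves separation from hyperexistential risk): clamp utility from below at the value assigned to a “death”-level outcome, so a minimiser, or a sign-flipped maximiser, gains nothing by steering towards outcomes worse than that baseline.

```python
# Toy sketch (hypothetical names/values): bound utility from below at the
# value of a "death"-level outcome, so minimising it never rewards anything
# worse than that baseline.

DEATH_UTILITY = 0.0  # utility assigned to the worst outcome we're willing to risk

def clamped_utility(raw_utility: float) -> float:
    """Return raw utility, but never less than the death-level baseline."""
    return max(raw_utility, DEATH_UTILITY)

# A minimiser of this function is indifferent between "death" and anything
# nominally worse, so it has no incentive to seek out the latter.
assert clamped_utility(-1e9) == DEATH_UTILITY
assert clamped_utility(5.0) == 5.0
```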
At least with current technologies, I expect serious risks to start occurring during training, not deployment. That’s ultimately when you’ll see the greatest learning happening, when you have the greatest access to compute, and when you’ll first cross the threshold of intelligence that makes the system actually dangerous. So I don’t think that just checking things after they’re trained is safe.
I’m under the impression that an AGI would be monitored *during* training as well. So you’d effectively need the system to turn “evil” (utility function flipped) during the training process, and to be smart enough to conceal that the error had occurred, which means it’d have to happen a fair bit into training. I guess that’s possible, but IDK how likely it’d be.
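For what it’s worth, “monitored during training” doesn’t have to mean a human watching overnight. Here’s a rough sketch (hypothetical class name and thresholds) of an automated check that flags a run if the average episode reward suddenly collapses, which is the kind of signature a mid-training sign flip would produce:

```python
# Rough sketch (hypothetical names/thresholds) of an automated training monitor
# that flags a run when average reward collapses relative to the best seen.

from collections import deque

class RewardMonitor:
    def __init__(self, window: int = 100, drop_threshold: float = 0.5):
        self.recent = deque(maxlen=window)
        self.best_avg = float("-inf")
        self.drop_threshold = drop_threshold

    def update(self, episode_reward: float) -> bool:
        """Record a reward; return True if the run should be flagged/paused."""
        self.recent.append(episode_reward)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet
        avg = sum(self.recent) / len(self.recent)
        self.best_avg = max(self.best_avg, avg)
        # Flag if the current average has fallen well below the best average.
        return avg < self.best_avg - self.drop_threshold

# Usage: call monitor.update(r) every episode and pause training on True.
```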
Yeah, I do think it’s likely that AGI would be monitored during training, but OpenAI staff being asleep while the model trained is a clear instance of us not monitoring the AI during the most crucial periods (which, to be clear, I think is fine, since the risks were indeed quite low, and I don’t see it as providing much evidence about OpenAI’s future practices).
Given that compute is very expensive, economic pressures will push training to run 24/7, so it’s unlikely that people will generally pause training when they go to sleep.
Sure, but I’d expect that a system as important as this would have people monitoring it 24/7.