reducing trust in a “we’ll just figure it out as we go” mentality
I think reducing trust in “we’ll just figure it out as we go” while still operating under that mentality is bad; I think steps like this are how we stop operating under that mentality. [Would nothing like this have happened in a widespread way until there were high-profile failures, because of the lack of external pressure? Maybe.]
I think users being able to report problems doesn’t help with x-risk-related problems. (The issue will be when these systems stop sending bug reports!) I nevertheless think having systems for users to report issues will be a step in the right direction, even if it doesn’t get us all the way.
It also likely depends a lot on the individual and their counterfactual (e.g., some people might have strong comparative advantages in independent research or certain kinds of coordination/governance roles that require being outside of a lab).
This seems right and is good to point out; but it wouldn’t surprise me if the right place for a lot of safety-minded folk to be is non-profits with broad government/industry backing that serve valuable infrastructure roles, rather than just standing athwart history yelling “stop!”. [How do we get that backing? Well, that’s the challenge.]
The argument I see against this is that voluntary security that’s short-term useful can be discarded once it’s no longer so, whereas security driven by public pressure or regulation can’t. If a lab had great practices forever and then dropped them, there would be much less pressure to revert than if they’d previously had huge security incidents.
For instance, we might want to focus on public pressure for 1-2 years, then switch gears towards security.
I agree that you want the regulation to have more teeth than just being an industry cartel. I’m not sure I agree on the ‘switching gears’ point—it seems to me like we can do both simultaneously (tho not as well), and may not have the time to do them sequentially.