I mean, it sounds good in theory. My main hesitation is the feasibility of enacting it. I'm not convinced that a "95% safety tax" (aiming for only 5% of the counterfactual economic value of unconstrained AI) would be economically tempting enough to be self-sustaining. So this probably needs to be combined with a worldwide enforcement regime to prevent bad actors from relaxing the safety constraints?
Maybe there’ll be answers to my questions in the doc. I’ve only looked at the pdf so far.
Yeah, basically Davidad has not only a safety plan but also a governance plan that actively aims to make this shift happen!
If I understand the doc right, they mean 5% of the counterfactual impact before the end of the acute risk period.