I think 1 and 2 are good points. A big issue is that 2 might run into "picking up pennies in front of the steamroller" dynamics: increases in planning ability and long-term coherence are valuable all the way from minutes up to years, so actors are incentivized to keep pushing the envelope even if going too far would destroy the economic value.
I think 3 neglects that extra features are harder for individual actors too, not just for actors in competition. An alignment tax that amounts to "extra steps you have to get right" predictably makes it harder to get all of those steps right, even with no competitor in sight.
For 4, what substance are you imagining the regulation would have? I think government regulation could be useful for slowing down data- and compute-intensive development. But what standards would you use to detect when someone is developing AI that needs to be safe, and what safety work do you want mandated? The EU's recent regulation may be a reasonable starting point, but it addresses neither AGI nor the sort of safety work that AGI would require.