Spearhead an international alliance to prohibit the development of smarter-than-human AI until we’re in a radically different position.
Has anyone already thought about how one would operationalize a ban on “smarter-than-human AI”? Seems like by default it would include narrow systems like Stockfish in chess, and that’s not really what anyone is concerned about.
Seems like the definitional problem may be a whole can of worms in itself, similar to the never-ending debates about what constitutes AGI.
As a start, you can prohibit sufficiently large training runs. This isn’t a necessary-and-sufficient condition, it doesn’t solve the problem on its own, and there’s room for debate about how risk changes as a function of training resources. But it’s a place to start, when the field is mostly flying blind about where the risks arise; and choosing a relatively conservative threshold makes obvious sense when failing to leave enough safety buffer means human extinction. (Algorithmic progress is also likely to reduce the minimum dangerous training size over time, whatever that size is today, which is a further reason the cap will likely need to be lowered over time, until we’re out of the lethally dangerous situation we currently find ourselves in.)
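
To make “prohibit sufficiently large training runs” slightly more concrete, here is a minimal illustrative sketch of what a compute-based threshold check might look like. It uses the common 6 × parameters × tokens rule of thumb for estimating dense-transformer training FLOPs; the specific cap value, the annual ratchet factor, and all function names are hypothetical placeholders of mine, not numbers or mechanisms proposed above.

```python
# Illustrative only: a compute-threshold check with a cap that ratchets
# downward over time to track algorithmic progress. The base cap and the
# annual reduction factor are hypothetical placeholders.

def estimated_training_flop(num_params: float, num_tokens: float) -> float:
    """Approximate total training compute via the standard 6*N*D estimate."""
    return 6.0 * num_params * num_tokens

def compute_cap(base_cap_flop: float, years_since_enactment: int,
                annual_reduction: float = 0.5) -> float:
    """Current permitted compute; halves each year with the default factor."""
    return base_cap_flop * (annual_reduction ** years_since_enactment)

def run_is_prohibited(num_params: float, num_tokens: float,
                      base_cap_flop: float = 1e25,
                      years_since_enactment: int = 0) -> bool:
    """True if the estimated training compute exceeds the current cap."""
    return estimated_training_flop(num_params, num_tokens) > compute_cap(
        base_cap_flop, years_since_enactment)

if __name__ == "__main__":
    # Example: a 70B-parameter model trained on 15T tokens (~6.3e24 FLOP).
    flop = estimated_training_flop(70e9, 15e12)
    print(f"Estimated compute: {flop:.2e} FLOP")
    print("Prohibited today?", run_is_prohibited(70e9, 15e12))
    print("Prohibited after 2 years of ratcheting?",
          run_is_prohibited(70e9, 15e12, years_since_enactment=2))
```

The point of the sketch is just that a bright-line compute rule is easy to state and enforce relative to a capability-based definition, even though where exactly to set the line (and how fast to lower it) remains a judgment call.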