I wrote this a year ago, when I was not thinking about this topic as carefully and was feeling quite emotional about the lack of effort. At the time, most people thought slowing down AI was impossible or undesirable, for some good reasons and for a lot of reasons that look pretty dumb in hindsight.
I now think a better strategy would look more like “require that new systems guarantee a reasonable level of interpretability and pass a set of safety benchmarks.”
And eventually, if you can actually convince enough people of the danger, there should be a hard cap on the amount of compute that can be used in training runs, one that decreases over time to compensate for algorithmic improvements (since a fixed cap would let effective capability keep growing as algorithms become more efficient).
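To make the decreasing-cap idea concrete, here is a minimal sketch. All numbers are hypothetical illustrations, not proposals: the 1e26 FLOP starting cap and the two-year efficiency doubling time are assumptions chosen just to show how the schedule would work.

```python
# Sketch of a compute cap that decreases to offset algorithmic progress.
# If algorithmic efficiency doubles every EFFICIENCY_DOUBLING_YEARS, then
# holding "effective compute" (raw FLOPs x efficiency) constant requires
# the raw-compute cap to halve on the same schedule.

INITIAL_CAP_FLOPS = 1e26         # hypothetical starting cap per training run
EFFICIENCY_DOUBLING_YEARS = 2.0  # hypothetical rate of algorithmic progress

def compute_cap(years_elapsed: float) -> float:
    """Raw-compute cap that keeps effective compute roughly fixed."""
    efficiency_gain = 2 ** (years_elapsed / EFFICIENCY_DOUBLING_YEARS)
    return INITIAL_CAP_FLOPS / efficiency_gain

for year in range(0, 11, 2):
    print(f"year {year:2d}: cap = {compute_cap(year):.2e} FLOPs")
```

Under these assumptions the cap falls by half every two years; a faster rate of algorithmic progress would require a steeper schedule.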