Fascinating. When you put it like this (particularly when you include the separate argument aimed at conservatives), slowing down progress seems relatively easy.
I think this approach will get pushback, but more likely it will simply be ignored by default by the rationalist community, because our values create an ugh field that steers us away from thinking about it.
Rationalists hate lying.
I hate it myself. The idea of putting lots of effort into a publicity campaign that misrepresents my real concerns sounds absolutely horrible.
What about an approach in which we present those concerns without claiming they’re our only concerns (while downplaying the extent to which actual x-risk from AGI is the main one)?
I do care about bias in AI; it’s a real and unjust thing that will probably keep growing. And I’m actually fairly worried that job loss from AI could be severe enough to wreck most people’s lives (even people with jobs may have friends and family who are suddenly homeless). It could be disruptive enough to make the path to AGI even more dangerous by placing it in a cultural milieu of desperation.
The main problems with this approach are the “what about China?” question and the risk of creating polarization by aligning the argument with one of the existing political tribes in the US. Simultaneously presenting arguments that appeal to both liberals and conservatives might prevent that.
The remaining question is how more regulation might actually harm our chances of winding up with aligned AGI. Good points have been raised about how it could turn alignment teams into regulation-satisfying teams, and how it could block alignment-focused research, but that question seems to need more analysis.