I welcome more discussion of different takeoff trajectories, competitive dynamics (both among AIs and among human+AI coalitions), and value-drift risks.
I worry a fair bit (and don’t really know whether the concern is even coherent) that I value individual experiences and diversity of values in a way that could erode if they were encoded and enforced formally, which is one of the primary mechanisms pursued by current research.