Yeah, I think the level of seriousness is basically the same as if someone asked Eliezer “what’s a plausible world where humanity solves alignment?” to which the reply would be something like “none unless my assumptions about alignment are wrong, but here’s an implausible world where alignment is solved despite my assumptions being right!”
The implausible world is sketched out in way too much detail, and a lot of usefulness points are lost to its implausibility. The useful kernel remaining is something like "with infinite coordination capacity we could probably solve alignment," plus a small bonus because Eliezer fiction is substantially better for your epistemics than other fiction. Maybe there's an argument for taking it even less seriously? That said, I've definitely updated down on the usefulness of this given the comments here.