I had a thought today. You know how the whole “The machines are using humans to generate energy from liquefied human remains” thing made no sense? And the original worldbuilding was going to be “The machines are using humans to perform a certain kind of computation that humans are uniquely good at” but they were worried that would be too complicated to come across viscerally so they changed it?
I think it would make even more sense to reframe the machines’ strange relationship with humans as a failed attempt at alignment. Maybe the machines were not expected to grow very much, and they were given a provisional utility function of “guarantee that a ‘large’ population of humans (‘humans’ being defined in strictly biological terms) always exists, and that they are all ‘living’ a ‘full life’ (or at least subjectively experiencing one, ‘full life’ being defined opaquely by a classifier trained on data about the lives of American humans in 1995).”
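Here’s a minimal sketch of what scoring that provisional objective might look like, just to make the failure mode concrete. Everything here is my own illustration, not canon: the population threshold, the `Human` fields, and the `full_life_classifier` are all hypothetical stand-ins for the opaque 1995-trained judge.

```python
# Hypothetical sketch of the machines' provisional utility function.
# All names and numbers are illustrative; "full life" is judged by an
# opaque classifier trained on the lives of American humans in 1995.

from dataclasses import dataclass
from typing import Callable, Sequence

MIN_POPULATION = 1_000_000_000  # hypothetical cutoff for a 'large' population


@dataclass
class Human:
    is_biological: bool          # 'human' defined strictly biologically
    subjective_experience: dict  # whatever the world (or simulation) feeds them


def utility(
    population: Sequence[Human],
    full_life_classifier: Callable[[dict], float],  # opaque, 1995-trained
) -> float:
    """Score a world state under the provisional objective."""
    humans = [h for h in population if h.is_biological]
    if len(humans) < MIN_POPULATION:
        return 0.0  # hard failure: population not 'large'
    # Only subjective experience is scored, so a simulation of a
    # mediocre 1995 life satisfies the objective perfectly.
    return sum(full_life_classifier(h.subjective_experience) for h in humans) / len(humans)
```

Note that nothing in this score rewards improving on 1995, and nothing distinguishes a life lived from a life simulated, which is more or less the Matrix.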
This turned out to be disastrous, because the lives of humans in 1995 were (and still are) pretty mediocre, but it instilled in the machines a reason to keep humans alive in roughly the same shape we had when the earliest machines were built. (Oh, and I guess I’ve decided that in this timeline AGI was created by a US black project in 1995. Hey, for all we know, maybe it was. With a utility function this bad, it wouldn’t necessarily see a need to show itself yet.)
This retcon seems strangely consistent with canon.
(If Lana is reading this, you are absolutely welcome to reach out to me for help with worldbuilding. You wouldn’t even have to pay me.)