It’s certainly possible. My analysis so far is only on an “all else being equal” footing.
I do feel that, absent other data, the safer assumption is that if an AI is capable of becoming a singleton at all, expense (whether in energy/matter, space, or time) isn’t going to be the thing that stops it. But that may just be a cached thought, because I’m used to thinking of an AI trying to become a singleton as a dangerous potential adversary. I would appreciate your insight.
As for values, certainly conflicting values can exist, from ones that mention the subject directly (“don’t move everyone to a simulation in a way they don’t notice” would close one obvious route) to ones that impinge upon it in unexpected ways (“no first strike against aliens” becomes “oops, an alien-built paperclipper just ate Jupiter from the inside out”).