Generally, actually, I would. Honestly, as much as I love sexual and romantic entanglement with women, I can’t help but feel giddy about the awesomeness (according to my values) of an all-male civilization on Mars. And I’ve already spoken about how I would probably take a pill that would make me asexual. Sexbots or homosexuality-inducing pills seem an inferior solution, but not by much, as long as the pill that made me homosexual changed just my sexual preference and nothing else (I suspect typical male homosexual brains actually differ in other subtle, systematic ways from typical heterosexual male brains).
The problem comes here:
Note that as with encountering an alien civilization there is no guarantee whatsoever that peaceful coexistence would be viable in the long term.
Most LessWrongers have given very little thought to the idea that human values might differ significantly enough to be incompatible. Even fewer have thought of finding a way to have them coexist rather than just making sure their own value set gobbles up as much matter as possible.
An FAI is more likely to actually be an FAI if people don’t engage in a last desperate war for eternal ownership of the universe at the time of its construction.
The currently proposed solution to avoid such a negative-sum arms race (where aggressive action and recklessness reduce the likelihood of a friendly AI for nearly all other human value sets, but increase the likelihood of one for your particular value set) has been to hope that our values aren’t really different, and that we’re just (for now) too dumb to see this.
> Most LessWrongers have given very little thought to the idea that human values might differ significantly enough to be incompatible. Even fewer have thought of finding a way to have them coexist rather than just making sure their own value set gobbles up as much matter as possible.
That’s because it seems more likely that there’s only one FAI to rule them all, and whatever values it has will dominate the light-cone.
> The currently proposed solution to avoid such a negative-sum arms race (where aggressive action and recklessness reduce the likelihood of a friendly AI for nearly all other human value sets, but increase the likelihood of one for your particular value set) has been to hope that our values aren’t really different, and that we’re just (for now) too dumb to see this.
It’s a bit worse than that. The “hope” seems to be more along the lines of:
Never mind how a nascent valueless AI is supposed to convince itself to go back into the box.