Not sure where I stand actually, but this seems relevant:
“If God did not exist, it would be necessary to invent him”—Voltaire
I suppose it should be added that one should do one’s best to make sure the god that’s created is more Friendly than Not.
Yes, I cannot deny that a Friendly AI is far better than a paper-clip optimizer. What frightens me is that when (if) CEV converges, humanity will be stuck in a local maximum for the rest of eternity. It seems that an FAI after CEV convergence will have adamantine morals by design (or will look as if it does, if the FAI is unconscious). And no one will be able to talk the FAI out of them, or no one will want to.
It seems we don't have much choice, however. Bottoms up, to the Friendly God.
If CEV can include willingness to update as more information comes in and more processing power becomes available (and if I have anything to say about it, it will), there should be ways out of at least some of the local maxima.
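To make that concrete in optimization terms, here is a toy sketch (my own illustrative analogy; the value landscape f, the step sizes, and the restart scheme are all made up, and nothing here is specified by CEV itself): a process frozen at its first convergence stays on whichever peak it happens to find, while one that keeps exploring as new information arrives can move to a higher one.

```python
# Toy illustration only: a made-up "value landscape" with a lower peak
# near x = -1 and a higher peak near x = +1.
import random

def f(x):
    # Hypothetical landscape: two peaks, the right one strictly better.
    return -(x**2 - 1)**2 + 0.3 * x

def hill_climb(x, steps=5_000, step=0.05):
    # Freezing at convergence: greedy local updates only, so the process
    # stops at whichever peak it starts nearest to.
    for _ in range(steps):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def climb_with_updates(restarts=50):
    # "Willingness to update": keep re-exploring as new information (here,
    # fresh starting points) comes in, and keep whatever scores best.
    best = hill_climb(random.uniform(-2, 2))
    for _ in range(restarts):
        candidate = hill_climb(random.uniform(-2, 2))
        if f(candidate) > f(best):
            best = candidate
    return best

random.seed(0)
print(round(hill_climb(-1.0), 2))      # stuck near the lower peak, x ~ -0.96
print(round(climb_with_updates(), 2))  # reliably finds the higher peak, x ~ 1.04
```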
Would anyone care to speculate about the possibilities of contact with alien FAIs?
Would a community of alien FAIs be likely to have a better CEV than a human-only FAI?
If there are advantages to including alien CEVs, but light-speed limits make us unlikely to contact aliens, and unlikely to get enough information to construct their CEVs even if we do, would it make sense to evolve alien species ourselves (probably in simulation)? What would the ethical problems be?
Simulated aliens complex enough to have a CEV are complex enough to be people, and since death is evolution’s favorite tool, simulating the evolution of the species would be causing many needless deaths.
The simulation could provide an afterlife.
But I don’t see why we would want our CEV to include a random sample of possible aliens. If, when we encounter aliens, we find that we care about their values, we can run a CEV on them at that time.
This possibility may be the strongest source of probability mass for an afterlife for us.
Does a similar argument apply to having children if there’s no high likelihood of immortality tech?
Depends on the context. Quite plausibly, though.
Isn’t God fake?
Must be. If he existed, he would not have invented ape-imitating humans, would he?
Mysterious ways. :P