But I doubt that morality is all in our genetic nature; I suspect that most of it is learned from our parents, aunts, uncles, grandparents, and other older relatives. In short, I think that morality is memetic rather than genetic.
That’s possible. But memetics can’t build morality out of nothing. At the very least, evolved genetics has to provide a “foundation”: a part of the brain that moral memes can latch onto. Sociopaths lack that foundation, although the research is inconclusive as to what extent this is caused by genetics and to what extent by later developmental factors (it appears to be a mix of some sort).
Hmmm. Going by the Wikipedia article, I’d expect reflective equilibrium to produce a consistent moral framework. I also expect a correct moral framework to be consistent, but not all consistent moral frameworks are correct.
Yes, that’s why I consider reflective equilibrium to be far from perfect. Depending on how many errors you latch onto, it might worsen your moral state.
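To make that concrete, here is a minimal toy sketch in Python, under the drastic, purely illustrative assumption that moral judgments can be modeled as numbers and a “principle” as their average. The procedure always ends in a perfectly consistent state, but which consistent state it ends in depends entirely on the intuitions you start from:

```python
# Toy model of reflective equilibrium as an iterative fixed-point
# process. Everything here is an illustrative assumption: moral
# judgments are numbers, the governing "principle" is their mean,
# and "consistency" means every judgment agrees with the principle.

def reflective_equilibrium(judgments, pull=0.5, tol=1e-9, max_iter=1000):
    """Revise judgments toward the current principle until stable."""
    for _ in range(max_iter):
        principle = sum(judgments) / len(judgments)
        revised = [j + pull * (principle - j) for j in judgments]
        if max(abs(r - j) for r, j in zip(revised, judgments)) < tol:
            return principle, revised
        judgments = revised
    return principle, judgments

# Two sets of starting intuitions both reach perfectly consistent
# equilibria -- but *different* ones, and nothing in the procedure
# checks either result against the correct answer.
print(reflective_equilibrium([0.1, 0.3, 0.2])[0])  # ~0.2
print(reflective_equilibrium([0.7, 0.9, 0.8])[0])  # ~0.8
```

So consistency comes cheap; correctness is the part the procedure never checks.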
Carrying this method through to completion could give us (or anyone else) an equation. But is there any way to be sure that it necessarily gives us the correct equation?
Considering how morally messed up the world is now, even an imperfect equation would likely be better (closer to being correct) than our current slapdash moral heuristics. At this point we haven’t even achieved “good enough,” so I don’t think we should worry too much about being “perfect.”
However, I anticipate that the result would be N subtly different, but similar, equations.
That’s not inconceivable. But I think that each of the subtly different equations would likely be morally better than pretty much every approximation we currently have.
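As a hedged toy illustration (the “true” function, the heuristic, and the noise levels below are all invented for the sake of the arithmetic): if there were a true moral function, N derivations that each get its coefficients subtly wrong could still all land far closer to it than a crude rule of thumb does.

```python
# Toy numerical sketch: N subtly different approximations of a
# hypothetical "true" moral function, compared against a crude
# heuristic. All functions and noise levels are invented.
import random

random.seed(0)

def true_moral_value(x):          # hypothetical ground truth
    return 3.0 * x + 1.0

def heuristic(x):                 # our current slapdash rule of thumb
    return 2.0 * x

def make_approximation():
    """One derivation's subtly-off equation: small random errors in
    the coefficients rather than a wholesale different rule."""
    a = 3.0 + random.gauss(0, 0.05)
    b = 1.0 + random.gauss(0, 0.05)
    return lambda x: a * x + b

def mean_error(f, xs):
    return sum(abs(f(x) - true_moral_value(x)) for x in xs) / len(xs)

xs = [i / 10 for i in range(-50, 51)]
approximations = [make_approximation() for _ in range(5)]

print("heuristic error:", round(mean_error(heuristic, xs), 3))
for i, f in enumerate(approximations):
    print(f"approximation {i} error:", round(mean_error(f, xs), 3))
```

On these toy numbers, every one of the N equations beats the heuristic handily, even though no two of them agree exactly.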
That sounds plausible, yes.
Hmmm. Finding an approximation to the equation will probably be easier than step two: encouraging people worldwide to accept the approximation. (Especially since many people who do accept it will then promptly begin looking for loopholes, either to use them or to patch them.)
However, if the correct equation cannot be found, then the Morality Maximizer AI cannot be designed.
That’s true; what I was trying to say is that a world ruled by a 99.99% Approximation of Morality Maximizer AI might well be far, far better than our current one, even if it is imperfect.
Of course, it might be a problem if we put the 99.99% Approximation of Morality Maximizer AI in power, then find the correct equation, only to discover that the 99AMMAI is unwilling to step down in favor of the true Morality Maximizer AI. On the other hand, putting the 99AMMAI in power might be the only way to ensure a Paperclipper doesn’t ascend to power before we find the correct equation and design the MMAI. I’m not sure whether we should risk it or not.
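Here is one way to frame that gamble, as a toy expected-value sketch; every number below is an invented placeholder, with worlds scored under the correct equation:

```python
# Toy decision sketch for the dilemma above. All values are invented
# placeholders on a 0..1 scale, scored under the *correct* moral
# equation; p is the chance a Paperclipper wins the race if we wait.

v_99ammai    = 0.9999  # world run by the 99.99% approximation
v_mmai       = 1.0     # world run by the true Morality Maximizer AI
v_paperclips = 0.0     # world run by a Paperclipper

def expected_value_of_waiting(p):
    """Wait for the correct equation: MMAI with probability 1 - p,
    Paperclipper with probability p."""
    return (1 - p) * v_mmai + p * v_paperclips

# Deploying the 99AMMAI now wins whenever the Paperclipper risk
# exceeds the approximation's 0.01% shortfall.
for p in (0.00005, 0.0005, 0.005, 0.05):
    choice = "deploy now" if v_99ammai > expected_value_of_waiting(p) else "wait"
    print(f"p(Paperclipper) = {p}: {choice}")
```

On these toy numbers the whole question collapses into estimating p, and that is exactly the quantity we have no good way to measure; hence my uncertainty.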