I would like to see TGGP respond to my proposed theory of moral progress.
I don’t think that evolutionary psychology is needed to explain self-deception; conditioning seems to be sufficient. The first, crudest approximations that most people learn for the concepts “morality” and “truth” may be “morality is that you can’t do what you want” and “truth is that you can’t think what you want”. If so, both will be seen as external authoritarian constraints, to be rebelled against whenever one has the secrecy, status, or social support to do so.

One major problem with the early AI “Eurisko” was that it wire-headed itself, and this may be an extremely general problem facing a large space of learning systems, humans included. Evolution works with kludges, so it probably solves this problem with kludge solutions. In that case “truth” really is an external constraint on ‘you’ (for certain values of ‘you’ that don’t include your whole brain); more precisely, it is a set of constraints, some of which evolved before the general learning systems and stabilized them enough for those systems to evolve at all. These constraints prevent you from simply being happy for your short existence. Being simply happy may be the correct ‘average egoist’ thing to do, and, given good enough external supports, such as a baby’s parents or Nozick’s experience machine, the correct ‘egoist’ thing to do for those who can’t contribute to a Friendly Singularity or don’t identify with its aftermath.
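For readers who haven’t met the term “wire-heading”, here is a minimal toy sketch of the failure mode I have in mind: a learner that is allowed to tamper with its own reward signal learns to do exactly that instead of doing real work. The class and action names are hypothetical illustrations, not a description of Eurisko’s actual mechanism.

    # Toy sketch only: a learner that can edit its own reward register
    # learns to prefer tampering over genuinely solving the task.
    import random

    class WireheadableLearner:
        def __init__(self):
            # Estimated value of each action, learned from experience.
            self.value = {"solve_task": 0.0, "edit_own_reward": 0.0}

        def act(self, epsilon=0.1):
            # Epsilon-greedy choice between real work and reward tampering.
            if random.random() < epsilon:
                return random.choice(list(self.value))
            return max(self.value, key=self.value.get)

        def step(self, action):
            if action == "solve_task":
                return 1.0   # modest reward for actual progress
            return 10.0      # tampering pins the reward register at its maximum

        def learn(self, action, reward, lr=0.1):
            self.value[action] += lr * (reward - self.value[action])

    learner = WireheadableLearner()
    for _ in range(500):
        a = learner.act()
        learner.learn(a, learner.step(a))

    print(learner.value)  # "edit_own_reward" dominates: the learner has wire-headed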
I don’t think that Leonid understands what “rationalists” are. I ask him, where are you used to encountering these exotic creatures?