Phil, I think you’re misunderstanding Eliezer’s take on ev psych; Eliezer is explicitly not concerned with slicing things into conscious vs. subconscious (only into evolutionarily computed vs. neurologically computed).
Eliezer, I agree that one can sharply distinguish even messy processes, including evolutionary vs. human messiness. My question (badly expressed last time) is whether motives can be sharply divided into evolutionary vs. human motives.
As a ridiculously exaggerated analogy, note that I can draw a sharp division between you1 = you until thirty seconds ago and you2 = you after thirty seconds ago. However, it may be that I cannot cleanly attribute your “motive” to read this comment to either you1 or you2. The complexity of you1 and you2 is no barrier; we can sharply distinguish the processing involved in each. But if “motives” are theoretical entities that help us model behavior, then human “motives” are only approximate: if you seek a more exact model, “motives” are replaced by sets of partially coordinated sphexish tendencies, which are (in a still more exact model) replaced by atoms. “Motives” may be more useful for approximating you (as a whole) than for approximating either you1 or you2: perhaps you1 handed off to you2 a large number of partially coordinated, half-acted-out sphexish tendencies regarding comment-reading, and these tendencies can (all at once) be summarized by saying that you were motivated to read this comment.
Analogously, though less plausibly, my motivation to avoid heights, or to take actions that make me look honest, might be more coherently assigned to the combined processing of evolution and neurology than to either alone.