Well, the agent definition contains a series of conditionals. The last three lines read: if “cooperating is provably better than defecting”, then cooperate; else, if “defecting is provably better than cooperating”, then defect; else defect. Intuitively, assuming the agent’s utility function is consistent, at most one of the first two antecedent clauses will evaluate to true. If the first one does, the agent will output C. Otherwise it will move on to the next part of the conditional, and if that evaluates to true it will output D. If neither does, it will output D anyway. Because of this, I would go for proving that lines 3 and 4 can’t both obtain.
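In other words, my reading of those last three lines, written as one three-way conditional (glossing “cooperating is provably better than defecting” as provability of U(C) > U(D), which I take to be what lines 3 and 4 amount to):

\[
\text{output} =
\begin{cases}
C & \text{if } \vdash U(C) > U(D),\\
D & \text{else, if } \vdash U(D) > U(C),\\
D & \text{otherwise.}
\end{cases}
\]

Written this way, the only thing to rule out is both antecedents holding at once.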
Or you could prove that line 3 and line 4 can’t both obtain; I haven’t figured out exactly how to do this yet.
How about this? Suppose conditions 3 and 4 both obtain. Then there exist a and b with a > b such that U(C) = #a and U(D) = #b (switching out the underline for ‘#’). Also, there exist p and q with p > q such that U(D) = #p and U(C) = #q. Now U(C) = #a > #b = U(D) = #p > #q = U(C), so U(C) > U(C). I may actually be confused on some details, since you indicate that a and b are numbers rather than variables (for instance, in proposition 2 you select a and b as rational numbers), yet you then write numerals for them, but I’m assuming that a > b iff #a > #b. Is this a valid assumption? I feel as though I’m missing some details here, and I’m not 100% sure how to fill them in. If my assumption isn’t valid, I have a couple of other ideas. I’m not sure whether I should be thinking of #a as ‘a’ under some interpretation.
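To lay out where that assumption gets used, here is the chain step by step (this is just my reconstruction, still using ‘#’ for the paper’s underline):

\[
\begin{aligned}
U(C) &= \#a && \text{(from line 3's antecedent)}\\
&> \#b && \text{(since } a > b\text{, assuming } a > b \Leftrightarrow \#a > \#b\text{)}\\
&= U(D) && \text{(from line 3's antecedent)}\\
&= \#p && \text{(from line 4's antecedent)}\\
&> \#q && \text{(since } p > q\text{, by the same assumption)}\\
&= U(C) && \text{(from line 4's antecedent)},
\end{aligned}
\]

which gives U(C) > U(C), so lines 3 and 4 can’t both obtain.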
Going a little further, it looks like your lemma 1 only invokes the first two lines and can easily be extended for use in conjecture 4: look at parts 1 and 3. Part 1 is one step away from being identical to part 3 with C in place of D.
~Con(PA) → ~Con(PA + anything); just plug Con(PA) in for “anything”. Adding axioms can’t make an inconsistent theory consistent. Getting the analogous lemma to use in conjecture 4 looks pretty straightforward: switching out C with D and vice versa in each of the parts yields another true four-part lemma, identical in form but for the swapped D’s and C’s, so you can use it analogously to how you used the original lemma 1 in the proof of proposition 3.
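To spell out the first point, with X standing in for whatever extra axiom you add:

\[
\mathrm{PA} \vdash \bot \;\Rightarrow\; \mathrm{PA} + X \vdash \bot,
\]

since every PA-proof is also a (PA + X)-proof. Contrapositively, Con(PA + X) → Con(PA), i.e. ~Con(PA) → ~Con(PA + X); taking X = Con(PA) gives ~Con(PA) → ~Con(PA + Con(PA)).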
I think you may be committing the fundamental attribution error. It’s my understanding that Al Qaeda terrorists are often people who were in a set of circumstances that made them highly susceptible to propaganda: often illiterate, living in poverty, and with few, if any, prospects for advancement. It is easy to manipulate the ignorant and disenfranchised. If they knew more, saw the possibilities, and understood more about the world, I would be surprised if they chose a path that diverges so greatly from your own that CEV would have to resort to looking at the reptilian brain.
“What evolution wants” doesn’t seem like a clear concept—at least I’m having trouble making concrete sense of it. I think that you’re conflating “evolution” with “more ancient drives”—the described extrapolation is an extrapolation with respect to evolutionarily ancient drives.
In particular, you seem to be suggesting that a CEV including only humans will coincide with a CEV including all vertebrates possessing a reptilian brain, on the basis that our current goals seem wildly incompatible. However, as I understand it, CEV asks what we would want if we “knew more, grew up further together”, etc.