EDIT 2: Did you mean that there are advantages to having both courage and caution, so you can't have a machine with maximal courage and maximal caution? That's true, but you can probably still make Pareto improvements over humans in terms of courage and caution.
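To make that concrete, here's a toy sketch (the trait numbers are made up purely for illustration): even if courage and caution trade off against each other at the frontier, a machine can still Pareto-dominate a human who sits strictly inside that frontier, without being maximal on either trait.

```python
# Toy sketch (made-up numbers): a machine can Pareto-dominate a human on
# courage and caution without being maximal on either trait.
def pareto_dominates(a, b):
    """True if `a` is at least as good as `b` on every trait and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

human   = [0.5, 0.6]   # (courage, caution) -- hypothetical values
machine = [0.7, 0.8]   # higher on both, maximal on neither

print(pareto_dominates(machine, human))  # True
```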
Would changing “increase” to “optimize” fix your objection? Also, I don’t see how your first paragraph contradicts the first quoted sentence.
Mathematically impossible. If X matters then so does -X, but any increase in X corresponds to a decrease in -X.
I don’t see how the second sentence leads to the first. Why should a decrease in -X lead to less success? Moreover, claims of mathematical impossibility are often overstated.
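To make my confusion concrete (this framing is mine, not the parent's): suppose success $S$ is an increasing function of $X$. Then $-X$ "matters" too, in the sense that $S$ can equally be written as a function of $-X$, but it is a *decreasing* function of $-X$:

$$S = f(X) = f\bigl(-(-X)\bigr), \qquad f \text{ increasing} \;\Rightarrow\; S \text{ decreases in } -X.$$

So a decrease in $-X$ goes along with *more* success, not less; the impossibility claim seems to need some further premise.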
As for the paragraph after, it seems to assume that current traits lie on some sort of Pareto frontier of economic fitness (and, perhaps, an assumption of adequate equilibria). But I don’t see why that would be true. I know of people who are more diligent than me, more intelligent, with lower discount rates, etc., and they are indeed successful. EDIT: AFAICT, there are a tonne of frictions and barriers, which weaken the force of the economic argument I think you’re making here.
Thank you for this. The analogies are quite helpful in forcing me to consider whether my argument is valid. (Admittedly, this post was written in haste and probably errs somewhere. But realistically, I wouldn’t have polished this rant any further. So publishing it as-is, it is.) It feels like the “good/bad for alignment” and “P(doom) changed” discussions are not useful in the way that analyzing winning probabilities in a chess game is useful, but I’m not sure exactly what the difference is.
Perhaps thinking through an analogy to Go, with which I’ve got more experience, would help. When I play Go, I rarely think about updating my “probability of victory” directly. Usually, I look at the strength and solidity of my groups, and of my enemy’s, and, of course, at whether my opponent moves as I wish. Usually, I want them to move in such a way that I can accomplish some tactical objective, say killing a group in the top right so I can form a solid band of territory there and make some immortal groups. When my opponent moves, I update my plans and estimates regarding these local objectives, which propagates to my “chances of victory”.
“Wait, the opponent moved there!? Crap, now my group is under threat. Are they trying to threaten me? Oh, wait, this bugger wants to surround me? I see. Can I circumvent that? Hmm… Yep, if I place a stone at C4, it will push the field of battle to the lower left, where I’m stronger and can threaten more stones than I can right now, and connect to the middle left.”
In other words, most of my time is spent focused on the robust bottlenecks to victory, as they mostly determine whether I win. My thoughts are not shaped like “ah, my odds of victory went down because my enemy placed a stone at H12”. Thoughts about victory come after the details. The updates to P(victory), likewise, are computed after computing P(details).
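As a toy model of that ordering (all numbers and objective names are invented for illustration), you can think of P(victory) as a sum over local-objective outcomes: the opponent's move updates P(details) first, and P(victory) only moves downstream of that.

```python
# Toy model (all numbers invented): P(victory) is computed from the local details,
# via P(victory) = sum over d of P(victory | d) * P(d).
p_detail = {"kill_top_right_group": 0.6, "fail_to_kill": 0.4}
p_victory_given = {"kill_top_right_group": 0.8, "fail_to_kill": 0.3}

def p_victory(p_detail, p_victory_given):
    return sum(p_detail[d] * p_victory_given[d] for d in p_detail)

print(f"{p_victory(p_detail, p_victory_given):.2f}")  # 0.60

# The opponent plays an unexpected stone: I first revise the *local* estimate...
p_detail = {"kill_top_right_group": 0.3, "fail_to_kill": 0.7}
# ...and only then does P(victory) change, as a downstream consequence.
print(f"{p_victory(p_detail, p_victory_given):.2f}")  # 0.45
```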