An inconsistent belief system will generate actions oriented towards shifting goals; these actions interfere destructively with each other and make little net progress. A consistent belief system will generate many actions oriented towards the same goal, and so will make much more progress.
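As a toy illustration of that "destructive interference" claim (the vector framing and all the angles below are my own hypothetical additions, not anything from the original argument), you can treat each action as a unit vector aimed at its goal and measure net progress as the length of the vector sum:

```python
import numpy as np

# Hypothetical toy model: each action is a unit vector pointing at its goal
# in a 2-D "goal space". Net progress is the length of the vector sum, so
# aligned actions add constructively and scattered actions partially cancel.
def net_progress(goal_angles_deg):
    angles = np.radians(goal_angles_deg)
    vecs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.linalg.norm(vecs.sum(axis=0))

consistent = [0, 5, -5, 3]        # four actions aimed at nearly the same goal
inconsistent = [0, 120, 240, 60]  # four actions aimed at scattered goals

print(net_progress(consistent))   # ~3.99 of a possible 4.0
print(net_progress(inconsistent)) # ~1.0: most of the effort cancels out
```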
One way to model willpower is as a muscle that uses up brain energy to accomplish things. This is a common model, but it is not my current working hypothesis for how things “really universally work in human brains”. Rather, I read a person’s need for “that which people vaguely gesture towards with the word willpower” as a sign that their total cognitive makeup contains inconsistent elements that are destructively interfering with each other. In other words, the argument against logically coherent beliefs is sort of an argument in favor of akrasia.
Some people seem to have a standard response to this idea that is consonant with the slogan “that which can be destroyed by the truth should be”, and this is generally not my preferred response except as a fallback when alternative options are lacking. The problem I have with “destroy my akrasia with the truth” responses is roughly that they amount to censoring a part of yourself without proper justification for doing so. I generally expect inferential distance and limited patience to make the detailed reasoning here opaque, but for those interested, a useful place to start is the analogy of “cognitive components as assets”, which you can then compare and contrast with modern portfolio theory (MPT).
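For concreteness, here is a minimal numerical sketch of the MPT side of that compare-and-contrast. Everything below is my own hypothetical illustration (invented numbers, and the mapping from cognitive components to “assets” is just the analogy above, not an established model): two components with identical expected payoffs but low correlation, held together, deliver the same expected payoff at strictly lower volatility than either one held alone.

```python
import numpy as np

# Invented numbers for illustration only: two "assets" (on the analogy,
# two cognitive components, e.g. an ambitious drive and a cautious doubt)
# with equal expected payoff, equal standalone volatility, and a slightly
# negative correlation between their "returns".
mu = np.array([0.05, 0.05])   # expected payoff of each component
vol = np.array([0.20, 0.20])  # standalone volatility of each component
rho = -0.3                    # correlation between the two components

# Covariance matrix built from volatilities and correlation.
cov = np.array([
    [vol[0] ** 2,           rho * vol[0] * vol[1]],
    [rho * vol[0] * vol[1], vol[1] ** 2          ],
])

def portfolio_stats(w):
    """Expected payoff and volatility of a weighted mix of components."""
    return w @ mu, np.sqrt(w @ cov @ w)

for w in (np.array([1.0, 0.0]), np.array([0.5, 0.5])):
    ret, sd = portfolio_stats(w)
    print(f"weights {w}: payoff {ret:.3f}, volatility {sd:.3f}")

# weights [1. 0.]:   payoff 0.050, volatility 0.200
# weights [0.5 0.5]: payoff 0.050, volatility 0.118
```

On this analogy, “destroying” an imperfectly correlated component outright, rather than re-weighting it, throws away a diversification benefit; that is the shape of my objection to the censorship move, not a claim that MPT literally governs minds.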
However, explicitly learning about MPT appears not to be within the cognitive means of most people at the present time… which means that if the related set of insights is critical to optimal real-life functioning as an epistemic agent, then an implicit form of the same insights is likely to be embedded in people in “latent but effective form”. This doesn’t necessarily mean that such people are “bad” or “trying to dominate you”; it just means that they have a sort of in-theory-culturally-rectifiable disability in the context of something like “explicitly negotiated life optimization”.
If this disability is emotionally affirmed as a desirable state, and taken to its logical extreme in a context of transhuman self-modification abilities, you might end up with something like Greg Egan’s dream apes:
> Their ancestors stripped back the language centres to the level of higher primates. They still have stronger general intelligence than any other primate, but their material culture has been reduced dramatically – and they can no longer modify themselves, even if they want to. I doubt that they even understand their own origins any more.
Once you’ve reached the general ballpark of dream apes, the cognitive MPT insight circles back around to touch on ethical questions that come up in daily life. You can imagine a sort of grid of social and political possibilities based on questions like:

- What if the dream ape is more (or less) ethical than me?
- What if a dream ape is more (or less) behaviorally effective than me, but in a “directly active” way (with learning and teaching perhaps expected to work by direct observation of gross body motions and direct inference of the justifications for those actions)?
- What if the dream ape has a benevolent (or hostile) attitude towards me right now?
- What if, relative to someone else, I’m the dream ape?
You can get an interesting intellectual puzzle by imagining that “becoming a god-like dream ape” (i.e. lesioning verbal processing but getting better at tools and science and ethics) turned out, as “scientific fact”, to be the morally and pragmatically correct outcome of the transhuman possibility. In that context, imagine that one of these “super awesome transhuman dream apes” runs into a person from a different virtue-ethical clade who is (1) worth saving but (2) has tried (successfully or unsuccessfully) to totally close themselves off to anything except verbally explicit forms of influence, and then (3) fallen into sin somehow. In this scenario, what does the angelic dream ape do to get a positive outcome?
EDITED: Ran into the comment length limit and trimmed the thought to a vaguely convenient stopping point.