Hayekian subjectivism of limited knowledge, limited reason, and error, resulting in Bayesian probabilities in the 0.8 range and below, with required updating and its impact on making +EV decisions...
Hippie subjectivism of “you believe what you want to believe, and I believe what I want to believe.”
Aretae
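To make the Hayekian point above concrete, here is a minimal sketch, entirely my own illustration and with invented payoff numbers, of how a credence in the 0.8 range, duly updated on disconfirming evidence, can change the sign of an expected-value calculation:

```python
# Hypothetical illustration (mine, not from the thread): how a sub-0.8
# Bayesian credence, updated on new evidence, can flip a +EV decision.
# All payoff numbers are invented for the example.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H | E) by Bayes' rule."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1.0 - prior) * p_evidence_if_false)

def expected_value(p, payoff, loss):
    """EV of acting: win `payoff` if the belief holds, lose `loss` if not."""
    return p * payoff - (1.0 - p) * loss

p = 0.8  # a "limited knowledge" credence, per the quote
print(expected_value(p, payoff=100, loss=300))  # 0.8*100 - 0.2*300 = 20.0 -> act

# Disconfirming evidence arrives: likelier under not-H than under H.
p = bayes_update(p, p_evidence_if_true=0.5, p_evidence_if_false=0.9)
print(round(p, 3))  # 0.69
print(round(expected_value(p, payoff=100, loss=300), 1))  # -24.1 -> don't act
```

The point of the sketch: when knowledge is limited, the “required updating” is what keeps the decision rule honest, since a modest shift in credence can reverse which action is +EV.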
There’s also the subjectivism of taste, sometimes known as consumer sovereignty (the idea, from David Friedman’s The Machinery of Freedom, that a person’s own good is defined as whatever he says it is). Not believing in that leads to outbreaks of senseless and counterproductive nannyism, whether carried out alone or with the help of authorities.
I assume that what you mean by “whatever he says it is” is whatever preferences his choices reveal, not literally what he says it is.
Believing that a person’s good is literally what they say it is can just as easily lead to “nannyism”, if we decide to prevent people from acting against their own good.
It’s a balance, what with akrasia and all—but yes, flat-out accepting that people want precisely and only what they verbally and publicly indicate would be problematic.
Personally, I have yet to be convinced that “I really want to do X, but due to akrasia I don’t behave in ways that reflect my actual desire to do X” is a more accurate description of the world than “I don’t really want to do X, but due to signalling I express a desire to do X I don’t really have.”
I don’t think either of those is accurate. How about “I have reasons to do X and reasons not to do X, and I have not resolved the conflict. In fact, I may not be aware of what all the reasons on both sides are.”
(nods) That’s fair.
You’ve just signaled that you wouldn’t make a very reliable ally. I’ll keep that in mind. ;)
What do you look for in an ally?
No he hasn’t. He has signaled a lack of hypocrisy—a desirable trait in an ally.
He has signaled that he identifies with his “baser urges” (a.k.a. System 1) rather than his “higher faculties” (a.k.a. System 2, a.k.a. the part that makes promises to allies). As such, when I really need him, he’s more likely to give in to akrasia on the grounds that any promises he made were merely signaling.
He has signaled that his baser urges are more likely to be in accord with his higher faculties. So he is less likely to make promises that he can’t keep, betray me because he doesn’t really want to behave according to unrealistic ideals, and then either express sincere remorse about his betrayal or retreat into outright self-delusion and denial about having failed to live up to the words he expressed.
He has also signaled that, regardless of whether or not he would make a good ally, he is probably not your ally. That is, your philosophy tends to be particularly idealistic, so you have a fair indication that he is going to be opposed to your social-political moves when it comes to meme expression and belief enforcement.
Let’s hope that you never have to find out otherwise.
I infer from your rather cryptic comment that you mean something like: if I ever actually experienced the thing we’re labeling akrasia, I’d understand that it’s not just signaling, but since I never have, I don’t. Is that right?
Pretty much, yes, although I think that you have experienced it to some extent. When it is not so bad, you can work around it and maintain your image, and then your signaling explanation is a good model. Other times it makes you fail at important things, or forces you to do so much apologizing and compensating that it is very bad from a status/signaling perspective and costs you more than it would to just do the thing that you supposedly don’t really want to do.
OK. Thanks for clarifying.
Also “I believe it’s good to want to do X.” Like belief in belief, where believing it’s good to believe something makes people think they believe it, I suspect that people confuse really wanting to do something with the belief that it is good to really want to do it. You may have meant this too, but I think it’s different from just signaling. Is there a term for internal signaling?
I did mean that too, but you’re right that using the term without qualification the way I did is unnecessarily ambiguous. I don’t know of any concise unambiguous term for it; perhaps we should coin one.
I don’t see why it has to be one or the other.
It doesn’t, and indeed there are better alternatives than both. But “akrasia” often functions as a narrative attractor around here, so it seemed useful to provide an alternative.
I thought that “want” was dissected sufficiently to make that distinction testable a while back.
Of course, not everyone is consistent with their approval, or at least I’d think it’s difficult to extract preferences without people signalling all over them, but I personally identify with my approval rather than with my wanting.
(edited to remove poor use of “dissolved”)
That isn’t dissolved. Reduced and described in detail, perhaps, but ‘dissolved’ ideas are the kind that aren’t used at all once you are done thinking them through.
I agree that I’m using dissolved a bit wrong; I wrote the comment fairly late. What I meant was that the concept of “wanting” has been specified to the extent that the difference between akrasia and signaling becomes a practical question with actual predictive differences.
I’ll go back and edit the parent to specify better when I have the chance.
Yes, I think I agree with your main point.