I’m actually having a little trouble grasping your meaning with these questions. I agree that someone giving advice on (1) should have applied (1). Otherwise, they don’t have a justified claim to the knowledge in (1). But this is the case whether or not they demonstrate (2), which is why I’m confused by the wording of your question.
What I don’t get is why it is relevant if the advice-giver failed to realize that they should have applied (1) in some particular case, even if they ought to have known that they should have applied (1).
To give a gruesome example, a professional hitman might be able to give very good advice on how to kill someone you’ve decided to kill, even if his advice on when to decide to kill someone is spectacularly bad.
Similarly, your evaluation of Alicorn’s advice on how to like someone you’ve decided to like should be independent of your belief that she’s very bad at deciding when to like someone.
So that’s what your entire criticism amounts to? That maybe Alicorn just didn’t recognize this as an opportunity to use her skills, even as she goes through the terror of seeing my comments pop up all over the place?
That would kind of require you to believe that Alicorn was lying about the whole psychological stress thing, which is a spectacularly nasty thing to lie about. If you’re fine with that so long as it proves me wrong … I guess that’s a call you have to make.
That would kind of require you to believe that Alicorn was lying about the whole psychological stress thing. …
I don’t follow this inference at all.
I would guess that she “didn’t recognize this as an opportunity to use her skills” because of the psychological stress.
That is, because of the psychological stress of interacting with you, she came to the mistaken conclusion that she ought not to like you, so she never tried to apply her advice. That would be my guess.
ETA: Maybe this is your reasoning (please confirm or deny): A person with the ability to choose to like anyone would choose to like everyone, especially the people that he or she really, really doesn’t like. This is because disliking someone is unpleasant, and it’s more unpleasant the more you dislike them. But liking someone is pleasant, so that is what someone with the power in the OP would choose to do. Therefore, someone who claims to have the power in the OP, but who also evidently doesn’t like someone, is probably lying or deluded.
Fourth time: the advice applies to canceling dislike, just as much as changing to like.
So, your position is now that Alicorn suffers psychological stress from seeing my name all over her favorite[1] discussion site, but feels this is just “something she has to live with” (though it has disadvantages of its own), because of the severe wrongness of turning off her dislike of me?
It’s okay to say “oops”.
ETA:
Maybe this is your reasoning (please confirm or deny): A person with the ability to choose to like anyone would choose to like everyone, especially the people that he or she really, really doesn’t like. …
Not even close: I listed the reasons Alicorn unnecessarily adheres to a dislike that are specific to this situation, and how the unpleasantness can be good for her and the site by switching to non-dislike … already it looks nothing like the reasoning you posited.
[1] please, please don’t nitpick this one—you get the point, I hope
It’s okay to say “oops”.
You know, I’ve been thinking the same thing :).
So, your position is now that Alicorn suffers psychological stress from seeing my name all over her favorite[1] discussion site, but feels this is just “something she has to live with” (though it has disadvantages of its own), because of the severe wrongness of turning off her dislike of me?
Yes, I think that that is a fair description of my position. (ETA: However, the “severe wrongness” need not be moral wrongness. Humans often want to do unpleasant things and very much don’t want to do something that would increase their pleasure. It’s not all that unusual. Usually this is for moral reasons, as conventionally understood, but not always.)
Did you read my edit to my last comment? Does it capture your reasoning (with “like” replaced with “not dislike”, if you like)?
Cute, but considering how contorted your position has turned out to be, you can forgive me for wondering if you wanted to stick with it.
Yes, I think that that is a fair description of my position. (ETA: However, the “severe wrongness” need not be moral wrongness. Humans often want to do unpleasant things and very much don’t want to do something that would increase their pleasure. It’s not all that unusual. Usually this is for moral reasons, as conventionally understood, but not always.)
And that’s what I mean: on top of the already contorted position I attributed to you, you’re adding this moral-or-maybe-something-else wrongness, which has no precedent in your earlier justifications. Do you think it’s probably one of the non-moral-wrongness things? Is that just a matter of terminology?
My earlier comment has been revised to respond to your addition, but it’s just an elaboration of “wtf? no”.
Do you take enjoyment in participating in these long, often repetitive arguments? Do you not find the antagonism consistently grating or stressful? If you have been wronged, surely from experience you can see that repeatedly bringing it up is simply not going to change anything. I’m curious as to whether this apparent futility bothers you in the same way that I know it would bother me.
Do you take enjoyment in participating in these long, often repetitive arguments?
No.
Do you not find the antagonism consistently grating or stressful?
I do find the antagonism grating and/or stressful. (The same with questions posed in the negative, but I digress.)
If you have been wronged, surely from experience you can see that repeatedly bringing it up is simply not going to change anything.
It’s definitely going to change the cardinality of the set of non-anonymous people who can independently confirm or disconfirm being on the receiving end of Alicorn’s wisdom, which is what I was mainly hoping for.
To your broader, implied query: I’m between a rock and a hard place. I’ve wanted to point out what a crock Alicorn’s supposed insight on the matter is since her luminosity series (this isn’t the first time she’s posted advice in direct contradiction of how all evidence reveals she handles situations). After about the ~8th article, I couldn’t let her go on promoting this two-faced act, so I spoke up.
No, I don’t enjoy becoming LW’s whipping boy every three months. But what can I say—no good deed goes unpunished.
(The same with questions posed in the negative, but I digress.)
This doesn’t work online, but Steve Rayhawk has cultivated the habit of consistently responding to questions in the negative with an affirmative response (‘Yes, I do not believe that’, or simply ‘Yes’) and thus feels he does not have to sacrifice meaning for ease of conversational flow. I really wish this would become a more common disposition. Anyway, sorry for doing that.
I think you discount the possibility (I have no idea how probable it is, by the way) that Alicorn is actually a generally luminous and thoughtful person and that for some reason you seem to be an especially rare and difficult case for her. Maybe she has legitimate things to say to help people generally, even if she messed up (or you messed up for her) the dynamic between you two specifically. I know Alicorn. She can be critical, but she’s genuinely a good person. It could be that you’ve been wronged, but it could also be that this is an atypical result for people who interact with Alicorn, as most of the evidence seems to suggest. Generalizing from one example, although it probably feels justified, might actually be the wrong thing to do here. It might be impossible for you, but I’d suggest letting it go. All of the writing time you’ve spent on comments in this thread could have been spent on a good post, which is your strong point. One should generally not spend their time optimizing for cold harshies.
I’m not generalizing from one example, and my reaction is not atypical. Looking at the moderation difference between Alicorn and HughRistik regarding her advice here, and the numerous other times she posts dating/meeting friends advice in the comments section (rather than as an article), it seems that most men here aren’t benefitting from what she has to say in their daily lives—though they may certainly find the advice intellectually stimulating.
I’m not generalizing from one example, and my reaction is not typical.
Point taken (and I think you meant atypical?). It’s funny, because I know Hugh and I know Alicorn, and I bet they’d make decent friends in person (if they haven’t met already at a Less Wrong meetup while I was on vacation or something). Anyway, your claim here seems way more reasonable than the dramatized ones above. (“I couldn’t let her go on promoting this two-faced act”.) It seems you have narrowed your argument specifically to relationship advice, in which case I’m much more tempted to agree that your point has merit. But I think her luminosity sequence got a lot of upvotes for a reason. I personally found some useful concepts in there, and looking at the comments it seems many others also discovered her ideas about luminosity to be useful. First, I don’t think shouting ‘hypocrisy’ is a good argument against the usefulness of a post; second, I don’t think that shouting ‘hypocrisy’, or attempting ad hominem attacks, is going to get you anywhere anyway. If you want to make people think Alicorn is a bad person, fine, but why the heck would you want to do that? Vengeance? It seems you take the more reasonable position that Alicorn might be being trusted as an expert where she lacks skill, but continuing to attack her in areas where skill has been demonstrated erodes Less Wrongers’ ability to believe you are acting in good faith.
Cute, but considering how contorted your position has turned out to be, you can forgive me for wondering if you wanted to stick with it.
Hmm. I thought that I laid it out very cleanly here.
And that’s what I mean: on top of the already contorted position I attributed to you, you’re adding this moral-or-maybe-something-else wrongness, which has no precedent in your earlier justifications. Do you think it’s probably one of the non-moral-wrongness things? Is that just a matter of terminology?
I think that it’s probably moral wrongness, but I’m less certain, so I’m more cautious about attributing that view to her.
But, at any rate, I honestly don’t see the contortions to which you refer. Perhaps she would experience a certain increase in pleasure if she modified herself not to dislike you. If she has this power, but chooses not to use it, then you may conclude that she cares about something more than that pleasure. It’s sort of like how Gandhi wouldn’t take a pill to make himself like to kill people, even if he knew that he would have lots of opportunities to kill people at no cost. There is a very standard distinction between what you think you ought to do and what you think will give you the most pleasure. I would expect the inferential distance on LW for this to be very short. That is why I don’t see my position as contorted.
Give me just a little credit here: yes, I do understand the difference between “this increases my pleasure” and “I should do this”; and yes, there should be low inferential distance on explaining such a point on LW. That wasn’t in dispute. What’s in dispute is how much contortion you have to go through to justify why that distinction would be relevant and applicable here (which even the contortion leaves out).
And you didn’t lay it out very cleanly in the linked comment: you just made one distinction that is a very small part of what you have to say to specify your position.
My view is that Alicorn probably perceives certain benefits from not disliking you, such as the ones you’ve enumerated. But evidently she also sees other costs from not disliking you (costs which are probably moral). In her estimation (which I think is incorrect) the costs outweigh the benefits. Therefore, she has chosen not to apply the advice in the OP.
What’s contorted about that? As I see it, I’m just taking her revealed preferences at face value, while giving her the benefit of the doubt that she has the powers described in the OP.
Not even close: I listed the reasons Alicorn unnecessarily adheres to a dislike that are specific to this situation, and how the unpleasantness can be good for her and the site by switching to non-dislike … already it looks nothing like the reasoning you posited.
Okay, how about this*:
Alicorn knows** that she ought to like Silas. Therefore, if she had the power to like whomever she wanted, she would have chosen to like Silas. Since she hasn’t chosen to like Silas, she must not have the powers she claims in the OP. Therefore, she was deluded or lying when she wrote the OP, so we can dismiss her advice.
* I’m honestly just trying to understand your view. I expect that my picture of your view is still wrong in significant respects. But the best way that I know to improve my understanding is to give you my picture so far, so that you can correct it. I am not trying to characterize your view for rhetorical purposes. Again, I know that my picture is probably wrong.
** It is not enough that she ought to know, any more than we should dismiss the hitman’s advice on how to kill just because he is so clearly wrong about *when* to kill.