No, I do not think that your fallacy depends on what DVH thinks.
If I’m saying something to have an effect on another person, then the quality of my reasoning process depends on whether my model of that person is correct.
It’s like debugging a phobia at a LW meetup. People complain that the language isn’t logical, but in the end the phobia is gone. The fact that the language superficially pattern-matches to fallacies is beside the point as long as it has the desired consequences.
You’re confusing risk aversion and learned helplessness.
No, I’m talking to a person who at least self-labels as schizoid and about whom I have more information beyond that.
If I thought the issue were risk aversion and I wanted to convince him, I would appeal to the value of courage. Risk aversion doesn’t prevent people from considering an option; it doesn’t make them see the very act of considering an option as irresponsible.
What result did I achieve here? I got someone who hates his job to think about whether to learn a different skillset to switch to a more enjoyable job, and to ask for advice about what he could do. He shows more agency about his situation.
If I’m saying something to have an effect on another person, then the quality of my reasoning process depends on whether my model of that person is correct.
LOL. Let me reformulate that: “If I’m trying to manipulate another person, I can lie, and that’s ‘beside the point as long as it has the desired consequences’.” Right? X-)
Saying “There’s no real safe job” is not a lie. It’s true on its surface. If my mental model of DVH is correct, it leads to an update in a direction that is more in line with reality, and saying things to move other people to a more accurate way of seeing the world isn’t lying.
Ahem. So you are saying that if you believe that your lie is justified, it’s no lie.
saying things to move other people to a more accurate way of seeing the world isn’t lying.
Let’s try that on an example. Say Alice is dating Bob, but you think that Bob is a dirtbag and not good for Alice. You want to move Alice “to a more accurate way of seeing the world”, and so you invent a story about how Bob has a hobby of kicking kittens and is an active poster on revenge porn forums. You’re saying that this would not be lying because it will move Alice to a more accurate way of seeing Bob. Well...
No. There are two factors:
1) It’s true. There are really no 100% safe jobs.
2) The likely update by the audience is in the direction of a more accurate belief.
Getting Alice to believe that Bob is an active poster on revenge porn forums by saying it likely doesn’t fulfill either criterion 1) or criterion 2).
There is really no 100% safe anything, but I don’t think that when DVH said “I will not leave a safe job for a startup” by “safe” he meant “100% safe”.
That doesn’t prevent the statement from being true. The fact that there’s no 100% safe anything doesn’t turn the statement into a lie, while the example that Lumifer provides is clearly lying.
meant
I didn’t focus on what he “meant” but on what I believed his mental model to be.
I don’t think DVH’s mental models have become less accurate in any way as a result of my communication. He didn’t pick up the belief “Startups are as safe as my current job”. I didn’t intend to get him to pick up that belief either; I don’t believe that statement either.
My statement thus fulfills the two criteria:
1) It’s true on its surface.
2) It didn’t lead to inaccurate beliefs in the person I’m talking with.
Statements that fulfill both of those criteria aren’t lies.
That would mean that if you say something that is literally true but intended to mislead, and someone figures that out, it’s not a lie.
I have no problem with including intentions as a third criterion, but in general “see that your intentions aren’t to mislead” is very similar to “see that you reach an outcome where the audience isn’t misled”, so I don’t list it separately.
That doesn’t prevent the statement from being true.
It doesn’t (though it does mostly prevent it from being useful), but the statement you made upthread was not that one. It was “Today there’s nothing like a real safe job”, in which context “safe” would normally be taken to mean something like “reasonably safe”, not “exactly 100% safe”.
1) It’s true on its surface.
What do you mean by “on its surface”? What matters is if it’s true in its most likely reasonable interpretation in its context.
Meh. Enough with the wordplays and let’s get quantitative. What do you think the P(DVH will lose his current job before he wants to|he doesn’t leave it for a startup) is? What do you think he thinks it is?
in which context “safe” would normally be taken to mean something like “reasonably safe”, not “exactly 100% safe”.
I didn’t just say “safe”, I added the qualifier “real” to it. I also started the sentence with “today”, which makes it more like a general platitude.
I specifically didn’t say your job isn’t safe but made the general statement that no job is really safe.
It happens to be a general platitude commonly repeated in popular culture.
What do you think he thinks it is?
I think he didn’t have a probability estimate for that in his mind at the time I was writing those lines. When you assume he had such a thing, you miss the point of the exercise.
You’re confusing risk aversion and learned helplessness.
Another English irregular verb.
“I can see that this won’t work. You are risk-averse. He exhibits learned helplessness.”