Imagine that in the current discussion, we suddenly realize that we’ve been writing all this time not to find the truth, but to convince each other (which I think is actually the case).

It would be one of those situations where someone like Kaj Sotala would say: “it seems you’re deeply motivated to find the truth, but you’re only trying to make people think you have the truth (= convince them)”.

Then my point would be: unless you’re cynical, convincing and finding the truth are exactly the same. If you’re cynical, you think short term and your truth won’t last (people will soon realize you were wrong). If you’re sincere, you think long term and your truth will last.

I would even argue that the only proper definition of truth is: what convinces most people in the long run. Similarly, a proper definition of good (or “important to do”) would be: what brings gratitude from most people in the long run.
I think that defocusing a bit and taking the outside view for a second might be clarifying, so let’s not talk about exactly what it is that people do.
Kaj Sotala says that he has identified something which constitutes a major source of problems, with example problems a)–f), all very real problems like failing charities and people being unable to work from home.

Then you come along and say “there is no problem here,” that everything boils down to us just using the wrong definition of motivation (or something). But what about the charities that can’t find anyone to do their mucky jobs? What about the people who could offer great service and/or reduce their working hours by working from home, if only they could get themselves to do it? Where does your argument solve these problems?
The reason I reacted to your post was not that I saw the exact flaw in your argument. The reason I answered is that I saw that your argument doesn’t solve the problem at hand; in fact, it fails to even recognize it in the first place.
I think that you are probably overvaluing criticism. If so, you can significantly increase the usefulness of your thoughts by holding off on flaws, trying to identify the heart of the material first, and applying criticism only afterwards, and even then only if it’s worth it.
Sorry, but I am only refining the statement I made from the start, which in my view is still perfectly relevant to the material. You don’t agree with me; let’s not lose too much time on meta-discussions...
I understand your concern about the problems mentioned in the article, and your feeling that I don’t address them. You’re right, I don’t: my feeling about these problems is that they occur in complex situations where many actors are involved, and I am not at all convinced that they result from a lack of motivation or a problem of unconscious motivation hijacking.
You think he would make the mistake of thinking there is only one motivation behind each human action?
Just to clarify: consider two competing theories T1 and T2 about what will happen to the Earth’s surface after all people have died. You would argue that if T1 is extremely popular among living people prior to that time, and T2 is unpopular, then that’s all we need to know to conclude that T1 is more true than T2. Further, if all other competing theories are even less popular than T2, then you would argue that T1 is true and all other competing theories false. What actually happens to the Earth’s surface is completely irrelevant to the truth of T1.
Have I understood you?
This is a bit of a caricature. I made my statement as simple as possible for the sake of the argument, but I subscribe to the pragmatic theory of truth.
I made my example extreme to make it easy for you to confirm or refute. But given your refutation, I honestly have no idea what you mean when you suggest that the only proper definition of truth is what convinces the most people in the long run. It sure sounds like you’re saying that the truth about a system is a function of people’s beliefs about that system rather than a function of the system itself.
Yes in a sense. The pragmatic conception of truth holds that we do not have access to an absolute truth, nor to any system as it is “in itself”, but only to our beliefs and representations of systems. All we can do is test our beliefs and the accuracy of our representations.
Within that conception, a belief is true if “it works”, that is, if it can be successfully tested against other established belief systems and serve as a basis for action with expected results (e.g. scientific inquiry). Incidentally, there is no truth outside our beliefs, and truth is always temporary. A truth could be considered universal if it could convince everyone.
I’m entirely on board with endorsing beliefs that can successfully serve as a basis for action with expected results by calling them “true,” and on board with the whole “we don’t have access to absolutes” thing.
I am not on board with endorsing beliefs as “true” just because I can convince other people of them.
You seem to be talking about both things at once, which is why I’m confused.
Can you clarify what differences you see (if any) between “it works/it serves as a reliable basis for action” on the one hand, and “it can convince people” on the other, as applied to a belief, and why those differences matter (if they do)?
In my view, going from subjective truth to universal (inter-subjective) truth requires agreement between different people, that is, convincing others (or being convinced). I hold a belief because it is reliable for me. If it is reliable for others as well, then they’ll probably agree with me. I will convince them.
So, at the risk of caricaturing your view again, consider the following scenario:
At time T1, I observe some repeatable phenomenon X. For the sake of concreteness, suppose X is my underground telescope detecting a new kind of rock formation deep underground that no person has ever before seen… that is, I am the discoverer of X.
At time T2, I publish my results and show everyone X, and everyone agrees that yes, there exists such a rock formation deep underground.
If I’ve understood you correctly, you would say that if B is the belief that there exists such a rock formation deep underground, then at T1 B is “subjectively true,” prior to T1 B doesn’t exist at all, and at T2 B is “inter-subjectively or universally true”. Is that right?
Let’s call NOT(B) the denial of B—that is, NOT(B) is the belief that such rock formations don’t exist.
At times between T1 and T2, when some people believe B and others believe NOT(B) with varying degrees of confidence, what is the status of B in your view? What is the status of NOT(B)? Are either of those beliefs true?
And if I never report X to anyone else, then B remains subjectively true, but never becomes inter-subjectively true. Yes?
Now suppose that at T3, I discover that my tools for scanning underground rock formations were flawed, and upon fixing those tools I no longer observe X. Suppose I reject B accordingly. I report those results, and soon nobody believes B anymore.
On your view, what is the status of B at T3? Is it still intersubjectively true? Is it still subjectively true? Is it true at all?
Does knowing the status of B at T3 change your evaluation of the status of B at T2 or T1?
At T1, B is “subjectively true” (I believe that B). However, it’s not an established truth. From the point of view of the whole society, the result needs replication: what if I was deceiving everyone?

At T2, B is controversial.

At T3, B is false.
Now, is the status of B changing over time? That’s a good question. I would say that the status of B is contextual. B was true at T1 to the extent of the actions I had performed at that time. It was “weakly true” because I had not checked every flaw in my instruments. It became false in the context of T3. Similarly, one could say that Newtonian physics is true in the context of slow speeds and low energies.
OK, thanks for clarifying; I think I understand your view now.
“They fuck you up, count be wrong”
Kid in The Wire, Pragmatist Special Episode, when asked how he could keep count of how many vials of crack were left in the stash but couldn’t solve the word problem in his math homework.