Throwing your hands in the air and saying “well we can never know for sure” is not as accurate as giving probabilities of various results. We can never know for sure which answer is right, but we can assign our probabilities so that, on average, we are always as confident as we should be. Of course, humans are ill-suited to this task, having a variety of suboptimal heuristics and downright biases, but they’re all we have. And we can, in fact, assign the correct probabilities / make the correct choice when we have the problem reduced to a mathematical model and apply the math without making mistakes.
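The phrase “as confident as we should be” has a concrete reading: calibration. Over many predictions, claims made with 70% confidence should come true about 70% of the time. A minimal sketch of such a check, using invented predictions and outcomes purely for illustration:

```python
# Calibration check: group predictions by stated confidence and
# compare each group's average confidence to its actual hit rate.
from collections import defaultdict

# (stated probability, did the event actually happen?) -- toy data
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
    (0.2, False), (0.2, False), (0.2, True),
]

buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[p].append(outcome)

for p in sorted(buckets):
    outcomes = buckets[p]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%} -> actual {hit_rate:.0%} over {len(outcomes)} predictions")

# One summary number: the Brier score, the mean squared error of the
# stated probabilities (0.0 is perfect; flat 50% guessing scores 0.25).
brier = sum((p - o) ** 2 for p, o in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")
```

Anyone tracking their own predictions can score them the same way; the point is that “as confident as we should be” is checkable after the fact, not just a feeling.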
If all you’re looking for is confidence, why must you assign probabilities? I’m pushing you in hopes of understanding, not necessarily disagreeing. If I’m very religious and use that as my life-guide, I could be extremely confident in a given answer. In other words, the value of using probabilities must extend beyond confidence in my own answer—confidence is just a personal feeling. Being “right” in a normative sense is also relevant, but as you point out, we often don’t actually know what answer is correct. If your point instead is that probabilities will result in the right answer more often than not, fine, then accurately identifying the proper inputs and valuing them correctly is of utmost importance—this is simply not practical in many situations precisely because the world is so complex. I guess it boils down to this—what is the value of being “right” if what is “right” cannot be determined? I think there are decisions where what is right can be determined—and rationality and the Bayesian model work quite well. I think far more decisions (social relationships, politics, economics—particularly decisions that do not directly affect the decision maker) are too subjective to know what is “right” or accurately model inputs. In those cases, I think rationality falls short, and the attempt to assign probabilities can give false confidence that the derived answer has a greater value than simply providing confidence that it is the best one.
I think I’m the only one on LessWrong who finds EY’s writing maddening—mostly the style—I keep screaming to myself, “get to the point!”—as noted, perhaps it’s just me. His examples from the cited article miss the point of perspectivism I think. Perspectivism (or at least how I am using it) simply means that truth can be relative, not that it is relative in all cases. Rationality does not seem to account for the possibility that it could be relative in any case.
Perspectivism provides that all truth is subjective, but in practice, this characterization has no relevance to the extent there is agreement on any particular truth. For example, “Murder is wrong,” even if a subjective truth, is not so in practice because there is collective agreement that murder is wrong. That is all I meant, but I agree that it was not clear.
Wait, does this “truth is relative” stuff only apply to moral questions? Because if it does then, while I personally disagree with you, there’s a sizable minority here who won’t.
What do you disagree with? That “truth is relative” applies only to moral questions? Or that it applies to more than moral questions?
If instead your position is that moral truths are NOT relative, what is the basis for that position? No need to dive deep if you know of something I can read... even EY :)
My position is that moral truths are not relative, exactly, but agents can of course have different goals. We can know what is Right, as long as we define it as “right according to human morals.” Those are an objective (if hard to observe) part of reality. If we built an AI that tries to figure those out, then we get an ethical AI—so I would have a hard time calling them “subjective”.
Of course, an AI with limited reasoning capacity might judge wrongly, but then humans do likewise—see e.g. Nazis.
EDIT: Regarding EY’s writings on the subject, he wrote a whole Metaethics Sequence, much of which is leading up to or directly discussing this exact topic. Unfortunately, I’m having trouble with the filters on this library computer, but it should be listed on the sequences page (link at top right) or in a search for “metaethics sequence”.
We can know what is Right, as long as we define it as “right according to human morals.” Those are an objective (if hard to observe) part of reality. If we built an AI that tries to figure those out, then we get an ethical AI—so I would have a hard time calling them “subjective”
I don’t dispute the possibility that your conclusion may be correct; I’m wondering the basis on which you believe your position to be correct. Put another way, why are moral truths NOT relative? How do you know this? Thinking something can be done is fine (AI, etc.), but without substantiation it introduces a level of faith to the conversation. I’m comfortable with that as the reason, but wondering if you are, or if you have a different basis for the position.
From my view, moral truths may NOT be relative, but I have no basis on which to know that, so I’ve chosen to operate as if they are relative because (i) if moral truths exist but I don’t know what they are, I’m in the same position as them not existing/being relative, and (ii) moral truths may not exist. This doesn’t mean you don’t use morality in your life; it’s just that you need to have a belief, without substantiation, that the morals you subscribe to conform with universal morals, if they exist.
OK, I’ll try to search for those EY writings, thanks.
I suspect that the word “confidence” is not being used consistently in this exchange, and you might do well to replace it with a more explicit description of what you intend for it to refer to.
Yes, this community is generally concerned with methods for, as you say, getting “the right answer more often than not.”
And, sure, sometimes a marginal increase in my chance of getting the right answer isn’t worth the cost of securing that increase—as you say, sometimes “accurately identifying the proper inputs and valuing them correctly [...] is simply not practical”—so I accept a lower chance of having the right answer. And, sure, complex contexts such as social relationships, politics, and economics are often cases where the cost of a greater chance of knowing the right answer is prohibitive, so we go with the highest chance of it we can profitably get.
To say that “rationality falls short” in these cases suggests that it’s being compared to something. If you’re saying it falls short compared to perfect knowledge, I absolutely agree. If you’re saying it falls short compared to something humans have access to, I’m interested in what that something is.
I agree that expressing beliefs numerically (e.g., as probabilities) can lead people to assign more value to the answer than it deserves. But saying that it’s “the best answer” has that problem, too. If someone tells me that answer A is the best answer I will likely assign more value to it than if they tell me they are 40% confident in answer A, 35% confident in answer B, and 25% confident in answer C.
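The 40/35/25 example can be pushed one step further: a probability spread lets you weigh actions by expected value, which a bare “best answer” cannot. A toy sketch, with all payoffs invented for illustration:

```python
# Expected-value comparison: the most probable answer is not always
# the one worth acting on once payoffs are taken into account.
beliefs = {"A": 0.40, "B": 0.35, "C": 0.25}

# Invented payoff of each action under each possible true answer.
payoffs = {
    "act_on_A": {"A": 10, "B": -5, "C": -5},
    "act_on_B": {"A": -1, "B": 12, "C": -1},
}

evs = {action: sum(beliefs[ans] * table[ans] for ans in beliefs)
       for action, table in payoffs.items()}
for action, ev in evs.items():
    print(f"{action}: expected value {ev:+.2f}")
```

Here acting on B wins in expectation even though A is the single most probable answer; the bare label “A is the best answer” would have hidden exactly the information needed to see that.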
I have no idea what you mean by the truth being “relative”.
I suspect that the word “confidence” is not being used consistently in this exchange, and you might do well to replace it with a more explicit description of what you intend for it to refer to.
I referenced confidence only because Mugasofer did. What was your understanding of how Mugasofer used “confident as we should be”? Regardless, I am still wondering what the value of being “right” is if we can’t determine what is in fact right. If it gives confidence/ego/comfort that you’ve derived the right answer, being “right” in actuality is not necessary to have those feelings.
To say that “rationality falls short” in these cases suggests that it’s being compared to something.
Fair. The use of rationality and the belief in its merits generally biases the decision maker to form a belief that rationality will yield a correct answer, even if it does not—it seems rationality always errs on applying probabilities (and forming a judgment), even if they are flawed (or you don’t know they are accurate). To say it differently, to the extent a question has no clear answer (for example, because we don’t have enough information or it isn’t worth the cost), I think we’d be better off withholding judgment altogether than forming a judgment for the sake of having an opinion. Rumsfeld had this great quote—“we don’t know what we don’t know”—we also don’t know the importance of what we don’t know relative to what we do know when forming judgments. From this perspective, having an awareness of how little we know seems far more important than creating judgments based on what we know. Rationality cannot take into account information that is not known to be relevant—what is the value of forming a judgment in this case? To be clear, I’m not “throwing my hands up” for all of life’s questions and saying we don’t know anything—I’m trying to see how far LW is willing to push rationality as a universal theory (or the best theory in all cases short of perfect knowledge, whatever that means).
Truth is relative because its relevance is limited to the extent other people agree with that truth, or so I would argue. This is because our notions of truth are man-made, even if we account for the possibility that there are certain universal truths (what relevance do those truths have if only you know them?). Despite the logic underlying probability theory/science in general, truths derived therefrom are accepted as such only because people value and trust probability theory and science. All other matters of truth are even more subjective—this does not mean that contradicting beliefs are equally true or equally valid; instead, truth is subjective precisely because we cannot even attempt to prove anything as true outside of human comprehension. We’re stuck debating and determining truth only amongst ourselves. It’s the human paradox of freedom of expression/reasoning trapped within an animal form that is fallible and will die. From my perspective, determining universal truth, if it exists, requires transcending the limitations of man—which of course I cannot do.
What was your understanding of how Mugasofer used “confident as we should be”?
Roughly speaking, I understood Mugasofer to be referring to a calculated value with respect to a proposition that ought to control my willingness to expose myself to penalties contingent on the proposition being false.
what the value of being “right” is if we can’t determine what is in fact right?
I’m not quite sure what “right” means, but if nothing will happen differently depending on whether A or B is true, either now or in the future, then there’s no value in knowing whether A or B is true.
it seems rationality always errs on applying probabilities (and forming a judgment), even if they are flawed (or you don’t know they are accurate).
Yes, pretty much. I wouldn’t say “errs”, but semantics aside, we’re always forming probability judgments, and those judgments are always flawed (or at least incomplete) for any interesting problem.
to the extent a question has no clear answer (for example, because we don’t have enough information or it isn’t worth the cost), I think we’d be better off withholding judgment altogether than forming a judgment for the sake of having an opinion.
There are many decisions I’m obligated to make where the effects of that decision for good or ill will differ depending on whether the world is A or B, but where the question “is the world A or B?” has no clear answer in the sense you mean. For those decisions, it is useful to make the procedure I use as reliable as is cost-effective.
But sure, given a question on which no such decision depends, I agree that withholding judgment on it is a perfectly reasonable thing to do. (Of course, the question arises of how sure I am that no such decision depends on it, and how reliable the process I used to arrive at that level of sureness is.)
From this perspective, having an awareness of how little we know seems far more important than creating judgments based on what we know.
Yes, absolutely. Forming judgments based on a false idea of how much or how little we know is unlikely to have reliably good results.
Rationality cannot take into account information that is not known to be relevant—what is the value of forming a judgment in this case?
As above, there are many situations where I’m obligated to make a decision, even if that decision is to sit around and do nothing. If I have two decision procedures available, and one of them is marginally more reliable than the other, I should use the more reliable one. The value is that I will make decisions with better results more often.
I’m trying to see how far LW is willing to push rationality as a universal theory (or the best theory in all cases short of perfect knowledge, whatever that means).
I’d say LW is willing to push rationality as the best “theory” in all cases short of perfect knowledge right up until the point that a better one comes along, where “better” and “best” refer to their ability to reliably obtain benefits.
That’s why I asked you what you’re comparing it to; what it falls short relative to.
Truth is relative because its relevance is limited to the extent other people agree with that truth, or so I would argue.
So, I have two vials in front of me, one red and one green, and a thousand people are watching. All thousand-and-one of us believe that the red vial contains poison and the green vial contains yummy fruit juice. You are arguing that this is all I need to know to make a decision, because the relevance of the truth about which vial actually contains poison is limited to the extent to which other people agree that it does.
Roughly speaking, I understood Mugasofer to be referring to a calculated value with respect to a proposition that ought to control my willingness to expose myself to penalties contingent on the proposition being false.
How is this different from being “comfortable” on a personal level? If it isn’t, the only value of rationality where the answer cannot be known is simply the confidence it gives you. Such a belief only requires rationality if you believe rationality provides the best answer—the “truth” is irrelevant. For example, as previously noted in the thread, if I’m super religious, I could use scripture to guide a decision and have the same confidence (in a subjective, personal way). Once the correctness of the belief cannot be determined as right or wrong, the manner in which the belief is created becomes irrelevant, EXCEPT to the extent laws/norms change because other people agree. I’ve taken the idea of absolute truth and simply converted it to social truth because I think it’s a more appropriate term (more below).
You are suggesting that rationality provides the “best way” to get answers short of perfect knowledge. Reflecting on your request for a comparatively better system, I realized you are framing the issue differently than I am. You are presupposing the world has certainty, and are concerned only with our ability to derive that certainty (or answers). In that model, looking for the “best system” to find answers makes sense. In other words, you assume answers exist, and only the manner in which to derive them is unknown. I am proposing that there are issues for which answers do not necessarily exist, or at least do not exist within the world of human comprehension. In those cases, any model by which someone derives an answer is equally ridiculous. That is why I cannot give you a comparison. Again, this is not to throw up my hands; it’s a different way of looking at things. Rationality is important, but a smaller part of the bigger picture in my mind. Is my characterization of your position fair? If so, what is your basis for your position that all issues have answers?
So, I have two vials in front of me, one red and one green, and a thousand people are watching. All thousand-and-one of us believe that the red vial contains poison and the green vial contains yummy fruit juice.
You are arguing that this is all I need to know to make a decision, because the relevance of the truth about which vial actually contains poison is limited to the extent to which other people agree that it does.
I am only talking about the relevance of truth, not the absolute truth, because the absolute truth cannot necessarily be known beforehand (as in your example!). Immediately before the vial is chosen, the only relevance of the Truth (referring to actual truth) is the extent to which the people and I believe something consistent. Related to the point I made above, if you presuppose Truth exists, it is easy to question or point out how people could be wrong about what it is. I don’t think we have the luxury of knowing the Truth in most cases. Until future events prove otherwise, truth is just what we humans make of it, whether or not it conforms with the Truth—thus I am arguing that the only relevance of Truth is the extent to which humans agree with it.
In your example, immediately after the vial is taken—we find out we’re right or wrong—and our subjective truths may change. They remain subjective truths so long as future facts could further change our conclusions.
You are presupposing the world has certainty, and are concerned only with our ability to derive that certainty (or answers).
Yes. The vial is either poisoned or it isn’t, and my task is to decide whether to drink it or not. Do you deny that?
In that model, looking for the “best system” to find answers makes sense.
Yes, I agree. Indeed, looking for systems to find answers that are better than the one I’m using makes sense, even if they aren’t best, even if I can’t ever know whether they are best or not.
I am proposing that there are issues for which answers do not necessarily exist,
Sure. But “which vial is poisoned?” isn’t one of them. More generally, there are millions of issues we face in our lives for which answers exist, and productive techniques for approaching those questions are worth exploring and adopting.
Immediately before the vial is chosen, the only relevance of the Truth (referring to actual truth) is the extent to which the people and I believe something consistent.
This is where we disagree.
Which vial contains poison is a fact about the world, and there are a million other contingent facts about the world that go one way or another depending on it. Maybe the air around the vial smells a little different. Maybe it’s a different temperature. Maybe the poisoned vial weighs more, or less. All of those contingent facts mean that there are different ways I can approach the vials, and if I approach the vials one way I am more likely to live than if I approach the vials a different way.
And if you have a more survival-conducive way of approaching the vials than I and the other 999 people in the room, we do better to listen to you than to each other, even though your opinion is inconsistent with ours.
thus I am arguing that the only relevance of Truth is the extent to which humans agree with it.
Again, this is where we disagree. The relevance of “Truth” (as you’re referring to it… I would say “reality”) is also the extent to which some ways of approaching the world (for example, sniffing the two vials, or weighing them, or a thousand other tests) reliably have better results than just measuring the extent to which other humans agree with an assertion.
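The “sniff or weigh the vials” point is just Bayesian updating: each test result is evidence that shifts the probability of “this vial is poisoned” away from whatever the crowd believed. A minimal sketch, with made-up likelihoods purely for illustration:

```python
# Bayes' rule: update the probability that the red vial is poisoned
# after observing that it is heavier. All numbers are invented.
prior_red_poisoned = 0.5          # belief before any test

# Likelihoods of observing "red vial is heavier" under each hypothesis.
p_heavier_if_poisoned = 0.8
p_heavier_if_safe = 0.3

# P(poisoned | heavier) = P(heavier | poisoned) P(poisoned) / P(heavier)
p_heavier = (p_heavier_if_poisoned * prior_red_poisoned
             + p_heavier_if_safe * (1 - prior_red_poisoned))
posterior = p_heavier_if_poisoned * prior_red_poisoned / p_heavier
print(f"P(red poisoned | red heavier) = {posterior:.3f}")
```

The observation moves the probability regardless of what the thousand onlookers agree on, which is the sense in which reality, not consensus, does the work here.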
In your example, immediately after the vial is taken—we find out we’re right or wrong—and our subjective truths may change.
Sure, that’s true.
But it’s far more useful to better entangle our decisions (our “subjective truths,” as you put it) with reality (“Truth”) before we make those decisions.
With respect to your example, I can only play with those facts that you have given me. In your example, I assumed that knowledge of which vial has poison could not be known, and the best information we had was our collective beliefs (which are based on certain factors you listed). I agree with the task at hand as you put it, but the devil is of course in the details.
Which vial contains poison is a fact about the world, and there are a million other contingent facts about the world that go one way or another depending on it. Maybe the air around the vial smells a little different. Maybe it’s a different temperature. Maybe the poisoned vial weighs more, or less. All of those contingent facts mean that there are different ways I can approach the vials, and if I approach the vials one way I am more likely to live than if I approach the vials a different way.
But as noted above, if we cannot derive the truth, it is just as good as not existing. If the “vial picker” knows the truth beforehand, or is able to derive it, so be it; but immediately before he picks the vial, the Truth, as the vial picker knows it, is of limited value—he is unsure and everyone around him thinks he’s an idiot. After the fact, everyone’s opinion will change accordingly with the results. By creating your own example, you’re presupposing (i) an answer exists to your question AND (ii) that we can derive it. We don’t have that luxury in real life, and even if we have the knowledge to know an “answer” exists, we don’t know whether the vial picker can accurately pick the appropriate vial based on the information available.
The idea of subjective truth (or subjective reality) doesn’t rest solely on the claim that reality doesn’t exist; most generally it is based on the idea that there may be cases where a human cannot derive what is real even where there is some answer. If we cannot derive that reality, the existence of that reality must also be questioned. We of course don’t have to worry about these subtleties if the examples we use assume an answer to the issue exists.
The meaning of this is that rationality in my mind is helpful only to the extent (i) an answer exists and (ii) it can be derived. If the answer to (i) and (ii) are yes, rationality sounds great. If the answer to (i) is no, or the answer to (i) is yes but (ii) is no, rationality (or any other system) has no purpose other than to give us a false belief that we’re going about things in the best way. In such a world, there will be great uncertainty as to the appropriate human course of action.
This is why I’m asking why you are confident the answer to (i) is yes for all issues. You’re describing a world that provides a level of certainty such that the rationality model works in all cases—I’m asking why you know that amount of certainty exists in the world—its convenience is precisely what makes its universal application suspect. As noted in my answer to MugaSofer, perhaps your position is based on assumption/faith without substantiation, which I’m comfortable with as a plausible answer, but I’m not sure that is the basis you are using for the conclusion. (For the record, my personal belief is that any sort of theory or basis for going about our lives requires some type of faith/assumptions, because we cannot have 100% certainty.)
rationality in my mind is helpful only to the extent (i) an answer exists and (ii) it can be derived.
Or at least approximated. Yes.
If the answer to (i) and (ii) are yes, rationality sounds great.
Lovely.
If the answer to (i) is no, or the answer to (i) is yes but (ii) is no, rationality (or any other system) has no purpose other than to give us a false belief that we’re going about things in the best way
I would say, rather, that it has no purpose at all in the context of that question. Having a false belief is not a useful purpose.
This is why I’m asking why you are confident the answer to (i) is yes for all issues.
And, as I’ve said before, I agree that there exist questions without answers, and questions whose answers are necessarily beyond the scope of human knowledge, and I agree that rationality doesn’t provide much value in engaging with those questions… though it’s no worse than any approach I know of, either.
You’re describing a world that provides a level of certainty such that the rationality model works in all cases
As above, I submit that in all cases the approach I describe either works better than (if there are answers, which there often are) or as well (if not) as any other approach I know of. And, as I’ve said before, if you have a better approach to propose, propose it!
I’m asking why you know that amount of certainty exists in the world
I don’t know that. But I have to make decisions anyway, so I make them using the best approach I know. If you think I should do something different, tell me what you think I should do.
OTOH, if all you’re saying is that my approach might be wrong, then I agree with you completely, but so what? My choice is still between using the best approach I know of, or using some other approach, and given that choice I should still use the best approach I know of. And so should you.
for the record, my personal belief is that [...] we cannot have 100% certainty
For the record, that’s also the consensus position here.
The interesting question is, given that we don’t have 100% certainty, what do I do now?
I referenced confidence only because Mugasofer did. What was your understanding of how Mugasofer used “confident as we should be”? Regardless, I am still wondering what the value of being “right” is if we can’t determine what is in fact right.
Because it helps us make decisions.
Incidentally, replacing words that may be unclear or misunderstood (by either party) with what we mean by those words is generally considered helpful ’round here for producing fruitful discussions—there’s no point arguing about whether the tree in the forest made a sound if I mean “auditory experience” and you mean “vibrations in the air”. This is known as “Rationalist’s Taboo”, after a game with similar rules, and replacing a word with (your) definition is known as “tabooing” it.
I actually don’t think we’re using the word differently—the issue was premised solely for issues where the answer cannot be known after the fact. In that case, our use of “confidence” is the same—it simply helps you make decisions. Once the value of the decision is limited to the belief in its soundness, and not the ultimate “correctness” of the decision (because it cannot be known), rationality is important only if you believe it to be the correct way to make decisions.
If your point instead is that probabilities will result in the right answer more often than not, fine, then accurately identifying the proper inputs and valuing them correctly is of utmost importance—this is simply not practical in many situations precisely because the world is so complex.
Indeed. One of the purposes of this site is to help people become more rational—closer to a mathematically perfect reasoner—in everyday life. In math problems, however—and every real problem can, eventually, be reduced to a math problem—we can always make the right choice (unless we make a mistake with the math, which does happen).
I think I’m the only one on LessWrong who finds EY’s writing maddening—mostly the style—I keep screaming to myself, “get to the point!”—as noted, perhaps it’s just me.
Unfortunately for you, most of the basic introductory-level stuff—and much of the really good stuff generally—is by him. So I’m guessing there’s a certain selection effect for people who enjoy/tolerate his style of writing.
His examples from the cited article miss the point of perspectivism I think. Perspectivism (or at least how I am using it) simply means that truth can be relative, not that it is relative in all cases. Rationality does not seem to account for the possibility that it could be relative in any case.
I’m still not sure how truth could be “relative”—could you perhaps expand on what you mean by that? (Obviously it can be obscured by biases and simple lack of data.) In addition, some questions may actually have no answer, because people are using different meanings for the same word or the question itself is contradictory (how many sides does a square triangle have?)
EDIT:
In those cases, I think rationality falls short, and the attempt to assign probabilities can give false confidence that the derived answer has a greater value than simply providing confidence that it is the best one.
A lot of people here—myself included—practice or advise testing how accurate your estimates are. There are websites and such dedicated to helping people do this.
If all you’re looking for is confidence, why must you assign probabilities? I’m pushing you in hopes of understanding, not necessarily disagreeing. If I’m very religious and use that as my life-guide, I could be extremely confident in a given answer. In other words, the value of using probabilities must extend beyond confidence in my own answer—confidence is just a personal feeling. Being “right” in a normative sense is also relevant, but as you point out, we often don’t actually know what answer is correct. If your point instead is that probabilities will result in the right answer more often than not, fine, then accurately identifying the proper inputs and valuing them correctly is of utmost importance—this is simply not practical in many situations precisely because the world is so complex. I guess it boils down to this—what is the value of being “right” if what is “right” cannot be determined? I think there are decisions where what is right can be determined—and rationality and the Bayesian model work quite well. I think far more decisions (social relationships, politics, economics—particularly decisions that do not directly affect the decision maker) are too subjective to know what is “right” or accurately model inputs. In those cases, I think rationality falls short, and the attempt to assign probabilities can give false confidence that the derived answer has a greater value than simply providing confidence that it is the best one.
I think I’m the only one on LessWrong who finds EY’s writing maddening—mostly the style—I keep screaming to myself, “get to the point!”—as noted, perhaps it’s just me. His examples from the cited article miss the point of perspectivism I think. Perspectivism (or at least how I am using it) simply means that truth can be relative, not that it is relative in all cases. Rationality does not seem to account for the possibility that it could be relative in any case.
Inasmuch as subjectivism is a form of relativism, those comments seem to contradict each other.
Perspectivism provides that all truth is subjective, but in practice, this characterization has no relevance to the extent there is agreement on any particular truth. For example, “Murder is wrong,” even if a subjective truth, is not so in practice because there is collective agreement that murder is wrong. That is all I meant, but I agree that it was not clear.
Thanks for the clarification.
I, ah … I’m not seeing anything here. Have you accidentally posted just a space or something?
I don’t dispute the possibility that your conclusion may be correct; I’m wondering on what basis you believe your position to be correct. Put another way, why are moral truths NOT relative? How do you know this? Thinking something can be done is fine (AI, etc.), but without substantiation it introduces a level of faith to the conversation—I’m comfortable with that as the reason, but wondering if you are, or if you have a different basis for the position.
From my view, moral truths may NOT be relative, but I have no basis on which to know that, so I’ve chosen to operate as if they are relative because (i) if moral truths exist but I don’t know what they are, I’m in the same position as if they didn’t exist/were relative, and (ii) moral truths may not exist. This doesn’t mean you don’t use morality in your life; it’s just that you need to have a belief, without substantiation, that the morals you subscribe to conform with universal morals, if they exist.
OK, I’ll try to search for those EY writings, thanks.
I suspect that the word “confidence” is not being used consistently in this exchange, and you might do well to replace it with a more explicit description of what you intend for it to refer to.
Yes, this community is generally concerned with methods for, as you say, getting “the right answer more often than not.”
And, sure, sometimes a marginal increase in my chance of getting the right answer isn’t worth the cost of securing that increase—as you say, sometimes “accurately identifying the proper inputs and valuing them correctly [...] is simply not practical”—so I accept a lower chance of having the right answer. And, sure, complex contexts such as social relationships, politics, and economics are often cases where the cost of a greater chance of knowing the right answer is prohibitive, so we go with the highest chance of it we can profitably get.
To say that “rationality falls short” in these cases suggests that it’s being compared to something. If you’re saying it falls short compared to perfect knowledge, I absolutely agree. If you’re saying it falls short compared to something humans have access to, I’m interested in what that something is.
I agree that expressing beliefs numerically (e.g., as probabilities) can lead people to assign more value to the answer than it deserves. But saying that it’s “the best answer” has that problem, too. If someone tells me that answer A is the best answer I will likely assign more value to it than if they tell me they are 40% confident in answer A, 35% confident in answer B, and 25% confident in answer C.
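To make that concrete, here is a minimal sketch (the numbers and stakes are illustrative assumptions of mine, not from the thread) of why a calibrated probability distribution carries more decision-relevant information than a bare “best answer”:

```python
# Hypothetical confidences over three answers, as in the example above.
probs = {"A": 0.40, "B": 0.35, "C": 0.25}

# Suppose acting on an answer pays +1 if it is correct and -2 if it is
# wrong (made-up stakes, purely for illustration).
def expected_payoff(choice, probs, win=1.0, loss=-2.0):
    """Expected value of betting on `choice` given stated confidences."""
    p = probs[choice]
    return p * win + (1 - p) * loss

for choice in probs:
    print(choice, expected_payoff(choice, probs))
```

With these stakes, even the “best” answer A has expected payoff 0.40·1 + 0.60·(−2) = −0.8, so a calibrated bettor declines to bet at all. The bare statement “A is the best answer” could not support that calculation.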
I have no idea what you mean by the truth being “relative”.
I referenced confidence only because Mugasofer did. What was your understanding of how Mugasofer used “confident as we should be”? Regardless, I am still wondering what the value of being “right” is if we can’t determine what is in fact right. If it gives confidence/ego/comfort that you’ve derived the right answer, being “right” in actuality is not necessary to have those feelings.
Fair. The use of rationality and the belief in its merits generally biases the decision maker to form a belief that rationality will yield a correct answer, even if it does not—it seems rationality always errs on the side of applying probabilities (and forming a judgment), even if they are flawed (or you don’t know whether they are accurate). To say it differently, to the extent a question has no clear answer (for example, because we don’t have enough information or it isn’t worth the cost), I think we’d be better off withholding judgment altogether than forming a judgment for the sake of having an opinion. Rumsfeld had this great quote—“we don’t know what we don’t know”—and we also don’t know the importance of what we don’t know relative to what we do know when forming judgments. From this perspective, having an awareness of how little we know seems far more important than creating judgments based on what we know. Rationality cannot take into account information that is not known to be relevant—what is the value of forming a judgment in this case? To be clear, I’m not “throwing my hands up” for all of life’s questions and saying we don’t know anything—I’m trying to see how far LW is willing to push rationality as a universal theory (or the best theory in all cases short of perfect knowledge, whatever that means).
Truth is relative because its relevance is limited to the extent other people agree with that truth, or so I would argue. This is because our notions of truth are man-made, even if we account for the possibility that there are certain universal truths (what relevance do those truths have if only you know them?). Despite the logic underlying probability theory/science in general, truths derived therefrom are accepted as such only because people value and trust probability theory and science. All other matters of truth are even more subjective—this does not mean that contradicting beliefs are equally true or equally valid; instead, truth is subjective precisely because we cannot even attempt to prove anything as true outside of human comprehension. We’re stuck debating and determining truth only amongst ourselves. It’s the human paradox of freedom of expression/reasoning trapped within an animal form that is fallible and will die. From my perspective, determining universal truth, if it exists, requires transcending the limitations of man—which of course I cannot do.
Roughly speaking, I understood Mugasofer to be referring to a calculated value with respect to a proposition that ought to control my willingness to expose myself to penalties contingent on the proposition being false.
I’m not quite sure what “right” means, but if nothing will happen differently depending on whether A or B is true, either now or in the future, then there’s no value in knowing whether A or B is true.
Yes, pretty much. I wouldn’t say “errs”, but semantics aside, we’re always forming probability judgments, and those judgments are always flawed (or at least incomplete) for any interesting problem.
There are many decisions I’m obligated to make where the effects of that decision for good or ill will differ depending on whether the world is A or B, but where the question “is the world A or B?” has no clear answer in the sense you mean. For those decisions, it is useful to make the procedure I use as reliable as is cost-effective.
But sure, given a question on which no such decision depends, I agree that withholding judgment on it is a perfectly reasonable thing to do. (Of course, the question arises of how sure I am that no such decision depends on it, and how reliable the process I used to arrive at that level of sureness is.)
Yes, absolutely. Forming judgments based on a false idea of how much or how little we know is unlikely to have reliably good results.
As above, there are many situations where I’m obligated to make a decision, even if that decision is to sit around and do nothing. If I have two decision procedures available, and one of them is marginally more reliable than the other, I should use the more reliable one. The value is that I will make decisions with better results more often.
I’d say LW is willing to push rationality as the best “theory” in all cases short of perfect knowledge right up until the point that a better one comes along, where “better” and “best” refer to their ability to reliably obtain benefits.
That’s why I asked you what you’re comparing it to; what it falls short relative to.
So, I have two vials in front of me, one red and one green, and a thousand people are watching. All thousand-and-one of us believe that the red vial contains poison and the green vial contains yummy fruit juice.
You are arguing that this is all I need to know to make a decision, because the relevance of the truth about which vial actually contains poison is limited to the extent to which other people agree that it does.
Did I understand that correctly?
How is this different than being “comfortable” on a personal level? If it isn’t, the only value of rationality where the answer cannot be known is simply the confidence it gives you. Such a belief only requires rationality if you believe rationality provides the best answer—the “truth” is irrelevant. For example, as previously noted in the thread, if I’m super religious, I could use scripture to guide a decision and have the same confidence (in a subjective, personal way). Once the correctness of the belief cannot be determined as right or wrong, the manner in which the belief is created becomes irrelevant, EXCEPT to the extent laws/norms change because other people agree. I’ve taken the idea of absolute truth and simply converted it to social truth because I think it’s a more appropriate term (more below).
You are suggesting that rationality provides the “best way” to get answers short of perfect knowledge. Reflecting on your request for a comparatively better system, I realized you are framing the issue differently than I am. You are presupposing the world has certainty, and are only concerned with our ability to derive that certainty (or answers). In that model, looking for the “best system” to find answers makes sense. In other words, you assume answers exist, and only the manner in which to derive them is unknown. I am proposing that there are issues for which answers do not necessarily exist, or at least do not exist within the world of human comprehension. In those cases, any model by which someone derives an answer is equally ridiculous. That is why I cannot give you a comparison. Again, this is not to throw up my hands; it’s a different way of looking at things. Rationality is important, but a smaller part of the bigger picture in my mind. Is my characterization of your position fair? If so, what is your basis for your position that all issues have answers?
I am only talking about the relevance of truth, not the absolute truth, because the absolute truth cannot necessarily be known beforehand (as in your example!). Immediately before the vial is chosen, the only relevance of the Truth (referring to actual truth) is the extent to which the people and I believe something consistent. Related to the point I made above, if you presuppose Truth exists, it is easy to question or point out how people could be wrong about what it is. I don’t think we have the luxury of knowing the Truth in most cases. Until future events prove otherwise, truth is just what we humans make of it, whether or not it conforms with the Truth—thus I am arguing that the only relevance of Truth is the extent to which humans agree with it.
In your example, immediately after the vial is taken—we find out we’re right or wrong—and our subjective truths may change. They remain subjective truths so long as future facts could further change our conclusions.
Yes. The vial is either poisoned or it isn’t, and my task is to decide whether to drink it or not. Do you deny that?
Yes, I agree. Indeed, looking for systems to find answers that are better than the one I’m using makes sense, even if they aren’t best, even if I can’t ever know whether they are best or not.
Sure. But “which vial is poisoned?” isn’t one of them. More generally, there are millions of issues we face in our lives for which answers exist, and productive techniques for approaching those questions are worth exploring and adopting.
This is where we disagree.
Which vial contains poison is a fact about the world, and there are a million other contingent facts about the world that go one way or another depending on it. Maybe the air around the vial smells a little different. Maybe it’s a different temperature. Maybe the poisoned vial weighs more, or less. All of those contingent facts mean that there are different ways I can approach the vials, and if I approach the vials one way I am more likely to live than if I approach the vials a different way.
And if you have a more survival-conducive way of approaching the vials than I and the other 999 people in the room, we do better to listen to you than to each other, even though your opinion is inconsistent with ours.
Again, this is where we disagree. The relevance of “Truth” (as you’re referring to it… I would say “reality”) is also the extent to which some ways of approaching the world (for example, sniffing the two vials, or weighing them, or a thousand other tests) reliably have better results than just measuring the extent to which other humans agree with an assertion.
Sure, that’s true.
But it’s far more useful to better entangle our decisions (our “subjective truths,” as you put it) with reality (“Truth”) before we make those decisions.
With respect to your example, I can only play with those facts that you have given me. In your example, I assumed that knowledge of which vial has poison could not be known, and the best information we had was our collective beliefs (which are based on certain factors you listed). I agree with the task at hand as you put it, but the devil is of course in the details.
But as noted above, if we cannot derive the truth, it is just as good as not existing. If the “vial picker” knows the truth beforehand, or is able to derive it, so be it, but immediately before he picks the vial, the Truth, as the vial picker knows it, is of limited value—he is unsure and everyone around him thinks he’s an idiot. After the fact, everyone’s opinion will change accordingly with the results. By creating your own example, you’re presupposing (i) an answer exists to your question AND (ii) that we can derive it—we don’t have that luxury in real life, and even if we know an “answer” exists, we don’t know whether the vial picker can accurately pick the appropriate vial based on the information available.
The idea of subjective truth (or subjective reality) doesn’t rely solely on the claim that reality doesn’t exist; most generally it is based on the idea that there may be cases where a human cannot derive what is real even where there is some answer. If we cannot derive that reality, the existence of that reality must also be questioned. We of course don’t have to worry about these subtleties if the examples we use assume an answer to the issue exists.
The meaning of this is that rationality in my mind is helpful only to the extent (i) an answer exists and (ii) it can be derived. If the answer to (i) and (ii) are yes, rationality sounds great. If the answer to (i) is no, or the answer to (i) is yes but (ii) is no, rationality (or any other system) has no purpose other than to give us a false belief that we’re going about things in the best way. In such a world, there will be great uncertainty as to the appropriate human course of action.
This is why I’m asking why you are confident the answer to (i) is yes for all issues. You’re describing a world that provides a level of certainty such that the rationality model works in all cases—I’m asking how you know that amount of certainty exists in the world; its convenience is precisely what makes its universal application suspect. As noted in my answer to MugaSofer, perhaps your position is based on assumption/faith without substantiation, which I’m comfortable with as a plausible answer, but I'm not sure that is the basis you are using for the conclusion. (For the record, my personal belief is that any sort of theory or basis for going about our lives requires some type of faith/assumptions because we cannot have 100% certainty.)
Or at least approximated. Yes.
Lovely.
I would say, rather, that it has no purpose at all in the context of that question. Having a false belief is not a useful purpose.
And, as I’ve said before, I agree that there exist questions without answers, and questions whose answers are necessarily beyond the scope of human knowledge, and I agree that rationality doesn’t provide much value in engaging with those questions… though it’s no worse than any approach I know of, either.
As above, I submit that in all cases the approach I describe either works better than (if there are answers, which there often are) or as well (if not) as any other approach I know of.
And, as I’ve said before, if you have a better approach to propose, propose it!
I don’t know that. But I have to make decisions anyway, so I make them using the best approach I know.
If you think I should do something different, tell me what you think I should do.
OTOH, if all you’re saying is that my approach might be wrong, then I agree with you completely, but so what? My choice is still between using the best approach I know of, or using some other approach, and given that choice I should still use the best approach I know of. And so should you.
For the record, that’s also the consensus position here.
The interesting question is, given that we don’t have 100% certainty, what do I do now?
Because it helps us make decisions.
Incidentally, replacing words that may be unclear or misunderstood (by either party) with what we mean by those words is generally considered helpful ’round here for producing fruitful discussions—there’s no point arguing about whether the tree in the forest made a sound if I mean “auditory experience” and you mean “vibrations in the air”. This is known as “Rationalist’s Taboo”, after a game with similar rules, and replacing a word with (your) definition is known as “tabooing” it.
I actually don’t think we’re using the word differently—the question was premised solely on cases where the answer cannot be known after the fact. In that case, our use of “confidence” is the same—it simply helps you make decisions. Once the value of the decision is limited to the belief in its soundness, and not the ultimate “correctness” of the decision (because it cannot be known), rationality is important only if you believe it to be the correct way to make decisions.
Indeed. And probability is confidence, and Bayesian probability is the correct amount of confidence.
Indeed. One of the purposes of this site is to help people become more rational—closer to a mathematical perfect reasoner—in everyday life. In math problems, however—and every real problem can, eventually, be reduced to a math problem—we can always make the right choice (unless we make a mistake with the math, which does happen.)
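As a minimal illustration of what “the correct amount of confidence” means once a problem is reduced to math, here is a Bayes’ rule sketch (the numbers are hypothetical, chosen only to make the arithmetic transparent):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) given P(H), P(E | H), and P(E | not-H)."""
    numerator = p_e_given_h * prior
    # Total probability of seeing the evidence at all.
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# Starting from 50% confidence, evidence that is 4x as likely if the
# hypothesis is true should raise confidence to exactly 80%.
print(bayes_update(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.2))
```

Given the inputs, the output is forced: any other posterior would be the wrong amount of confidence, which is the sense in which the math (absent mistakes) always yields the right answer.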
Unfortunately for you, most of the basic introductory-level stuff—and much of the really good stuff generally—is by him. So I’m guessing there’s a certain selection effect for people who enjoy/tolerate his style of writing.
I’m still not sure how truth could be “relative”—could you perhaps expand on what you mean by that? Obviously truth can be obscured by biases and simple lack of data. In addition, some questions may actually have no answer, because people are using different meanings for the same word or because the question itself is contradictory (how many sides does a square triangle have?).
EDIT:
A lot of people here—myself included—practice or advise testing how accurate your estimates are. There are websites and such dedicated to helping people do this.
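One common way those sites score estimates is a proper scoring rule such as the Brier score; here is a sketch (the track record is invented for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.

    Lower is better; 0.0 means every forecast was a confident, correct
    0 or 1.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Four hypothetical predictions and whether each came true (1) or not (0).
print(brier_score([0.9, 0.7, 0.8, 0.3], [1, 1, 0, 0]))
```

Tracking this number over many predictions is one concrete way to test whether you are, on average, as confident as you should be.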