“…and rational is defined as ‘that which makes you win’.”
I was the one who wrote this in a previous comment regarding the rationality of the fear of darkness. I think this definition (which I learned from Eliezer) is in fact useful: if you know of two procedures A and B, where A is correct according to some standard of rationality but will make you lose whereas B will make you win, I would choose B. Eliezer makes this point in Newcomb’s problem:
http://www.overcomingbias.com/2008/01/newcombs-proble.html
“For example, let’s say that we find that certain manipulations of tarot decks permit us to predict the weather, even though we have no idea of why the two should be correlated at all. With rationality, we don’t need to know why. Once we’ve recognized that the relationship exists, it becomes rational for us to use it.”
Here you are making the exact same point! Knowing that the tarot decks will “make you win” is reason enough to use them, no matter how irrational that may appear.
No.
If you let ‘rationality’ mean simply ‘whatever makes you win’, then its definition drifts with the wind. A useful definition of rationality would specify some procedure, so that you could determine whether you are following it. You could then empirically test whether ‘rationality’ makes you win (a toy version of such a test is sketched after the dialogue below).
Example Dialogue:
Amy: Craig was being irrational. He believed contradictory things and made arguments that did not logically follow. Due to some strange facts about his society and religion, this made him a well-respected and powerful person.
Beth: But David insisted that his own arguments should be logically valid and avoided contradictory beliefs, and that made him unpopular so he was not successful in life. Clearly this was not rational, since David had a much worse life than Craig.
Amy, here, is using the typical definition of rational, while Beth is using ‘what makes you win’. Is there any advantage to using Beth’s definition? Does it make anything clearer to call Craig rational and David irrational?
Or could we just use Amy’s definition of rationality, say that Craig was being irrational and David rational, and then have a clear idea of what sorts of things each of them was doing?
More to the point, David could set out to be rational by Amy’s definition, but there’s no way to plan to be rational by Beth’s definition. ‘Be rational’ is in no way a guide to life, as it’s defined entirely by consequences that haven’t occurred yet.
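To make that empirical test concrete, here is a toy version (my own illustration, not from the thread): ‘rationality’ is pinned down as an explicit procedure — predict whichever outcome the evidence so far favors — and we simply count whether following it wins more often than guessing.

```python
import random

# 'Rationality' pinned down as a checkable procedure: predict whichever
# outcome has occurred most often so far. The rival procedure guesses
# at random. Because both are explicit procedures, whether each one
# "makes you win" is an empirical question we can answer by counting.

def majority_rule(history):
    if not history:
        return 1  # arbitrary first guess
    return max(set(history), key=history.count)

def random_guess(history):
    return random.choice([0, 1])

def win_rate(procedure, trials=5_000, bias=0.7):
    history, wins = [], 0
    for _ in range(trials):
        outcome = 1 if random.random() < bias else 0  # a biased "world"
        wins += procedure(history) == outcome
        history.append(outcome)
    return wins / trials

print("majority rule:", win_rate(majority_rule))  # ~0.70
print("random guess :", win_rate(random_guess))   # ~0.50
```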
It all depends on what you value: what do you want to achieve? What is your utility function? If being popular is your goal, then lying, manipulating, and using impressive arguments even when they are wrong can be a successful strategy; it’s called politics.
For Amy, winning means reasoning correctly. For Beth, winning means being popular. Winning for a paperclip maximizer looks different than it does for you and me.
I understand where you want to go. For you, rationality is a procedure that will bring you closer to the truth. The problem is: where do we get the correct procedure from, and how can we be sure that we are applying it correctly? Here is where the “winning” test comes in. According to the prevailing scientific consensus of the past, airplanes were impossible, and anyone investing time and money trying to build one was clearly acting irrationally. Yet in the end those who acted irrationally won; that is, they achieved scientific truth, as you can see today.
PS: Here is where Newcomb’s problem comes in. It seems very hard to define a rational procedure (starting from fundamental principles) that will one-box, yet one-boxing is the correct choice (at least if you value money).
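For concreteness, the payoff arithmetic behind “one-boxing is the correct choice” can be written out. A minimal sketch (my own illustration, using the standard $1,000 / $1,000,000 payoffs and a predictor that is right with probability p):

```python
# Newcomb's problem: box A visibly holds $1,000; box B holds $1,000,000
# iff the predictor expected you to take only box B. The predictor is
# right with probability p.

def expected_payoff(one_box: bool, p: float) -> float:
    if one_box:
        return p * 1_000_000                           # predictor right: B is full
    return p * 1_000 + (1 - p) * (1_000_000 + 1_000)   # predictor right: B is empty

for p in (0.5, 0.51, 0.9, 0.99):
    print(f"p={p}: one-box {expected_payoff(True, p):>11,.0f}, "
          f"two-box {expected_payoff(False, p):>11,.0f}")

# One-boxing has the higher expectation whenever p > ~0.5005, i.e. as
# soon as the predictor is even slightly better than a coin flip.
```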
At least, if you value money more than whatever (emotional) value you place on sounding logically consistent. ;-)
However, since any sufficiently powerful formal system contains undecidable propositions, and since there is no reason to suppose that human brains (or the universe!) are NOT equivalent to a formal system, there isn’t any way to guarantee a mapping from “procedure” to “truth” anyway!
So I prefer to treat logical consistency as a useful tool for winning, rather than treating winning as a useful tool for testing logical consistency.
Actually, neither Craig nor David was rational, if it’s defined as “what makes you predictably win, for an empirical, declared-in-advance definition of winning”. Craig did not choose his beliefs in order to achieve some particular definition of winning. And David didn’t win… EVEN IF his declared-in-advance goals placed a higher utility on logical consistency than popularity or success.
Of course, the real flaw in your examples is that popularity isn’t directly created or destroyed through logical consistency or a lack thereof… although that idea seems to be a strangely common bias among people who are interested in logical consistency!
Unless David actually assigned ZERO utility to popularity, he failed to “win” (in the sense of failing to achieve his optimum utility) by choosing actions that showed other people he valued logical consistency and correctness more than their pleasant company (or whatever else it was he did).
I’m not married to my off-the-cuff definition, and I’m certainly not claiming it’s comprehensive. But I think that a definition of rationality that does NOT include the things that I’m including—i.e. predicting maximal utility for a pre-defined utility function—would be severely lacking.
After all, note that this is dangerously close to Eliezer’s definition of “intelligence”: a process for optimizing the future according to some (implicitly, predefined) utility function.
And that definition is neither circular nor meaningless.
So you have to be a utilitarian to be rational? Bad luck for the rest of us. Apparently Aristotle was not pursuing rationality, by your definition. Nor am I.
I don’t know what you mean by “utilitarian”, but if you mean, “one who chooses his actions according to their desired results”, then how can you NOT be a utilitarian? That would indicate that either 1) you’re using a different utility function, or 2) you’re very, very confused.
Or to put it another way, if you say “I choose not to be a utilitarian”, you must be doing it because not being a utilitarian has some utility to you.
If you are arguing that truth is more important than utility in general, rather than being simply one component of a utility function, then you are simply describing what you perceive to be your utility function.
For human beings, all utility boils down to emotion of some kind. That is, if you are arguing that truth (or “rationality” or “validity” or “propriety” or whatever other concept) is most important, you can only do this because that idea makes you feel good… or because it makes you feel less bad than whatever you perceive the alternative is!
The problem with humans is that we don’t have a single, globally consistent, absolutely-determined utility function. We have a collection of ad-hoc, context-sensitive, relative utility and disutility functions. Hell, we can’t even make good decisions when looking at pros and cons simultaneously!
So, if intelligence is efficiently optimizing the future according to your utility function, then rationality could perhaps be considered the process of optimizing your local and non-terminal utility functions to better satisfy your more global ones (a toy sketch of this follows below).
(And I’d like to see how that conflicts with Aristotle—or any other “great” philosopher, for that matter—in a way that doesn’t simply amount to word confusion.)
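A toy rendering of that proposal (my own illustration; the action names and weights are made up): the agent’s “local” scores pull one way, and rationality in this sense is re-weighting them so the resulting choices better serve the more global goal.

```python
# Two actions scored by three "local" utility functions. The global,
# declared-in-advance goal here is career success.

actions = {
    "study": {"fun": 2, "comfort": 3, "career": 9},
    "party": {"fun": 9, "comfort": 6, "career": 1},
}

def choose(weights):
    return max(actions, key=lambda a: sum(weights[k] * actions[a][k]
                                          for k in weights))

naive = {"fun": 1.0, "comfort": 1.0, "career": 1.0}
print(choose(naive))     # "party" -- unweighted local scores dominate

# "Rationality" as re-weighting the local functions to better satisfy
# the global one:
adjusted = {"fun": 0.3, "comfort": 0.3, "career": 1.0}
print(choose(adjusted))  # "study"
```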
“utilitarianism or, if you prefer, consequentialism”
Is this theory falsifiable? What experiment would convince you that it isn’t true? Or will this quickly turn into a ‘no true Scotsman’ argument?
Take this, reverse the normative content, and you have a view more like mine.
Humans are awesome in part because we’re not utility maximizers. To paraphrase Nietzsche:
I’m neither a utilitarian nor a consequentialist, by those definitions. That’s a bunch of stuff that applies only to the map, not the territory.
My statement is that humans do what they do, either to receive pleasure or avoid pain. (What other beings get from their actions is only relevant insofar as that creates pleasure or pain for the decider.)
In order to falsify this statement, you’d need to prove the existence of some supernatural entity that is not ruled by cause-and-effect. That is, you’d have to prove that “free will” or a “soul” exists. Good luck with that. ;-)
For verification of this statement, on the other hand, we can simply continue to understand better and better how the brain works, especially how pain and pleasure interact with memory formation and retrieval.
Congratulations! Your claim is non-falsifiable, and therefore is nonsense.
You claim that humans do what they do either to receive pleasure or avoid pain. That sounds implausible to me. I’d happily list counterexamples, but I get the impression you’d just explain them away as “Oh, what he’s really going after is pleasure” or “What he’s really doing is avoiding pain.”
If your explanation fits all possible data, then it doesn’t explain anything.
Example: Masochists pursue pain. Discuss.
Black Belt Bayesian: Unfalsifiable Ideas versus Unfalsifiable People
Wait… are you saying that atheism, science, and materialism are all nonsense?
I’m only saying that people do things for reasons. That is, our actions are the effects of causes.
So, are you really saying that the idea of cause-and-effect is nonsense? Because I can’t currently conceive of a definition of rationality where there’s no such thing as cause-and-effect.
Meanwhile, I notice you’re being VERY selective in your quoting… like dropping off the “that is not ruled by cause-and-effect” part of the sentence you just quoted. I don’t think that’s very helpful to the dialog, since it makes you appear more interested in rhetorically “winning” some sort of debate, than in collaborating towards truth. Is that the sort of “character” you are recommending people develop as rationalists?
(Note: this is not an attack… because I’m not fighting you. My definition of “win” in this context is better understanding—first for me, and then for everyone else. So it is not necessary for someone else to “lose” in order for me to “win”.)
I didn’t think the ‘that is not ruled by cause-and-effect’ was relevant—I was granting that your argument required something less specific than it actually did, since I didn’t even need that other stuff. But if you prefer, I’ll edit it into my earlier comment.
Atheism (as a theory) is falsifiable, if you specify exactly which god you don’t believe in and how you’d know it if you saw it. Then if that being is ever found, you know your theory has been falsified.
I’ve never heard ‘Science’ framed as a theory, so my criticism would not apply. Feel free to posit a theory of science and I’ll tell you whether it makes sense.
Materialism is mostly justified on methodological grounds, and is also not a theory.
Psychological hedonism, however, is a theory, and if it’s clearly specified then there are easy counterexamples.
A reason is not the same as a cause. Though reasons can be causes.
I didn’t say anything like “the idea of cause-and-effect is nonsense”. Rather, I said that our actions have causes other than the avoidance of pain and the pursuit of pleasure. You seem to think that the only thing that can constitute a ‘cause’ for a human is pleasure or pain, given that you’ve equated the concepts.
“I’m only saying that people do things for reasons. That is, our actions are the effects of causes.”
That’s not all you’re saying, at all. I would agree wholeheartedly with this sentiment, whilst denying that I try to maximize any sort of utility or am ruled by drives towards pleasure and pain.
“I was the one who wrote this in a previous comment regarding the rationality of the fear of darkness. I think this definition (which I learned from Eliezer) is in fact useful: if you know of two procedures A and B, where A is correct according to some standard of rationality but will make you lose whereas B will make you win, I would choose B.”
That’s the point: it’s knowing that B leads to winning, and acknowledging that winning is the goal, that makes choosing B rational.
“Knowing that the tarot decks will ‘make you win’ is reason enough to use them, no matter how irrational that may appear.”
If we establish that we want to predict something, and we acknowledge that tarot is correlated with whatever we want to predict, then using tarot to predict that thing IS COMPLETELY RATIONAL. We do not need to know the mechanism behind the correlation. What we DO need is to be able to look at our evaluation of the tarot’s usefulness and determine that every step in the reasoning is correct.
The reasoning that concludes looking at tarot is an effective way of predicting [whatever] is fairly simple and trivially easy to verify.
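For concreteness, here is what that verification might look like (a minimal sketch, my own illustration; the cards, the base rate, and the record are all made up). The test is the same whether the predictor is a tarot deck or a barometer: does conditioning on the card drawn beat the base rate on data the rule hasn’t seen?

```python
import random

def fit_rule(record):
    """Learn the most common outcome per card, falling back to the base rate."""
    by_card = {}
    for card, rained in record:
        by_card.setdefault(card, []).append(rained)
    outcomes = [rained for _, rained in record]
    base = max(set(outcomes), key=outcomes.count)
    rule = {card: max(set(v), key=v.count) for card, v in by_card.items()}
    return lambda card: rule.get(card, base)

def hit_rate(record, predict):
    return sum(predict(card) == rained for card, rained in record) / len(record)

# A made-up record of (card drawn, whether it rained) pairs. Here the
# draws are pure noise, so the tarot rule should NOT beat the base rate;
# a real correlation would show up as a persistent gap on held-out data.
record = [(random.choice("ABCDE"), random.random() < 0.3) for _ in range(2000)]
train, test = record[:1000], record[1000:]

print("tarot rule:", hit_rate(test, fit_rule(train)))
print("base rate :", hit_rate(test, lambda card: False))
```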