For instance, reasoning by expected utility, which you probably consider too basic to mention
Actually, I consider it too complicated for my first book! That’s going to focus on getting across even more basic concepts like ‘the point of reasoning about your beliefs is to function as a mapping engine that produces correlations between a map and the territory’ and ‘strong evidence is the sort of evidence we couldn’t possibly find if the hypothesis were false’.
Funny. I feel like on OB and LW utility theory is generally taken as the air we breathe.
It is—but that’s OB and LW.
‘strong evidence is the sort of evidence we couldn’t possibly find if the hypothesis were false’
-blink-
If you mean this, please elaborate. If not, please change the wording before you confuse the living daylights out of some poor newcomer.
Edit: I’m not nitpicking him for infinite certainty. I acknowledge it’s reasonable, informally, to tell me a ticket I’m thinking of buying couldn’t possibly win the lottery. That’s not what I mean. I mean that even finding some overwhelmingly strong evidence doesn’t necessarily mean the hypothesis is overwhelmingly likely to be true. If the comment’s misleading then, given its subject, it seems worth pointing out!
Example: Say you’re randomly chosen to take a test with a false positive rate of 1% for a cancer that occurs in 0.1% of the population, and it returns positive. That’s strong evidence for the hypothesis that you have that cancer, but the hypothesis is probably false.
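For concreteness, here is a minimal sketch of that calculation in Python. It assumes, since only the false positive rate is given, that the test always comes back positive when the cancer is actually present:

```python
# Posterior probability of cancer given a positive test, via Bayes' theorem.
prior = 0.001               # the cancer occurs in 0.1% of the population
p_pos_given_cancer = 1.0    # assumed: the test never misses a real cancer
p_pos_given_healthy = 0.01  # 1% false positive rate

p_positive = prior * p_pos_given_cancer + (1 - prior) * p_pos_given_healthy
posterior = prior * p_pos_given_cancer / p_positive
print(f"P(cancer | positive test) = {posterior:.3f}")  # about 0.091
```

A hundredfold update in the odds, and yet the hypothesis is still only about 9% likely to be true.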
Strongly seconded. Generally, it seems to me that Eliezer frequently seriously confuses people by mixing literal statements with hyperbole like this or “shut up and do the impossible”. I definitely see the merit of the greater emotional impact, but I hope there’s some way to get it without putting off the unusually literal-minded (which I expect most people who will get anything out of OB or The Book are).
Yeah, that is kind of tricky. Let me try to explain what Eliezer_Yudkowsky meant in terms of my preferred form of Bayes’ theorem:
O(H|E) = O(H) * P(E|H) / P(E|~H)
where O indicates odds instead of probability and | indicates “given”.
In other words, “any time you observe evidence, amplify the odds you assign to your beliefs by the probability of observing the evidence if the belief were true, divided by the probability of observing it if the belief were false.”
Also, keep in mind that Eliezer_Yudkowsky has written about how you should treat very low probability events as being “impossible”, even though you have to assign a non-zero probability to everything.
Nevertheless, his statement still isn’t literally true. The strength of the evidence depends on the ratio P(E|H)/P(E|~H), while the quoted statement only refers to the denominator. So there can be situations where you would be almost certain (say a 99.9% chance) to see E if the hypothesis were true, but would still have about a 1 in 100,000 chance of seeing E if it were false.
Such evidence is very strong—it forces you to amplify the odds you assign to H by a factor of roughly 100,000 -- but it’s far from evidence you “couldn’t possibly find”, which to me means something like 1:10^-10 odds.
Still, Eliezer_Yudkowsky is right that, generally, strong evidence will have a very small denominator.
EDIT: added link
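A quick sketch of that odds-form update as code (the function name is mine, purely for illustration), applied to the cancer-test example from earlier in the thread:

```python
def update_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    """Odds form of Bayes' theorem: O(H|E) = O(H) * P(E|H) / P(E|~H)."""
    return prior_odds * p_e_given_h / p_e_given_not_h

# Cancer-test example from upthread: prior odds 1:999, likelihood ratio 1.0 / 0.01 = 100.
posterior_odds = update_odds(1 / 999, 1.0, 0.01)
print(posterior_odds)                         # about 0.1, i.e. roughly 1:10 odds
print(posterior_odds / (1 + posterior_odds))  # as a probability: about 0.091
```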
In comments like this, we should link to the existing pages of the wiki, or create stubs of the new ones.
Bayes’ theorem on LessWrong wiki.
Strong evidence is evidence that, given certain premises, has no chance of arising.
Of course, Eliezer has also claimed that nothing can have no chance of arising (probability zero), so it’s easy to see how one might be confused about his position.
Traditionally, evidence that has less than a particular probability of arising given the truth of a hypothesis (usually 5%) is considered to be strong, but that’s really an arbitrary decision.
Correction: traditionally evidence against an hypothesis is considered strong if the chance of that evidence or any more extreme evidence arising given the truth of the hypothesis is less than an arbitrary value. (If this tradition doesn’t make sense to you, you are not alone.)
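To make the ‘that evidence or any more extreme evidence’ clause concrete, here is a small illustration (my own toy example, not one from the thread) of computing such a tail probability for coin flips:

```python
from math import comb

def one_sided_p_value(n, k, p=0.5):
    """Chance of seeing k or more heads in n flips if the coin were fair:
    the probability of 'that evidence or any more extreme evidence'."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 8 heads in 10 flips gives about 0.055, just above the traditional 5% cutoff,
# so by that convention the evidence against fairness would not count as strong.
print(one_sided_p_value(10, 8))
```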
I’m really surprised to hear you say that—I would have thought it was pretty fundamental. Don’t you at least have to introduce “shut up and multiply”?
First, you have to explain why relying on external math, rather than on a hunch, is a good idea. Second, you need to present a case for why shutting up and multiplying in this particular way is a good idea.
That applies to Bayesian reasoning too, doesn’t it?
That’s in some ways easier—basically this comes down to standard arguments in decision theory, I think...
This applies to anything, including excavators and looking up weather on the Internet. You have to trust your tools, which is especially hard where your intuition cries “Don’t trust! It’s dangerous! It’s useless! It’s wrong!”. The technical level, where you go into the details of how your tools work, is not fundamentally different in this respect.
Here I’m focusing not on defining what is actually useful, or right, or true, but on looking into the process of how people can adopt useful tools or methods. A decision of some specific human ape-brain is a necessary part, even if the tool in question is some ultimate abstract ideal nonsense. I’m brewing a mini-sequence on this (2 or 3 posts).
I think that if there is such a thing as x-rationality, its heart is that mathematical models of rationality based on probability and decision theory are the correct measure against which we compare our own efforts.
At which point you run into a problem of formalization and choice of parameters, which is the same process of ape-brain-based decision-making. A statement that in some sense, decision theory/probability theory math is the correct way of looking at things, is somewhat useful, but doesn’t give the ultimate measure (and lacks so much important detail). Since x-rationality is about human decision-making, a large part of it is extracting correct decisions out of your native architecture, even if these decisions are applied to formalization of problems in math.
Since real gambles are always also part of the state of the world that one’s utility function is defined over, you also need the moral principle that there shouldn’t be (dis)utility attached to their structure. Decision theory strictly has nothing to say to the person who considers it evil to gamble with lives (operationalized as not taking the choice with the lowest variance in possible outcomes, or whatever), although it’s easy to make it sound like it does. The moral principle here seems intuitive to me, but I have no idea if it is in general. (Something to Protect is the only post I can think of dealing with this.)
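As a toy illustration of that point (numbers entirely made up): two gambles over lives saved can have the same expected utility but very different variance, and expected-utility maximization by itself is indifferent between them; any preference for the safer one has to come from the utility function or an extra moral premise, not from the decision theory.

```python
# Option A: 400 lives saved for certain.
# Option B: a 50/50 gamble between 0 and 800 lives saved.
option_a = [(1.0, 400.0)]
option_b = [(0.5, 0.0), (0.5, 800.0)]

def expected_value(lottery):
    return sum(p * outcome for p, outcome in lottery)

def variance(lottery):
    mean = expected_value(lottery)
    return sum(p * (outcome - mean) ** 2 for p, outcome in lottery)

# With utility linear in lives saved, both options score 400 in expectation,
# but B has variance 160000 while A has none.
print(expected_value(option_a), variance(option_a))
print(expected_value(option_b), variance(option_b))
```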
I don’t really know the formal definition or theory of expected utility, but it is something which seems to underpin almost everything that is said here on LW or on OB.
Can anyone please point me to a good reference or write a wiki entry?
Are the wikipedia references recommended?
The wikipedia reference is a bit patchy. This Introduction to Choice under Risk and Uncertainty is pretty good if you have a bit more time, and can handle the technical parts.
Thanks conchis.
Perhaps check my references here:
http://timtyler.org/expected_utility_maximisers/
Thanks! I hadn’t heard that definition of utilitarianism before.
As I recall, I made this up to suit my own ends :-(
Wikipedia quibbles with me significantly—stressing the idea that utilitarianism is a form of consequentialism:
“Utilitarianism is the idea that the moral worth of an action is determined solely by its contribution to overall perceivable utility: that is, its contribution to happiness or pleasure as summed among an ill-defined group of people. It is thus a form of consequentialism, meaning that the moral worth of an action is determined by its outcome.”
I don’t really want “utilitarianism” to refer to a form of consequentialism—thus my crude attempt at hijacking the term :-|
I hadn’t even considered the possibility that your definition might lead to a ‘utilitarianism’ that is not consequentialist. In some circles, the two terms are used interchangeably. Sounds akin to ‘rule utilitarianism’, but more interesting—the right action is one that maximizes expected utility, regardless of its actual consequences. Does that sound like a good enough characterization?
I would still be prepared to call an agent “utilitarian” if it operated via maximising expected utility—even if its expectations turned out to be completely wrong, and its actions were far from those that would have actually maximised utility.
Humans are often a bit like this. They “expect” that hoarding calories is a good idea—and so that is what they do. Actually this often turns out to be not so smart. However, this flaw doesn’t make humans less utilitarian in my book—rather they have some bad priors—and they are wired-in ones that are tricky to update.