If you begin to suspect that the majority of LW believes something incorrectly, your prior probability distribution should resemble
P(“I’m wrong”) >>
P(“They’re wrong because the problem is incredibly hard”) >
P(“They’re wrong because of a subtle bias or flaw in reasoning”) >
P(“They’re wrong because they’re missing something obvious.”)
You should update on all available evidence, of course, but there is very strong evidence that this community is reasonably competent.
This hasn’t been my experience on technical issues that I have experience with. Certainly the fact that some people on LW have started worrying about extremely low-probability events, and invoking expected utility maximization to say they are “required to” do so in order to be rational, means that something is wrong.
In this case it turns out that VNM is actually correct, but misunderstood (utility is sufficiently unintuitive that conflating it with anything you do have intuition about will probably lead to issues down the road, unless you’re willing to ignore the math when it [seems to] lead to ridiculous conclusions). I’m about to edit the original post to reflect this. I think it was worth the 9 units of karma to resolve my own confusion, though.
Which technical issues are these? Do they represent a large subset of the issues for which there is a significant consensus on LW?
If VNM is correct, what misunderstanding makes caring about low probability/high utility scenarios irrational? If I have the wrong idea about how to maximize utility, I would really like to know.
See the updated original post. The issue is that utility is (a) bounded and (b) probably doesn’t correspond at all to your intuition of what it should be. In particular, scenarios that you think are high utility actually just have high {monetary value, lives saved, etc.}, which may or may not have anything to do with your utility function (except that if lives saved is a terminal value for you, then your utility function is increasing with lives saved, but could increase arbitrarily slowly).
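As an illustration of that last parenthetical (the functional form here is purely hypothetical, chosen for illustration rather than taken from anyone’s actual preferences): a utility function such as

\[
u(n) \;=\; 1 - e^{-n/N}, \qquad N \gg 1,
\]

is strictly increasing in the number of lives saved n, yet bounded above by 1, and by making the constant N large it can be made to increase as slowly as you like, so “more lives saved” and “much more utility” can come apart.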
The technical issues that I have experience with are AI and cognitive science. While I’ve only had a few actual technical discussions with people on LessWrong about AI, my general impression was that the other person ranged from failing to grasp subtle but important insights to having little more than an abstract notion of how an AGI should work. Of course, the Sequences mean that the majority of people at least know enough to understand the general idea of the Bayesian approach to statistical machine learning, but this doesn’t imply a deep knowledge of exactly why the Bayesian approach is a good idea or what the current issues are on a computational level. I consider this to be a huge gap for people who are interested in FAI—if you don’t even know how the AI is going to work, you don’t have much chance of telling it to be friendly. In particular, your conception of how to influence its decision theory might be completely different from what is actually possible.
[EDIT: I should also note, as a former diehard Bayesian, that the immense hate towards frequentists is entirely unjustified. First of all, LW has an entirely different definition of the term than everyone else in the world, as far as I can tell. However, I believe that many people here would still consider the Bayesian approach to be obviously superior even after getting the right definition, despite the fact that the frequentist approach to statistical machine learning is quite reasonable. This was probably the first thing that led me to suspect that LW wasn’t quite as knowledgeable as I had originally supposed.]
I am less familiar with cognitive science, but certain cognitive biases taken for granted on LW are simply empirically nonexistent, for instance actor-observer bias, which I think is incorrectly labelled the affect heuristic on LW, although I could be misremembering.
Woah. If you have a pretty solid handle on how an AGI should work, you’re way too far out of my league for me to contribute meaningfully to this conversation.
But to try and clear up a hole in my understanding: VNM utility functions assign real-number utility values, and rescaling them by a positive affine transformation doesn’t change anything meaningful. Since the reals are unbounded, where do the bounds on VNM utilities come from?
Given that I originally failed to understand VNM, I doubt I’m out of anyone’s league. I’m just saying that I have a good enough general background that, if a commonly held assumption seems to lead to ridiculous conclusions and there is a simple, technically well-justified way to avoid those conclusions by rejecting the assumption, I am willing to reject the assumption rather than assume my own reasoning is incorrect. This might be a bad idea, but as long as I post my rejection so that everyone can tell me why I’m stupid, it seems to work reasonably well.
Also, I certainly don’t have a solid handle on how AGI should work, but I can see the different relevant components and the progress that people seem to be making on a couple of those fronts.
But to answer your question, the bound is on the ratio min |u(x)-u(y)| / max |u(a)-u(b)|, where the min is over alternatives x and y that you would consider sufficiently different to be worth distinguishing, and the max is over all alternatives a and b. This gets around the fact that u is only defined up to positive affine transformations, since ratios of differences are invariant under such transformations. The point is basically that if p is very small and x and y are alternatives that are different enough that I would take (1-p)x+pa over (1-p)y+pb for ANY possible a and b (even very bad a and very good b), then by expected utility maximization I must have (1-p)u(x)+pu(a) > (1-p)u(y)+pu(b) for all a and b. Algebraic manipulation shows that this ratio is at least p/(1-p), which is basically p for small p; since the smallest distinguishable difference |u(x)-u(y)| is finite, the range max |u(a)-u(b)| must be bounded, which is the result claimed at the beginning of this paragraph.
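Spelling out that algebraic manipulation (this is just a rearrangement of the inequality above, with the normalization u(x) > u(y) and 0 < p < 1 assumed):

\begin{align*}
(1-p)\,u(x) + p\,u(a) &> (1-p)\,u(y) + p\,u(b) && \text{for all } a, b \\
\iff\quad (1-p)\,\bigl(u(x) - u(y)\bigr) &> p\,\bigl(u(b) - u(a)\bigr) && \text{for all } a, b \\
\implies\quad \sup_b u(b) - \inf_a u(a) &\le \frac{1-p}{p}\,\bigl(u(x) - u(y)\bigr),
\end{align*}

which is equivalent to

\[
\frac{u(x) - u(y)}{\sup u - \inf u} \;\ge\; \frac{p}{1-p} \;\approx\; p \quad \text{for small } p.
\]

Minimizing the left-hand side over all pairs x and y that I still bother to distinguish gives the ratio bound stated at the start of this comment.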
Hold on, aren’t you assuming your conclusion here? Unless the utility function is bounded already, u(a) and u(b) can take arbitrarily large values, and can always be chosen extreme enough to reverse the inequality, regardless of the values of x, y, and p. That is, there are no possible alternatives x and y for which your statement holds.
My claim is that IF we don’t care about probabilities that are smaller than p (and I would claim that we shouldn’t care about probabilities less than, say, 1 in 3^^^3), then the utility function is bounded. This is because if we don’t care about probabilities smaller than p, then for any two objects x and y that are sufficiently different that we DO care about one over the other, we must have
(1-p)x+pa > (1-p)y+pb,
no matter how bad a is and how good b is. I am being somewhat sloppy in my language here. Probably differences in outcome can vary continuously, so I can always find x and y that are so similar to each other that I do start caring about other equally irrelevant differences in the distribution of outcomes. For instance, I would always choose spending 5 minutes of time and having a 10^-100 chance of being tortured for 10^1000 years over spending 10 minutes of time. But 5 minutes + chance of torture probably isn’t superior to 5+10^-1000 minutes. What I really mean is that, if x and y are sufficiently different that probabilities smaller than p just don’t come into the picture, then I can bound the range of my utility function in terms of |u(x)-u(y)| and p.
So just to be clear, my claim is that [not caring about small probabilities] implies [utility function is bounded]. This I can prove mathematically.
The other part of my claim [which I can’t prove mathematically] is that it doesn’t make sense to care about small probabilities. Of course, you can care about small probabilities while still having consistent preferences (heck, just pick some quantity that doesn’t have an obvious upper bound and maximize its expected value). But I would have a hard time believing that that is the utility function corresponding to your true set of preferences.
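To spell out how boundedness connects back to the original worry about low-probability events (this is just arithmetic once u is bounded, not an additional assumption): if two lotteries L and L' agree everywhere except on an event of probability p, then

\[
\bigl|\,\mathbb{E}[u(L)] - \mathbb{E}[u(L')]\,\bigr| \;\le\; p\,\bigl(\sup u - \inf u\bigr),
\]

so once u is bounded, an event with sufficiently small probability cannot change which of two distinguishably different options has higher expected utility, no matter how extreme the outcomes on that event are.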