…I’m not sure what ‘laws’ you are talking about...
In the latest interview featuring Eliezer Yudkowsky, he said that “there are parts of rationality that we do understand very well in principle”, namely “Bayes’ Theorem, the expected utility formula, and Solomonoff induction”. He also often refers to ‘laws’ when talking about the basic principles taught on LessWrong. For example, in ‘When (Not) To Use Probabilities’ he writes, “The laws of probability are laws, not suggestions”.
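For reference, two of those ‘laws’ can be written down compactly; these are the standard textbook forms, not anything taken from the interview itself:

```latex
% Bayes' Theorem: how to update belief in a hypothesis H given evidence E.
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

% Expected utility of an action a: probability-weighted sum over outcomes o.
\mathrm{EU}(a) = \sum_{o} P(o \mid a)\, U(o)
```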
...what it is I have to cherry-pick, what one has to do with the other, what either of them have to do with Pascal’s Mugging or discount rates, or in what sense either of those are unsolved problems.
In the post ‘Pascal’s Mugging: Tiny Probabilities of Vast Utilities’ Eliezer Yudkowsky writes, “I don’t feel I have a satisfactory resolution as yet”. This might sound like yet another peripheral problem to be solved, with no relevance to rationality in general or to the bulk of friendly AI research. But it is actually one of the most important problems, because it undermines the basis of rational choice.
Why do you do what you do? Most people just do what they feel is the right thing to do; they base their decisions on gut feeling. But to make decision making as efficient as possible, people started to question whether our evolutionary prior is still adequate for modern-day decision making. And indeed, advances in probability theory allowed us to make a lot of progress, and they are still the best choice when dealing with an endless number of problems. This wasn’t enough when dealing with terminal goals, however: we had to add some kind of measure of value to be able to choose between goals of different probability, since probability by itself is not a sufficient measure to discern desirable goals from goals that are not worthwhile. Doing so, we were able to formalize how probable a certain outcome was and how much we desired that outcome. Together, it seemed, we could now discern how worthwhile it would be to pursue a certain outcome, regardless of our instincts.

Yet a minor problem arose, sometimes bothersome, sometimes welcome. Our probabilities and utility calculations were often based solely on gut feeling, because gut feeling was often the only available evidence; we called this our prior probability. That was unsatisfactory, since we were still relying on our evolutionary prior, the very thing we had tried to overcome. So we came up with the Solomonoff prior to finally obliterate instinct, and it was very good.

Only after a while did we notice that our new heuristics often told us to seek outcomes that seemed not only undesirable but plainly wrong. We could outweigh any probability by expecting additional utility, and disregard any undesirable action by switching between experience-utility and decision-utility as we pleased. Those who didn’t switch were susceptible to taking any chance, either because other agents told them that a certain outcome carried enough utility to outweigh their low credence in it, or because the utility of a certain decision was able to outweigh its low probability. People wouldn’t decide not to marry their girlfriend just because that decision would make two other girls and their mother equally happy and therefore outweigh their own happiness and that of their girlfriend; they simply assigned even more utility to the decision to marry her. Others would seek extreme risks and give all their money to a charity that was trying to take over the Matrix, the almost infinite utility associated with success outweighing its astronomically low probability.

This was unbearable, so people decided that something must be wrong with their heuristics and that they would rather doubt their grasp of “rationality” than act according to it. But the heuristics couldn’t be completely wrong; after all, they had been very successful on a number of problems. So people decided to just ignore certain extremes and to use their heuristics only when they felt those would deliver reasonable results. Consequently, in the end we were still where we started, using our gut feelings to decide what to do. But how do we program that into an AI? Several solutions have been proposed, such as using discount rates to disregard extremes or measuring the running time or space requirements of computations, but all had their flaws. It all seemed to have something to do with empirical evidence, but we were already so committed to the laws of probability as the ultimate guidance that we missed the possibility that those ‘laws’ might actually be helpful tools, not prescriptions of optimal and obligatory decisions.
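To make the failure mode concrete, here is a minimal sketch of the naive expected-utility comparison described above; every number is invented purely for illustration and comes from nothing in the original posts:

```python
# Naive expected-utility comparison, illustrating a Pascal's-Mugging-style offer.
# All probabilities and utilities below are made up for illustration only.

def expected_utility(probability: float, utility: float) -> float:
    """Expected utility of a single outcome: probability times utility."""
    return probability * utility

# Option A: keep your $5 (certain, modest utility).
keep_money = expected_utility(1.0, 5.0)

# Option B: pay the mugger, who claims an astronomically large payoff.
# Even an absurdly small credence in the claim is swamped by the claimed utility.
pay_mugger = expected_utility(1e-20, 3e30)

print(keep_money)  # 5.0
print(pay_mugger)  # 3e+10 -- the naive calculation says to pay the mugger
```

The point of the story above is that nothing inside this calculation tells you when to stop believing the inputs.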
OK… I think I understand.

And, you’re right, I’m committed to the idea that it’s best to take the course of action with the highest expected utility.
That said, I do agree with you that if my calculations of expected utility lead to wildly counterintuitive results, sometimes that means my calculations are wrong.
Then again, sometimes it means my intuitions are wrong.
So I have to decide how much I trust my intuitions, and how much I trust my calculations.
This situation isn’t unique to probability or calculations of expected utility. It applies to, say, ballistics just as readily.
That is, I have certain evolved habits of ballistic calculation that allow me to do things like hit dartboards with darts and catch baseballs with my hand and estimate how far my car is from other cars. By diligent practice I can improve those skills.
But I’m never going to be able to judge the distance of the moon by eyeballing it—that’s too far outside the range of what my instincts evolved to handle.
Fortunately, we have formalized some laws that help me compute distances and trajectories and firing solutions.
Unfortunately, because air resistance is important and highly variable, it turns out that we can’t fully compute those things within an atmosphere. Our missiles don’t always hit their targets.
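As a concrete illustration of that gap (my own toy numbers and drag constant, nothing taken from actual ballistics practice): the vacuum case has an exact closed-form answer, while adding even a simple drag term forces a numerical approximation whose result depends on a coefficient we can only estimate empirically.

```python
# Rough illustration: projectile range in vacuum vs. with air drag.
# The drag model and all constants are invented for illustration only.
import math

def range_vacuum(speed: float, angle_deg: float, g: float = 9.81) -> float:
    """Closed-form range on flat ground with no air resistance."""
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / g

def range_with_drag(speed: float, angle_deg: float, k: float,
                    g: float = 9.81, dt: float = 0.001) -> float:
    """Numerically integrate with quadratic drag (acceleration = -k*|v|*v)."""
    angle = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        ax, ay = -k * v * vx, -g - k * v * vy
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y = x + vx * dt, y + vy * dt
    return x

print(range_vacuum(100.0, 45.0))            # ~1019 m, exact formula
print(range_with_drag(100.0, 45.0, 0.001))  # noticeably shorter; depends on k
```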
Of course, one important difference is that with ballistics it turns out to be relatively easy to develop a formal system that is strictly better than our brains at hitting targets. Which is just another way of saying that our evolved capability for ballistics turns out to be not particularly good, compared to the quality of the formal systems that we know how to develop.
Whereas with other kinds of judgments, like figuring out what to do next, our evolved capability is much much better than the formal systems that we know how to develop.
Of course, two hundred years ago that was also true of ballistics. It turns out that we’re pretty good at improving the quality of our formal systems.
I am still leaning towards the general reservations that I voiced in my first submission here, although I have learnt a lot since then. I find the tendency to base decisions solely on logical implications questionable. I do not doubt that everything we know hints at the possibility that artificial intelligence could undergo explosive self-improvement and reach superhuman capabilities, but I do doubt that we know enough to justify the kind of commitment given within this community. I am not able to formalize my doubts right now or explain what exactly is wrong, but I feel the need to voice my skepticism nonetheless. And I feel that my recent discovery of some of the admitted problems reinforces my skepticism.
Take, for example, the risks from asteroids. There is a lot of empirical evidence that asteroids have caused damage before and that they pose an existential risk in the future. One can use probability theory, the expected utility formula and other heuristics to determine how reasonable it would be to support the mitigation of that risk. Then there are risks like those posed by particle accelerators. There is some evidence that high-energy physics might pose an existential risk, but more evidence that it does not. The risks and benefits are still not based on sheer speculation alone. Yet I don’t think we could just factor in the expected utility of the whole Earth and of our possible future as a galactic civilization to conclude that we shouldn’t do high-energy physics.

In the case of risks from AI we have a scenario that is based solely on extrapolation, on sheer speculation. The only reason to currently care strongly about risks from AI is the expected utility of success, and correspondingly the disutility of a negative outcome. And that is where I become very skeptical: there can be no empirical criticism. Such scenarios are the ones I compare to ‘Pascal’s Mugging’, because they are susceptible to the same problem of expected utility outweighing any amount of reasonable doubt. I feel that such scenarios can lead us astray by removing themselves from other kinds of evidence and empirical criticism, and therefore end up justifying themselves by making up huge numbers when the available data and understanding don’t allow any such conclusions. Take, for example, MWI, itself a logical implication of the available data and our current understanding. But is it enough to commit quantum suicide? I don’t feel that such a conclusion is reasonable. I believe that logical implications and extrapolations are not enough to outweigh any risk. If there is no empirical evidence, if there can be no empirical criticism, then all bets are off.
As you mentioned, there are many uses, like ballistic calculation, where mere extrapolation works and is the best we can do. But since there are problems like ‘Pascal’s Mugging’, which we perceive to be undesirable and which lead to an endless hunt for ever larger expected utility, I think it is reasonable to ask for some upper and lower bounds regarding the use and scope of certain heuristics. We agree that we are not going to stop pursuing whatever terminal goal we have chosen just because someone promises us even more utility if we do what that person wants. We also agree that we are not going to stop loving our girlfriend just because there are many people who do not approve of our relationship and who would be happy if we divorced. So we have already, informally, established some upper and lower bounds. But when do we start to take our heuristics seriously and do whatever they say is the optimal decision? That is an important question, and I think that one of the answers, as mentioned above, is that we shouldn’t trust our heuristics without enough empirical evidence as their fuel.
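Purely as an illustration of what writing down such bounds might look like, here is a toy rule with an explicit probability floor and utility ceiling; the cutoffs are arbitrary numbers of my own choosing, not anything proposed in this exchange:

```python
# Toy decision rule with crude upper and lower bounds on the inputs.
# Both cutoffs are arbitrary illustrative choices, not recommendations.

PROBABILITY_FLOOR = 1e-9   # below this, treat an outcome as noise, not evidence
UTILITY_CEILING = 1e12     # cap how much a claimed utility can sway the result

def bounded_expected_utility(probability: float, utility: float) -> float:
    """Expected utility, clipped by the floor and ceiling above."""
    if probability < PROBABILITY_FLOOR:
        return 0.0
    return probability * min(utility, UTILITY_CEILING)

# The mugger's offer from the earlier sketch is now simply discarded,
# while an ordinary, well-evidenced option is left untouched:
print(bounded_expected_utility(1e-20, 3e30))  # 0.0
print(bounded_expected_utility(1.0, 5.0))     # 5.0
```

Of course, such cutoffs are exactly the kind of ad hoc patch the story above complains about; the sketch only shows where the arbitrary choices would have to live.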
If there is no empirical evidence, if there can be no empirical criticism, then all bets are off.
My usual example of the IT path being important is Microsoft. IT improvements have been responsible for much of the recent progress of humanity. For many years, Microsoft played the role of the evil emperor of IT, with nasty business practices and shoddy, insecure software. They screwed humanity, and it was a nightmare—a serious setback for the whole planet. Machine intelligence could be like that—but worse.
(nods) I endorse drawing inferences from evidence, and being skeptical about the application of heuristics that were developed against one reference class to a different reference class.
In the post ‘Pascal’s Mugging: Tiny Probabilities of Vast Utilities’ Eliezer Yudkowsky writes, “I don’t feel I have a satisfactory resolution as yet”. This might sound like yet another peripheral problem to be solved, with no relevance to rationality in general or to the bulk of friendly AI research. But it is actually one of the most important problems, because it undermines the basis of rational choice.
The evolved/experienced heuristic is probably to ignore those. In most cases where such things crop up, it is an attempted mugging—like it was with Pascal and God. So, most people just learn to tune such things out.