Well, whatever value metaprobability does or doesn’t have, in the case of the black box it doesn’t seem to offer any help in finding a decision strategy. (I find it helpful in understanding the problem, but not in formulating an answer.)
How would you go about choosing a strategy for the black box?
My LessWrongian answer is that I would ask my mind, which was created already in motion, what the probability is, then refine it with as many further reflections as I can come up with. Embody an AI in this world long enough and it too will have priors about black boxes, except that reporting that probability as a number is inherent to its source code, rather than strange and otherworldly as it is for us.
The point made in that article (and in the Metaethics sequence as a whole) is that the only mind you have to solve a problem is the one you have, and you will inevitably use it to solve problems suboptimally, where “suboptimal”, taken strictly, describes everything anybody has ever done.
The reflection part of this is important, as it’s the only thing we have control over, and I suppose it could involve discussions about metaprobabilities. That doesn’t really do it for me, though, although I’m only a single point in mind design space. To me, metaprobability seems isomorphic to a collection of reducible considerations, so it doesn’t seem like a useful shortcut or abstraction. My particular strategy for reflection would be something like the one in dspeyer’s comment: reasoning about the source of the box, about what might plausibly be inside it, and so on. Depending on how much time I had, I’d be very systematic about it: listing out possibilities, solving infinite series on expected value, etc.
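(To make “solving infinite series on expected value” concrete, here is a minimal sketch in Python. The hypotheses, probabilities, and payoffs are invented purely for illustration; in practice they would come out of the reflection described above.)

```python
# Hypothetical enumeration of what the black box might do.
# Hypotheses, probabilities, and payoffs are made up for illustration;
# in practice they'd come from reflecting on the box's source, its owner, etc.
hypotheses = [
    ("pays out double",  0.20,   +1.0),
    ("keeps the coin",   0.50,   -1.0),
    ("does nothing",     0.25,    0.0),
    ("contains a bomb",  0.05, -100.0),
]

expected_value = sum(p * payoff for _, p, payoff in hypotheses)
print(f"Expected value of playing: {expected_value:+.2f}")
# With these made-up numbers the expected value is -5.30, so I'd decline.
```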
Part of the motivation for the black box experiment is to show that the metaprobability approach breaks down in some cases. Maybe I ought to have made that clearer! The approach I would take to the black box does not rely on metaprobability, so let’s set that aside.
So, your mind is already in motion, and you do have priors about black boxes. What do you think you ought to do in this case? I don’t want to waste your time with that… Maybe the thought experiment ought to have specified a time limit. Personally, I don’t think enumerating things the boxes could possibly do would be helpful at all. Isn’t there an easier approach?
Part of the motivation for the black box experiment is to show that the metaprobability approach breaks down in some cases.
Ah! I didn’t quite pick up on that. I’ll note that infinite regress problems aren’t necessarily defeaters of an approach. Good minds that could fall into that trap implement a “Screw it, I’m going to bed” trigger to keep from wasting cycles even when using an otherwise helpful heuristic.
Maybe the thought experiment ought to have specified a time limit. Personally, I don’t think enumerating things the boxes could possibly do would be helpful at all. Isn’t there an easier approach?
Maybe, but I can’t guarantee you won’t get blown up by a black box with a bomb inside! As a friend, I would be furiously lending you my reasoning to help you make the best decision, worrying very little what minds better and faster than both of ours would be able to do.
It is, at the end of the day, just the General AI problem: don’t think too hard on brute-force but perfect methods, or else you might skip a heuristic that could have gotten you an answer within the time limit! But how do you know whether the time limit is at that threshold? You could spend cycles on that too, but time is wasting! Time-limit games presume that the participant has already undergone a lot of unintentional design (by evolution, history, past reflections, etc.). This is the “already in motion” part which, frustratingly, can never be optimal unless somebody on the outside designed you for it. Which source code performs best under which game is a formal problem. Being a source code means taking the discussion we’re having now and applying it as best you can, because that’s what your source code does.
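(A toy illustration of that trade-off, my own sketch rather than anything from the discussion above: a deliberation loop that refines its answer until a deadline hits and then goes with whatever it has, i.e. the “screw it, I’m going to bed” trigger in code.)

```python
import time

def deliberate(initial_estimate, refinements, deadline_seconds):
    """Refine an estimate until the deadline, then act on whatever we have.

    `refinements` is a sequence of functions, each a (possibly expensive)
    improvement step. The point is that the agent commits to *some* answer
    when the clock runs out instead of searching for the perfect one.
    """
    estimate = initial_estimate
    stop_at = time.monotonic() + deadline_seconds
    for step in refinements:
        if time.monotonic() >= stop_at:
            break  # the "screw it, I'm going to bed" trigger
        estimate = step(estimate)
    return estimate

# Toy usage: each step nudges a probability estimate toward 0.5;
# with a 10 ms budget we stop wherever we happen to be when time runs out.
steps = [lambda e: (e + 0.5) / 2 for _ in range(100_000)]
print(deliberate(0.9, steps, deadline_seconds=0.01))
```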
Yes—this is part of what I’m driving at in this post! The kinds of problems that probability and decision theory work well for have a well-defined set of hypotheses, actions, and outcomes. Often the real world isn’t like that. One point of the black box is that the hypothesis and outcome spaces are effectively unbounded. Trying to enumerate everything it could do isn’t really feasible. That’s one reason the uncertainty here is “Knightian” or “radical.”
In fact, in the real world, “and then you get eaten by a black hole incoming near the speed of light” is always a possibility. Life comes with no guarantees at all.
Often in Knightian problems you are just screwed and there’s nothing rational you can do. But in this case, again, I think there’s a straightforward, simple, sensible approach (which so far no one has suggested...)
Often in Knightian problems you are just screwed and there’s nothing rational you can do.
As you know, this attitude isn’t particularly common ’round these parts, and while I fall mostly in the “decision theory can account for everything” camp, there may still be a point there. “Rational” isn’t really a category so much as a degree. Formally, it’s a function on actions that somehow measures how closely an action corresponds to the perfect decision-theoretic action. My impression is that there’s a Gödelian consideration lurking somewhere, which is where the “Omega fines you exorbitantly for using TDT” thought experiment comes into play.
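(One way I might make that precise, purely as an illustrative formalization of my own: score an action by where its expected utility falls between the worst and best available actions,

$$R(a) = \frac{\mathbb{E}[U \mid a] - \min_{a'} \mathbb{E}[U \mid a']}{\max_{a'} \mathbb{E}[U \mid a'] - \min_{a'} \mathbb{E}[U \mid a']},$$

so that R(a) = 1 for the perfect decision-theoretic action and R(a) = 0 for the worst, assuming the two differ.)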
That thought experiment never bothered me much, as it just is what it is: a problem where you are just screwed, and there’s nothing rational you can do to improve your situation. You’ve already rightly programmed yourself to use TDT, and even your decision to stop using TDT would be made using TDT, and unless Omega is making exceptions for that particular choice (in which case you should self-modify to non-TDT), Omega is just a jerk that goes around fining rational people.
In such situations, the words “rational” and “irrational” are less useful descriptors than simply observing source code being executed. If you’re formal about it using a metric R, then you would be more R, but how well R tracks “rational” wouldn’t really be the point.
But in this case, again, I think there’s a straightforward, simple, sensible approach (which so far no one has suggested...)
So, I don’t think the black box is really one of the situations I’ve described. It seems to me that a decision theorist training herself to be more generally rational is in fact improving her odds of winning the black box game. All the approaches outlined so far do seem to improve her odds as well. I don’t think a better solution exists, and she will often lose if she lacks time to reflect. But the more rational she is, the more often she will win.