Part of the motivation for the black box experiment is to show that the metaprobability approach breaks down in some cases.
Ah! I didn’t quite pick up on that. I’ll note that infinite regress problems aren’t necessarily defeaters of an approach. Good minds that could fall into that trap implement a “Screw it, I’m going to bed” trigger to keep from wasting cycles even when using an otherwise helpful heuristic.
Maybe the thought experiment ought to have specified a time limit. Personally, I don’t think enumerating things the boxes could possibly do would be helpful at all. Isn’t there an easier approach?
Maybe, but I can’t guarantee you won’t get blown up by a black box with a bomb inside! As a friend, I would be furiously lending you my reasoning to help you make the best decision, worrying very little about what minds better and faster than both of ours would be able to do.
It is, at the end of the day, just the General AI problem: don’t think too hard about brute-force-but-perfect methods, or else you might skip a heuristic that could have gotten you an answer within the time limit! But how do you know whether the time limit is at that threshold? You could spend cycles on that too, but time is wasting! Time-limit games presume that the participant has already undergone a lot of unintentional design (by evolution, history, past reflection, etc.). This is the “already in motion” part which, frustratingly, can never be optimal unless somebody on the outside designed you for it. Which source code performs best under which game is a formal problem. Being a piece of source code means taking the discussion we’re having now and applying it as best you can, because that’s what your source code does.
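(A toy sketch of what that kind of bounded deliberation might look like, including the “screw it, I’m going to bed” cutoff mentioned above; the function and parameter names are my own illustration, not anything proposed in the thread:)

```python
import time

def deliberate(heuristics, problem, time_limit_s):
    """Try cheap-to-expensive heuristics until the deadline hits.

    A toy "anytime" loop: act on whatever the best available guess is
    when time runs out, rather than waiting on a perfect method.
    """
    deadline = time.monotonic() + time_limit_s
    best_guess = None
    for heuristic in heuristics:          # assumed ordered cheapest-first
        if time.monotonic() >= deadline:  # the "screw it, I'm going to bed" trigger
            break
        candidate = heuristic(problem)
        if candidate is not None:
            best_guess = candidate        # keep the latest usable answer
    return best_guess                     # may be None: sometimes you just lose
```

Something like `deliberate([quick_gut_check, careful_model], black_box, time_limit_s=60)` (hypothetical names) then returns the best answer found before the cutoff, which is all a mind “already in motion” can do.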
Yes—this is part of what I’m driving at in this post! The kinds of problems that probability and decision theory work well for have a well-defined set of hypotheses, actions, and outcomes. Often the real world isn’t like that. One point of the black box is that the hypothesis and outcome spaces are effectively unbounded. Trying to enumerate everything it could do isn’t really feasible. That’s one reason the uncertainty here is “Knightian” or “radical.”
In fact, in the real world, “and then you get eaten by a black hole coming in at near light speed” is always a possibility. Life comes with no guarantees at all.
Often in Knightian problems you are just screwed and there’s nothing rational you can do. But in this case, again, I think there’s a straightforward, simple, sensible approach (which so far no one has suggested...)
Often in Knightian problems you are just screwed and there’s nothing rational you can do.
As you know, this attitude isn’t particularly common ’round these parts, and while I fall mostly in the “Decision theory can account for everything” camp, there may still be a point there. “Rational” isn’t really a category so much as a degree. Formally, it’s a function on actions that somehow measures how closely an action corresponds to the perfect decision-theoretic action. My impression is that there’s a Gödelian consideration lurking somewhere, which is where the “Omega fines you exorbitantly for using TDT” thought experiment comes into play.
That thought experiment never bothered me much, as it just is what it is: a problem where you are simply screwed, and there’s nothing rational you can do to improve your situation. You’ve already (rightly) programmed yourself to use TDT, and even your decision to stop using TDT would itself be made using TDT. Unless Omega makes an exception for that particular choice (in which case you should self-modify to non-TDT), Omega is just a jerk who goes around fining rational people.
In such situations, the words “rational” and “irrational” are less useful descriptors than simply observing source code being executed. If you’re formal about it and define some metric R, then you could say you score higher on R, but R’s correspondence to “rational” wouldn’t really be apt anymore.
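To make the “metric R” idea a bit more concrete, here is one way it could be cashed out (my own illustrative formalization, not anything stated in the thread): score an action by how far its expected utility falls short of the best available action’s,

$$ R(a) \;=\; \mathbb{E}[U \mid a] \;-\; \max_{a' \in \mathcal{A}} \mathbb{E}[U \mid a'], $$

so that R(a) ≤ 0 for every action, R(a) = 0 exactly at the decision-theoretically perfect action, and “more rational” reads as “larger (less negative) R”. In the Omega case, even the agent with the largest achievable R still gets fined, which is the sense in which the label stops doing useful work.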
But in this case, again, I think there’s a straightforward, simple, sensible approach (which so far no one has suggested...)
So, I don’t think the black box is really one of the situations I’ve described. It seems to me a decision theorist training herself to be more generally rational is in fact improving her odds of winning the black box game. All the approaches outlined so far do seem to improve her odds as well. I don’t think a better solution exists, and she will often lose if she lacks time to reflect. But the more rational she is, the more often she will win.