It does depend on the problem domain a lot. Sometimes the special case can be much easier than the fully general case, just as a DFA is a special case of a Turing machine. In that respect, the constraints can make life a lot easier: proving certain properties of a DFA is far more tractable than proving the same properties of an unconstrained TM.
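To make the contrast concrete, here’s a minimal sketch (Python, with a made-up state/transition encoding) of the kind of question that’s easy for a DFA but undecidable for an unconstrained TM: whether the machine accepts any input at all reduces, for a DFA, to plain graph reachability.

```python
# Minimal sketch: deciding emptiness for a DFA is just reachability
# from the start state to any accepting state. The analogous question
# for a Turing machine is undecidable. The encoding here is made up:
# states are ints, transitions a dict of (state, symbol) -> state.

def dfa_accepts_anything(start, accepting, transitions):
    """Return True iff the DFA accepts at least one string."""
    seen = {start}
    frontier = [start]
    while frontier:
        state = frontier.pop()
        if state in accepting:
            return True
        for (src, _symbol), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return False

# Example: a DFA over {'a', 'b'} that accepts strings containing "ab".
trans = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 2, (2, 'b'): 2,
}
print(dfa_accepts_anything(0, {2}, trans))  # True
```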
Sometimes the special case can be harder, like going from “program a fully functional operating system” to “program a fully functional OS that requires only 12MB of RAM”.
It’s correct that fully general cases can lead to impossibility results that no one should care about, since they wouldn’t translate to actually implemented systems, which break some (in any case unrealistic) “ideal” condition. We shouldn’t forget, after all, that no matter how powerful our future AI overlord will be, it can still be perfectly simulated by a finite state machine: there’s no infinite tape in the real world, and a machine with N bits of memory has at most 2^N distinct configurations.
(Interesting comment on Laplace’s demon. I wasn’t sure why you’d call it a walking paradox, as opposed to Maxwell’s (what is it with famous scientists and their demons, anyway?), but I see there’s a recent paywalled paper proving as much. Deutsch’s much older The Fabric of Reality has some cool stuff on that as well, not that I’ve read it in depth.)
Right. MIRI’s most important paper to date, Definability of Truth in Probabilistic Logic, isn’t constructive either. However, you take what you can get.
I think there are two different kinds of constructivity being discussed here: constructivity of existence theorems and constructivity of the values of variables. We can afford to be nonconstructive about existence theorems, but if you want to characterize the value of a variable like “the optimal action for the agent to take”, your solution must necessarily be constructive in the sense of being algorithmic. You can say “the action with the highest expected utility under the agent’s uncertainty at the time the action was calculated”, but of course that assumes you know how to define and calculate expected utility, which, as the paper shows, you often don’t.
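As a toy illustration of what an algorithmic characterization would even look like, here’s a sketch in Python. The numbers and the evidential-style conditional probabilities are entirely made up, and the whole difficulty the paper points at is hidden in the assumption that prob and utility are handed to us as computable functions.

```python
# Toy sketch: "the action with the highest expected utility" is only
# a definition, not an algorithm, until P(outcome | action) and U are
# actually computable. The Newcomb-flavored numbers below are made up.

def best_action(actions, outcomes, prob, utility):
    """Return argmax over a of sum_o P(o | a) * U(a, o)."""
    def expected_utility(a):
        return sum(prob(o, a) * utility(a, o) for o in outcomes)
    return max(actions, key=expected_utility)

P = {('full', 'one-box'): 0.99, ('empty', 'one-box'): 0.01,
     ('full', 'two-box'): 0.01, ('empty', 'two-box'): 0.99}
U = {('one-box', 'full'): 1_000_000, ('one-box', 'empty'): 0,
     ('two-box', 'full'): 1_001_000, ('two-box', 'empty'): 1_000}

print(best_action(['one-box', 'two-box'], ['full', 'empty'],
                  prob=lambda o, a: P[(o, a)],
                  utility=lambda a, o: U[(a, o)]))  # one-box
```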
I brought up Laplace’s Demon because it seems to me that Omega might be treatable as adversarial in the No Free Lunch sense: any decision theory might be “broken” by some sufficiently perverse situation, once we make the paradoxical assumption that our agent has unlimited computing resources, our adversary has unlimited computing resources, and each can reason perfectly about the other (i.e., that Omega is Laplace’s Demon, but we can also reason about Omega).
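Here is a toy version of the “perverse situation” construction, with everything hypothetical: if the environment has enough compute to simulate a deterministic agent, it can always award the worst payoff to exactly the action that agent takes, whichever decision theory the agent implements.

```python
# Toy sketch of a "sufficiently perverse situation": an environment
# that can simulate a deterministic agent can always punish whatever
# action the agent actually takes. Everything here is hypothetical.

def perverse_environment(agent, actions):
    """Return a payoff table that zeroes out whatever the agent picks."""
    chosen = agent(actions)  # Omega simulates the agent...
    return {a: (0 if a == chosen else 1) for a in actions}  # ...and punishes it

def greedy_agent(actions):
    # Any deterministic rule will do; this one just takes the first action.
    return actions[0]

actions = ['one-box', 'two-box']
payoffs = perverse_environment(greedy_agent, actions)
print(payoffs[greedy_agent(actions)])  # 0: the agent's choice scores worst
```

The paradox shows up when the agent is also allowed to simulate the environment: then each simulation has to terminate inside the other, which is the Laplace’s-Demon-flavored regress.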