There are a lot of mathematicians involved in MIRI’s mission right now.
If you have a lot of C# programmers in your company, you get a lot of recommendations about the importance of working in C#, and if you employ a lot of Python programmers...
I don’t think so. You’ve got your causal arrow pointing the wrong way; MIRI has a lot of mathematicians because it’s been advertising for mathematicians, and it’s been doing that because it needs mathematicians, because its mission involves a lot of math right now. The causal chain goes “math is needed, therefore mathematicians get hired”, not “mathematicians are hired, therefore math is needed”.
Math is needed for what? MIRI produces paper after paper on unimplementable idealized systems, produces no practical solutions, and criticizes philosophy for producing nothing practical.
Easier to solve the idealized general case first, so that you know what a solution even looks like, then adapt that to the real world with its many caveats. Divide and conquer!
There are more benefits to MIRI’s approach, such as: if you find that a certain kind of system cannot exist even in an ideal environment (such as a general Halting tester), you don’t need to muck around with “implementable” solutions, never knowing whether the current roadblock you face is just a function of some real-world constraint or fundamental in principle.
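For concreteness, here is the standard diagonalization sketch of why a general Halting tester cannot exist even in an idealized setting; the `halts` oracle below is purely hypothetical, which is the whole point:

```python
def halts(program, arg):
    """Hypothetical oracle: returns True iff program(arg) halts.
    Assumed to exist only for the sake of contradiction."""
    ...

def diag(program):
    # Ask the oracle about running `program` on itself, then do the opposite.
    if halts(program, program):
        while True:      # the oracle said "halts", so loop forever
            pass
    return "halted"      # the oracle said "loops", so halt immediately

# diag(diag) halts exactly when halts(diag, diag) says it doesn't, so no
# such general `halts` can exist -- even on an ideal, resource-unlimited machine.
```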
Easier to solve the idealized general case first, so that you know what a solution even looks like
This is not generally true, in my experience. Real-world problems come with constraints that often make things substantially easier. The general traveling salesman problem for high N might not be tractable, but a specific traveling salesman problem might have all sorts of symmetry you can exploit to make things doable.
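As a toy illustration of how much a constraint can buy you (a made-up 1-D instance, purely for the sake of the point): if the cities happen to lie on a line, the optimal tour falls out in one line of arithmetic, while the general case leaves you with brute force over factorially many orderings.

```python
from itertools import permutations

def tour_length(points, order):
    """Length of the closed tour visiting the 1-D `points` in `order`."""
    return sum(abs(points[order[i]] - points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def tsp_general(points):
    """General case: nothing to exploit, so brute-force all (n-1)! tours."""
    n = len(points)
    return min(tour_length(points, (0,) + p) for p in permutations(range(1, n)))

def tsp_collinear(points):
    """Special case: all cities on a line. The best closed tour just sweeps
    out to each extreme and back, so its length is twice the spread -- O(n)."""
    return 2 * (max(points) - min(points))

pts = [3.0, 7.5, 1.0, 4.2, 9.9]   # toy instance
assert abs(tsp_general(pts) - tsp_collinear(pts)) < 1e-9
```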
I’m reminded of a friend in grad school who created an idealized version of some field theory problems, axiomatized it, and was demonstrating the resulting system had all sorts of really useful properties. Then his adviser showed him that the only field theory that satisfied his axioms was the trivial field theory…
Easier to solve the idealized general case first, so that you know what a solution even looks like, then adapt that to the real world with its many caveats. Divide and conquer!
While this is true in math, it’s not necessarily true in computer science, where we need constructive reasoning and where the “idealized general case” can often include paradoxical conditions such as “the agent possesses perfect information and full certainty about all aspects of its own world-model, including of itself, but can still reason coherently about counterfactuals regarding even itself.” Such a creature not only doesn’t make intuitive sense; it’s basically Laplace’s Demon, a walking paradox itself.
It does depend on the problem domain a lot. Sometimes the special case can be much easier than the fully general case, just as a DFA is a special case of a Turing Machine. In that respect, the constraints can make life a lot easier when it comes to proving certain properties about a DFA versus proving them for an unconstrained TM.
Sometimes the special case can be harder, like going from “program a fully functional Operating System” to “program a fully functional OS which requires only 12MB of RAM”.
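To make the DFA half of that concrete: a question like “does this automaton accept anything at all?” reduces to plain graph reachability, whereas the analogous question about an arbitrary Turing Machine is undecidable. A minimal sketch, with a toy automaton of my own invention:

```python
def dfa_accepts_something(transitions, start, accepting):
    """Emptiness check for a DFA: reachability from the start state.
    `transitions` maps (state, symbol) -> state."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        if state in accepting:
            return True
        for (src, _symbol), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return False

# Toy DFA over {0, 1} accepting every string that contains a '1'.
trans = {("q0", "0"): "q0", ("q0", "1"): "q1",
         ("q1", "0"): "q1", ("q1", "1"): "q1"}
print(dfa_accepts_something(trans, "q0", {"q1"}))   # True
# The analogous check for an arbitrary TM cannot exist, by Rice's theorem.
```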
It’s correct that fully general cases can lead to impossibility results which no one should care about, since they wouldn’t translate to actually implemented systems, which break some (in any case unrealistic) “ideal” condition anyway. We shouldn’t forget, after all, that no matter how powerful our future AI overlord will be, it can still be perfectly simulated by a finite state machine (there is no infinite tape in the real world).
(Interesting comment on Laplace’s demon. I wasn’t sure why you’d call it a walking paradox (as opposed to Maxwell’s; what is it with famous scientists and their demons, anyway?), but I see there’s a recent paywalled paper proving as much. Deutsch’s much older The Fabric of Reality has some cool stuff on that as well, not that I’ve read it in depth.)
Right. MIRI’s most important paper to date, Definability of Truth in Probabilistic Logic, isn’t constructive either. However, you take what you can get.
I think there are two different kinds of constructivity being discussed here: regarding existence theorems and regarding the values of variables. We can afford to be nonconstructive about existence theorems, but if you want to characterize the value of a variable like “the optimal action for the agent to take”, your solution must necessarily be constructive in the sense of being algorithmic. You can say, “the action with the highest expected utility under the agent’s uncertainty at the time the action was calculated!”, but of course, that assumes that you know how to define and calculate expected utility, which, as the paper shows, you often don’t.
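To spell out what the constructive reading demands, here is the naive algorithmic version of “take the highest-expected-utility action”, with toy, made-up numbers. It only counts as an algorithm because the probability model and utility function are handed to it in computable form, which is exactly the part that is in question:

```python
def best_action(actions, outcomes, prob, utility):
    """Naive constructive reading of 'pick the highest-expected-utility action'.
    It is only an algorithm if prob() and utility() can actually be evaluated."""
    def expected_utility(a):
        return sum(prob(a, o) * utility(o) for o in outcomes)
    return max(actions, key=expected_utility)

# Toy example: two actions, two outcomes, invented numbers.
actions, outcomes = ["bet", "pass"], ["win", "lose"]
prob = lambda a, o: {("bet", "win"): 0.4, ("bet", "lose"): 0.6,
                     ("pass", "win"): 0.0, ("pass", "lose"): 1.0}[(a, o)]
utility = lambda o: {"win": 10, "lose": 0}[o]
print(best_action(actions, outcomes, prob, utility))   # "bet" (EU 4.0 vs 0.0)
```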
I brought up Laplace’s Demon because it seems to me that it might be possible to treat Omega as adversarial in the No Free Lunch sense: that any decision theory can be “broken” by some sufficiently perverse situation, once we make the paradoxical assumption that our agent has unlimited computing resources, our adversary has unlimited computing resources, and we can reason perfectly about each other (i.e., that Omega is Laplace’s Demon, but we can reason about Omega).
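A toy sketch of that worry (a deliberately perverse setup, not a claim about any particular decision theory): if the adversary can simulate the agent’s deterministic policy before fixing the payoffs, then whatever the agent picks can be made the losing option.

```python
def omega(agent_policy, options):
    """Omega as Laplace's Demon: run the agent's policy perfectly in advance,
    then rig the payoffs against whatever it is going to choose."""
    predicted = agent_policy(options)        # perfect prediction of the agent
    payoffs = {o: 1 for o in options}
    payoffs[predicted] = -1                  # punish exactly the predicted choice
    return payoffs

def my_decision_theory(options):
    return sorted(options)[0]                # stand-in for any deterministic rule

print(omega(my_decision_theory, ["one-box", "two-box"]))
# {'one-box': -1, 'two-box': 1} -- and the same trick works for any other rule.
```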
Easier to solve the idealized general case first, so that you know what a solution even looks like, then adapt that to the real world with its many caveats.
And is that something that has happened, or will happen, or should happen? Well, it’s not something that has happened: the best AI isn’t a cut-down ideal system.
There are more benefits to MIRI’s approach, such as: if you find that a certain kind of system cannot exist even in an ideal environment (such as a general Halting tester), you don’t need to muck around with “implementable” solutions…
Negative results could be valuable but are far from guaranteed.