Look at how robot controllers are implemented, look at the real theories, and observe that treating copies as extra servos is a trivial change and that it works. It also works when the copies are not full copies and can distinguish themselves from each other. Also, re-learn that values in a theory are theoretical and are not homologous to the underlying physical implementation; it is of no more interest that the action A is present in N physically independent systems than that the action A is a real number while the hardware uses floating-point binary.
Philosophers have a tendency to pick some random minor implementation detail and manufacture some sort of philosophical problem out of it. For example, the world may be deterministic, a minor implementation detail, and the philosophers go “where’s my free will?”. It is the exact same thing with decision theories. The same theoretical action variable represents several different objects: that could be two robot arms wired in parallel, or two controllers with identical state each wired to its own robot arm. Everything works the same, but for the latter the philosophers go “where’s my causality?”. Never mind that physics is reversible at the fundamental level, and that for everyone else the notion of causality is just a cognitive tool.
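A minimal sketch of that point in code, with made-up names (choose_action, run_controller and the payoff table are illustrative, not anything from the discussion): the decision procedure optimizes one action variable, and nothing in it depends on whether that variable ends up driving two arms wired to one controller or two identical controllers each driving its own arm.

```python
# Illustrative sketch only; names and the payoff table are made up.
# The theory has one action variable A; nothing below depends on whether
# A is executed by two arms wired in parallel to one controller or by
# two physically separate controllers running identical code.

def choose_action(world_state, candidate_actions, utility):
    """Pick the action A that maximizes utility; the argmax does not care
    how many physical systems will end up executing A."""
    return max(candidate_actions, key=lambda a: utility(world_state, a))

def run_controller(world_state):
    actions = ["extend", "retract", "hold"]
    utility = lambda state, a: state.get(a, 0.0)  # placeholder payoff table
    return choose_action(world_state, actions, utility)

# Two controllers with identical state compute the same thing, so the
# "copy" is just a second call to the same function.
state = {"extend": 1.0, "retract": 0.2, "hold": 0.5}
arm_1 = run_controller(state)
arm_2 = run_controller(state)  # the copy, physically independent
assert arm_1 == arm_2          # indistinguishable from two arms wired in parallel
```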
This reminds me of the debate between programmers who want to design an elegant system that accomplishes all the desired functions as consequences of a fundamentally simple design, and the programmers who just want to make it work and ship. Depending on the problem you’re solving, and the constraints you’re working under, I think either approach can be appropriate. Peter Norvig’s sudoku solver is in the “elegant” school, but if I were writing one from scratch, I’d do better to build something ugly and keep testing it until it seemed reliable.
I’m sorta leaning toward the “natural and elegant” approach for decision theories, since they’d have to face unknown new challenges without breaking, but patching CDT with cybernetics and such might work as well.
More to the point, solving some of these problems exactly may well be NP-complete. But what do we and evolution do in practice, when we have to solve the problem and throwing up our hands is not an option? We, and it, use a numerical approximation which works pretty darned well. Worse is, in fact, better.
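As a toy illustration of that trade-off (the comment names no specific problem, so 0/1 knapsack stands in here, and all the names are made up): the exact problem is NP-hard, but a cheap greedy heuristic is usually close enough in practice.

```python
# Toy stand-in example: 0/1 knapsack is NP-hard to solve exactly, but a
# greedy value-density heuristic is cheap and usually close enough.

def greedy_knapsack(items, capacity):
    """items: list of (value, weight) pairs. Returns an approximate best subset."""
    chosen, total_weight = [], 0
    # Take items in decreasing order of value per unit of weight.
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if total_weight + weight <= capacity:
            chosen.append((value, weight))
            total_weight += weight
    return chosen

items = [(60, 10), (100, 20), (120, 30)]
print(greedy_knapsack(items, 50))  # total value 160; the exact optimum here is 220
```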
This reminds me of the debate between programmers who want to design an elegant system that accomplishes all the desired functions as consequences of a fundamentally simple design, and the programmers who just want to make it work and ship. Depending on the problem you’re solving, and the constraints you’re working under, I think either approach can be appropriate.
I think the resemblance is only superficial. There is nothing inelegant in treating two robotic arms wired in parallel and controlled by the same controller in the same way regardless of whether that controller is the same ‘real physical object’, especially considering that we live in a world where, if you have two electrons (or two identical anything), their being separate objects is purely in the eye of the beholder.
The whole point is that you abstract out inelegant details such as whether the identical controllers are physically one system or not. This abstraction is not at odds with mathematical elegance; it is the basis for mathematical elegance. It is, however, at odds with philosophical compactness-by-confusion. This abstraction does not allow for a notion of causality that has been oversimplified to the point of irrelevance.
I’m not sure if you’re aware that my interest in these problems is mostly philosophical to begin with. For example, I wrote the post that is the first link in my list back in 1997, when I had no interest in AI at all, but was thinking about how humans would deal with probabilities when mind copying becomes possible in the future. Do you object to philosophers trying to solve philosophical problems in general, or just to AI builders making use of philosophical solutions or thinking like philosophers?
Philosophical thinking is usually done in terms of concepts that are later found to be irrelevant (or which are known to be irrelevant to begin with). What I object to is the philosopher’s arrogance, in the form of a gross overestimate of the relevance of philosophical ‘problems’ and philosophical ‘solutions’ to anything.
If the philosophical notion of causality has a problem with abstracting away irrelevant low-level details of how a manipulator is controlled, that is a problem with the philosophical notion of causality, not a problem with the design of intelligent systems. Philosophy seems to be a failure mode of intelligences that is incredibly difficult to avoid: the intelligence fails to establish relevant concepts and proceeds to reason in terms of faulty ones.
What’s your opinion of von Neumann and Morgenstern’s work on decision theory? Do you also consider it to be “later found irrelevant”, or do you consider it an exception to “usually”? Or do you not consider it to be “philosophical”? What about philosophical work on logic (e.g., Aristotle’s first steps towards formalizing correct reasoning)?
The parallels to them seem to be a form of ‘but many scientists were ridiculed’. The times when philosophy is useful seem to be restricted to building high-level concepts out of lower-level concepts and adapting high-level concepts to stay relevant, rather than a top-down process that starts from something potentially very confused.
What we see with causality is that in common discourse it is normal to say things like ‘because X, Y’. ‘Because the algorithm returned one box, the predictor predicted one box and the robot took one box’ is a perfectly normal, valid statement. It’s probably the only kind of causality there could be, given that there is no physical law of causality. The philosophers take that notion of causality, confuse it with some properties of the world, and end up having ‘does not compute’ moments about particular problems like Newcomb’s.
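A toy version of that statement, purely as an illustration (none of these names come from the discussion): if the predictor predicts by running the same decision algorithm the robot runs, then ‘the algorithm returned one box, so the predictor predicted one box and the robot took one box’ is one fact about one function, not a paradox.

```python
# Toy Newcomb setup, purely illustrative: the predictor predicts by running
# the same decision algorithm the robot runs, so the algorithm's output,
# the prediction, and the action are one and the same fact.

def decision_algorithm():
    # The robot's policy; returning "one-box" here is what makes the
    # predictor fill the opaque box.
    return "one-box"

def predictor():
    # The predictor just runs the same algorithm.
    return decision_algorithm()

def payoff(action, prediction):
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    return opaque_box if action == "one-box" else opaque_box + 1_000

prediction = predictor()
action = decision_algorithm()
print(action, prediction, payoff(action, prediction))  # one-box one-box 1000000
```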