I’m not sure if you’re aware that my interest in these problems is mostly philosophical to begin with. For example, I wrote the post that is the first link in my list in 1997, when I had no interest in AI at all but was thinking about how humans would deal with probabilities once mind copying becomes possible. Do you object to philosophers trying to solve philosophical problems in general, or just to AI builders making use of philosophical solutions or thinking like philosophers?
Philosophical thinking is usually done in terms of concepts that are later found to be irrelevant (or which are known to be irrelevant to begin with). What I object to is philosophers’ arrogance in the form of a gross overestimate of the relevance of philosophical ‘problems’ and philosophical ‘solutions’ to anything.
If the philosophical notion of causality has a problem with abstracting away the irrelevant low-level details of how a manipulator is controlled, that is a problem with the philosophical notion of causality, not a problem with the design of intelligent systems. Philosophy seems to be an incredibly difficult-to-avoid failure mode of intelligences, whereby the intelligence fails to establish relevant concepts and proceeds to reason in terms of faulty ones.
What’s your opinion of von Neumann and Morgenstern’s work on decision theory? Do you also consider it to be “later found irrelevant”, or do you consider it an exception to “usually”? Or do you not consider it to be “philosophical”? What about philosophical work on logic (e.g., Aristotle’s first steps toward formalizing correct reasoning)?
The parallels to them seem to be a form of ‘but many scientists were ridiculed’. The times when philosophy is useful seem to be restricted to building high-level concepts out of lower-level concepts and adapting high-level concepts to stay relevant, rather than the top-down process of starting from something potentially very confused.
What we see with causality is that in common discourse it is normal to say things like ‘because X, Y’. ‘Because the algorithm returned one box, the predictor predicted one box and the robot took one box’ is a perfectly normal, valid statement. It’s probably the only kind of causality there could be, lacking any physical law of causality. Philosophers take that notion of causality, confuse it with some properties of the world, and end up having ‘does not compute’ moments about particular problems like Newcomb’s.
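To make the point concrete, here is a minimal sketch (in Python, with hypothetical function names) of the algorithmic reading of Newcomb’s problem described above: the predictor and the robot both evaluate the same decision algorithm, so the algorithm’s output is the ‘because’ behind both the prediction and the action, with no backward-in-time physical causation required.

```python
# Hypothetical sketch: both the predictor and the robot run the same
# decision algorithm, so its output determines both the prediction and
# the action taken.

def decision_algorithm():
    """The agent's decision procedure: returns the number of boxes to take."""
    return 1  # this algorithm one-boxes

def predictor():
    """The predictor simulates the agent's algorithm to form its prediction."""
    return decision_algorithm()

def robot():
    """The robot also evaluates the same algorithm when it acts."""
    return decision_algorithm()

prediction = predictor()
action = robot()

# "Because the algorithm returned 1, the predictor predicted one box and
# the robot took one box" — both are statements about the same algorithmic
# fact, not about physics running backward in time.
assert prediction == action == 1
print(f"predicted boxes: {prediction}, boxes taken: {action}")
```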