Another way of saying it is that human beings can solve any problem that can be solved. Does that help?
What about the problem of building pyramids on Alpha Centauri by 2012? We can’t, but aliens living there could.
More pressingly, though, I don’t see why this is important. Have we been basing our arguments on an assumption that there are problems we can’t solve? Is there any evidence that we can solve all problems without access to arbitrarily large amounts of computational power? Something like AIXI can solve pretty much anything, but not in any relevant sense, since it isn’t computable.
That would seem much, much more difficult than creating a fully universal machine.
How about a neural network that can’t learn XOR?
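For concreteness, a minimal sketch of the standard example, the single-layer perceptron: a lone linear threshold unit can represent AND but not XOR, because XOR is not linearly separable. The brute-force weight grid below is just an illustrative check, not a proof.

```python
# Illustrative sketch: a single linear threshold unit (the classic perceptron
# of the XOR example) can represent AND but not XOR, because XOR is not
# linearly separable. The weight grid is an arbitrary illustrative choice.

def fits(w1, w2, b, target):
    """True if sign(w1*x1 + w2*x2 + b) matches target on every Boolean input."""
    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        if out != target(x1, x2):
            return False
    return True

def representable(target):
    """Brute-force search for weights and bias on a coarse grid in [-2, 2]."""
    grid = [i / 4 for i in range(-8, 9)]
    return any(fits(w1, w2, b, target)
               for w1 in grid for w2 in grid for b in grid)

print("AND representable:", representable(lambda a, b: a & b))  # True
print("XOR representable:", representable(lambda a, b: a ^ b))  # False
```

The usual fix is a hidden layer; the point here is only that an agent can be designed that learns some things but not others.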
When faced with a problem, try to come up with conjectural explanations that solve it, and then criticise them until you find one (and only one) that cannot be knocked down by any known criticism.
The manner in which explanations are knocked down seems under-specified, if you’re not doing Bayesian updating.
Are you looking for an explanation of how we generate “explanations”? Again, unsolved problem.
Nope, I just don’t know what in particular you mean by ‘explanation’. I know what the word means in general, but not your specific conception.
I just don’t think probability has a role in the realm of epistemology.
Well, that’s different from there being no such thing as a probability that a theory is true: your initial assertion implied that the concept wasn’t well defined, whereas now you just mean it’s irrelevant. Either way, you should probably produce some actual arguments against Jaynes’s conception of probability.
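To make that concrete, here is a minimal sketch, with made-up numbers, of what a “probability that a theory is true” cashes out to on Jaynes’s account: a prior plausibility, likelihoods for a piece of evidence under the theory and under its negation, and Bayes’ theorem giving the posterior.

```python
# Minimal sketch with made-up numbers: updating the plausibility of a theory T
# on a piece of evidence E via Bayes' theorem.

prior = 0.30            # P(T): plausibility assigned to the theory beforehand
p_e_given_t = 0.80      # P(E | T): how strongly T predicts the evidence
p_e_given_not_t = 0.20  # P(E | ~T): how expected the evidence is otherwise

p_e = p_e_given_t * prior + p_e_given_not_t * (1 - prior)  # P(E), total probability
posterior = p_e_given_t * prior / p_e                      # P(T | E), Bayes' theorem

print(f"P(T)     = {prior:.2f}")       # 0.30
print(f"P(T | E) = {posterior:.3f}")   # ~0.632: confirming evidence raises plausibility
```

Whether those numbers can be assigned non-arbitrarily is exactly what’s in dispute, but the concept itself is well defined.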
Meta: You want to reply directly to a post, not its descendants, or the other person won’t get a notification. I only saw your post via the Recent Posts list.
Also, it’s no good telling people that they can’t use evidence to support their position because it contradicts your theory when the other people haven’t been convinced of your theory.
The manner in which explanations are knocked down seems under-specified, if you’re not doing Bayesian updating.
Criticism enables us to see flaws in explanations. What is under-specified about finding a flaw?
On your approach, you need to come up with criticisms and also with probabilities associated with those criticisms. Criticisms of real-world theories can be involved and complex. Isn’t it enough to expose a flaw in an explanatory theory? Must one also go to the trouble of calculating probabilities, a task that is surely fraught with difficulty for any realistic idea of criticism? You’re adding a huge amount of auxiliary theory, and your evaluation is then also dependent on the truth of all that auxiliary theory.
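To illustrate the extra machinery being objected to (the numbers below are pure assumptions): on the Bayesian picture each criticism would have to be turned into a likelihood ratio, and the theory’s odds multiplied through them; estimating those ratios for realistic, complex criticisms is the part that looks fraught.

```python
import math

# Sketch with invented numbers: each criticism is treated as evidence E_i with a
# likelihood ratio P(E_i | T) / P(E_i | ~T) < 1, i.e. the observation counts
# against the theory T. The odds on T get multiplied by each ratio in turn.

prior_odds = 1.0                      # start at P(T) = 0.5
likelihood_ratios = [0.5, 0.8, 0.3]   # three criticisms of varying force (assumed)

odds = prior_odds
for lr in likelihood_ratios:
    odds *= lr

posterior = odds / (1 + odds)
print(f"total log-odds shift: {math.log(odds / prior_odds):.2f}")  # about -2.12
print(f"P(T | criticisms)   = {posterior:.3f}")                    # about 0.107
```

The arithmetic is trivial; the contested step is where the ratios come from.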
I just don’t know what in particular you mean by ‘explanation’. I know what the word means in general, but not your specific conception.
My conception is the same as the general one.
You don’t seem to be actually saying very much then; is LW really short on explanations, in the conventional sense? Explanation seems well evidenced by the last couple of top-level posts. Similarly, do we really fail to criticise one another? A large number of the comments seem to be criticisms. If you’re essentially criticising us for not having learnt rationality 101, the sort of rationality you learn as a child of 12 arguing against god, then obviously it would be a problem if we didn’t bear that stuff in mind. But without providing evidence that we succumb to these faults, it’s hard to see what the problem is.
Your other points, however, are substantive. If humans could solve any problem, if it were impossible to design an agent which could learn some but not all things, or if confirmation didn’t increase subjective plausibility, these would be important claims.