Some very general remarks:

You’re missing the point, which is that we need to act: we need to use the information we have as best we can in order to achieve ‘the greatest good’. (The question of what ‘the greatest good’ means is non-trivial, but it’s orthogonal to present concerns.)
The agent chooses an action, and then, depending on the state of the world, the effects of the action are ‘good’ or ‘bad’. Here, the expression “the state of the world” covers both contingent facts about ‘how things are’ and the ‘natural laws’ describing how present causes have future effects.
Now, one very broad strategy for answering the question “what should we do?” is to try to break it down as follows:
1. We assign ‘weights’ p[i] to a wide variety of different ‘states of the world’, to represent the incomplete (but real) information we have thus far acquired.
2. For each such state, we calculate the effects that each of our actions a[j] would have, and assign ‘weights’ u[i,j] to the outcomes to represent how desirable we think they are.
3. We choose the action a[j] for which Sum(over i) p[i] * u[i,j] is maximized.
As a matter of terminology, we refer to the weights in step 1 as “probabilities” and those in step 2 as “utilities”.
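To make steps 1–3 concrete, here is a minimal sketch in Python. The probabilities p and the utility table u are simply assumed as inputs; nothing in the sketch says where they come from, which is of course the hard part.

```python
# Minimal sketch of the decision procedure above: pick the action a[j]
# that maximizes the expected utility Sum(over i) p[i] * u[i,j].
# p and u are assumed to be given.

def best_action(p, u):
    """p[i]   : probability weight of state i (the p's sum to 1)
       u[i][j]: utility of the outcome of action j in state i
       Returns the index j of the action with highest expected utility."""
    n_actions = len(u[0])
    expected = [sum(p[i] * u[i][j] for i in range(len(p)))
                for j in range(n_actions)]
    return max(range(n_actions), key=lambda j: expected[j])

# Toy example: two states of the world, two available actions.
p = [0.7, 0.3]
u = [[10, 0],    # utilities of actions 0 and 1 if state 0 obtains
     [-5, 2]]    # utilities of actions 0 and 1 if state 1 obtains
print(best_action(p, u))  # -> 0, since 0.7*10 + 0.3*(-5) = 5.5 > 0.6
```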
Here’s an important question: “To what extent is the above procedure inevitable if we are to make rational decisions?”
The standard Lesswrong ideology here is that the above procedure (supplemented by Bayes’ theorem for updating ‘probability weights’) is absolutely central to ‘rationality’ - that any rational decision-maker must be following it, whether explicitly or implicitly.
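For a discrete collection of hypotheses, the updating step just mentioned amounts to multiplying each hypothesis’s weight by the likelihood it assigned to the observed evidence and then renormalizing. A minimal sketch (the numbers are illustrative assumptions, not anything from this thread):

```python
# Bayes' theorem applied to a finite list of hypotheses:
# posterior[i] is proportional to prior[i] * P(evidence | hypothesis i).

def bayes_update(prior, likelihood):
    """prior[i]      : current probability weight of hypothesis i
       likelihood[i] : probability hypothesis i assigned to the evidence
       Returns the renormalized posterior weights."""
    unnormalized = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnormalized)
    return [w / total for w in unnormalized]

print(bayes_update([0.5, 0.5], [0.9, 0.1]))  # -> [0.9, 0.1]
```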
It’s important to understand that Lesswrong’s discussions of rationality take place in the context of ‘thinking about how to design an artificial intelligence’. One of the great virtues of the Bayesian approach is that it’s clear what it would mean to implement it, and we can actually put it into practice on a wide variety of problems.
Anyway, if you want to challenge Bayesianism then you need to show how it makes irrational choices. It’s not sufficient to present a philosophical view under which assigning probabilities to theories is itself irrational, because that’s just a means to an end. What matters is whether an agent makes clever or stupid decisions, not how it gets there.
And now something more specific:
The one I commented on, in which “hypotheses that assigned a higher likelihood to that evidence, gain probability”, does not get them above infinitesimal probability or do anything very useful.
No-one but you ever assumed that the hypotheses would begin at infinitesimal probability. The idea that we need to “assign probabilities to infinite sets in the way [benelliot] mention[ed]” is so obvious and commonplace that you should assume it even if it’s not actually spelled out.
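For concreteness, here is one common scheme for doing that (a sketch only, and not necessarily the exact scheme benelliot had in mind): order the countably many hypotheses, say from simpler to more complex, and give the n-th one weight 2^-(n+1). Every hypothesis then gets a strictly positive, non-infinitesimal probability, and the weights still sum to 1.

```python
# Sketch of one standard prior over countably many hypotheses:
# weight the n-th hypothesis (n = 0, 1, 2, ...) by 2^-(n+1).
# The first k weights sum to 1 - 2^-k, so the whole series sums to 1,
# and every individual hypothesis gets a positive real weight.

def prior(n):
    return 2.0 ** -(n + 1)

print(sum(prior(n) for n in range(50)))  # -> just under 1 (namely 1 - 2**-50)
```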
In your theory, do the probabilities of the infinitely many theories add up to 1?
Does increasing their probabilities ever change the ordering of theories which assigned the same probability to some evidence/event?
If all finite sets of evidence leave infinitely many theories unchanged in ordering, then would we basically be acting on the a priori conclusions built into our way of assigning the initial probabilities?
If we were, would that be rational, in your view?
And do you have anything to say about the regress problem?