I’m a little surprised you haven’t commented on the randomization aspects of this model. As you’ve convincingly argued, if your intention is accurate prediction then you can’t improve your results by introducing randomness into your model. This model claims to improve its accuracy by introducing randomness in steps 2 and 4, which is a claim I am highly suspicious of after reading your sequence on the topic.
The model doesn’t incorporate randomness in the sense of saying “to predict the behavior of humans, roll a die and predict behavior X on a result of 1-3 and predict behavior Y on a result of 4-6”, which is what Eliezer was objecting to. Instead, it says there is randomness involved in the subjects it’s modeling, and says the behavior of the subjects can be best modeled using a certain (deterministically derived) probability distribution.
Instead, it says there is randomness involved in the subjects it’s modeling
Does it say that? I didn’t get the impression they were making that claim. It seems highly likely to be false if they are. They model changes in attentional focus as a random variable, but presumably those changes in attention are driven largely by complex events in the brain responding to complex features of the environment, not by random quantum fluctuation. They are using a random variable because the actual process is too complex to model and they have no simple better idea for how to model it than pure randomness.
Well, yes, “so complex and chaotic that you might as well call it random” is what I meant. That’s what’s usually meant by the term—the results of dice rolls aren’t mainly driven by quantum randomness either.
Complex yes, chaotic I doubt. I’m reasonably confident that there is some kind of meaningful pattern to attentional shifts that is correlated with features of the environment and that is adaptive to improve outcomes in our evolutionary environment. Randomness in this model reflects a lack of sufficient information about the environment or the process that drives attention rather than a belief that attention shifts do not have a meaningful correlation with the environment.
Depends on what you want to predict. I throw dice and have a model which says that number 5 is the result, deterministically. Now I will be right in 1⁄6 of cases. If I am rewarded for each correct guess, then by introducing randomness into the model I will gain nothing—this is what Eliezer was arguing for. But if I am rewarded for correctly predicting the distribution of results after many throws, any random model is clearly superior to the five-only one.
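The per-guess half of this argument can be checked directly: against a fair die, a uniformly random guesser is correct exactly as often as the five-only model, so the added randomness buys nothing. A quick sketch in Python (the fair-die probabilities are the only assumption):

```python
from fractions import Fraction

# P(roll = f) for any face f of a fair six-sided die.
p_roll = Fraction(1, 6)

# Model A: always guess 5. Correct exactly when the roll is 5.
p_correct_fixed = p_roll

# Model B: guess uniformly at random.
# P(correct) = sum over faces f of P(guess = f) * P(roll = f).
p_correct_random = sum(Fraction(1, 6) * p_roll for _ in range(6))

print(p_correct_fixed, p_correct_random)  # 1/6 1/6
```

Both models earn an expected 1⁄6 of the per-guess reward; they differ only on the many-throws distribution question.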
The random model is better than the five-only one but a non-random model that directly predicts the distribution would be better still. If your goal is to predict the distribution then a model that does so by simulating random dice throws is inferior to one that simply predicts the distribution.
And if you want to do both, i.e. predict both the individual throws and the overall distribution? The “model” which directly states that the distribution is uniform doesn’t say anything about the individual events. Of course we can have a model which says that the sequence will be e.g. 1 4 2 5 6 3 2 5 1 6 4 3 and then repeated, or that the sequence will follow the decimal expansion of pi. Both these models predict the distribution correctly, but they seem to be more complex than the random one, and moreover they can produce false predictions of correlations (like 5 always being preceded by 2 in the first case). Or do I misunderstand you somehow?
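The repeating-sequence point is easy to demonstrate: the sequence model reproduces the uniform distribution exactly, yet it also commits to a spurious regularity that a genuinely random die would not show. A small sketch (Python; the sequence is the one from the comment above):

```python
from collections import Counter
from itertools import cycle, islice

seq = [1, 4, 2, 5, 6, 3, 2, 5, 1, 6, 4, 3]
rolls = list(islice(cycle(seq), 6000))  # 500 repetitions of the sequence

# The repeating-sequence model matches the uniform distribution exactly:
# each face appears twice per cycle, so 1000 times in 6000 "throws".
counts = Counter(rolls)
print(counts[1], counts[5])  # 1000 1000

# ...but it also predicts a false correlation: 5 is always preceded by 2.
predecessors_of_5 = {rolls[i - 1] for i in range(1, len(rolls)) if rolls[i] == 5}
print(predecessors_of_5)  # {2}
```

So on distribution accuracy alone the two models tie, and the disagreement is about which is simpler and which makes fewer unwarranted side predictions.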
A model that uses a sequence is simpler than one that uses a random number, as anyone who has implemented a pseudo random number generator will tell you. PRNGs are generally either simple or good, rarely both.
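The “simple or good, rarely both” point can be illustrated with a textbook linear congruential generator (the constants below are the classic glibc-style ones; this is only a sketch, not an endorsement of LCGs): with a power-of-two modulus and odd multiplier and increment, the lowest output bit simply alternates, i.e. has period 2.

```python
from itertools import islice

def lcg(seed, a=1103515245, c=12345, m=2**31):
    # A very simple PRNG: x_{n+1} = (a * x_n + c) mod m.
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

# With m a power of two and a, c odd, the parity flips on every step,
# so the low bit is about as non-random as a bit can be.
low_bits = [x & 1 for x in islice(lcg(42), 16)]
print(low_bits)  # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```

Making that low bit behave well is exactly where the extra complexity of a good PRNG goes.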
Depends on what hardware you have got. With a computer that has access to some quantum system (decaying nuclei, spin measurements in orthogonal directions), there is no need to specify in a complicated way the meaning of “random”. Or, of course, there is no need for the randomness to be “fundamental”, whatever that means. You could just as well throw dice (though it would be a bit circular to use dice to explain dice, it seems all right to use dice as the random generator for making predictions in economics).
A hardware random number generator isn’t part of an algorithm, it’s an input to an algorithm. You can’t argue that your model is algorithmically simpler by replacing part of the algorithm with a new input.
So, should quantum mechanics be modified by removing the randomness from it?
Now, having a two-level spin system in state (|0> + |1>)/sqrt(2), QM says that the result of measurement is random, and so we’ll find the particle in state |1> with probability 1⁄2.
A modified QM would say that the first measurement reveals 1, the second (after recreating the original initial state, of course) 1, the third 0, etc., with sequence 110010010110100010101010010101011110010101...
I understand that you say that the second version of quantum mechanics would be simpler, and disagree.