I feel that his rebuttal of the Libet-like experiments (section 2.12) is strikingly weak, exactly where it should have been one of his strongest points. Scott says:
My own view is that the quantitative aspects are crucial when discussing these experiments.
What? Just because predicting human behaviour one minute before it happens with 99% accuracy is more impressive, that doesn’t mean it involves a different kind of process than predicting human behaviour 5 seconds before with 60% accuracy.
Admittedly, it might imply a different kind of process, maybe even an unachievable or uncomputable one, but it may also just be a matter of better probes and more computational power.
Lack of impressiveness is not a refutation at all.
Also
So better-than-chance predictability is just too low a bar for clearing it to have any relevance to the free-will debate.
This is plainly wrong, as any Bayesian-minded person will know: it all depends on the prior information you are using.
Predicting with 99.99% accuracy that any person, faced with the dilemma of tasting a pleasant cake or receiving a kick in the teeth (or, to stay with the Portal metaphor, being burned alive), will choose the cake is clearly not relevant to the free-will debate. At the same time, predicting what the next choice in pressing the button will be, exclusively from neurological data (and very coarsely aggregated data, as is the case with fMRI), with 60% accuracy, is in direct contrast with the Knightian unpredictability thesis.
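To put rough numbers on that difference (just a back-of-envelope sketch, treating the predictor as symmetric and writing H for the binary entropy in bits): for a 50/50 button choice, a predictor that is right 60% of the time conveys about 1 - H(0.6) ≈ 1 - 0.971 ≈ 0.03 bits per trial about the upcoming choice, small but strictly more than nothing. For the cake-versus-kick choice the outcome is already about 99.99% certain before any measurement, so the total uncertainty is only H(0.9999) ≈ 0.0015 bits, and even a perfect predictor could add at most that much.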
Predicting with 99.99% accuracy that any person, faced with the dilemma of tasting a pleasant cake or receiving a kick in the teeth (or, to stay with the Portal metaphor, being burned alive), will choose the cake is clearly not relevant to the free-will debate.
Is there an existing principled explanation for why this is not relevant to the free will debate, but predicting less obvious behaviors is?
Because any working system evolved for self-preservation would do that. It doesn’t add a single bit of information, even though it’s a prediction with striking accuracy.
That seems to have already conceded the point by acknowledging that our behaviors are determined by systems. No?
It seems that the argument must be that some of our behaviors are determined and some are the result of free will—I’m wondering if there’s a principled defense of this distinction.
The way I see it is this: if pressing a button of your choice is not an expression of free will, then nothing is, because otherwise you can just say that free will determines whatever in the brain is determined by quantum noise, so that it becomes an empty concept. That said, it’s true that we don’t know very much about the inner workings of the brain, but I believe we know enough to say that it doesn’t store and use quantum bits for processing. But even before invoking that, Libet-like experiments directly link free will with available neuronal data: I’m not saying that it’s a direct refutation, but it is a possible direct refutation. My pet peeve is the author not acknowledging that conclusion, and instead saying that the experiments were not impressive enough to constitute a refutation of his claim.
It is completely not about being more or less impressive.
So better-than-chance predictability is just too low a bar for clearing it to have any relevance to the free-will debate.
This is plainly wrong, as any Bayesian-minded person will know: it all depends on the prior information you are using.
If you can throw out the fMRI data and get better predictive power, something is wrong with the fMRI data.
At the same time, predicting what the next choice in pressing the button will be, exclusively from neurological data (and very coarsely aggregated data, as is the case with fMRI), with 60% accuracy, is in direct contrast with the Knightian unpredictability thesis.
The fMRI results are not relevant, because quantum effects in the brain are noise on an fMRI. Aaronson explicitly locates any Knightian noise left in the system at the microscopic level; see the third paragraph under section 3.
TL;DR: 2.12 is about forestalling a bad counterargument (that being the heading of 2.12) and does not give evidence against Knightian unpredictability.
It is completely not about being more or less impressive.
Care to elaborate? Because otherwise I can say “it totally is!”, and we leave it at that.
If you can throw out the fMRI data and get better predictive power, something is wrong with the fMRI data.
Absolutely not. You can always add the two and get even more predictive power. Notice in particular that the algorithm Scott uses looks at past entries in the button-pressing game, while the fMRI data concern only the upcoming entry. They are two very different kinds of prior information, and of course they have different predictive power. It doesn’t mean that one or the other is wrong.
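To make the point concrete, here is a toy sketch in Python (entirely made-up numbers; it assumes that a history-based score and an fMRI-based score each carry some independent information about the upcoming press, and the variable names are hypothetical):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical model: the upcoming press is driven by a latent "intention" s in {-1, +1};
# the history-based score and the fMRI-based score are independent noisy readouts of it.
s = rng.choice([-1, 1], size=n)
history_score = s + rng.normal(0, 4, size=n)  # e.g. a score built from past entries
fmri_score = s + rng.normal(0, 4, size=n)     # e.g. a score built from the fMRI signal

def accuracy(score):
    # Predict the press from the sign of the score and compare with the truth.
    return np.mean(np.sign(score) == s)

print("history only :", accuracy(history_score))                # ~0.60
print("fMRI only    :", accuracy(fmri_score))                   # ~0.60
print("both combined:", accuracy(history_score + fmri_score))   # ~0.64

Under the assumption that the two signals are partly independent they do stack; how much they actually do for real subjects is of course an empirical question.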
The fMRI results are not relevant, because quantum effects in the brain are noise on an fMRI.
That’s exactly the point: if (and I reckon it’s a big if) the noise is irrelevant in predicting the behaviour of a person, then in the limit it’s irrelevant to the uploading/emulation process. This is what the Libet-like experiments show, and the fact that with very poor prior information, like an fMRI, a person can be predicted with 60% accuracy 4 seconds in advance is to me a very strong indication in that direction, but it is not for the author (who reduces the argument to an issue of impressiveness, and of why those experiments are not a direct refutation). As far as I can see, the correct conclusion should have been: the experiments show that it’s possible to aggregate high-level neuronal data, very far from quantum noise, and use it to predict people’s behaviour in advance with better-than-chance accuracy. This shows that, at least for this kind of task, quantum irreproducible noise is not relevant to the emulation or free-will problem. Of course, nothing excludes (but at the same time, nothing warrants) that different kinds of phenomena will emerge in higher-resolution experiments.
Care to elaborate? Because otherwise I can say “it totally is!”, and we leave it at that.
Basically, signals take time to travel. If it is ~0.1 s, then predicting it that much earlier is just the statement that your computer has faster wiring.
However, if it is a minute earlier, we are forced to consider the possibility—even if we don’t want to—that something contradicting classical ideas of free will is at work (though we can’t throw out travel and processing time either).
It is completely not about being more or less impressive.
Care to elaborate? Because otherwise I can say “it totally is!”, and we leave it at that.
That’s why it wasn’t the entirety of my comment. Sigh.
Absolutely not. You can always add the two and get even more predictive power.
This is plainly wrong, as any Bayesian-minded person will know. P(X|A, B) = P(X|A) is not a priori forbidden by the laws of probability.
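Concretely: if B happens to be conditionally independent of X given A (say, hypothetically, the fMRI signal carries nothing about the upcoming press that the past-entry statistics don’t already capture), then P(X|A, B) = P(X, B|A) / P(B|A) = P(X|A) P(B|A) / P(B|A) = P(X|A), so adding B would leave the prediction exactly where it was.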
Saying “absolutely not” when nobody’s actually done the experiment yet (AFAIK) is disingenuous.
Of course, nothing excludes (but at the same time, nothing warrants) that different kinds of phenomena will emerge in higher-resolution experiments.
If you actually believe this, then this conversation is completely pointless, and I’m annoyed that you’ve wasted my time.
What? Just because predicting human behaviour one minute before it happens with 99% accuracy is more impressive, that doesn’t mean it involves a different kind of process than predicting human behaviour 5 seconds before with 60% accuracy. Admittedly, it might imply a different kind of process, maybe even an unachievable or uncomputable one, but it may also just be a matter of better probes and more computational power.
So would you have been willing to draw the same conclusion from an experiment that predicted the button pushing 1 second before with 99.99999% probability by scanning the neurons in the arm?
As I said in another comment: no, because that doesn’t add information, since pushing the button = the neurons in the arm firing. The threshold is when the processing leaves the brain.