It is completely not about being more or less impressive.
Care to elaborate? Because otherwise I can say “it totally is!”, and we leave it at that.
If you can throw out the fMRI data and get better predictive power, something is wrong with the fMRI data.
Absolutely not. You can always add the two and get even more predictive power. Notice in particular that the algorithm Scott uses looks at past entries in the button-pressing game, while the fMRI data concerns only the incoming entry. They are two very different kinds of prior information, and of course they have different predictive power. It doesn’t mean that one or the other is wrong.
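To make the contrast concrete, here is a minimal sketch of a past-entries predictor for a two-button (f/d) pressing game, in the general style of the algorithm mentioned above — an illustration I made up, not Scott’s actual code. It guesses the next press from the frequency of what followed the last few presses so far.

```python
# Sketch of a past-entries predictor for a two-button (f/d) game.
# Hypothetical illustration only, not Scott's actual implementation.
from collections import defaultdict

def predict_sequence(presses, k=4):
    """Predict each press from the preceding k presses; return how many
    predictions were correct."""
    counts = defaultdict(lambda: {"f": 0, "d": 0})  # context -> next-press tallies
    correct = 0
    for i, press in enumerate(presses):
        context = presses[max(0, i - k):i]
        seen = counts[context]
        guess = "f" if seen["f"] >= seen["d"] else "d"  # ties default to "f"
        if guess == press:
            correct += 1
        seen[press] += 1  # learn from this trial after predicting it
    return correct

# Human "random" sequences tend to over-alternate, which a model
# like this can exploit:
seq = "fdfdffdfdfddfdfdfdffdfdfd"
print(predict_sequence(seq), "correct out of", len(seq))
```

The point is only that such a predictor conditions on the *history* of entries, whereas the fMRI signal is per-trial information about the incoming entry; the two are different kinds of evidence.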
The fMRI results are not relevant, because quantum effects in the brain are noise on an fMRI.
That’s exactly the point: if (and I reckon it’s a big if) the noise is irrelevant to predicting a person’s behaviour, then in the limit it is irrelevant to the uploading/emulation process.

This is what the Libet-like experiments show, and the fact that with very poor prior information, like an fMRI, a person can be predicted with 60% accuracy four seconds in advance is, to me, a very strong indication in that direction — but it is not for the author (who reduces the argument to a question of impressiveness, which is why those experiments are not a direct refutation).

As far as I can see, the correct conclusion should have been: the experiments show that it is possible to aggregate high-level neuronal data, very far from quantum noise, to predict people’s behaviour in advance with better-than-chance accuracy. This shows that, at least for this kind of task, quantum irreproducible noise is not relevant to the emulation or free-will problem. Of course, nothing excludes (but at the same time, nothing warrants) that different kinds of phenomena will emerge in higher-resolution experiments.
Care to elaborate? Because otherwise I can say “it totally is!”, and we leave it at that.
Basically, signals take time to travel. If the lead is ~0.1 s, then predicting the press that much earlier is just the statement that your computer has faster wiring.
However, if it is a minute earlier, we are forced to consider the possibility—even if we don’t want to—that something contradicting classical ideas of free will is at work (though we can’t throw out travel and processing time either).
It is completely not about being more or less impressive.
Care to elaborate? Because otherwise I can say “it totally is!”, and we leave it at that.
That’s why it wasn’t the entirety of my comment. Sigh.
Absolutely not. You can always add the two and get even more predictive power.
This is plainly wrong, as any Bayesian-minded person will know. P(X|A, B) = P(X|A) is not a priori forbidden by the laws of probability.
Saying “absolutely not” when nobody’s actually done the experiment yet (AFAIK) is disingenuous.
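To see concretely that extra data can carry zero additional predictive power, here is a toy example (all distributions invented for illustration): a variable B generated independently of X satisfies P(X|A, B) = P(X|A) exactly.

```python
# Toy joint distribution over (X, A, B) in which B is generated
# independently of X, so conditioning on B adds nothing.
# All numbers are invented for illustration.
from itertools import product

p_a = {0: 0.5, 1: 0.5}           # P(A = a)
p_b = {0: 0.3, 1: 0.7}           # P(B = b), independent of X and A
p_x1_given_a = {0: 0.2, 1: 0.9}  # P(X = 1 | A = a)

# Full joint: P(x, a, b) = P(a) * P(b) * P(x | a)
joint = {
    (x, a, b): p_a[a] * p_b[b] * (p_x1_given_a[a] if x else 1 - p_x1_given_a[a])
    for x, a, b in product((0, 1), repeat=3)
}

def p_x1(**given):
    """P(X = 1 | given), by summing the joint over the unfixed variables."""
    match = lambda a, b: all(v == {"a": a, "b": b}[k] for k, v in given.items())
    num = sum(p for (x, a, b), p in joint.items() if x == 1 and match(a, b))
    den = sum(p for (x, a, b), p in joint.items() if match(a, b))
    return num / den

print(round(p_x1(a=1), 12))       # conditioning on A alone
print(round(p_x1(a=1, b=0), 12))  # adding B: identical value
```

Whether the fMRI signal is more like A (informative) or more like B (redundant given the button-press history) is exactly the empirical question at issue.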
Of course, nothing excludes (but at the same time, nothing warrants) that different kinds of phenomena will emerge in higher-resolution experiments.
If you actually believe this, then this conversation is completely pointless, and I’m annoyed that you’ve wasted my time.