Eh, I don’t think I count as a luminary, but thanks :-)
Aaronson’s crediting me is mostly due to our exchanges on the blog for his paper/class about philosophy and theoretical computer science.
One of them was about Newcomb’s problem, where my main criticisms were:
a) that he’s overstating the level and kind of precision you would need when measuring a human for prediction; and
b) that the interesting philosophical implications of Newcomb’s problem follow from already-achievable predictor accuracies (a quick expected-value sketch below).
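For (b), the arithmetic I have in mind is just the usual evidential expected-value calculation (a toy sketch with the standard $1,000,000 / $1,000 payoffs; whether this is even the right calculation is of course the whole CDT-vs-EDT fight):

```python
def expected_value(p, one_box):
    """Evidential expected value in the standard Newcomb setup:
    the opaque box holds $1,000,000 iff the predictor (accuracy p)
    predicted one-boxing; the transparent box always holds $1,000."""
    if one_box:
        return p * 1_000_000                 # predictor right -> full opaque box
    return p * 1_000 + (1 - p) * 1_001_000   # predictor wrong -> both payouts

for p in (0.5, 0.501, 0.55, 0.7, 0.9):
    print(p, expected_value(p, True), expected_value(p, False))
```

The crossover is at p ≈ 0.5005, so a predictor that beats a coin flip by a fraction of a percent already makes one-boxing the higher-EV choice, far short of the measurement precision at issue in (a).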
The other was about average-human performance on 3SAT, where I was skeptical that the average person actually notices global symmetries like the pigeonhole principle. (And, to a lesser extent, whether the order in which you stack objects affects their height...)
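To make the "global symmetry" point concrete, here is a minimal sketch (my own illustrative encoding, not anything from the actual exchange) of the classic pigeonhole CNF family: someone who spots the symmetry sees instantly that "n+1 pigeons, n holes, one pigeon per hole" can't be satisfied, while blind search (and resolution-based SAT solvers) famously takes exponential time on it. Strictly it's CNF rather than 3SAT, since the "at least one hole" clauses are wide, but the point survives the standard conversion.

```python
from itertools import combinations

def pigeonhole_cnf(n):
    """CNF asserting 'n+1 pigeons fit into n holes, one per hole' (unsatisfiable).

    Variable var(i, j) is true iff pigeon i sits in hole j. Clauses are
    DIMACS-style lists of signed ints: positive = variable, negative = negation."""
    def var(i, j):              # pigeons i in 0..n, holes j in 0..n-1
        return i * n + j + 1

    clauses = []
    # Each of the n+1 pigeons occupies at least one of the n holes.
    for i in range(n + 1):
        clauses.append([var(i, j) for j in range(n)])
    # No hole holds two pigeons.
    for j in range(n):
        for i, k in combinations(range(n + 1), 2):
            clauses.append([-var(i, j), -var(k, j)])
    return clauses

if __name__ == "__main__":
    cnf = pigeonhole_cnf(4)     # 5 pigeons, 4 holes: obviously UNSAT to a human
    print(len(cnf), "clauses")
```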