Even if all you have is a bunch of stuff and learned heuristics, you should be able to make testable predictions with them. Otherwise, how can you tell whether they’re any good or not?
Whether the evidence that persuaded you is sharable or not doesn’t affect this. For example, you might have a prior that a new psychotherapy technique won’t outperform a control because you’ve read like 30 different cases where a leading psychiatrist invented a new therapy technique, reported great results, and then couldn’t train anyone else to get the same results he did. That’s my prior, and I suspect it’s Eliezer’s, but if I wanted to convince you of it I’d have a tough time because there’s not really a single crux, just those 30 different cases that slowly accumulated. And yet, even though I can’t share the source of my belief, I can use it to make concrete testable predictions: when they do an RCT for the 31st therapy technique, it won’t outperform the control.
Geoff-in-Eliezer’s-anecdote has not reached this point. This is especially bad for a developing theory: if Geoff makes a change to CT, how will he tell whether the new CT is better or worse than the old one? Geoff-replying-to-Eliezer takes this criticism seriously, and says he can make concrete, if narrow, predictions about specific people he’s charted.
you should be able to make testable predictions with them
Certainly. But you might not be able to make testable predictions whose criteria for judgement others will readily agree with. In the exchange, Geoff gives some “evidence”, and in other places he gives additional “evidence”. It’s not really convincing to me, but it at least has the type signature of evidence. Eliezer responds:
Which sounds a lot like standard cognitive dissonance theory
This elides that Geoff probably has significant skill in identifying more detail of how beliefs and goals interact, beyond what someone would know just from hearing about cognitive dissonance theory. Basically I’m saying that if Eliezer sat with Geoff for a few hours through a few sessions of Geoff doing his thing with some third person, Eliezer would see Geoff behave in a way that suggests falsifiable understanding that Eliezer doesn’t have. (Again, not saying he should have done that or anything.)