It seems to me like Eliezer is running a probabilistic strategy
It sounds like this describes every strategy? I guess you mean, he’s explicitly taking into account that he’ll make errors, and playing the probabilities to get good expected value. So this makes sense; like, I’m not saying he was making a strategic mistake by not, say, working with Geoff. I’m saying:
(internally) Well this is obviously wrong. Minds just don’t work by those sorts of bright-line psychoanalytic rules written out in English, and proposing them doesn’t get you anywhere near the level of an interesting cognitive algorithm.[...]
(out loud) What does CT say I should experience seeing, that existing cognitive science wouldn’t tell me to expect?
Geoff: (Something along the lines of “CT isn’t there yet”[...])
(out loud) Okay, then I don’t believe in CT because without evidence there’s no way you could know it even if it was true.
sounds like he’s conflating shareable and non-shareable evidence. Geoff could have seen a bunch of stuff and learned heuristics that he couldn’t express articulately except as silly-seeming “bright-line psychoanalytic rules written out in English”. Again, it can make sense to treat this as “for my purposes, equivalent to being obviously wrong”. But like, it’s not really equivalent; you just *don’t know* whether the person has hidden evidence.
Even if all you have is a bunch of stuff and learned heuristics, you should be able to make testable predictions with them. Otherwise, how can you tell whether they’re any good or not?
Whether the evidence that persuaded you is shareable or not doesn’t affect this. For example, you might have a prior that a new psychotherapy technique won’t outperform a control because you’ve read like 30 different cases where a leading psychiatrist invented a new therapy technique, reported great results, and then couldn’t train anyone else to get the same results he did. That’s my prior, and I suspect it’s Eliezer’s, but if I wanted to convince you of it I’d have a tough time because there’s not really a single crux, just those 30 different cases that slowly accumulated. And yet, even though I can’t share the source of my belief, I can use it to make concrete testable predictions: when they do an RCT for the 31st therapy technique, it won’t outperform the control.
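To make that concrete: here’s a minimal sketch, with entirely hypothetical numbers, of how 30 accumulated cases can cash out as a quantitative prediction about the 31st, using a simple Beta-Binomial update (Laplace’s rule of succession):

```python
# Illustrative only: how ~30 accumulated cases can cash out as a concrete prediction.
# Assumes a Beta(1, 1) (uniform) prior over the chance that a new therapy technique
# outperforms its control in an RCT; the tally below is hypothetical.

def posterior_predictive(successes: int, failures: int) -> float:
    """P(next technique beats its control), given past outcomes.

    With a Beta(1, 1) prior this is Laplace's rule of succession:
    (successes + 1) / (successes + failures + 2).
    """
    return (successes + 1) / (successes + failures + 2)

# Suppose 0 of the 30 remembered techniques outperformed their controls.
p_next = posterior_predictive(successes=0, failures=30)
print(f"P(31st technique beats control) ≈ {p_next:.2f}")  # ≈ 0.03
```

The point is just that a belief built from unshareable accumulated cases still pins down a number you can be wrong about.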
Geoff-in-Eliezer’s-anecdote has not reached this point. This is especially bad for a developing theory: if Geoff makes a change to CT, how will he tell if the new CT is better or worse than the old one? Geoff-replying-to-Eliezer takes this criticism seriously, and says he can make concrete, if narrow, predictions about specific people he’s charted.
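Those “concrete, if narrow, predictions” suggest one way the better-or-worse question could be answered, assuming (my assumption, not anything proposed in the exchange) that both versions of CT can be made to put probabilities on the same outcomes: score them side by side. A minimal sketch with invented names and numbers:

```python
# Hypothetical sketch: comparing two versions of a theory by scoring their
# probabilistic predictions about the same concrete outcomes (Brier score).
# All predictions and outcomes below are invented for illustration.

def brier_score(predictions: list[float], outcomes: list[int]) -> float:
    """Mean squared error between predicted probabilities and 0/1 outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(outcomes)

# Each column: P(some specific charted person does some specific thing), then whether it happened.
old_ct_predictions = [0.9, 0.8, 0.3, 0.7]
new_ct_predictions = [0.7, 0.9, 0.1, 0.8]
actual_outcomes    = [1,   1,   0,   1]

print(f"old CT: {brier_score(old_ct_predictions, actual_outcomes):.4f}")  # 0.0575
print(f"new CT: {brier_score(new_ct_predictions, actual_outcomes):.4f}")  # 0.0375
```

Nothing like this appears in the exchange; it’s just one way “how will he tell” could be made operational.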
you should be able to make testable predictions with them
Certainly. But you might not be able to make testable predictions whose criteria for judgement others will readily agree with. In the exchange, Geoff gives some “evidence”, and in other places he gives additional “evidence”. It’s not really convincing to me, but it at least has the type signature of evidence. Eliezer responds:
Which sounds a lot like standard cognitive dissonance theory
This is eliding that Geoff probably has significant skill in identifying more detail of how beliefs and goals interact, beyond just what someone would know if they heard about cognitive dissonance theory. Like basically I’m saying that if Eliezer sat with Geoff for a few hours through a few sessions of Geoff doing his thing with some third person, Eliezer would see Geoff behave in a way that suggests falsifiable understanding that Eliezer doesn’t have. (Again, not saying he should have done that or anything.)