At first blush this looks like a success story, but it’s not. I was only able to catch the mistake because I had a bunch of background knowledge about the state of the world. If I didn’t already know mid-millennium China was better than Europe at almost everything (and I remember a time when I didn’t), I could easily have drawn the wrong conclusion about that claim. And following a procedure that would catch issues like this every time would take much more time than ESCs currently get.
Re this particular point, I guess one thing you might be able to do is to check arguments, as opposed to statements of fact. Sometimes, one can evaluate whether arguments are valid even when one isn’t too knowledgeable about the particular topic. I previously did some work on argument-checking of political debates. (Though the rationale for that wasn’t that argument-checking can require less knowledge than fact-checking, but rather that fact-checking of political debates already exists, whereas argument-checking does not.)
I never did any systematic epistemic spot checks, but if a book contains a lot of arguments that appear fallacious or sketchy, I usually stop reading it. I guess that’s related.
I guess one thing you might be able to do is to check arguments, as opposed to statements of fact
First, let me say I think that would be interesting to experiment with. But the reasons to be dubious are more interesting, so I’m going to spend more time on those.
This can definitely rule people out. I don’t think it can totally rule people in, because there’s always a risk someone made a valid argument based on faulty assumptions. In fact this is a large, sticky genre that I’m very worried about.
But assuming that was solved, there’s something I find harder to express that might be at the core of why I’m doing this… I don’t want to collect a bunch of other people’s arguments I can apply as tools, and be confused if two of them conflict. I want a gears-level model of the world such that, if I were left with amnesia on an intellectual deserted island, I could re-derive my beliefs. Argument-checking, as I conceive of it now, does more of the former. I can’t explain exactly why, or exactly what I’m picturing when I say argument-checking, or what kind of amnesia I mean, but there’s something there. My primary interest with argument-checking would be to find a way of engaging with arguments that develops that amnesia-proof knowledge.
I agree that the problem of valid arguments based on bad assumptions is a sticky one. I also agree with the gears-level world-model objective.
My view on argument-checking: if we eschew it, how can we detect how much noise poor arguments are generating? The clearest way to handle this seems to be to treat the arguments as a separate information channel; otherwise it will be difficult to identify the presence or absence of value with any confidence.
This is a good point. I think the epistemic ability to predict and evaluate arguments independently of the truth of the conclusion is something we want to heavily select for and reward; see, e.g., Eliezer’s writing on that here.
If Elizabeth is interested, I’m definitely interested in funding and experimenting with prediction markets on argument validity for the next round of amplifying epistemic spot checks.