I guess one thing you might be able to do is to check arguments, as opposed to statements of fact.
First, let me say I think that would be interesting to experiment with. But the reasons to be dubious are more interesting, so I’m going to spend more time on those.
This can definitely rule people out. I don’t think it can totally rule people in, because there’s always a risk someone made a sound argument based on faulty assumptions. In fact, this is a large, sticky genre that I’m very worried about.
But assuming that was solved, there’s something I find harder to express that might be at the core of why I’m doing this… I don’t want to collect a bunch of other people’s arguments I can apply as tools, and be confused if two of them conflict. I want a gears-level model of the world such that, if I was left with amnesia on an intellectual deserted island, I could re-derive my beliefs. Argument-checking, as I conceive of it now, does more of the former. I can’t explain exactly why, or exactly what I’m picturing when I say argument-checking, or what kind of amnesia I mean, but there’s something there. My primary interest with argument-checking would be to find a way to engage with arguments in a way that develops that amnesia-proof knowledge.
I agree that the problem of sound arguments based on bad assumptions is a sticky one. I also agree with the gears-level world model objective.
My view of argument-checking is: if we eschew it, how can we detect how much noise poor arguments are generating? It seems to me the clearest way of handling it is to treat the arguments as a separate information channel. Otherwise it will be difficult to identify the presence or absence of value with any confidence.