3 seems like it would sneak unjustified premises into everything, and make it prohibitively expensive to challenge them. Philosophers already tried this, and all we got was analytic philosophy, which is not very interesting, and still doesn’t do anything to solve problems of emphasis, which are another way in which words can be wrong (your schema might treat rare events as central cases and common ones as exceptions, for instance).
Obviously, I wouldn’t know how to do formal reasoning correctly, even if I seriously tried. I’m sure there are many problems with the idea that don’t have known solutions. I believe that complete and correct formal reasoning is easier than full AI, but not by much. With that in mind, it’s hard to make claims about what this reasoning would look like.
I’m not sure what you mean by unjustified premises and problems of emphasis, so I’ll make a guess. You might worry that some people would dedicate a lot of time and effort to constructing increasingly convoluted proofs showing how, e.g., a flat earth is consistent with various observations and experiments. Such proofs might be admitted into the Long List of True Statements. However, as long as these proofs lead to no implications about what NASA should be working on, they are not a problem. Another possibility is that the proofs are of the form “if lizard people run NASA, then it’s most likely that the earth is flat”. Again, if you don’t share the assumptions, there is no harm in such proofs; they might even be beneficial in some ways (e.g. by displaying our sensitivity to bad priors). In this framework, building perverse proofs for “the outgroup is stupid” might actually be a productive activity.
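To make the “no harm if you don’t share the assumptions” point concrete, here is a minimal sketch in Lean 4 (the choice of prover and all the proposition names are my own illustration, not part of the original proposal). A proof of a conditional commits you to nothing until you supply its premise:

```lean
-- Hypothetical propositions standing in for the unshared premise
-- and the dubious conclusion (names invented for this sketch).
axiom LizardsRunNASA : Prop
axiom EarthIsFlat : Prop

-- Suppose someone submits a proof of the conditional claim and it is
-- admitted to the Long List. We postulate it as an axiom here, since
-- only its logical force matters for the argument.
axiom conspiracy_proof : LizardsRunNASA → EarthIsFlat

-- To extract the unconditional conclusion, you must first supply the
-- premise yourself; the conditional alone yields nothing.
example (h : LizardsRunNASA) : EarthIsFlat :=
  conspiracy_proof h
```

Anyone who rejects LizardsRunNASA can never discharge the hypothesis, so no flat-earth conclusion ever follows for them, which is why such entries in the list are harmless.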
I’m worried about what happens before people start putting time and effort into proofs.
Related: 37 Ways That Words Can Be Wrong
Well, that’s a long list, but I don’t see why formal logic would make any of those problems worse, and it seems like many of them could be solved. Do you have any specific worries?