This is really interesting. I had the idea to do something like this about 10 years ago (when I was 15). I never did get anywhere with it though, and I haven’t spent very much time at all working on it. Seems that you’ve gotten a lot farther than I have. In fact, I think this article, about practical guidelines for debating productively, is much more useful than anything I’ve done on formalization.
I think the central idea of the project is reducing arguments to their constituent parts. I call the basic parts “atoms”. Practically, there’s no way to completely reduce them to first principles, the way it’s done in formalizations of mathematical systems (like the libraries of Isabelle, Coq, or Metamath). Doing that would probably be AGI-complete. So the best we can do is come up with practical rules of thumb: a set of criteria that can be applied relatively unambiguously to determine whether a given assertion makes a good atom or needs to be decomposed further. And further analysis, looking deeper into the issue, can always uncover a need for decomposition that wasn’t previously recognized. In fact, such further decomposition would probably be a central part of explicating new arguments in the system.
I think the first main hurdle, and maybe the last one, is determining what types of atoms are necessary, what types of relationships hold between them, and how those correspond to the rules of probability. If you want automated checking, each atom has to be related to other atoms by fairly basic, simple relationships; otherwise you can’t formalize it. So what are the relationship types? What are the atom types?
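One way to make that concrete (purely a sketch of my own, with hypothetical names; none of this comes from the article) is to start with a tiny vocabulary of atom and relationship types and see how far it goes:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Relation(Enum):
    """Hypothetical relationship types; a real system would need a better-argued list."""
    SUPPORTS = "supports"    # child makes the parent claim more probable
    REBUTS = "rebuts"        # child makes the parent claim less probable
    UNDERCUTS = "undercuts"  # child attacks the inference from siblings to parent

@dataclass
class Atom:
    """A single claim, stated so it can be assigned a probability on its own."""
    text: str
    edges: List["Edge"] = field(default_factory=list)

@dataclass
class Edge:
    relation: Relation
    child: Atom
```

The point of keeping the relation vocabulary that small is exactly the automated-checking constraint above: a checker only needs to know how each relation type affects probability, not anything about the content of the atoms it connects.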
The system has to have some way of representing facts/beliefs, but those can be disputed. People assign different probabilities to them. So how do you deal with that? You can’t automate that away. Maybe you have some way of letting the user fill in their own belief ratios for all the tree’s leaf nodes (i.e. the premises) and then propagate those up the tree. Perhaps you give the premises default values, given by the argument’s author, or a consensus number, or both.
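Here is a toy sketch of that propagation step, under deliberately crude assumptions (an interior node is treated as a conjunction of independent premises, so its probability is the product of its children’s); the data model and the combination rule are my guesses, not anything from the article:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Node:
    claim: str
    premises: List["Node"] = field(default_factory=list)
    default_prob: Optional[float] = None  # author's or consensus value for a leaf

def belief(node: Node, user_probs: Dict[str, float]) -> float:
    """Propagate leaf probabilities up the tree.

    Leaves (premises) take the user's value if given, else the author's default.
    Interior nodes multiply their children's probabilities together, as if the
    premises were independent and jointly necessary -- crude, but it shows the
    propagation mechanism.
    """
    if not node.premises:  # leaf node: a premise
        if node.claim in user_probs:
            return user_probs[node.claim]
        if node.default_prob is not None:
            return node.default_prob
        raise ValueError(f"no probability for premise: {node.claim}")
    p = 1.0
    for premise in node.premises:
        p *= belief(premise, user_probs)
    return p

# Usage: a conclusion resting on two premises; the user overrides one default.
tree = Node("we should carry umbrellas",
            premises=[Node("rain is likely today", default_prob=0.7),
                      Node("umbrellas keep you dry", default_prob=0.95)])
print(belief(tree, {"rain is likely today": 0.4}))  # 0.4 * 0.95 = 0.38
```

A real system would obviously need less naive combination rules (premises are rarely independent, and support comes in degrees), but the user-override-plus-defaults mechanism stays the same regardless of which rule you pick.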
How do you handle the extra-logical parts of the argument, those relating to semantics? For instance, special definitions of terms used in other atoms. A lot of argument comes down to arguing over definitions.
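One possible answer, and this is just my guess at a representation rather than anything proposed in the article, is to make definitions their own atoms and have other atoms reference them by term, so a disagreement over a definition shows up as a dispute about one identifiable node instead of infecting the whole tree:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Definition:
    term: str
    meaning: str  # the stipulated sense the argument uses

@dataclass
class Claim:
    text: str
    uses: List[str] = field(default_factory=list)  # terms this claim depends on

# Hypothetical example: the claim is only as solid as the definitions it cites.
definitions: Dict[str, Definition] = {
    "harm": Definition("harm", "a setback to someone's interests, physical or not"),
}
claim = Claim("Policy X harms renters", uses=["harm"])

missing = [t for t in claim.uses if t not in definitions]
assert not missing, f"undefined terms: {missing}"
```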