As you say, do the impossible. I’m reasonably sure that checking for fallacies isn’t possible without understanding natural language. I’m only reasonably sure because Google has done more with translation than I would have thought possible without understanding.
Perhaps there’s some way of using Google’s resources to catch at least a useful proportion of fallacies and biases.
I’m not sure it would require a full understanding of natural language. There’s got to be an 80⁄20-rule method by which this can be done. Really, there are only so many logical fallacies, and there might be some way to “hack” out fallacious statements by looking for certain patterns in the sentence structure, as opposed to actual sentence interpretation.
For example:
“All gerbils are purple.”
The computer only needs to understand:
“All (gibberish) are (different gibberish).”
Hasty generalization pattern recognized.
For another example:
Gerbils are purple because purple is the color of gerbils.
The computer understands:
“(gibberish 1) are (gibberish 2) because (gibberish 2) is (blah blah blah) (gibberish 1)”
Circular reasoning pattern recognized.
Yes, it would get more complicated than that, especially when people use complex or run-on sentences, or when the fallacy emerges only after several sentences stack together. But I still think it could be done with pattern recognition.
Hmmm… it would also have to detect statements where points are being made (looking for the words “is”, “are” and “because” might help) and avoid sentences that are pure matters of opinion (I love ice cream because it’s delicious! - this might look something like (blah blah blah) because it’s (subjective term)).
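The patterns described above could be roughed out with ordinary regular expressions. This is a minimal sketch of that idea, not a real fallacy detector; the pattern strings, the list of subjective terms, and the `classify` function are all my own illustrative assumptions:

```python
import re

# "All (gibberish) are (different gibberish)" -> possible hasty generalization
HASTY = re.compile(r"^all\s+(\w+)\s+are\s+(\w+)", re.IGNORECASE)

# "(X) are (Y) because (Y) is ... (X)" -> possible circular reasoning;
# \1 and \2 are backreferences to the first and second captured words
CIRCULAR = re.compile(r"^(\w+)\s+are\s+(\w+)\s+because\s+\2\s+is\s+.*\1",
                      re.IGNORECASE)

# Crude filter for pure matters of opinion: "... because it's (subjective term)"
# (the word list is a placeholder assumption, not exhaustive)
SUBJECTIVE_TERMS = {"delicious", "beautiful", "fun", "boring"}
OPINION = re.compile(r"because\s+it'?s\s+(\w+)", re.IGNORECASE)

def classify(sentence: str) -> str:
    s = sentence.strip().rstrip(".!")
    m = OPINION.search(s)
    if m and m.group(1).lower() in SUBJECTIVE_TERMS:
        return "opinion"  # skip: pure matter of taste
    if CIRCULAR.match(s):
        return "circular reasoning pattern recognized"
    if HASTY.match(s):
        return "hasty generalization pattern recognized"
    return "no pattern recognized"
```

On the examples above, `classify("All gerbils are purple.")` flags a hasty generalization, `classify("Gerbils are purple because purple is the color of gerbils.")` flags circular reasoning, and the ice-cream sentence gets filtered out as opinion. Of course, this only catches the single-sentence, fixed-word-order cases; the multi-sentence fallacies mentioned above would need something beyond per-sentence regexes.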
I somehow doubt Google would appreciate the leeching of their resources—unless you mean they’ve made them open source or something. Making the program dependent on them would be a liability: if they noticed the leeching, they’d surely impose a new limit that would probably break it.