As you said, very often a justification-based conversation is looking to answer a question, and stops when it’s answered using knowledge and reasoning methods shared by the participants. For example, Alice wonders why a character in a movie did something, and then has a conversation with Bob about it. Bob shares some facts and character motivations that Alice didn’t know, they figure out the character’s motivation together, and the conversation ends. This relied on a lot of shared knowledge (about the movie universe plus the real universe), but there’s no reason for them to question it. You get to shared ground, and then you stop.
If you insist on questioning everything, you are liable to get to nodes without justification:
“The lawn’s wet.” / “Why?” / “It rained last night.” / “Why’d that make it wet?” / “Because rain is when water falls from the sky.” / “But why’d that make it wet?” / “Because water is wet.” / “Why?” / “Water’s just wet, sweetie.” A sequence of is-questions, bottoming out at a definition. (Well, close to a definition: the parent could talk about the chemical properties of liquid water, but that probably wouldn’t be helpful for anyone involved. And they might not know why water is wet.)
“Aren’t you going to eat your ice cream? It’s starting to melt.” / “It sure is!” / “But melted ice cream is awful.” / “No, it’s the best.” / “Gah!” This conversation comes to an end when the participants realize that they have fundamentally different preferences. There isn’t really a justification for “I dislike melted ice cream”. (There’s an is-ought distinction here, though it’s about preferences rather than morality.)
Ultimately, all ought-question-chains end at a node without justification. Suffering is just bad, period.
And I think if you dig too deep, you’ll get to unjustified-ish nodes in is-question-chains too. For example, direct experience, or the belief that the past informs the future, or that reasoning works. You can question these things, but you’re liable to end up on shakier ground than the thing you’re trying to justify, and to enter a cycle. So, IDK, you can either leave those flimsy edges out and get a dead end, or count them and get a cycle, whichever you prefer?
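To make that fork concrete, here’s a toy sketch in Python. Every claim, edge, and label is made up for illustration; the point is only that the same walk classifies differently depending on whether the flimsy edges count.

```python
# justifies[claim] = the claims offered in justification of it
justifies = {
    "the lawn is wet": ["it rained last night"],
    "it rained last night": ["rain is water falling from the sky"],
    "rain is water falling from the sky": ["water is wet"],
    "water is wet": [],  # bottoms out: "water's just wet, sweetie"
}

# the flimsy edges: pushing past the definition loops back on itself
flimsy = {
    "water is wet": ["reasoning works"],
    "reasoning works": ["reasoning has worked so far"],
    "reasoning has worked so far": ["reasoning works"],
}

def classify(start, graph):
    """Follow the first justification at each step; report how the walk ends."""
    seen, node = set(), start
    while True:
        if node in seen:
            return "cycle"
        seen.add(node)
        children = graph.get(node, [])
        if not children:
            return "dead end"
        node = children[0]

print(classify("the lawn is wet", justifies))                # dead end
print(classify("the lawn is wet", {**justifies, **flimsy}))  # cycle
```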
We would just go and go and go until we lost all energy, and neither of us would notice that we were in a cycle?
There’s an important shift here: you’re not wondering how the justification graph is shaped, but rather how we would navigate it. I am confident that the proof applies to the shape of the justification graph. I’m less confident you can apply it to our navigation of that graph.
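For what it’s worth, I’d guess the shape half is essentially a pigeonhole argument, which is easy to check mechanically. Here’s a minimal sketch, assuming finitely many claims (the graph is made up, and this is my reconstruction rather than the proof as stated): every maximal path either reaches an unjustified node or revisits a claim, since an infinite path of all-distinct claims would need infinitely many claims.

```python
def path_endings(graph):
    """Enumerate every maximal path from every node; collect how each ends."""
    endings = set()

    def walk(node, seen):
        if node in seen:
            endings.add("cycle")      # revisited a claim
            return
        children = graph.get(node, [])
        if not children:
            endings.add("dead end")   # unjustified node
            return
        for child in children:
            walk(child, seen | {node})

    for start in graph:
        walk(start, frozenset())
    return endings

# hypothetical graph: A justified by B or C, B by A, C by nothing
graph = {"A": ["B", "C"], "B": ["A"], "C": []}
print(path_endings(graph))  # {'cycle', 'dead end'} and nothing else
```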
“Huh, it looks like we are on a path with the following generator functions.”
Not all infinite paths are so predictable / recognizable.
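For contrast, the recognizable case might look like this, as a purely illustrative sketch (the generating rule here is mine, not anything from the discussion): an infinite chain produced by one simple rule, which a navigator can spot and short-circuit.

```python
import itertools

def regress():
    """An infinite justification chain produced by a single simple rule."""
    n = 0
    while True:
        yield f"claim {n} is justified by claim {n + 1}"
        n += 1

# Spotting the generating rule lets you conclude "this never bottoms
# out" after a few steps, without walking the whole path:
for step in itertools.islice(regress(), 3):
    print(step)
```

An infinite path with no such pattern gives you nothing to spot, which is exactly the worry about navigation: you would just keep walking.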