Am I still misunderstanding something big about the kind of argument you are trying to make?
I don’t think so, but to formalize the argument a bit more, let’s define this new version of the WFC:
Special-Tree WFC: For any question Q with correct answer A, there exists a tree of decompositions T arguing this such that:
Every internal node has exactly one child leaf of the form “What is the best defeater to X?” whose answer is auto-verified,
For every other leaf node, a human can verify that the answer to the question at that node is correct,
For every internal node, a human can verify that the answer to the question is correct, assuming that the subanswers are correct.
(As before, we assume that the human never verifies something incorrect, unless the subanswers they were given were incorrect.)
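To make the structural condition concrete, here is a minimal sketch of the tree shape and a check for the "exactly one auto-verified defeater leaf per internal node" property. All the names here (`Leaf`, `Node`, `auto_verified`, `satisfies_special_tree_wfc`) are my own illustrative assumptions, not part of the original definition, and the human-verification conditions are out of scope for this check:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Leaf:
    question: str
    answer: str
    auto_verified: bool = False  # True for the "What is the best defeater to X?" leaf

@dataclass
class Node:
    question: str
    answer: str
    children: List[Union["Node", Leaf]]

def satisfies_special_tree_wfc(node: Node) -> bool:
    """Check only the structural condition: every internal node has exactly
    one auto-verified child leaf of the 'best defeater' form. The human
    verification of the other leaves and of internal nodes is not modeled."""
    defeater_leaves = [
        c for c in node.children
        if isinstance(c, Leaf)
        and c.auto_verified
        and c.question.startswith("What is the best defeater to")
    ]
    if len(defeater_leaves) != 1:
        return False
    # Recurse into internal (Node) children only.
    return all(satisfies_special_tree_wfc(c) for c in node.children if isinstance(c, Node))

ok = Node("Q", "A", [Leaf("What is the best defeater to A?", "none", auto_verified=True)])
bad = Node("Q", "A", [Leaf("Is A ever fully defeated?", "No")])  # no auto-verified defeater leaf
```

Here `satisfies_special_tree_wfc(ok)` holds while `satisfies_special_tree_wfc(bad)` fails, since the latter's only leaf is not an auto-verified "best defeater" question.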
Claim 1: (What I thought was) your assumption ⇒ Special-Tree WFC, using the construction I gave.
Claim 2: Special-Tree WFC + assumption of optimal play ⇒ honesty is an equilibrium, using the same argument that applies to regular WFC + assumption of optimal play.
Idk whether this is still true under the assumptions you’re using; I think claim 1 in particular is probably not true under your model.
Ah, OK, so you were essentially assuming that humans had access to an oracle which could verify optimal play.
This sort of makes sense, as a human with access to a debate system in equilibrium does have such an oracle. I still don’t yet buy your whole argument, for reasons being discussed in another branch of our conversation, but this part makes enough sense.
Your argument also has some leaf nodes which use the terminology “fully defeat”, in contrast to “defeat”. I assume this means that in the final analysis (after expanding the chain of defeaters) this refutation was a true one, not something ultimately refuted.
If so, it seems you also need an oracle for that, right? Unless you think that can be inferred from some fact about optimal play, e.g., that a player bothered to say it rather than concede.
In any case it seems like you could just make the tree out of the claim “A is never fully defeated”:
Node(Q, A, [Leaf("Is A ever fully defeated?", "No")])
Your argument also has some leaf nodes which use the terminology “fully defeat”, in contrast to “defeat”.
I don’t think I ever use “fully defeat” in a leaf? It’s always in a Node, or in a Tree (which is a recursive call to the procedure that creates the tree).
I assume this means that in the final analysis (after expanding the chain of defeaters) this refutation was a true one, not something ultimately refuted.
Yes, that’s what I mean by “fully defeat”.
Ahhhhh, OK. I missed that that was supposed to be a recursive call, and interpreted it as a leaf node based on the overall structure. So I was still missing an important part of your argument. I thought you were trying to offer a static tree in that last part, rather than a procedure.
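To illustrate the distinction at issue (a procedure that recursively generates the tree, rather than a static tree), here is one way the recursion might be sketched. The names `stands_tree`/`falls_tree` and the `best_defeater` oracle are my own assumptions, not the original construction; the oracle maps a claim to its best defeater, or `None` if the claim is undefeated, and the sketch assumes the chain of defeaters terminates:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Union

@dataclass
class Leaf:
    question: str
    answer: str

@dataclass
class Node:
    question: str
    answer: str
    children: List[Union["Node", Leaf]]

# Stand-in for the auto-verified oracle: claim -> best defeater, or None.
Oracle = Callable[[str], Optional[str]]

def stands_tree(claim: str, best_defeater: Oracle) -> Node:
    """Tree certifying that `claim` is never fully defeated."""
    d = best_defeater(claim)
    leaf = Leaf(f"What is the best defeater to {claim}?", str(d))
    if d is None:
        # No defeater at all: the recursion bottoms out here.
        return Node(f"Is {claim} ever fully defeated?", "No", [leaf])
    # The claim stands only because its best defeater is itself fully
    # defeated -- argued by a recursive call (a subtree, not a static leaf).
    return Node(f"Is {claim} ever fully defeated?", "No",
                [leaf, falls_tree(d, best_defeater)])

def falls_tree(claim: str, best_defeater: Oracle) -> Node:
    """Tree certifying that `claim` is fully defeated."""
    d = best_defeater(claim)  # must exist if the claim really is defeated
    leaf = Leaf(f"What is the best defeater to {claim}?", str(d))
    # The claim falls because the defeater d itself stands:
    return Node(f"Is {claim} fully defeated?", "Yes",
                [leaf, stands_tree(d, best_defeater)])

# Toy oracle: A's best defeater is d1, but d1 is defeated by d2,
# which nothing defeats -- so A ultimately stands.
oracle = {"A": "d1", "d1": "d2"}.get
tree = stands_tree("A", oracle)
```

The mutual recursion mirrors the alternation in the chain of defeaters: each internal node's auto-verified leaf names the best defeater, and the other child is a recursive subtree rather than a leaf, which is the point I had missed.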