Here is a description of how it could work for Peano arithmetic; other proof systems are similar.
First, I define an expression to be a number, a variable, or a function applied to several other expressions.
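Concretely, here is a minimal sketch of this expression type in Python (the representation details are one choice among many, not anything fixed by the system):

```python
from dataclasses import dataclass

# An expression is a number, a variable, or a named function applied
# to sub-expressions. Frozen dataclasses make nodes hashable, so they
# can later serve as graph vertices.

@dataclass(frozen=True)
class Num:
    value: int

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Fn:
    name: str    # e.g. "+", "×", "inf"
    args: tuple  # the sub-expressions

# (3+4)+5 as a parse tree:
example = Fn("+", (Fn("+", (Num(3), Num(4))), Num(5)))
```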
Fixed expressions are ones in which every variable is bound by some function. E.g. (3×inf_x((x×(x+5))+2)) is a valid fixed expression, but (y+4)×3 isn't fixed.
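As a sketch of the fixedness check, using the classes above (I model a binder like inf_x as an Fn whose first argument is the variable it binds; that convention is mine, not the only possibility):

```python
BINDERS = {"inf"}  # function names that bind their first argument

def free_vars(e):
    """The set of variable names not bound by any enclosing binder."""
    if isinstance(e, Num):
        return set()
    if isinstance(e, Var):
        return {e.name}
    if e.name in BINDERS:            # e.g. Fn("inf", (Var("x"), body))
        bound, body = e.args
        return free_vars(body) - {bound.name}
    return set().union(*(free_vars(a) for a in e.args))

def is_fixed(e):
    """A fixed expression has no free variables."""
    return not free_vars(e)

# (3 × inf_x((x×(x+5))+2)) is fixed:
inner = Fn("+", (Fn("×", (Var("x"), Fn("+", (Var("x"), Num(5))))), Num(2)))
assert is_fixed(Fn("×", (Num(3), Fn("inf", (Var("x"), inner)))))

# (y+4)×3 is not:
assert not is_fixed(Fn("×", (Fn("+", (Var("y"), Num(4))), Num(3))))
```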
Semantically, all fixed expressions have a meaning. Syntactically, local manipulations on the parse tree can turn one expression into another, e.g. (a+b)×c going to a×c+b×c for arbitrary expressions a, b, c.
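For instance, distributivity as a local manipulation on the same representation, applied only at the root here (a full system would try each rule at every subtree):

```python
def distribute(e):
    """(a+b)×c  ->  a×c + b×c  where the pattern matches, else None."""
    if isinstance(e, Fn) and e.name == "×":
        left, c = e.args
        if isinstance(left, Fn) and left.name == "+":
            a, b = left.args
            return Fn("+", (Fn("×", (a, c)), Fn("×", (b, c))))
    return None

# (3+4)×5  ->  3×5 + 4×5
print(distribute(Fn("×", (Fn("+", (Num(3), Num(4))), Num(5)))))
```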
I think that with some set of basic functions and manipulations, this system can be as powerful as PA.
I now have an infinite network with all fixed expressions as nodes and basic transformations as edges; e.g. the associativity transform links the nodes (3+4)+5 and 3+(4+5).
These graphs form connected components for each number, as well as components that are not evaluatable using the rules. (There is a path from (3+4) to 7; there is no path from (3+4) to 9.)
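A toy fragment of that graph, with a single evaluation rule for + (run in both directions, since edges are undirected) and a breadth-first search for paths. The rule set is deliberately tiny, just enough to show (3+4) connecting to 7 and not to 9:

```python
from collections import deque

def neighbours(e):
    out = []
    # evaluate + on two numerals: (3+4) -> 7
    if isinstance(e, Fn) and e.name == "+":
        a, b = e.args
        if isinstance(a, Num) and isinstance(b, Num):
            out.append(Num(a.value + b.value))
    # the same edges run backwards: n -> (k + (n-k)) for every split of n
    if isinstance(e, Num):
        for k in range(e.value + 1):
            out.append(Fn("+", (Num(k), Num(e.value - k))))
    return out

def connected(start, goal, limit=10_000):
    """Breadth-first search over the rewrite graph, capped at `limit` nodes."""
    seen, queue = {start}, deque([start])
    while queue and len(seen) < limit:
        node = queue.popleft()
        if node == goal:
            return True
        for n in neighbours(node):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return False

assert connected(Fn("+", (Num(3), Num(4))), Num(7))
assert not connected(Fn("+", (Num(3), Num(4))), Num(9))
```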
Now define a spread as an infinite sequence of positive reals that sums to 1 (this is roughly a probability distribution over numbers). If you were doing counterfactuals over ZFC, it would be a function from sets to reals instead.
Each node is assigned a spread. This spread represents how much the expression is considered to have each value in a counterfactual.
Assign the node (3) a spread that puts 1.0 on 3 and 0.0 on everything else (even in a logical counterfactual, 3 is definitely 3). Assign every other fixed expression a spread that is the weighted average of the spreads of its neighbours (the nodes it shares an edge with), with smaller expressions weighted more heavily. To take the counterfactual "A is B", for A and B expressions with the same free variables, merge every node that has A as a subexpression with the version that has B as a subexpression, then solve for the spreads.

I know this is rough; I'm still working on it.
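A sketch of solving for the spreads by relaxation on a finite fragment of the graph. The 1/size weighting is my reading of "smaller expressions are more heavy", and the counterfactual merge step isn't shown:

```python
def size(e):
    """Number of nodes in the parse tree, used to weight neighbours."""
    return 1 if isinstance(e, (Num, Var)) else 1 + sum(size(a) for a in e.args)

def solve_spreads(nodes, edges, rounds=100):
    """Spreads are dicts {value: weight}. Numerals are clamped to a point
    mass; every other node is repeatedly reset to the weighted average of
    its neighbours' spreads, with weight 1/size per neighbour."""
    spreads = {}
    for n in nodes:
        if isinstance(n, Num):
            spreads[n] = {n.value: 1.0}  # 3 is definitely 3
        else:
            spreads[n] = {}              # unknown, to be filled in
    for _ in range(rounds):
        for n in nodes:
            if isinstance(n, Num):
                continue
            total, acc = 0.0, {}
            for m in edges.get(n, ()):
                w = 1.0 / size(m)
                total += w
                for value, p in spreads[m].items():
                    acc[value] = acc.get(value, 0.0) + w * p
            if total:
                spreads[n] = {v: p / total for v, p in acc.items()}
    return spreads

# Tiny example: the node (3+4) linked to the numeral 7.
seven, three_plus_four = Num(7), Fn("+", (Num(3), Num(4)))
graph = {three_plus_four: [seven], seven: [three_plus_four]}
print(solve_spreads([seven, three_plus_four], graph))
# -> (3+4) ends up with the point mass on 7
```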
You’ve explained the system. But what’s the motivation behind this?
Even though I only have a high-level understanding of what you're doing, I generally disagree with this kind of approach on a philosophical level. It seems like you're reifying logical counterfactuals, when I see them more as an analogy; i.e. positing a logical counterfactual is an operation that takes place on the level of the map, not the territory.
The general philosophy is deconfusion. Logical counterfactuals show up in several relevant-looking places, like functional decision theory. It seems that a formal model of logical counterfactuals would let more properties of these algorithms be proved. There is an important step in going from an intuitive feeling of uncertainty to a formalized theory of probability. It might also suggest other techniques based on it. I am not sure what you mean by logical counterfactuals being part of the map. Are you saying that they are something an algorithm might use to understand the world, not features of the world itself, like probabilities?
Using this, I think that self-understanding, two-boxing, embedded FDT agents can be fully formally understood, in a universe that contains the right type of hypercomputation.
I mean that there isn't a property of logical counterfactuals in the universe itself. However, once we've created a model (/map) of the universe, we can then define logical counterfactuals as asking a particular question about this model. We just need to figure out what that question is.