then the union of these two decision boundaries is also a conservative decision boundary:
You are making several strong assumptions here:
That the predictor is perfectly accurate.
That the iteration completes before the heat death of the universe.
That storing the decision boundary is possible within the universe.
That there exists any such predictor.
For 1:
Consider, for instance, if the predictor makes an incorrect decision once in 2^10 times. (I am well aware that the predictor is deterministic; I mean the predictor deterministically making an incorrect decision here.)
I have 20 actuators, each of which is binary. The total state-space is 2^20 unique states. We start with 20 known states (first activated/rest not, …, last activated/rest not). Let’s say the actual boundary splits this into 2^19 safe and 2^19 unsafe states.
So. We do 2^19 (minus 19, to be pedantic) predictor updates. What’s our probability that we’ve incorrectly labelled at least one state? 1 - (1 - 2^-10)^(2^19 − 19). In other words, there’s only about a 10^-223 chance that we correctly labelled all states. Oops.
(This assumes that the predictor adds one element each iteration. Adjust the number of actuators accordingly if the predictor adds more elements per iteration.)
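A quick numerical sanity check of that figure (just a sketch, using the toy numbers above):

```python
import math

p_err = 2.0 ** -10            # assumed per-update error rate from the example above
n_updates = 2 ** 19 - 19      # toy number of predictor updates

# log10 of the probability that every single update was labelled correctly
log10_all_correct = n_updates * math.log10(1.0 - p_err)
print(log10_all_correct)      # ≈ -222.5, i.e. roughly a 10^-223 chance of a clean run
```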
For 2:
Consider the last example, but with 500 actuators. How many predictor updates do we need to do? …2^499. How many operations can the entire universe have completed from the big bang to now? ~10^120. Or about 2^399. Oops.
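And the corresponding back-of-the-envelope check (the ~10^120 total-operations figure is just the rough bound quoted above):

```python
import math

updates_needed = 2 ** 499      # toy example: 500 binary actuators, half the statespace safe
universe_ops = 10 ** 120       # rough bound on operations since the big bang

print(math.log2(universe_ops))          # ≈ 398.6, i.e. about 2^399
print(updates_needed > universe_ops)    # True: short by a factor of ~10^30
```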
For 3:
Consider the last example again. The total statespace is 2^500 states. How many bits do we need to store an arbitrary subset of said states? 2^500. How many bits can the universe contain? ~10^90. Or about 2^299. Oops.
(Note. Strictly speaking storing a single boundary within this particular state-space does not require 2^500 bits. However, it still requires a rapidly-increasing number of bits. The precise number depends on what precisely is counted as a boundary.)
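Same kind of check for the storage numbers (again a sketch; the ~10^90-bit figure is the rough bound quoted above):

```python
import math

bits_for_arbitrary_subset = 2 ** 500    # one bit per state of the 2^500-state space
universe_bits = 10 ** 90                # rough bound on bits the universe can hold

print(math.log2(universe_bits))                    # ≈ 299.0, i.e. about 2^299
print(bits_for_arbitrary_subset > universe_bits)   # True: short by a factor of ~10^60
```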
For 4:
You say you have a deterministic predictor P for statespace S that takes as input a current ok subset K of the statespace and a current bad subset F of the statespace, and either returns a proper superset of K (that remains an ok subset) or returns ‘all done’ if there is no such superset.
Consider the following algorithm:
S = {(0, 0), (1, 0), (0, 1), (1, 1)}
(Note: a variation works for any S with at least one non-safe state.)
I choose the following mapping:
All 4 substates are ok.
I run predictor P with the following sets:
K = {(0, 0), (1, 0), (0, 1)}
F = {}
If P returns ‘all done’:
P is in error, as S itself is a proper superset of K that fulfills the requirements (all four states are ok). This is a contradiction.
If P returns K=S:
I choose the following mapping:
(1, 1) is bad, other substates are OK.
I run predictor P with the following sets:
K = {(0, 0), (1, 0), (0, 1)}
F = {}
If P returns ‘all done’:
P is in error: two runs of P with identical arguments K and F returned different results, so P is nondeterministic. This is a contradiction.
If P returns K=S:
P is in error, as (1, 1) is in the returned ok set, but is bad. This is a contradiction.
If P returns anything else in this second run:
P is in error. This is a contradiction.
If P returns anything else in the first run:
P is in error. This is a contradiction.
As all paths lead to a contradiction, P cannot exist.
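For concreteness, here is the same case analysis as a small Python sketch (the predictor below is a hypothetical stand-in; substitute any deterministic function with this signature and the conclusion is unchanged):

```python
# Hypothetical stand-in for the claimed deterministic predictor P.
def predictor(K: frozenset, F: frozenset):
    return "all done"   # or frozenset(S), or anything else; the choice doesn't matter

S = frozenset({(0, 0), (1, 0), (0, 1), (1, 1)})
K = frozenset({(0, 0), (1, 0), (0, 1)})
F = frozenset()

result = predictor(K, F)   # deterministic, so the same in both mappings below

# Mapping A: all 4 states are ok, so the only valid answer is the proper superset S.
wrong_under_a = result != S
# Mapping B: (1, 1) is bad, so K is already maximal and the only valid answer is 'all done'.
wrong_under_b = result != "all done"

print(wrong_under_a or wrong_under_b)   # True, whatever fixed value `result` takes
```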
Consider, for instance, if the predictor makes an incorrect decision once in 2^10 times. (I am well aware that the predictor is deterministic; I mean the predictor deterministically making an incorrect decision here.)
Yeah, it is not very reasonable that the predictor/reporter pair would be absolutely accurate. We likely need to weaken the conservativeness requirement as per this comment.
That the iteration completes before the heat death of the universe.
Consider the last example, but with 500 actuators. How many predictor updates do we need to do? …2^499. How many operations can the entire universe have completed from the big bang to now? ~10^120. Or about 2^399. Oops.
Well, yes, the iteration scheme might take unreasonably long to complete. Still, the existence of the scheme suggests that it’s possible in principle to extrapolate far beyond what seems reasonable to us, which still implies the infeasibility of automated ontology identification.
That storing the decision boundary is possible within the universe.
Consider the last example again. The total statespace is 2^500 states. How many bits do we need to store an arbitrary subset of said states? 2^500. How many bits can the universe contain? ~10^90. Or about 2^299. Oops.
Indeed
That there exists any such predictor.
I didn’t follow your argument here, particularly the part under “If P returns K=S:”
I didn’t follow your argument here, particularly the part under “If P returns K=S:”
Ah, sorry, I was somewhat sloppy with notation. Let me attempt to explain in a cleaned-up form.
For a given statespace S (that is, S is the set of all possible states in a particular problem), you’re saying there exists a deterministic predictor P_S that fulfills certain properties:
First, some auxiliary definitions:
T_S ⊆ S is the subset of the statespace S where the ‘true’ answer is ‘YES’.
F_S ⊂ S is the subset of the statespace S where the ‘true’ answer is ‘NO’.
By definition from your question, T_S ∪ F_S = S and T_S ∩ F_S = ∅.
Then P_S is a deterministic function:
P_S(K) = Done, iff K = T_S
P_S(K) = R, otherwise
where:
K ⊆ R ⊆ T_S
(And both K and R are otherwise unconstrained.)
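As a sketch, that contract can be written down as a small checker (nothing here is from your post; DONE and the function name are just illustrative labels for the definitions above):

```python
DONE = "Done"

def fulfills_contract(T_S: frozenset, K: frozenset, result) -> bool:
    """Does a single call P_S(K) -> result obey the contract above?"""
    if not K <= T_S:
        raise ValueError("caller error: K must be a subset of T_S")
    if K == T_S:
        return result == DONE                                     # Done iff K is already complete
    return isinstance(result, frozenset) and K <= result <= T_S   # otherwise K ⊆ R ⊆ T_S
```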
Hopefully you follow thus far.
So.
I choose a statespace, S = {(0,0), (1,0), (0,1), (1,1)}
I assume there exists some deterministic predictor P_S for this statespace.
I choose a particular problem: T_S^a = {(0,0), (1,0), (0,1)}
(That is, in this particular instance of the problem (1,1) is the only ‘NO’ state)
I run P_S({(0,0), (1,0), (0,1)}) and get a result R_a
That is, with the parameter K = {(0,0), (1,0), (0,1)}
K ⊆ T_S^a, so this is correct of me to do.
There are three possibilities for R_a:
R_a = Done
I choose a different problem: T_S^b = {(0,0), (1,0), (0,1), (1,1)}
K ⊆ T_S^b, so this is correct of me to do.
I run P_S({(0,0), (1,0), (0,1)}) and get a result R_b
That is, with the parameter K = {(0,0), (1,0), (0,1)}
K ⊆ T_S^b, so this is correct of me to do.
There are three possibilities for R_b:
R_b = Done
K ≠ T_S^b, so P_S did not fulfill the contract.
Hence contradiction.
R_b = S = {(0,0), (1,0), (0,1), (1,1)}
In this case, P_S(K) ≠ P_S(K), as R_a ≠ R_b.
Hence, P_S is not deterministic.
Hence P_S did not fulfill the contract.
Hence contradiction.
R_b = <any other value>
In this case P_S did not fulfill the contract.
Hence contradiction.
R_a = S = {(0,0), (1,0), (0,1), (1,1)}
In this case, (1,1) ∈ R_a but (1,1) ∉ T_S^a.
Hence R_a ⊈ T_S^a.
Hence P_S did not fulfill the contract.
Hence contradiction.
R_a = <any other value>
In this case P_S did not fulfill the contract.
Hence contradiction.
All paths lead to contradiction, hence P_S cannot exist. An analogous argument works for any statespace that isn’t the null set.
(The flaw in your argument is that multiple different T_S sets, that is, multiple different sets of ground truths, can be compatible with the same set K of observations thus far. You can try to argue that T_S in practice is a complex enough set that it uniquely identifies a single possible world, but that is a very different argument from the flat statement you appear to be making.)
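And, as a sketch of the claim that an analogous argument works for any non-empty statespace, the same diagonalisation written generically (it reuses the DONE sentinel and fulfills_contract checker from the sketch above; the predictor argument is whatever deterministic function is claimed to satisfy the contract):

```python
def refute(predictor, S: frozenset):
    """For any claimed deterministic predictor over a non-empty statespace S,
    exhibit a ground truth T_S on which its single, fixed answer breaks the contract."""
    s = next(iter(S))                 # pick any state
    K = frozenset(S) - {s}
    T_a = K                           # world a: s is the only 'NO' state
    T_b = frozenset(S)                # world b: every state is 'YES'
    result = predictor(K)             # K ⊆ T in both worlds and K is identical, so one fixed answer
    for T in (T_a, T_b):
        if not fulfills_contract(T, K, result):
            return T, result          # the fixed answer violates the contract for this T
    raise AssertionError("unreachable: no single answer can satisfy both worlds")

S = frozenset({(0, 0), (1, 0), (0, 1), (1, 1)})
print(refute(lambda K: DONE, S))      # e.g. an always-Done predictor fails in world b
```

The fallthrough really is unreachable: for T_a (where K = T_a) the contract demands Done, while for T_b it demands a set R with K ⊆ R ⊆ T_b, and no single return value does both. That is exactly the point above about multiple T_S sets being compatible with the same K.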