If your argument is, “if it is possible for humans to produce some (verbal or mechanical) output, then it is possible for a program/machine to produce that output”, then that’s true, I suppose?
I don’t see why you specified “finite depth boolean circuit”.
While it does seem like the number of states for a given region of space is bounded, I’m not sure how relevant this is. Not all possible functions from states to {0,1} (or to some larger discrete set) are implementable as some possible state, for cardinality reasons.
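The cardinality point holds even when everything is finite; a toy illustration of the counting (the function name is mine):

```python
# For a finite set of n states there are 2**n functions from states
# to {0, 1}. Since 2**n > n for every n >= 1, there are always more
# such functions than there are states available to encode them, so
# no encoding of functions as states can cover all of them.
def num_boolean_labelings(n_states: int) -> int:
    # Each of the n states is independently mapped to 0 or 1.
    return 2 ** n_states

assert num_boolean_labelings(10) == 1024  # far more than 10 states
assert all(num_boolean_labelings(n) > n for n in range(1, 100))
```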
I guess maybe that’s why you mentioned the thing along the lines of “assume that some amount of wiggle room that is tolerated” ?
One thing you say is that the set of superintelligences is a subset of the set of finite-depth boolean circuits. Later, you say that a lookup table is implementable as a finite-depth boolean circuit, and that some such lookup table is the aligned superintelligence. But just because something can be expressed as a finite-depth boolean circuit, it does not follow that it is in the set of possible superintelligences. How are you concluding that such a lookup table constitutes a superintelligence? It seems like a gap in the argument.
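For context, the background claim here — that any finite lookup table can be expressed as a finite-depth boolean circuit — is standard: write it in disjunctive normal form, an OR over the table rows that output 1, each row an AND of input literals. A minimal sketch (the helper name `table_as_dnf` is mine):

```python
# A lookup table over n-bit inputs as a fixed-depth boolean formula:
# OR over the rows that output 1 (the "minterms"), each an AND of
# equality checks on the input bits -- i.e. disjunctive normal form.
def table_as_dnf(table):
    """table: dict mapping input bit-tuples to 0 or 1."""
    minterms = [bits for bits, out in table.items() if out == 1]
    def circuit(x):
        return int(any(all(xi == bi for xi, bi in zip(x, bits))
                       for bits in minterms))
    return circuit

# XOR written as a lookup table, then evaluated as a depth-2 circuit.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
f = table_as_dnf(xor)
assert [f((a, b)) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 0]
```

This shows the representability claim, not the further claim the comment disputes (that such a table would count as a superintelligence).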
Now, I don’t think that “aligned superintelligence” is logically impossible, or anything like that, and so I expect that there mathematically-exists a possible aligned superintelligence (if it isn’t logically impossible, then by the model existence theorem there exists a model in which one exists… I guess that doesn’t establish that we live in such a model, but whatever).
But I don’t find this argument a compelling proof(-sketch).
Not all possible functions from states to {0,1} (or to some larger discrete set) are implementable as some possible state, for cardinality reasons
All cardinalities here are finite. The set of generically realizable states is a finite set, because each such state has a description of finite, bounded information content (a list of instructions to realize that state, which is no greater in bits than the number of neurons in all the human brains on Earth).
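The counting behind this can be sketched as follows (the bound `max_bits` and the helper name are illustrative, not from the original argument): bit-string descriptions of length at most B number 2^(B+1) − 1, so the set of states they can specify is finite.

```python
# Descriptions are bit strings of length at most max_bits, so the
# number of distinct descriptions -- and hence of generically
# realizable states -- is at most sum_{k=0..B} 2**k = 2**(B+1) - 1.
def max_describable_states(max_bits: int) -> int:
    return sum(2 ** k for k in range(max_bits + 1))

assert max_describable_states(3) == 15          # 2**4 - 1
assert max_describable_states(8) == 2 ** 9 - 1  # finite for any fixed bound
```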
Until I wrote this proof, it was a live possibility that aligned superintelligence is in fact logically impossible.
Isn’t it enough that it achieves the best possible outcome? What other criteria do you want a “superintelligence” to have?