This is no such advancement for AI research: it only provides the possibility of typechecking your AI, which is neither necessary nor sufficient for self-optimizing AI programs.
I like how you made this comment, and then emailed me the link to the article, asking whether it actually represents something for self-modifying systems.
Now, as to whether this actually represents an advance… let’s go read the LtU thread. My guess is that the answer is, “this is an advancement for self-modifying reasoning systems iff we can take System U as a logic in which some subset of programs prove things in the Prop type, and those Prop-typed programs always terminate.”
However, System U is not strongly normalizing and is inconsistent as a logic. So, no.
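To make the normalization point concrete, here is a minimal sketch in Lean (Lean is just a convenient stand-in here; neither the paper nor the thread uses it). In a strongly normalizing theory, a closed term of a Prop-sorted type really is a proof, and what System U admits is precisely the kind of looping “proof” that a termination checker exists to reject.

```lean
-- Curry-Howard in a strongly normalizing theory: this closed term of a
-- Prop-sorted type is a proof, because it always reduces to a value.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun ⟨a, b⟩ => ⟨b, a⟩

-- What strong normalization buys you: the looping "proof" of False below
-- is rejected by Lean's termination checker. In System U an analogous
-- non-terminating proof of falsity (Girard's paradox) is well-typed,
-- which is why System U is inconsistent when read as a logic.
-- def bogus : False := bogus   -- rejected: fails termination checking
```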
“Our techniques could have applications… to growable typed languages…”
Curry-Howard tells us types are logical propositions. Mathematicians from Bayes to Laplace to Keynes to Cox to Jaynes have told us that probability is an extension of logic, or (as I prefer) that logical truth/falsehood are limiting cases of probability. Alternatively, probability and logic are both measures of information, in the algorithmic information theory sense. So a “growable typed language” would seem to have obvious benefits for machine learning and probabilistic programming generally. In other words, it’s not inherently a big deal to lack strong normalization or decidable type checking by default; that just means the system doesn’t yet have enough information. But when the system acquires new information, it can synthesize new types and their relations, “growing” the typed language.
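As a toy illustration of that Curry-Howard reading (again in Lean, purely for concreteness; the probabilistic side is exactly what current proof assistants do not give you), implication is the function arrow, and “growing” the typed language in the most modest sense is adding new types and new facts that the existing development keeps checking against.

```lean
-- Types as propositions: implication is the function arrow, and chaining
-- implications is ordinary function composition.
theorem chain (A B C : Prop) (hab : A → B) (hbc : B → C) : A → C :=
  fun a => hbc (hab a)

-- "Growing" the typed language in a modest sense: a new type and a new
-- fact about it are added as plain definitions, and everything already
-- established still type-checks around them.
inductive Coin : Type
  | heads
  | tails

theorem coin_cases (c : Coin) : c = Coin.heads ∨ c = Coin.tails := by
  cases c
  · exact Or.inl rfl
  · exact Or.inr rfl
```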
I read the LtU thread as well, and began to wonder if Snively was stealing from me.
But no, you need a normalizing fragment to have a Prop sort. Or at least, you need more than this, in particular a solid notion of probabilistic type theory (which linguists are only beginning to form) and Calude’s work in Algorithmic Information Theory, to actually build self-verifying, theorem-proving type theories this way.
Indeed, this is no major result: neither necessary nor sufficient. But if you want safe self-optimizing AI, you (and the AI) need to reason about the source. If you don’t understand how the AI reasons about itself, you can’t control it. If you force the AI to reason in a way you can follow too, e.g. by piggybacking on a sufficiently strong type system, then you at least have a chance to reason about it. There may be other ways to reason about self-modifying programs that don’t rely on types, but these are presumably either equivalent to such types (in which case the result is helpful in that area too) or more general (in which case proofs likely become more complicated, if feasible at all). So some equivalent of these types is needed for reasoning about safe self-modifying AI.
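As a minimal sketch of the “piggybacking on a sufficiently strong type system” idea (a Lean toy with hypothetical names, not anything the paper provides): bundle a program with a machine-checked proof that it meets a specification, and accept a self-modification only when the replacement carries a proof of the same specification, so no one has to re-derive why the modified code is safe.

```lean
-- A toy model of "reasoning about the source via types": a program is
-- packaged with a proof that it meets a specification over Nat -> Nat.
structure Verified (Spec : Nat → Nat → Prop) where
  run   : Nat → Nat
  sound : ∀ n, Spec n (run n)

-- A self-modification step is accepted only if the replacement carries a
-- proof of the same specification; the type checker does the reasoning.
def selfModify {Spec : Nat → Nat → Prop}
    (_old : Verified Spec) (new : Verified Spec) : Verified Spec :=
  new

-- Example specification: the output never drops below the input.
def NonDecreasing (n m : Nat) : Prop := n ≤ m

def v1 : Verified NonDecreasing := ⟨fun n => n + 1, fun n => Nat.le_succ n⟩
def v2 : Verified NonDecreasing := ⟨fun n => n + 2, fun n => Nat.le_add_right n 2⟩

-- The "upgrade" type-checks because v2 proves the same spec as v1.
def upgraded : Verified NonDecreasing := selfModify v1 v2
```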