Self-modification is to be interpreted to include ‘directly editing one’s own low-level algorithms using a high-level deliberative process’ but not to include ‘changing one’s diet to change one’s thought processes’. If you are uncomfortable using the word ‘self-modification’ for this, please substitute a new word ‘fzoom’ which means only that, and consider everything I said about self-modification to be about fzoom.
Humans wouldn’t look at their own source code and say, “Oh dear, a Lobian obstacle”; on this I agree, but that is because humans would look at their own source code and say “What?”. Humans have no idea under what exact circumstances they will believe something, which comes with its own set of problems. The Lobian obstacle shows up when you approach things from the end we can handle: weak but well-defined systems that can specify exactly what they will believe. Human mathematicians, by contrast, are stronger than ZF plus large cardinals, but we don’t know how they work, what might go wrong, or what might change if we started editing neural circuit #12,730,889,136.
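(To make “Lobian obstacle” concrete, here is a compressed sketch rather than a full formalism; T, \Box_T, and \varphi below are just my notation for a theory extending PA, its provability predicate, and an arbitrary sentence. Löb’s theorem says

T \vdash (\Box_T \ulcorner \varphi \urcorner \rightarrow \varphi) \;\Longrightarrow\; T \vdash \varphi.

So an agent reasoning in T that wants to license a successor which also proves things in T seems to need the soundness schema \Box_T \ulcorner \varphi \urcorner \rightarrow \varphi for arbitrary \varphi, and Löb’s theorem says T proves an instance of that schema only when it already proves \varphi itself; the instance with \varphi = \bot would make T inconsistent. Naively, the agent can only approve successors running strictly weaker systems, which is the obstacle.)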
As Christiano’s work shows, allowing for tiny finite variances of probability might well dissipate the Lobian obstacle, but that’s the sort of thing you find out by knowing what a Lobian obstacle is.
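(Sketching the Christiano et al. result from memory, so take the exact statement as my gloss rather than a quotation of the paper: the idea is to replace a definite truth or provability predicate with a probability assignment P over sentences, in a language that can talk about P itself, and to show there is a coherent P satisfying a reflection schema roughly of the form

a < P(\varphi) < b \;\Longrightarrow\; P\bigl(\, a < P(\ulcorner \varphi \urcorner) < b \,\bigr) = 1 \qquad \text{for rationals } a < b.

The strict inequalities, that small amount of slack, are what let the construction go through where an exact self-referential truth predicate would fall to the usual diagonalization.)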
Self-modification is to be interpreted to include ‘directly editing one’s own low-level algorithms using a high-level deliberative process’ but not to include ‘changing one’s diet to change one’s thought processes’. If you are uncomfortable using the word ‘self-modification’ for this, please substitute a new word ‘fzoom’ which means only that, and consider everything I said about self-modification to be about fzoom.
Very helpful. This seems like something that could lead to a satisfying answer to my question. And don’t worry, I won’t engage in a terminological dispute about “self-modification.”
Can you clarify a bit what you mean by “low-level algorithms”? I’ll give you a couple of examples related to what I’m wondering about.
Suppose I am working with a computer to make predictions about the weather, and we consider the operations of the computer along with my brain as a single entity for the purposes of testing whether the Lobian obstacles you are thinking of arise in practice. Now suppose I make basic modifications to the computer, expecting that the joint operation of my brain with the computer will yield improved output. This will not cause me to trip over Lobian obstacles. Why does whatever concern you have about the Lob problem predict that it would not, while also predicting that future AIs might stumble over the Lob problem?
Another example. Humans learn different mental habits without stumbling over Lobian obstacles, and they can convince themselves that adopting the new mental habits is an improvement. Some of these are more derivative (“Don’t do X when I have emotion Y”) and others are perhaps more basic (“Try to update through explicit reasoning via Bayes’ Rule in circumstances C”). Why does whatever concern you have about the Lob problem predict that humans can make these modifications without stumbling, while also predicting that future AIs might stumble over the Lob problem?
If the answer to both examples is “those are not cases of directly editing one’s low-level algorithms using high-level deliberative processes,” can you explain why your concern about Lobian issues arises only in that type of case? This is not me questioning your definition of “fzoom”; it is me asking why Lobian issues only arise when you are worrying about fzoom.
The first example is related to what I had in mind when I talked about fundamental epistemic standards in a previous comment:
Part of where I’m coming from on the first question is that Lobian issues only seem relevant to me if you want to argue that one set of fundamental epistemic standards is better than another, not for proving that other types of software and hardware alterations (such as building better arms, building faster computers, finding more efficient ways to compress your data, finding more efficient search algorithms, or even finding better mid-level statistical techniques) would result in more expected utility. But I would guess that once you have an agent operating with minimally decent fundamental epistemic standards, you just can’t prove that altering those standards would result in an improvement. My intuition is that you can only do that when you have an inconsistent agent, and in that situation it’s unclear to me how Lobian issues apply.