This is only meaningful under the assumption that the intelligence of an AI depends on the strength of its proof system. Since the space of intelligent AI systems is not limited to those that depend on proof systems, the entire argument has narrow scope and importance. And since, arguably, the majority of AIs capable of human-level intelligence in the real world are indeed not those that are dependent on proof systems (but are, instead, complex systems), the argument’s importance diminishes to a vanishingly small level.
I can never quite tell to what extent you are being deliberately inflammatory; is there a history I’m missing?
I agree that this work is only relevant to systems of a certain kind, i.e. those which rely on formal logical manipulations. It seems unjustifiably strong to say that the work is therefore of vanishing importance. Mostly, because you can’t justify such confident statements without a much deeper understanding of AGI than anyone can realistically claim to have right now.
But moreover, we don’t yet have any alternatives to first order logic for formalizing and understanding general reasoning, and the only possibilities seem to be: (1) make significant new developments that mathematicians as a whole don’t expect us to make, or (2) build systems whose reasoning we don’t understand except as an empirical fact (e.g. human brains).
I don’t deny that (1) and (2) are plausible, but I think that, if those are the best bets we have, we should think that FOL has a good chance of being the right formalism for understanding general reasoning. Which part of this picture do you disagree with? The claim that (1) and (2) are all we have, the inference that FOL is therefore a plausible formalism, or the claim that the OP is relevant to a generic system whose reasoning is characterized by FOL?
I’m also generally skeptical of the sentiment “build an intelligence which mimics a human as closely as possible.” This is competing with the principle “build things you understand,” and I think the arguments in favor of the second are currently much stronger than those in favor of the first (though this situation could change with more evidence or argument). I think that work that improves our ability to understand reasoning (of which the OP is a tiny piece) is a good investment, for that reason.
Your characterization of the OP is also not quite fair; we don’t necessarily care about the strength of the underlying proof system, we are talking about the semantic issue: can you think thoughts like “Everything I think is pretty likely to be true” or can’t you? We would like to have a formalism in which you can articulate such thoughts, but in which the normal facts about FOL, completeness theorems, etc. still apply. That is a much more general issue than the problem of boosting a system’s proof-theoretic strength in order to boost its reasoning power.
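For concreteness, here is one way a thought of that flavour can be written down; this is only an illustrative schema (with P a probability symbol and ⌜·⌝ Gödel quoting), not necessarily the exact formalism of the OP:

\forall \varphi \;\forall a,b \in \mathbb{Q}:\quad \big(a < P(\ulcorner \varphi \urcorner) < b\big) \;\rightarrow\; P\big(\ulcorner a < P(\ulcorner \varphi \urcorner) < b \urcorner\big) = 1

Read loosely: whenever the system assigns φ a probability strictly between a and b, it is certain that it does so. That is a self-referential, "my own beliefs are calibrated" kind of claim, the sort that runs into Tarski- and Löb-style obstacles if stated naively.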
IIRC, Eliezer banned Richard from SL4 several years ago. I can’t find the thread in which Eliezer banned him, but here is a thread in which Eliezer writes (to Richard) “I am wondering whether to ban you from SL4...”
After a few counter-productive discussions with Richard, I’ve personally stopped communicating with him.
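The “bannination” is here.
EDIT: and here is Eliezer’s explanation.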
Note that I’ve personally had many productive discussions with Richard: he does have a bit of a temper, which is compounded by a history of bad experiences with communities such as SL4 and this one, but he’s a very reasonable debate partner when treated with courtesy and respect.
It says something profound about the LessWrong community that:
(a) Whenever I post a remark, no matter how innocent, Luke Muehlhauser makes a point of coming to the thread to make defamatory remarks against me personally ….. and his comment is upvoted;
(b) When I, the victim of Muehlhauser’s attack, point out that there is objective evidence to show that the defamation is baseless and unjustified …. my comment is immediately downvoted.
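It says something profound about the ten or so people who have voted on your recent comments, assuming none of the votes come from the same person.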
I was not aware of the prior history, but I tend to downvote anyone coming across as a bitter asshole with an ax to grind.
Ditto. I hypothesise that if Richard had used a few different words here and there to take out the overt barbs, he might have far more effectively achieved his objective of gaining the moral high ground and making his adversaries look bad.
I’d rather not phrase it in terms of adversaries, but the basic point that people would be more inclined to listen to Richard if he were less combative is probably accurate.
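The papers that Richard mentioned in the downstream comment are useful for understanding his view: [1, 2].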
I’m also generally skeptical of the sentiment “build an intelligence which mimics a human as closely as possible.” This is competing with the principle “build things you understand,”
Do you really think that one can build an AGI without first getting a good understanding of human intelligence, to the degree where one can be reproduced (but possibly shouldn’t be)?
Do you really think that one can build an AGI without first getting a good understanding of human intelligence, to the degree where one can be reproduced
It was possible to achieve heavier-than-air flight without reproducing the flexible wings of birds.
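Right, an excellent point. Biology can be unnecessarily messy.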
A good understanding of the design principles may be enough, or of the organisation into cortical columns and the like. The rest is partly a mess of evolutionary hacks, such as “let’s put the primary visual cortex in the back of the brain” (excuse the personification), and probably not integral to a sufficient understanding. So I guess my question would be what granularity of “understanding” you’re referring to. ‘So that it can be reproduced’ seems too low a bar: suppose we found some alien technology that we could reproduce strictly by copying it, without having any idea how it actually worked.
Do you ‘understand’ large RNNs that exhibit strange behavior because you understand the underlying mechanisms and could use them to create other RNNs?
There is a sort of trade-off: you can’t go too basic and still consider yourself to understand the higher-level abstractions in a meaningful way, just as the physical layer of the TCP/IP stack in principle encapsulates all necessary information, but is still … user-unfriendly. Otherwise we could say we understand a human brain perfectly just because we know the laws that govern it on a physical level.
I shouldn’t comment when sleep deprived … ignore at your leisure.
And since, arguably, the majority of AIs capable of human-level intelligence in the real world are indeed not those that are dependent on proof systems (but are, instead, complex systems), the argument’s importance diminishes to a vanishingly small level.
I must be missing something here, but you are saying that a plausible argument about a technology we don’t yet have makes a statement about the limits of a different form of that technology completely unimportant. It seems like there’s a big jump here from “arguable” and “majority” to “therefore this doesn’t matter.”
This is only meaningful under the assumption that the intelligence of an AI depends on the strength of its proof system.
Edit:
The intelligence of the AI? The proof system is necessary to provably keep the AI’s goals invariant. Its epistemic prowess (“intelligence”) need not be dependent on the proof system. The AI could use much weaker proof systems—or even just probabilistic tests such as “this will probably make me more powerful” for most of its self-modifying purposes, just as you don’t have a proof system that reading a certain book will increase your intelligence.
However, if we want to keep crucial properties such as the utility function as provable invariants, that’s what we’d need such a strong proof system for, by definition.
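To make “provable invariants” concrete, a toy criterion (my notation, purely for illustration) would be: an agent reasoning in a theory T accepts a rewrite A′ only if

T \vdash \forall a \,\big(\mathrm{Act}(A', a) \rightarrow \mathrm{Goal}(a)\big),

i.e. T proves that every action the successor can take satisfies the goal (or utility-preservation) predicate. The strength of T matters because Löb’s theorem blocks the obvious move of having A′ trust the very same T in turn, which is the sort of obstacle this line of work is concerned with.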
A pity that you cannot be more eloquent, or produce any argument to support your claim that “No”.
I have done both, in published papers (cf. Loosemore, R.P.W. (2007). Complex Systems, Artificial Intelligence and Theoretical Psychology. In B. Goertzel and P. Wang (Eds.), Proceedings of the 2006 AGI Workshop. Amsterdam: IOS Press; and Loosemore, R.P.W. (2011b). The Complex Cognitive Systems Manifesto. In Sean Hays, Jason Scott Robert, Clark A. Miller, and Ira Bennett (Eds.), The Yearbook of Nanotechnology, Volume III: Nanotechnology, the Brain, and the Future. New York, NY: Springer).
But don’t mind me. A voice of sanity can hardly be expected to be listened to under these circumstances.