I’d just forget the majoritarian argument altogether; it’s a distraction.
The second question does seem important to me; I too am skeptical that an AI would “obviously” have the capacity to recursively self-improve.
The counter-argument is summarized here: whereas we humans are stuck with an implementation substrate that was never designed for understandability, an AI could be endowed with both a more manageable internal representation of its own capacities and a specifically designed capacity for self-modification.
It’s possible—and I find it intuitively plausible—that there is some inherent general limit to a mind’s capacity for self-knowledge, self-understanding and self-modification. But an intuition isn’t an argument.
I see Yoreth’s version of the majoritarian argument as ahistorical. The US Government did put a lot of money into AI research and became disillusioned. Daniel Crevier wrote a book, AI: The Tumultuous History of the Search for Artificial Intelligence. It is a history book. It was published in 1993, 17 years ago.
There are two possible responses. One might argue that time has moved on, things are different now, and there are serious reasons to distinguish today’s belief that AI is around the corner from yesterday’s belief that AI is around the corner. Wrong then, right now, because...
Alternatively one might argue that scaling died at 90 nanometers, practical computer science is just turning out Java monkeys, the low hanging fruit has been picked, there is no road map, theoretical computer science is a tedious sub-field of pure mathematics, partial evaluation remains an esoteric backwater, theorem provers remain an esoteric backwater, the theorem proving community is building the wrong kind of theorem provers and will not rejuvenate research into partial evaluation,...
The lack of mainstream interest in explosive developments in AI is due to getting burned in the past. Noticing that the scars are not fading is very different from being unaware of AI.
I’m reminded of a historical analogy from reading Artificial Addition. Think of it this way: a society that believes addition is the result of adherence to a specific process (or a process isomorphic thereto), and understands part of that process, is closer to creating “general artificial addition” than one that tries to achieve “GAA” by cleverly avoiding the need to discover this process.
We can judge our own distance to artificial general intelligence, then, by the extent to which we have identified constraints that intelligent processes must adhere to. And I think we’ve seen progress on this in the form of a more refined understanding of, e.g., how to apply Bayesian inference. For example, Sebastian Thrun’s work on seamlessly aggregating knowledge across sensors into a coherent picture of the environment has produced tangible results (navigating the desert).
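For the flavour of what I mean, here is a minimal sketch (my own toy example, not anything from Thrun’s work; the sensor names and numbers are made up) of the simplest kind of Bayesian sensor fusion: two independent, noisy Gaussian range readings combined by multiplying likelihoods, which reduces to precision-weighted averaging.

```python
# Minimal Bayesian sensor fusion sketch (toy example, not Thrun's method):
# two independent Gaussian readings of the same range, combined by
# multiplying their likelihoods, i.e. precision-weighted averaging.

def fuse(mean_a, var_a, mean_b, var_b):
    """Posterior over the true range given two independent Gaussian
    measurements and a flat prior: precisions add, means are
    precision-weighted."""
    precision = 1.0 / var_a + 1.0 / var_b
    mean = (mean_a / var_a + mean_b / var_b) / precision
    return mean, 1.0 / precision

# A tight lidar reading of 10.2 m and a sloppy sonar reading of 9.1 m:
# the fused estimate leans toward the more trustworthy sensor and ends up
# more certain than either reading alone.
mean, var = fuse(10.2, 0.1, 9.1, 1.0)
print(round(mean, 2), round(var, 3))   # -> 10.1 0.091
```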
Can you point me to an overview of this understanding? I would like to apply it to the problem of detecting different types of data in a raw binary file.
I don’t know of a good one. You could try this, but it’s light on the math. I’m looking through Thrun’s papers to find a good one that gives a simple overview of the concepts, and through the CES documentation.
I was introduced to this advancement in EY’s Selling nonapples article.
And I’m not sure how this helps for detecting file types. I mean, I understand generally how they’re related, but not how it would help with the specifics of that problem.
Thanks, I’ll have a look. I’m looking for general-purpose insights. Otherwise you could use the same sort of reasoning to argue that the technology behind Deep Blue was on the right track.
True, the specific demonstration of Thrun’s that I referred to was navigating a terrestrial desert environment, but that is a much more general problem than chess, and it had to deal with probabilistic data and uncertainty. The techniques detailed in Thrun’s papers easily generalize beyond robotics.
I’ve had a look, and I don’t see anything much that will make the techniques easily generalize to my problems (or any problem with similar characteristics to mine, such as very large amounts of possibly relevant data). Oh, I am planning to use Bayesian techniques. But easy is not how I would characterize translating the problem.
Now that you mention it, one of the reasons I’m trying to get acquainted with the methods Thrun uses is to see how much they rely on advance knowledge of exactly how the sensor works (i.e. its true likelihood function). Then, I want to see if it’s possible to infer enough relevant information about the likelihood function (such as through unsupervised learning) so that I can design a program that doesn’t have to be given this information about the sensors.
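As a toy illustration of the direction I mean (my own sketch, not anything from Thrun’s papers; it assumes a static scene and roughly Gaussian noise): given only repeated raw readings from the sensor, one can fit the parameters of an assumed likelihood model from the unlabeled data itself, rather than being handed the sensor’s noise specification.

```python
# Toy sketch: learn a sensor's likelihood model from unlabeled readings.
# Assumptions (mine, for illustration only): the scene is static and the
# noise is roughly Gaussian; only its scale is unknown to the program.
import numpy as np

rng = np.random.default_rng(0)
true_distance = 4.2
readings = true_distance + rng.normal(0.0, 0.3, size=500)  # noise sd unknown to us

# "Unsupervised" step: estimate the likelihood's parameters from the data.
sigma_hat = readings.std(ddof=1)

def likelihood(reading, hypothesized_value):
    """p(reading | value) under the learned Gaussian noise model."""
    z = (reading - hypothesized_value) / sigma_hat
    return np.exp(-0.5 * z * z) / (sigma_hat * np.sqrt(2 * np.pi))

print(f"estimated noise sd: {sigma_hat:.3f} (true value was 0.3)")
```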
And that’s starting to sound more similar to what you would want to do.
That’d be interesting. More posts on the real-world use of Bayesian models would be good for LessWrong, I think.
But I’m not sure how relevant it is to my problem. I’m in the process of writing up my design deliberations, and you can judge better once you have read them.
Looking forward to it!
The reason I say that our problems are related is that inferring the relevant properties of a sensor’s likelihood function looks like a standard case of finding out how the probability distribution clusters. Your problem, that of identifying a file type from its binary bitstream, is doing something similar: finding which file types have which probability-distribution clusters.
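To make the parallel concrete, here is a rough sketch of that framing (my own construction, not an existing tool; the labels and training data are invented): summarize each chunk of a file by its byte-value distribution, and assign it to whichever known file-type cluster that distribution sits closest to.

```python
# Rough sketch of file-type detection as distribution clustering:
# represent a chunk by its smoothed byte histogram, then pick the
# file-type centroid with the smallest KL divergence.
import numpy as np

def byte_histogram(chunk: bytes) -> np.ndarray:
    counts = np.bincount(np.frombuffer(chunk, dtype=np.uint8), minlength=256)
    return (counts + 1) / (len(chunk) + 256)        # add-one smoothing

def train_centroids(labelled_chunks):
    """labelled_chunks: dict mapping a file-type label to a list of chunks."""
    return {label: np.mean([byte_histogram(c) for c in chunks], axis=0)
            for label, chunks in labelled_chunks.items()}

def classify(chunk, centroids):
    h = byte_histogram(chunk)
    # Smallest KL divergence to a centroid wins -- one crude notion of
    # "which probability-distribution cluster does this chunk belong to".
    kl = {label: float(np.sum(h * np.log(h / centroid)))
          for label, centroid in centroids.items()}
    return min(kl, key=kl.get)

centroids = train_centroids({
    "text": [b"hello world, plain ascii text " * 50],
    "random/compressed": [bytes(np.random.randint(0, 256, 2000, dtype=np.uint8))],
})
print(classify(b"some other english sentence " * 40, centroids))  # should print "text"
```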
I know of partial evaluation in the context of optimization, but I hadn’t previously heard of much connection between that and AI or theorem provers. What do you see as the connection?
Or, more concretely: what do you think would be the right kind of theorem provers?
I think I made a mistake in mentioning partial evaluation. It distracts from my main point. The point I’m making a mess of is that Yoreth asks two questions:

If the coming singularity is such a monumental, civilization-filtering event, why is there virtually no mention of it in the mainstream? If it is so imminent, so important, and furthermore so sensitive to initial conditions that a small group of computer programmers can bring it about, why are there not massive governmental efforts to create seed AI?
I read (mis-read?) the rhetoric here as containing assumptions that I disagree with. When I read/mis-read it I feel that I’m being slipped the idea that governments have never been interested in AI. I also pick up a whiff of “the mainstream doesn’t know, we must alert them.” But mainstream figures such as John McCarthy and Peter Norvig know and are refraining from sounding the alarm.
So partial evaluation is a distraction and I only made the mistake of mentioning it because it obsesses me. But it does! So I’ll answer anyway ;-)
Why am I obsessed? My Is Lisp a Blub post suggests one direction for computer programming language research. Less speculatively, three important parts of computer science are compiling (i.e. hand compiling), writing compilers, and tools such as Yacc for compiling compilers. The three Futamura projections provide a way of looking at these three topics. I suspect it is the right way to look at them.
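For anyone who hasn’t met the idea, here is a deliberately tiny sketch of the flavour of the first Futamura projection (my own toy, nothing like a real partial evaluator): specializing an interpreter to a fixed program yields something that behaves like a compiled version of that program.

```python
# Toy illustration of the first Futamura projection: specializing an
# interpreter to a fixed program gives a "compiled" version of it.

# A tiny straight-line language: each instruction is ("add", k) or ("mul", k).
def interpret(program, x):
    """The general interpreter: walks the program at run time."""
    for op, k in program:
        if op == "add":
            x = x + k
        elif op == "mul":
            x = x * k
    return x

def specialize(program):
    """'mix(interpret, program)': the program is known statically, so the
    dispatch over opcodes is paid once, here, and the result is a chain of
    plain Python closures -- a crude 'compiled' form of the program."""
    steps = []
    for op, k in program:
        if op == "add":
            steps.append(lambda x, k=k: x + k)
        elif op == "mul":
            steps.append(lambda x, k=k: x * k)
    def compiled(x):
        for step in steps:
            x = step(x)
        return x
    return compiled

prog = [("mul", 3), ("add", 1)]        # f(x) = 3x + 1
print(interpret(prog, 5))              # 16, dispatching every time
print(specialize(prog)(5))             # 16, dispatch already done
```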
Lambda the Ultimate had an interesting thread on the type-system feature-creep death-spiral. Look for the comment by Jacques Carette at Sun, 2005-10-30 14:10, linking to Futamura’s papers. So there is the link to having a theorem prover inside a partial evaluator.
Now partial evaluation looks like it might really help with self-improving AI. The AI might look at its source, realise that the compiler it is using to compile itself is weak because it is a Futamura-projection-based compiler with an underpowered theorem prover, prove some of the theorems itself, recompile, and start running faster.
Well, maybe, but the overviews I’ve read of the classic text by Jones, Gomard, and Sestoft make me think that the state of the art only offers linear speed-ups. If you write a bubble sort and use partial evaluation to compile it, it stays order n squared. The theorem prover will never transform it into an n log n algorithm.
I’m trying to learn ACL2. It is a theorem prover, and you can do things such as proving that quicksort and bubble sort agree. That is a nice result, and you can imagine it fitting into a bigger picture. The partial evaluator wants to transform a bubble sort into something better, and the theorem prover can anoint the transformation as correct. I see two problems.
First, the state of the art is a long way from being automatic. You have to lead the theorem prover by the hand. It is really just a proof checker. Indeed, the ACL2 book says:

You are responsible for guiding it, usually by getting it to prove the necessary lemmas. Get used to thinking that it rarely proves anything substantial by itself.

It is a long way from proving (bubble sort = quicksort) on its own.
Second, that doesn’t actually help. There is no sense of performance here. It only says that the two sorts agree, without saying which is faster. I can see a way to fix this. ACL2 can be used to prove that interpreters conform to their semantics. Perhaps it can be used to prove that an instrumented interpreter performs a calculation in fewer than n log n cycles. Lifting the proofs from proofs about programs to proofs about interpreters running programs would thus allow ACL2 to talk about performance.
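To make the idea concrete, here is a sketch of the kind of instrumentation I mean, in Python rather than ACL2 and counting comparisons rather than real cycles. The point is only that once the harness counts steps, a statement like “fewer than c · n log n steps” at least becomes expressible; actually proving it is a separate matter.

```python
# Sketch of an instrumented run: count primitive "steps" (here, comparisons)
# so that performance claims become statements about observable counts.
import math

class Counter:
    def __init__(self):
        self.steps = 0
    def tick(self):
        self.steps += 1

def bubble_sort(xs, c):
    xs = list(xs)
    n = len(xs)
    for i in range(n):
        for j in range(n - 1 - i):
            c.tick()                      # one comparison = one "cycle"
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def merge_sort(xs, c):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid], c), merge_sort(xs[mid:], c)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        c.tick()
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

data = list(range(512, 0, -1))            # a bad input for bubble sort
for sort in (bubble_sort, merge_sort):
    c = Counter()
    sort(data, c)
    n = len(data)
    print(sort.__name__, c.steps, "steps; n log n ~", int(n * math.log2(n)))
```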
This solution to problem two strikes me as infeasible. ACL2 cannot cope with the base level without hand holding, which I have not managed to learn to give. I see no prospect of lifting the proofs to include performance without adding unmanageable complications.
Could performance issues be built into a theorem prover, so that it natively knows that quicksort is faster than bubble sort, without having to pass its proofs through a layer of interpretation? I’ve no idea. I think this is far ahead of the current state of computer science. I think it is preliminary to, and much simpler than, any kind of self-improving artificial intelligence. But that is what I had in mind as the right kind of theorem prover.
There is a research area of static analysis and performance modelling. One of my Go-playing buddies has just finished a PhD in it. I think he hopes to use the techniques to tune up the performance of the TCP/IP stack. I think he is unaware of and uninterested in theorem provers. I see computer science breaking up into lots of little specialities, each of which takes half a lifetime to master. I cannot see the threads being pulled together until the human lifespan is 700 years instead of 70.
Ah, thanks, I see where you’re coming from now. So ACL2 is pretty much state-of-the-art from your point of view, but as you point out, it needs too much handholding to be widely useful. I agree, and I’m hoping to build something that can perform fully automatic verification of nontrivial code (though I’m not focusing on code optimization).
You are right, of course, that proving quicksort is faster than bubble sort is considerably more difficult than proving the two are equivalent.
But the good news is, there is no need! All we need to do to check which is faster is throw some sample inputs at each and run tests. To be sure, that approach is fallible, but what of it? The optimized version only needs to be probably faster than the original. A formal guarantee is only needed for equivalence.
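Roughly what I have in mind, as a sketch (the function names are mine, made up for illustration): time both candidates on sample inputs and keep whichever wins the majority of trials. No guarantee, but that is the point; the optimizer only needs to be probably right.

```python
# Sketch of the "just run tests" check: time two candidate implementations
# on sampled inputs and take a majority vote. Not a proof of anything.
import random
import timeit

def bubble_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def probably_faster(f, g, make_input, trials=10):
    wins = 0
    for _ in range(trials):
        data = make_input()
        tf = timeit.timeit(lambda: f(data), number=1)
        tg = timeit.timeit(lambda: g(data), number=1)
        wins += tf < tg
    return wins > trials // 2          # majority vote, not a guarantee

sample = lambda: [random.random() for _ in range(1000)]
print(probably_faster(sorted, bubble_sort, sample))   # True: sorted() wins
```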
“But the good news is, there is no need! All we need to do to check which is faster is throw some sample inputs at each and run tests.”
“No need”? Sadly, it’s hard to use such simple methods as anything like a complete replacement for proofs. As an example which is simultaneously extreme and simple to state, naive quicksort has good expected asymptotic performance, but its (very unlikely) worst-case performance falls back to bubble-sort-level, quadratic time. Thus, if you use quicksort naively (without, e.g., randomizing the input in some way) somewhere where an adversary has strong influence over the input seen by quicksort, you can create a vulnerability to a denial-of-service attack. This is easy to understand with proofs, and not so easy either to detect or to quantify with random sampling. Also, the pathological input has low Kolmogorov complexity, so the universe might well happen to give it to your system accidentally, even in situations where you aren’t faced with an actual malicious, intelligent “adversary.”
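To illustrate (a toy demonstration of my own, not a benchmark): a naive quicksort that always picks the first element as pivot looks fine on random samples, but does quadratically many partition steps on an input as mundane as an already-sorted list.

```python
# Toy demonstration of the pathological case: naive first-element-pivot
# quicksort, counting how many elements get partitioned against a pivot.
import random
import sys

sys.setrecursionlimit(5000)    # the sorted case recurses about n levels deep

def naive_quicksort(xs, counter):
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    counter[0] += len(rest)                 # work done partitioning this call
    lo = [x for x in rest if x < pivot]
    hi = [x for x in rest if x >= pivot]
    return naive_quicksort(lo, counter) + [pivot] + naive_quicksort(hi, counter)

n = 1000
for name, data in [("random input", random.sample(range(n), n)),
                   ("already sorted", list(range(n)))]:
    counter = [0]
    naive_quicksort(data, counter)
    print(name, counter[0])
# random input:   on the order of n log n partition steps
# already sorted: about n^2 / 2 -- the quadratic worst case
```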
Also sadly, we don’t seem to have very good standard technology for performance proofs. Some years ago I made a horrendous mistake in an algorithm preprint, and later came up with a revised algorithm. I also spent more than a full-time week studying and implementing a published class of algorithms, only to conclude that I had wasted my time because the published performance claim is provably incorrect. Off and on since then I’ve put some time into looking at automated proof systems and the practicalities of proving asymptotic performance bounds. The original poster mentioned ACL2; I’ve looked mostly at HOL Light (for ordinary math proofs) and to a lesser extent Coq (for program/algorithm proofs). The state of the art for program/algorithm proofs doesn’t seem terribly encouraging. Maybe someday it will be a routine master’s thesis to, e.g., gloss Okasaki’s Purely Functional Data Structures with corresponding performance proofs, but we don’t seem to be quite there yet.
Part of the problem with these is that there are limits to how much can be proven about correctness of programs. In particular, the general question of whether two programs will give the same output on all inputs is undecidable.
Proposition: There is no Turing machine which, when given the descriptions of two Turing machines, accepts iff the two machines agree on all inputs.
Proof sketch: Consider our hypothetical machine A that accepts a pair of descriptions iff they correspond to two Turing machines which agree on all inputs. We shall show how to construct from A a machine H which would solve the halting problem. Note that for any given machine D we can construct a machine [D, s] which mimics D when fed input string s (simply append states to D so that the machine first erases everything on the tape, writes out s on the tape, and then executes the normal procedure of D). Then, to determine whether a given machine T accepts a given input s, ask machine A whether [T, s] agrees with the machine that always accepts. Since we’ve now constructed a Turing machine which solves the halting problem, our original assumption, the existence of A, must be false.
There are other theorems of a similar nature that can be proven with more work. The upshot is that in general, there are very few things that a program can say about all programs.
Wouldn’t it have been easier to just link to Rice’s theorem?
I didn’t remember the name of the theorem, and my Google-fu is weak.
True. Test inputs suffice for an optimizer that on average wins more than it loses, which is good enough to be useful, but if you want guaranteed efficiency, that comes back to proof, and the current state-of-the-art is a good way short of doing that in typical cases.
Partial evaluation is interesting to me in an AI sense. If you haven’t, have a look at the three projections of Futamura.
But instead of compilers and language specifications you have learning systems and problem specifications. Or something along those lines.
Right, that’s optimization again. Basically the reason I’m asking about this is that I’m working on a theorem prover (with the intent of applying it to software verification), and if Alan Crowe considers current designs the wrong kind, I’m interested in ideas about what the right kind might be, and why. (The current state of the art does need to be extended, and I have some ideas of my own about how to do that, but I’m sure there are things I’m missing.)
Why is the word obviously in quotes?
Because I am not just saying it’s not obvious an AI would recursively self-improve, I’m also referring to Eliezer’s earlier claims that such recursive self-improvement (aka FOOM) is what we’d expect given our shared assumptions about intelligence. I’m sort-of quoting Eliezer as saying FOOM obviously falls out of these assumptions.
I’m worried about the “sort-of quoting” part. I get nervous when people put quote marks around things that aren’t actually quotations of specific claims.
Noted, and thanks for asking. I’m also somewhat over-fond of scare quotes to denote my using a term I’m not totally sure is appropriate. Still, I believe my clarification above is sufficient that there isn’t any ambiguity left now as to what I meant.