I’m not connected to the Singularity Institute or anything, so this is my idiosyncratic view.
Think about theorem provers such as Isabelle or ACL2. They are typically structured a bit like an expert system with a rule base and an inference engine. The axioms play the role of the rule base and the theorem prover plays the role of the inference engine. While it is easy to change the axioms, this implies a degree of interpretive overhead when it comes to trying to prove a theorem.
One way to reduce the interpretive overhead is to use a partial evaluator to specialize the prover to the particular set of axioms.
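To make the idea concrete, here is a minimal sketch (my own toy example, not drawn from any real prover) of an interpretive rule engine over propositional Horn rules, alongside a version specialized by hand to one fixed rule base, as a partial evaluator might do mechanically:

```python
# A toy "inference engine": forward chaining over propositional Horn rules.
# Each rule is (premises, conclusion). The engine re-dispatches over the
# rule base on every pass -- the interpretive overhead described above.

def prove(rules, facts, goal):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return goal in facts

# The same engine specialized (by hand here, mechanically by a partial
# evaluator) to the fixed rule base:  a & b -> c,  c -> d.
# The loop over the rule base has been unrolled away entirely.

def prove_specialized(facts, goal):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        if "c" not in facts and "a" in facts and "b" in facts:
            facts.add("c")
            changed = True
        if "d" not in facts and "c" in facts:
            facts.add("d")
            changed = True
    return goal in facts

RULES = [(("a", "b"), "c"), (("c",), "d")]
print(prove(RULES, {"a", "b"}, "d"))       # True
print(prove_specialized({"a", "b"}, "d"))  # True
```

The two functions compute the same answers; the specialized one simply no longer pays to interpret the rule-base data structure at proof time.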
Indeed, if one has a self-applicable partial evaluator one could use the second Futamura projection and, specializing the partial evaluator to the theorem prover, produce a theorem prover compiler. Axioms go in, an efficient theorem prover for those axioms comes out.
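The shape of the two projections can be sketched in a few lines of Python, using a trivial closure-returning stand-in for a real partial evaluator (a genuine specializer would emit residual code rather than a closure, and the "prover" here is deliberately trivial, but the types line up the same way):

```python
# mix(program, static) plays the role of the partial evaluator: it
# freezes program's first (static) argument, leaving the dynamic one.

def mix(program, static_input):
    return lambda dynamic_input: program(static_input, dynamic_input)

def prover(axioms, conjecture):
    # Toy "theorem prover": a conjecture is provable iff it is an axiom.
    return conjecture in axioms

AXIOMS = {"p", "q"}

# First Futamura projection: specialize the prover to the axioms.
fast_prover = mix(prover, AXIOMS)

# Second Futamura projection: specialize mix to the prover, yielding a
# "theorem prover compiler" -- axioms in, specialized prover out.
prover_compiler = mix(mix, prover)
fast_prover2 = prover_compiler(AXIOMS)

print(fast_prover("p"), fast_prover2("p"))  # True True
print(fast_prover("r"))                     # False
```

Note that the second projection requires applying `mix` to itself, which is why self-applicability of the partial evaluator is the crux.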
Self-applicable partial evaluators are bleeding-edge software technology, and current ambitions are limited to stripping out interpretive overhead, which gives only linear speed-ups. In principle a partial evaluator could recognise algorithmic inefficiencies and, by rewriting the code more aggressively, produce super-linear speed-ups.
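A toy illustration of the linear/super-linear distinction (again my own example): stripping interpretive overhead leaves the algorithm intact, whereas an aggressive specializer that noticed the rule base supports an index could change the asymptotic cost of each query.

```python
# Interpreting a lookup table "x -> x*x" as an association list costs
# O(n) per query. A specializer that merely removes dispatch overhead
# keeps the scan; one that recognises the structure could compile the
# table into an O(1) dict lookup -- a super-linear improvement per query.

PAIRS = [(i, i * i) for i in range(10_000)]

def lookup_interpreted(pairs, key):
    for k, v in pairs:          # linear scan on every call
        if k == key:
            return v
    return None

TABLE = dict(PAIRS)             # built once by the hypothetical specializer

def lookup_specialized(key):
    return TABLE.get(key)       # constant time per call

print(lookup_interpreted(PAIRS, 9_999))  # 99980001
print(lookup_specialized(9_999))         # 99980001
```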
This is my example of a critical event in AI: using a self-applicable partial evaluator and the second Futamura projection to obtain a theorem prover compiler with a super-linear speed-up compared to proving theorems in interpretive mode. This would convince me that there was progress on self-improving AI and that the clock had started counting down towards an intelligence explosion that changes everything.
How much time would be on the clock? A year? A decade? A century? Guessing wildly, I'd put my critical event at the halfway point. AI research started in 1960, so if the critical event happens in 2020 that puts the singularity at 2080.
Notice how I am both more optimistic and more pessimistic about the prospects for AI than most commentators.
I’m more pessimistic because I don’t see the current crop of wonderful, hand-crafted AI achievements, such as playing chess and driving cars, as lying on the path towards recursively improving AI. These are the Fabergé eggs of AI. They will not hatch into chickens that lay even more fabulous eggs...
I’m more optimistic because I’m willing to accept a technical achievement, internal to AI research, as a critical event. It could show that things are really moving, and that we can start to expect earth-shattering consequences, even before we’ve seen real-world impacts from the internal technical developments.
Vampire uses specialisation, according to Wikipedia:
A number of efficient indexing techniques are used to implement all major operations on sets of terms and clauses. Run-time algorithm specialisation is used to accelerate forward matching.