Hi there!
I’m a 43-year-old software developer in New Zealand. I found this site through the Quantum Physics sequence, which has given me an enormous improvement in my understanding of the subject, so a huge thank you to Eliezer. (I’d like to know the detailed maths, but I don’t hold much hope of that happening.) I’ve since managed to do the double-slit experiment using a laser pointer, Blu-tack and staples, which was great fun. I’m currently trying to think through the Schrödinger’s cat experiment, which seems to me to be described slightly incorrectly. I may try to write up a page or so about that some time.
The Bayes’ Theorem stuff was also a great topic, although I’ve not been able to think of practical ways to apply it yet.
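One small illustration of the kind of update it handles, as a minimal sketch in which the test scenario and every number are made up:

```python
# A minimal worked example of Bayes' Theorem: updating on one noisy yes/no
# signal. The "screening test" scenario and all numbers are invented purely
# for illustration.

def posterior(prior, true_positive_rate, false_positive_rate):
    """P(condition | positive result), straight from Bayes' Theorem."""
    p_positive = (true_positive_rate * prior
                  + false_positive_rate * (1 - prior))
    return true_positive_rate * prior / p_positive

# A condition with a 1% base rate, tested with 95% sensitivity and a
# 10% false-positive rate.
print(posterior(prior=0.01, true_positive_rate=0.95, false_positive_rate=0.10))
# -> about 0.088: one positive result lifts the probability from 1% to
#    roughly 9%, nowhere near certainty, because false positives from the
#    other 99% dominate.
```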
I’m a pessimist on the Singularity: I think that various resource, time and complexity constraints will flatten exponential curves into linear ones (and some curves will even decline).
I’ve always valued accuracy, in the sense that we should try to find out what’s really happening and understand our evidence and assumptions. One of my main tools for thinking is the “level of confidence”: when people say “you can’t prove that”, I like to restate the issue as “this evidence gives us an extremely high level of confidence”.
I’m currently reading the Methods of Rationality story and loving it.
Hi, welcome to LW!
“I’ve since managed to do the double-slit experiment using a laser pointer, Blu-tack and staples, which was great fun.”
Neat! Details?
“I’m a pessimist on the Singularity: I think that various resource, time and complexity constraints will flatten exponential curves into linear ones (and some curves will even decline).”
It sounds like you’re referring to the Ray Kurzweil version of the Singularity. This idea gets the most press of all the ideas that call themselves “Singularitarian,” but AFAIK it’s not the most popular among AI scientists, and it’s certainly not the most popular on this site. The Kurzweil version goes something like, “eventually Moore’s law will give us huge amounts of computing power, which is all we’ll need to upload everyone and make a Singularity.”
The I.J. Good/Yudkowsky/Singularity Institute version, aka the “Intelligence Explosion,” doesn’t require Moore’s law. It requires enough understanding of intelligence and decision theory to write a self-modifying algorithm with human-level intelligence or higher. This algorithm can then write better ones, a process which can be repeated up to some high level of intelligence. The main things one needs to accept in order to believe the Intelligence Explosion hypothesis are:
Artificial General Intelligence (a piece of software as intelligent as a person) is possible and will be invented
An AGI able to rewrite its own code can improve its intelligence, including its ability to find ways to improve itself
This process can be repeated enough times to result in a superintelligent AI
A superintelligent AI will be able to make major changes to the world to satisfy its goals
Obviously, this is a very brief summary. Try here for a better and more detailed explanation.
Here’s a picture of the double-slit experiment: http://imgur.com/a/2Uyux
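As a rough guide to what a pattern like that should look like, here is a back-of-the-envelope fringe-spacing estimate. The post doesn’t give the actual wavelength, slit separation or screen distance, so the numbers below are assumptions chosen only for illustration.

```python
# Back-of-the-envelope double-slit estimate: bright fringes are spaced by
# wavelength * screen_distance / slit_separation. All three inputs below are
# assumed values, not measurements from the photo.

wavelength = 650e-9        # m, typical red laser pointer
slit_separation = 0.5e-3   # m, a plausible gap defined by staple wire
screen_distance = 2.0      # m, from the slits to the wall

fringe_spacing = wavelength * screen_distance / slit_separation
print(f"Expected bright-fringe spacing: {fringe_spacing * 1e3:.1f} mm")
# -> about 2.6 mm, which is why the pattern is visible to the naked eye.
```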
I think achieving human-level intelligence is tough but doable. I suspect that self-improvement may be very difficult. But either way, I strongly suspect that the power required to keep society ticking along will not be sustained. I think an AGI is 30 years away and that society does not have 30 years up its sleeve. I hope I am wrong.
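To make the two positions concrete, here is a toy model and nothing more: every growth rate and budget in it is invented. It only shows how the same self-improvement loop compounds when unconstrained and stalls when each improvement cycle has to draw on a finite resource budget.

```python
# Toy model, not a forecast: the same recursive self-improvement loop with and
# without a finite resource budget. Every number here is invented.

def self_improvement(generations, gain=0.2, resource_budget=None, cost=1.0):
    """Each generation improves itself in proportion to its current capability.
    If resource_budget is given, each improvement cycle consumes `cost` units
    and improvement stops once the budget is exhausted."""
    capability = 1.0
    remaining = resource_budget
    for _ in range(generations):
        if remaining is not None:
            if remaining < cost:
                break  # resources exhausted: no further self-improvement
            remaining -= cost
        capability *= 1 + gain  # better versions are better at improving themselves
    return capability

unconstrained = self_improvement(30)
constrained = self_improvement(30, resource_budget=10)
print(f"After 30 generations: unconstrained ~{unconstrained:.0f}x, "
      f"resource-limited ~{constrained:.1f}x")
# -> roughly 237x versus 6.2x: compounding gives the Intelligence Explosion
#    curve, while a hard resource ceiling flattens it early.
```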
“I think an AGI is 30 years away and that society does not have 30 years up its sleeve.”
The outside view, treating your prediction as an instance of the class of similar predictions made for centuries, suggests this is false. Do you have compelling reasons to override the outside view in this case?
The compelling reason is that this is what geologists believe, i.e. Peak Oil. Previous centuries of predictions are not relevant, as they do not relate to a decline (or not) in the production rate of today’s dominant power sources.