I’ve since managed to do the double-slit experiment using a laser pointer, Blu-tack and staples, which was great fun.
Neat! Details?
I’m a pessimist on the Singularity: I think that various resource, time and complexity constraints will flatten exponential curves into linear ones (and some curves will even decline).
It sounds like you’re referring to the Ray Kurzweil version of the singularity. This idea gets the most press of all ideas that call themselves “Singularitarian,” but AFAIK it’s not the most popular among AI scientists. It’s certainly not the most popular on this site. The Kurzweil version goes something like, “eventually Moore’s law will give us huge tons of computing power, which is all we’ll need to upload everyone and make a Singularity.”
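(A minimal sketch of the “constraints flatten exponential curves” point above, using arbitrary numbers of my own rather than anything from this exchange: a logistic curve is indistinguishable from an exponential early on, then saturates once a resource ceiling K starts to bind.)

```python
# Toy illustration: pure exponential growth versus logistic growth, which
# looks exponential at first but flattens as it approaches the ceiling K.
# The growth rate r and ceiling K are arbitrary, purely for illustration.
import math

def exponential(t, x0=1.0, r=0.5):
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=100.0):
    # Standard logistic solution: same early growth rate, capped at K.
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):8.1f}")
```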
The I.J. Good/Yudkowsky/Singularity Institute version, aka the “Intelligence Explosion,” doesn’t require Moore’s law. It requires enough understanding of intelligence and decision theory to write a self-modifying algorithm of human-level intelligence or higher. This algorithm can then write better ones, a process which can be repeated up to some high level of intelligence. The main things one needs to accept in order to believe the Intelligence Explosion hypothesis are:
Artificial General Intelligence (a piece of software as intelligent as a person) is possible and will be invented
An AGI able to rewrite its own code can improve its intelligence, including its ability to find ways to improve itself
This process can be repeated enough times to result in a superintelligent AI
A superintelligent AI will be able to make major changes to the world to satisfy its goals
Obviously, this is a very brief summary. Try here for a better and more detailed explanation.
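As a deliberately cartoonish sketch of the “repeat until superintelligent” step, not the Good/Yudkowsky model itself and with entirely made-up numbers: whether such a loop takes off or fizzles hinges on how the returns to self-improvement scale with the current level of intelligence.

```python
# My own toy model of the recursive self-improvement loop described above:
# each rewrite's payoff depends on the current intelligence level, and the
# trajectory either accelerates or stalls depending on how returns scale.
# All constants (exponent, step cost, ceiling) are arbitrary stand-ins.

def self_improvement_trajectory(start=1.0, returns_exponent=1.1,
                                step_cost=0.05, ceiling=1e6, max_steps=100):
    """Iterate 'intelligence -> intelligence + gain(intelligence)'.

    returns_exponent > 1 models improvements that get easier to find as the
    system gets smarter (accelerating); < 1 models diminishing returns.
    """
    level = start
    history = [level]
    for _ in range(max_steps):
        gain = step_cost * level ** returns_exponent
        level = min(level + gain, ceiling)
        history.append(level)
        if level >= ceiling:
            break
    return history

print("accelerating returns:", [round(x, 1) for x in self_improvement_trajectory()[::20]])
print("diminishing returns: ", [round(x, 1) for x in self_improvement_trajectory(returns_exponent=0.5)[::20]])
```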
I think achieving human-level intelligence is tough but doable. I suspect that self-improvement may be very difficult. But either way I strongly suspect that the power required to keep society ticking along will not be sustained. I think an AGI is 30 years away and that society does not have 30 years up its sleeve. I hope I am wrong.
“I think an AGI is 30 years away and that society does not have 30 years up its sleeve.”
The outside view, treating your prediction as an instance of the class of similar predictions made for centuries, suggests this is false. Do you have compelling reasons to override the outside view in this case?
The compelling reason is that this is what geologists believe, i.e. Peak Oil. Previous centuries of predictions are not relevant, as they do not relate to a decline (or not) in the production rate of today’s dominant power sources.
Hi, welcome to LW!
Here’s a picture of the double-slit experiment: http://imgur.com/a/2Uyux
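For anyone wanting to sanity-check their own setup, a rough back-of-envelope for the expected fringe spacing (roughly wavelength × screen distance ÷ slit separation), using assumed values rather than the poster’s actual ones:

```python
# Back-of-envelope fringe spacing for a homemade double-slit setup.
# All three numbers below are assumptions for illustration, not measurements.
lam = 650e-9   # m, wavelength of a typical red laser pointer (assumed)
d   = 0.2e-3   # m, assumed separation of the two slits
L   = 2.0      # m, assumed distance from slits to the wall
print(f"fringe spacing = {lam * L / d * 1000:.1f} mm")  # about 6.5 mm
```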