Very interesting story. Since I was born into an atheist family and never believed in God, I lack any similar experience, and in a way I regret it, because that experience must be a great help when it comes to changing your mind about other topics. The closest experience I have to this is the Santa Claus thing, but I was such a young child that I only have confused memories of how I started to doubt. But the process looks similar: there is a nice Santa Claus who gives me presents, I start to doubt he’s real and feel bad because I don’t want the “magic of Christmas” to go away, and then I realize that it’s something even more “magical” than elves and a flying Santa Claus going faster than light: it’s the love of my parents, who spent days going from shop to shop to find the silly present I asked for in the letter to Santa Claus that the teacher gave them… It has the three phases: belief in something supernatural that makes you happy, doubt and feeling sad, and then realizing that reality makes you even happier. But it’s so lost in the mists of early childhood that it doesn’t have the potency you describe.
Oh, on another topic, I’m still doubtful about the “Singularity”. “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’” sounds like a logical jump with no foundation to me. Let me try to explain: let’s assume we can measure intelligence as a single real number, I(M). An intelligent machine can design a better version of itself, so we have I(M_{n+1}) > I(M_n). That’s a strictly increasing sequence, and that’s all we know. A strictly increasing sequence can have a finite limit (like 1 + 1/2 + 1/4 + 1/8 + …, which converges to 2), or it can grow towards infinity very slowly (like log(n)). How do we know that designing a better intelligence is not an exponentially difficult task? How do we know that above a given level the recurrence doesn’t look like I(M_{n+1}) = I(M_n) + 1/n, because every further increase in intelligence is so much harder to achieve? I guess there is an answer to that, but I couldn’t find it in the SingInst FAQ… does any of you have a pointer to an answer to that question?
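Just to make the three possibilities concrete, here is a tiny toy simulation (Python; the rules and numbers are completely made up, purely for illustration) of three strictly increasing “self-improvement” sequences that end up in very different regimes:

    # Three toy "self-improvement" rules. All are strictly increasing,
    # yet one converges, one diverges only logarithmically, one explodes.

    def simulate(step, i0=1.0, steps=10_000):
        """Iterate I_{n+1} = step(I_n, n) and return the final value."""
        i = i0
        for n in range(1, steps + 1):
            i = step(i, n)
        return i

    # 1. Bounded: each generation closes half the remaining gap to a ceiling of 2.
    bounded = simulate(lambda i, n: i + (2.0 - i) / 2.0)

    # 2. Unbounded but very slow: I_{n+1} = I_n + 1/n grows like log(n).
    slow = simulate(lambda i, n: i + 1.0 / n)

    # 3. Explosive: each generation is 10% "smarter" than the last.
    explosive = simulate(lambda i, n: i * 1.1, steps=200)

    print(f"bounded:   {bounded:.6f}")    # ~2.0
    print(f"slow:      {slow:.1f}")       # ~10.8 after 10,000 steps
    print(f"explosive: {explosive:.3e}")  # ~1.9e+08 after only 200 steps

All three satisfy I(M_{n+1}) > I(M_n), so the inequality alone doesn’t tell us which regime we are in.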
How do we know that designing a better intelligence is not an exponentially difficult task?
Well, the answer could simply be, “you’re right; we don’t know that”. However, I think there is evidence that an ultraintelligent machine could make itself very intelligent indeed.
The human mind, though better at reasoning than anything else that currently exists, still has a multitude of flaws. We can’t symbolically reason at even a millionth the speed of a $15 cell phone (and even if we could, there are still unanswered questions about how to reason), and our intuition is loaded with biases. If you could eliminate all human flaws, you would end up with something more intelligent than the most intelligent human that has ever lived.
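To put a very rough (and admittedly hand-wavy) number on that gap, assuming something like 10^8 instructions per second for a cheap phone’s processor and about one deliberate symbolic step per second for a person:

    # Back-of-envelope only; both figures are rough assumptions, not measurements.
    phone_ops_per_sec = 1e8    # a cheap, ~100 MHz-class phone processor
    human_steps_per_sec = 1.0  # deliberate symbolic steps (e.g. mental arithmetic)

    gap = phone_ops_per_sec / human_steps_per_sec
    print(f"speed gap: ~{gap:.0e}x")  # ~1e+08, so "a millionth" is if anything generous

Even if those guesses are off by a couple of orders of magnitude, the conclusion barely changes.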
Also, I could be mistaken, but I think people who study rationality and mathematics (among other things?) tend to report increasing marginal utility: once they understand a concept, it becomes easier to understand other concepts. A machine capable of understanding trillions of concepts might be able to learn new ones very easily compared to a human.
There are lots of words on the subject in the FOOM debate, but it (1) is full of “intuition, examples, and hand waving” on both sides, (2) ended with neither side convincing the other, and (3) produced no formal, coherent treatise on the subject where evidence could be dropped into place to give an unambiguous answer that a third party could see was obviously true. It is worth a read if you’re looking for an intuition pump, not if you want a summary answer.
If you want to examine it from another angle to think about timing and details and so on, you might try using The Uncertain Future modeling tool. If you have the time to feed it input, I’m curious to know what output you get :-)
It seems to me that I’m both pessimistic and optimistic (or anyway, not well calibrated). I got:
Catastrophe by 2070: 65.75%
AI by 2070: 98.3%
I would have given much less to both (around 25%-33% for catastrophe, and around 50%-75% for AI) if you had asked me directly… so I’m badly calibrated, either in the way I answered the individual questions, or in my final estimate (most likely both...). I’ll have to read the FOOM debate and think more about the issue. Thanks for the pointers anyway.
(Btw, it’s a pain that the applet doesn’t support copy/paste...)
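To illustrate what I mean (with completely made-up numbers; I don’t know how The Uncertain Future actually combines its inputs), chaining several individually plausible answers can end up far from a single gut estimate:

    # Toy example: five sub-questions, each answered with what feels like a
    # reasonable 80%, compound into a composite far from a holistic guess.
    sub_estimates = [0.8, 0.8, 0.8, 0.8, 0.8]  # made-up per-question answers

    composite = 1.0
    for p in sub_estimates:
        composite *= p

    print(f"composite: {composite:.1%}")  # 32.8%, vs. a gut feeling of maybe 60-70%

Small miscalibrations on the individual questions can compound into a composite that looks nothing like the holistic number, in either direction.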
If you could eliminate all human flaws, you would end up with something more intelligent than the most intelligent human that has ever lived.
You might end up with nothing. You would really have to start over and build an inference machine vastly different from ours.
This seems true… but it doesn’t argue against a bounded intelligence, just that the bound is very far away.