My estimate of the core technology working would be “it simply looks like it should work”, which in terms of calibration should probably go to 90% or 80% or something like that.
Estimates of cryonics organizations staying alive are outside the range of my comparative advantage in predictions, but I’ll note that I tend to think in terms of them staying around for 30 years, not 300 years.
The weakest link in the chain is humankind’s overall probability of surviving. This is generally something I’ve refused to put a number on, with the excuse that I don’t know how to estimate the probability of doing the “impossible”—though for those who insist on using silly reference classes, I should note that my success rate on the AI-Box Experiment is 60%. (It’s at least possible, though, that once you’re frozen, you would have no way of noticing all the Everett branches where you died—there wouldn’t be anyone who experienced that death.)
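The estimate above treats cryonics working as a conjunction of three factors, only the first of which is given a number. A minimal sketch of that arithmetic, with the other two probabilities as pure placeholders rather than anyone's stated estimates:

```python
# Rough sketch of the conjunctive estimate above. All numbers are
# illustrative placeholders, not anyone's actual stated probabilities.
p_core_tech = 0.85          # "it simply looks like it should work" (80-90%)
p_org_survives = 0.50       # cryonics organization lasting long enough
p_humanity_survives = 0.30  # the "weakest link"; pure placeholder

p_cryonics_works = p_core_tech * p_org_survives * p_humanity_survives
print(f"{p_cryonics_works:.2f}")  # -> 0.13 under these assumptions
```

The point of the multiplication is that the overall estimate is dominated by whichever factor is smallest, which is why the survival-of-humanity term matters so much.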
Ha, cryonics as an outcome lens for quantum immortality? I find that surprisingly intuitive.
Well, I look at it this way:
I place the odds of humans actually being able to resuspend a frozen corpse near zero.
Therefore, in order for cryonics to work, we would need some form of information-capture technology that could scan the intact frozen brain and model the synaptic information in a form that could be ‘played.’ This is equivalent to the technology needed for uploading.
Given the complicated nature of whole brain simulations, some form of ‘easier’ quick and dirty AI is vastly more likely to come into being before this could take place.
I place the odds of this AI being friendly near zero. This might be where our calculations diverge.
In terms of ‘Everett branches’, one can never ‘experience’ being dead, so if we’re going to go that route, we might as well say that we all live on in some branch where FAI was developed in time to save us… needless to say, this gets a bit silly as an argument for real decisions.
By default AI isn’t friendly, but, independent of SIAI succeeding, does it really make sense to have 99% confidence that humanity as a whole will fail to correctly do a given thing critical to our survival, or that FAI is impossibly difficult not merely for humans but for the gradually enhanced transhumans into which humanity could technologically self-modify if we don’t wipe ourselves out? If we knew how to cheaply and synthetically create ‘clicks’ of the type discussed in this post, we would already have the tech to avoid UFAI indefinitely, enabling massive self-enhancement prior to work on FAI.
I actually did reflect after posting that my probability estimate was ‘overconfident,’ but since I don’t mind being embarrassed if I’m wrong, I’m placing it where I actually believe it to be. Many posts on this blog have been dedicated to explaining how difficult the task of FAI is and how few people are capable of making meaningful contributions to the problem. There seems to be a panoply of ways for even minute missteps to make things go horribly wrong. I think 1 in 10,000, or even 1 in a million, is generous enough odds that the problem is still worth looking at (given what’s at stake). Perhaps you have a problem with the mind-set of low probabilities, like it’s pessimistic and self-defeating? Also, do you really believe uploading could occur before AI?
I would be very surprised if uploading were easier than AI, maybe slightly more surprised than I would be by cold fusion being real, but with the sort of broad probabilities I use that’s still a bit over 1%. AGI is terribly difficult too. It’s not FAI or uploading, but very high-caliber people have failed over and over.
The status quo points to AGI before FAI, but the status quo continually changes, both due to trends and due to radical surprises. The world wouldn’t have to change more radically than it has numerous times in the past for the sanity waterline to rise far enough that people capable of making significant progress towards AGI reliably understood that they needed to aim for FAI or for uploading instead. Once, Newton could unsurprisingly be a Christian theist and an alchemist. By the mid-20th century the priors against Einstein being a theist were phenomenal, and in fact he wasn’t one (his Spinozism is closer to what we call atheism than what most people call atheism is). I don’t think that extremely low probabilities are self-defeating for me, though they might be for some people; I just disagree with them.
I would be very surprised if uploading was easier than AI
Do you mean “easier than AGI”? Why? With enough computing power, the hardest thing to do would probably be to supply the sensory inputs and do something useful with the motor outputs. With destructive uploading you don’t even need nanotech. It doesn’t seem like it requires any incredible new insights into the brain or intelligence in general.
Uploading is likely to require a lot of basic science, though not the depth of insight required for AGI. That same science will also make AGI much easier, while most progress towards AGI contributes less, though not nothing, to uploading.
With all the science done, there is still a HUGE engineering project. Engineering is done in near mode but is very easy to talk about in far mode. People hand-wave the details and assume that it’s a matter of throwing money at the problem, but large, technically demanding engineering projects fail or are greatly delayed all the time even when they have money, and large novel projects have a great deal of difficulty attracting large amounts of funding.
GOFAI is like trying to fly by flapping giant bird wings with your arms. Magical thinking.
Evolutionary approaches to AI are like platinum jet-packs. Simple, easy to make, inordinately expensive and stupidly hard to control.
Uploading is like building a bird from scratch. It would definitely work really well if people could just get all the bugs out, but it’s big, complicated, and insanely expensive, and judging by history there will be lots of bugs.
Neuromorphic AI is like trying to build a bird while looking for insights and then building an airplane when you understand how birds work.
FAI is like trying to build a floating magnetic airship. It casually sounds like something that is significantly more likely to be possible than not, but we have very little idea in practice how it’s done, nothing in nature to imitate, and no promise that the necessary high-level insights required to pull it off are humanly achievable. OTOH, since we haven’t looked very hard as a species, we also have no good reason to think they aren’t, so it basically falls to your priors.
I think the primary point overlooked when thinking about uploads is that there are milestones along the way that will greatly increase funding and overall motivation. I’m confident that if a rough mouse brain could be uploaded then the response from governments and the private sector would be tremendous. There are plenty of smart people and organizations in the world that would understand the potential of human uploads once basic feasibility had been demonstrated. The engineering project would still be daunting, of course, but the economic incentive would plainly be seen as the greatest in history.
Sorry, but with today’s industry and government sectors I don’t buy it. Not for uploads, not for aging. This awareness already happened with MNT, but it didn’t have the effect in question.
Successfully uploading a mouse brain—and possibly also the radical extension of the lifespan of a mouse—would seem to me like it’d get as much media attention as Dolly the Sheep did. Has there been some MNT demonstration that would’ve gotten an equivalent amount of publicity?
Though judging from the reaction to Dolly, the reaction might be an anti-uploading backlash just as well as a positive one.
MNT == molecular nanotechnology?
Ayup.
There’s awareness of MNT, but feasibility of the more extreme possibilities hasn’t been demonstrated adequately for heavy investment. The roadmap from mouse brain to human brain is also much, much clearer than the roadmap from here to full fledged MNT.
If you want to learn more about WBE and the challenges ahead, this is probably the best place to start:
Whole Brain Emulation: A Roadmap by Nick Bostrom and Anders Sandberg
It doesn’t seem like it requires any incredible new insights into the brain or intelligence in general.
I think that’s why Vassar is betting on AGI: it requires insight, but the rest of the necessary technology is already here. Uploading requires an engineering project involving advances in cryobiology, ultramicrotomes, scanning electron microscopes, and computer processors. There’s no need for new insight, but the required technology advances are significant.
Who are the people capable of making significant progress on AGI who aren’t already aware of (and indeed working on) FAI? My impression was that the really smart “MIT-type” AI people were basically all working on narrow AI.
Your argument is interesting, but I’m not sure if you arrived at your 1% estimate by specific reasoning about uploading/AI, or by simply arguing that paradigmatic ‘surprises’ occur frequently enough that we should never assign more than a 99% chance to something (theoretically possible) not happening.
I can conceive of many possible worlds (given AGI does not occur) in which the individual technologies needed to achieve uploading are all in place, and yet are never put together for that purpose due to general human revulsion. I can also conceive of global-political reasons that will throw a wrench in tech-development in general. Should I assign each of those a 1% probability just because they are possible?
Also, no offense meant to you or anyone else here, but I frequently wonder how much bias there is in this in-group of people who like to think about uploading/FAI towards believing that it will actually occur. It’s a difficult thing to gauge, since it seems the people best qualified to answer questions about these topics are the ones most excited by, and invested in, the positive outcomes. I mean, if someone looks at the evidence and becomes convinced that the situation is hopeless, they are much less likely to get involved in bringing about a positive outcome and more likely to rationalize all this away as either crazy or likely to occur so far in the future that it won’t bother them. Where do you go for an outside view?
Paradigmatic surprises vary a lot in how dramatic they are. X-rays and the double slit deserved WAY lower probabilities than 1%. I’m basically going on how convincing I find the arguments for uploading first and trying to maintain calibrated confidence intervals. I would not bet 99:1 against uploading happening first. I would bet 9:1 without qualm. I would probably bet 49:1. I find it very easy to tell personally credible stories (no outlandish steps) where uploading happens first for good reasons. The probability of any of those stories happening may be much less than 1%, but they probably constitute exemplars of a large class.
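For reference, the betting odds quoted above translate into probabilities as follows (just the arithmetic, nothing about the object-level question):

```python
# Converting "N:1 against" betting odds into implied probabilities.
def odds_to_prob(n_against: float) -> float:
    """Probability implied by odds of n_against : 1 against an event."""
    return 1.0 / (n_against + 1.0)

for n in (9, 49, 99):
    print(f"{n}:1 against -> p = {odds_to_prob(n):.3f}")
# 9:1 -> 0.100, 49:1 -> 0.020, 99:1 -> 0.010
```

So being willing to bet 9:1 but not 99:1 brackets the probability of uploading-first somewhere between roughly 1% and 10%.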
Assigning a 1% probability to uploading not happening in a given decade when it could happen, due to politics and/or revulsion, seems much too low. Decade-to-decade correlations could be pretty high, but not plausibly near 1, so given civilization’s long-term survival, uploading is inevitable once the required tech is in place; but it’s silly to assume civilization’s long-term survival.
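The compounding argument above can be sketched under the simplest (fully independent) assumption; the per-decade figure here is an illustrative guess, not a number from the discussion:

```python
# If each decade independently had probability p_miss of uploading *not*
# happening, the chance it never happens over n decades would be p_miss ** n.
# High (but sub-1) decade-to-decade correlation pushes the true value up,
# yet it still decays toward zero in the long run.
p_miss = 0.90  # illustrative per-decade probability of "no uploading"
for n in (3, 10, 30):
    print(f"after {n} decades: {p_miss ** n:.3f}")
# -> 0.729, 0.349, 0.042
```

This is why "inevitable, conditional on long-term survival" is compatible with a high chance of it not happening in any particular decade.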
I don’t really think that outside views are that widely applicable a methodology, and if there isn’t an obvious place to look for one, there probably isn’t one. The buck for judgment and decision-making has to stop somewhere, and stopping with deciding on reference classes seems silly in most situations. That said, I share your concern. I’m sure that there is a bias in the community of interested people, but I think that the community’s most careful thinkers can and do largely avoid it. I certainly think bad outcomes are more likely than good ones, but I think that the odds are around 2:1 rather than 100:1.
I think that was probably the greatest single surprise in the entire history of time.
Outside of pure math at least. Irrational numbers were a big deal.
Measured in the prior probability that was assigned or could justly have been assigned beforehand, I don’t think irrational numbers come close.
I’d be interested in seeing your reasoning written out in a top-level post. 2:1 seems beyond optimistic to me, especially if you give AI before uploading 9:1, but I’m sure you have your reasons. Explaining a few of these ‘personally credible stories,’ and what classes you place them in such that they sum to 10% total, may be helpful. This goes for why you think FAI has such a high chance of succeeding as well.
Also, I believe I used the phrase ‘outside view’ incorrectly, since I didn’t mean reference classes. I was interested to know if there are people who are not part of your community that help you with number crunching on the tech-side. An ‘unbiased’ source of probabilities, if you will.
I think of my community as essentially consisting of the people who are willing to do this sort of analysis, so almost axiomatically no.
The simplest reason for thinking that FAI is (relatively) likely to succeed is the same reason for thinking that slavery ending or world peace are more likely than one might assume from psychology or from economics, namely that people who think about them are unusually motivated to try to bring them about.
AI-Box: 60% success? I have it that you lost twice, won twice.
Don’t know where you got your numbers, there were two experiments with small AI handicaps ($10 and $20, 2 wins) and three experiments for $2K-$5K with 1 win and 2 losses.
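For what it’s worth, tallying the record as described above (counting all five experiments equally) does reproduce the quoted 60% figure:

```python
# Tallying the AI-Box record described above (all figures from the comment).
small_stakes_results = [True, True]          # $10 and $20 handicaps: 2 wins
large_stakes_results = [True, False, False]  # $2K-$5K handicaps: 1 win, 2 losses

wins = sum(small_stakes_results) + sum(large_stakes_results)
total = len(small_stakes_results) + len(large_stakes_results)
print(wins, total, wins / total)  # 3 5 0.6
```

So the 60% comes from pooling the small-stakes and large-stakes runs; the large-stakes runs alone are 1 win in 3.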
I don’t think this probability is too high if by ‘core technology working’ you mean ever working. However, would you modify this probability if we’re talking specifically about people vitrified in the next ten years? As we learn more about how to unvitrify people, we’ll learn more about the right way to vitrify them.
Alcor writes cryonics should work “if foreseeable technology can repair injuries of the preservation process,” so that’s probably the probability I’m talking about.