I guess he means “only last a hundred years”, not “last at least a hundred years”.
Just to make sure I understand: you interpret EY to be saying that the Earth will last more than a hundred years, not saying that the Earth will fail to last more than a hundred years. Yes?
If so, can you clarify how you arrive at that interpretation?
“If you told me the Earth would only last a hundred years (i.e. won’t last longer than that) …. It’s a moot point since the Earth won’t only last a hundred years (i.e. it will last longer).” At least that’s what I got on the first reading.
I think I could kind of make sense of “it would increase the immediate priority of CFAR and decrease that of SIAI” under either hypothesis about what he means, though one interpretation would need to be more strained than the other.
The idea is that if the Earth lasting at least a hundred years is a given, then the probability of a uFAI in that timespan drops sharply—so SIAI (which seeks to prevent a uFAI by building a FAI) is less of an immediate priority, and it becomes a higher priority to develop CFAR, which will raise the public’s rationality for future generations so that they don’t launch a uFAI.
(The other interpretation would be “If the Earth is only going to last a hundred years, then there’s not much point in trying to make a FAI since in the long term we’re screwed anyway, and raising the sanity waterline will let us better enjoy what time there is left.”)
EDIT: Also, if your interpretation is correct, by saying that the Earth won’t last 100 years he’s either admitting defeat (i.e. saying that a uFAI will be built) or saying that even a FAI would destroy the Earth within 100 years (which sounds unlikely to me—even if the CEV of humanity would eventually want to do that, I guess it would take more than 100 years to terraform another place for us to live and for us all to move there).
I was just using “Earth” as a synonym for “the world as we know it”.
I think I disagree; care to make it precise enough to bet on? I’m expecting life still around, Earth the main population center, most humans not uploaded, some people dying of disease or old age or in wars, most people performing dispreferred activities in exchange for scarce resources at least a couple months in their lives, most children coming out of a biological parent and not allowed to take major decisions for themselves for at least a decade.
I’m offering $100 at even odds right now and will probably want to bet again in the next few years. I can give it to you (if you’re going to transfer it to SIAI/CFAR, tell me and I’ll donate directly), and you pay me $200 if the world has not ended in 100 years, as soon as we’re both available (e.g. thawed). If you die you can keep the money; if I die and then win, give it to some sensible charity.
How’s that sound? All of the above is up for negotiation.
As wedrifid says, this is a no-brainer “accept” (including the purchasing-power-adjusted caveat). If you are inside the US and itemize deductions, please donate to SIAI, otherwise I’ll accept via PayPal. Your implied annual interest rate assuming a 100% probability of winning is 0.7% (plus inflation adjustment). Please let me know whether you decide to go through with it; withdrawal is completely understandable—I have no particular desire for money at the cost of forcing someone else to go through with a bet they feel uncomfortable about. (Or rather, my desire for $100 is not this strong—I would probably find $100,000 much more tempting.)
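For anyone who wants to check that figure, here is a minimal sketch of the arithmetic (in Python; the variable names are mine, not part of the original exchange): paying $100 now for $200 in 100 years implies an annual rate of 2^(1/100) − 1, roughly 0.7%, before any inflation adjustment.

```python
# Implied annual interest rate on the bet, assuming a 100% probability of
# winning: pay $100 now, receive $200 (purchasing-power-adjusted) in 100 years.
stake = 100.0    # dollars paid today
payout = 200.0   # dollars received if the world has not ended
years = 100

annual_rate = (payout / stake) ** (1 / years) - 1
print(f"Implied annual interest rate: {annual_rate:.2%}")  # ~0.70%
```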
PayPal-ed to sentience at pobox dot com.
Don’t worry, my only debtor who pays higher interest rates than that is my bank. As long as that’s not my main liquidity bottleneck I’m happy to follow medieval morality on lending.
If you publish transaction data to confirm the bet, please remove my legal name.
Bet received. I feel vaguely guilty and am reminding myself hard that money in my PayPal account is hopefully a good thing from a consequentialist standpoint.
Bet recorded: LW bet registry, PB.com.
(Neglecting any logistical or legal issues) this sounds like a no-brainer for Eliezer (accept).
Like you would be better served by making the amounts you give and expect to receive if you win somewhat more proportionate to the expected utility of the resources at the time. Even if Eliezer were sure he was going to lose, he should still take the low-interest loan.
Even once the above is accounted for Eliezer should still accept the bet (in principle).
Dollar amounts are meant as purchasing-power-adjusted. I am sticking my fingers in my ears and chanting “La la, can’t hear you” at discounting effects.
That’s a nice set of criteria by which to distinguish various futures (and futurists).
Care to explain why? You sound like you expect nanotech by then.
I definitely expect nanotech a few orders of magnitude awesomer than we have now. I expect great progress on aging and disease, and wouldn’t be floored by them being solved in theory (though it does sound hard). What I don’t expect is worldwide deployment. There are still people dying from measles, when in any halfway-developed country every baby gets an MMR shot as a matter of course. I wouldn’t be too surprised if everyone who can afford basic care in rich countries was immortal while thousands of brown kids kept drinking poo water and dying. I also expect longevity treatments to be long-term, not permanent fixes, and thus hard to access in poor or politically unstable countries.
The above requires poor countries to continue existing. I expect great progress, but not abolition of poverty. If development continues the way it has (e.g. Brazil), a century isn’t quite enough for Somalia to get its act together. If there’s a game-changing, universally available advance that bumps everyone to cutting-edge tech levels (or even 2012 tech levels), then I won’t regret that $100 much.
I have no idea what wars will look like, but I don’t expect them to be nonexistent or nonlethal. Given no game-changer, socioeconomic factors vary too slowly to remove incentive for war. Straightforward tech applications (get a superweapon, get a superdefense, give everyone a superweapon, etc.) get you very different war strategies, but not world peace. If you do something really clever like world government nobody’s unhappy with, arms-race-proof shields for everyone, or mass Gandhification, then I have happily lost.
Thanks for explaining!
Of course, nanotech could be self-replicating and thus exponentially cheap, but the likelihood of that is … debatable.
I feel an REM song coming on...
(I guess I had been primed to take “Earth” to mean ‘a planet or dwarf planet (according to the current IAU definition) orbiting the Sun between Venus and Mars’ by this. EDIT: Dragon Ball too, where destroying a planet means turning it into dust, not just rendering it uninhabitable.)
EY does seem in a darker mood than usual lately, so it wouldn’t surprise me to see him voicing pessimism about our chances out loud, even if it doesn’t go so far as “admitting defeat”. I do hope it’s just a mood, rather than that he has rationally updated his estimate of our chances of survival to be even lower than they already were. :-)
“The world as we know it” ends if FAI is released into the wild.
When I had commented, EY hadn’t clarified yet that by Earth he meant “the world as we know it”, so I didn’t expect “Earth” to exclude ‘the planet between Venus and Mars 50 years after a FAI is started on it’.
50 years after a self-improving AI is released into the wild, I don’t expect Venus and Mars to be in their present orbits. I expect that they would be gradually moving towards the same orbit that the Earth is moving towards (or is already established in), spaced 120 degrees apart, propelled by a rocket which uses large reflectors in space to heat a portion of the planet’s surface, which is then forced to jet in the desired direction at escape velocity. ETA: That would mean the removal of three objects from the list of planets of Sol.
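For a rough sense of the scale of that project, here is a back-of-the-envelope sketch, using my own numbers and assumptions rather than anything in the comment above: with the standard specific-orbital-energy formula E = −GM_sun/(2a), moving Mars alone from its ~1.524 AU orbit down to a ~1 AU orbit means shedding on the order of 10^32 J of orbital energy, and it ignores Venus, the transfer mechanics, and the jet-propulsion scheme entirely.

```python
# Back-of-the-envelope estimate: change in orbital energy to move Mars from
# its present orbit (~1.524 AU) into Earth's orbit (~1 AU), using the
# specific orbital energy about the Sun, E = -GM_sun / (2a).
GM_SUN = 1.327e20   # m^3/s^2, gravitational parameter of the Sun
AU = 1.496e11       # m
M_MARS = 6.42e23    # kg

a_mars, a_earth = 1.524 * AU, 1.0 * AU

# Energy per kilogram that Mars must shed to drop to a 1 AU orbit (negative:
# an inward spiral means removing orbital energy).
delta_e_per_kg = GM_SUN / 2 * (1 / a_mars - 1 / a_earth)
total_energy = abs(delta_e_per_kg) * M_MARS

print(f"Specific energy change: {delta_e_per_kg:.2e} J/kg")   # ~ -1.5e8 J/kg
print(f"Total orbital energy to shed: {total_energy:.2e} J")  # ~ 1e32 J
```

At the Sun’s luminosity of ~3.8×10^26 W, that works out to roughly three days of total solar output for Mars alone, which says nothing about feasibility for a superintelligence, only about the size of the reflectors and timescales involved.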
I think it will only be a few hundred years after FAI before interplanetary travel requires routine ‘take your shoes off’ type of screening.
We’ll still have shoes? And terrorists? I’m disappointed in advance.
And even the right and ability (if we currently have it) to make choices, and some privacy!
IMHO you’re being provincial. Your intuitions for interplanetary travel come directly from flying in the US; if you were used to saner policies you’d make different predictions. (If you’re not from North America, I am very confused.)
Your idea of provincialism is provincial. The idea of shipping tinned apes around the solar system is the true failure of vision here, never mind the bag check procedures.
How quickly do you think humans will give up commuting?
Why would you put them into an inherently dynamically-unstable configuration, position-corrected by a massive kludge? I mean, what’s in it for the AI?
How about a dynamically stable one?
Oh, and roughly ten to twenty times the total available living space for humans, at an order-of-magnitude guess.
If the AI is Friendly? The enhancement of humanity’s utility/happiness/wealth—I assume terraforming is a lot easier if planets are near the middle of the liquid-water (habitable) zone.
We don’t know what it takes to terraform a world—it’s easy to go “well, it needs more water and air for starters,” but that conceals an awful lot of complexity. Humans, and I mean whole populations of them, can’t live just anywhere. We don’t even have a really good working definition of what the “habitability” of a planet is, in a way that’s more specific than “I knows it when I sees it.” Most of the Earth requires direct cultural adaptation to be truly livable. There’s no such thing as humans who don’t use culture and technology to cope with the challenges posed by their environment.
Anyway, my point is more that your prediction suggests some cached premises: why should FAI do that particular thing? Why is that a more likely outcome than any of the myriad other possibilities?
I specifically mentioned that Earth’s orbit would also be optimized, although the solar-powered jet engine concept has bigger downsides when used on an inhabited planet.
Do distinct planets necessarily have distinct orbits?
According to the modern definition, yes.
Ah! I had read the wiki article on planets, which said “and has cleared its neighbouring region of planetesimals,” and didn’t bother to look up primary sources. I should know better. Thanks!
Not thinking very ambitiously, I see.
That’s on the five-millennium plan.
So, we can construct an argument that CFAR would rise in relative importance over SIAI if we see strong evidence that the world as we know it will end within 100 years, and an argument with the same conclusion if we see strong evidence that the world as we know it will last for at least 100 years.
There is something wrong.