Ah—I’d seen the link, but the widget just spun. I’ll go look at the PDF. What follows was written before I read it; it could be amusing and humility-inducing if reading it changes my mind (and I will surely report back if that happens).
As for the SSA being wrong on its face—the DA wiki page says “The doomsday argument relies on the self-sampling assumption (SSA), which says that an observer should reason as if they were randomly selected from the set of observers that actually exist.” Assuming that characterization is accurate (I do not know enough to judge yet), if the SSA is false, then the DA is unsupported.
So—let’s look at the SSA. In a nutshell, it revolves around how unlikely it is that you were born in the first small percentage of history—and ergo, doomsday must be around the corner.
I can think of two very strong arguments that the SSA is untrue.
First—this isn’t actually how probability works. Take a fair coin and decide to flip it. The probabilities of heads and tails are the same: 1/2, or 50%, each. Flip the coin, and note the result. The probability is now unity—there is no magic way to get that 50/50 back. That coin-toss result is now and forever more heads (or tails). You cannot look at a given result, work backwards to how improbable it was, and then use that—because it is no longer improbable, it’s history. Probability does not actually work backwards in time, although it is convenient in some cases to pretend it does.
Another example—what is the probability that I was born at the exact second, minute, hour, and day, at the exact location I was born at, out of the countless other places and times in which humanity has existed where I could have been born? The answer, of course: unity. And nil at all other places and times, because it has already happened—the waveform, if you will, has collapsed; Elvis has left the building.
So—what is the probability that you were born so freakishly early in a 5-million-year reign of humanity, among the first 0.000001% of all people who will ever live? Unity. Because it’s history. And the only thing making this position any different whatsoever from the others is blind chance. There is nothing one bit special about being in the first bit, other than that it allows you to notice that. (Feel free to substitute anything for 5 million above—it’s all the same.)
Second—there are also logical issues. You can spin the argument on its head, and it still works (with less force, to be sure). What are the chances of me being alive for Doomsday? Fairly small—despite urban legend, the number of people alive today is a fairly small percentage (6-7%) of all who have ever lived. Ergo—doomsday cannot be soon, because it was unlikely I would be born to see it. (Again, flawed—right now, the likelihood that I was born at this time is unity.)
Quite apart from the probability issue, an argument that can be used to “prove” both T and ~T is flawed and should be discarded. “Prove” is being used very loosely here, because this is nowhere close to a proof, which is good, because I like things like math working.
Time to go read a PDF.
Update: Done. That was quite enjoyable, thank you. A great deal of food for thought, and like most good, crunchy, info-filled things, there were bits I quite agreed with and bits I quite disagreed with (and that’s fine).
I took some notes; I will not attempt to post them here, because I have already run into comment-length issues, and I’m a wordy SOB. I can post them to a gist or something if anyone is interested; I kept them mostly so I could comment intelligently after reading it. Scanning back through for the important bits:
Anthropomorphic reasoning would be useless, as suggested—unless the AI was designed by and for humans to use. Which it would be. So it may well be useful in the beginning, because presumably we would be modeling desired traits (like “friendliness”) on human traits. That could easily fail catastrophically later, of course.
The comparison between evolution and AI, in terms of relation to humans on page 11 was profound, and very well said.
There are an awful lot of assumptions presented as givens, and then used to assert other things. If any of them are wrong, the chain breaks. There were also a few suggestions that would violate physics, though the point being made was still valid (“With molecular nanotechnology, the AI could (potentially) rewrite the solar system unopposed.” was my favorite; it is probably beneficial to separate what is possible from what is impossible, given things like distances and energy and time, not to mention “why?”).
There is an underlying assumption that intelligence can increase without bound. I am by no means sure this is true—I can think of no other trait that does so; you run into limits (again) of physics and energy and so on. It is very possible that things like speed-of-light propagation delay, heat, and the inherent difficulty of certain tasks such as factoring would impose an upper limit on the intelligence of an AI before it reached the w00 w00 god-power magic stage. Not that it matters that much: if its goal is to harm us, it doesn’t need to be too smart to do that...
Anyone thinking an AI might want my body for its atoms is not thinking clearly. I am made primarily of carbon, hydrogen, and oxygen—all are plentiful, in much easier-to-work-with forms, elsewhere. An early-stage AI bootstrapping production would almost certainly want metals, some basic elements like silicon, and hydrocarbons (which we keep handy). Oh, and likely fissionables for power. Not us. Later on, all bets are off, but there are still far better places to get atoms than people.
Finally—the flaw in assuming an AI will predate mind uploading is motivation. Death is a powerful, powerful motivator. A researcher close to being able to do it, and about to die, is damn well going to try, no matter what the government says they can or can’t do—I would. And the guesses as to the fidelity required are just that—guesses. Life extension is a powerful, powerful draw. Uploading may also ultimately be easier—hand-waving away a ton of details, it’s just copying and simulation; it does not require new, creative inventions, just refinements of current approaches. You don’t need to totally understand how something works to scan and simulate it.
Enough. If you have read this far—more power to you, thank you much for your time.
PS. I still don’t get the whole “simulated human civilizations” bit—the paper did not seem to touch on that. But I rather suspect it’s the same backwards probability thing...
I think you’re wrong about “backwards probability”.
Probabilities describe your state of knowledge (or someone else’s, or some hypothetical idealized observer’s, etc.). It is perfectly true that “your” probability for some past event known to you will be 1 (or rather something very close to 1 but allowing for the various errors you might be making), but that isn’t because there’s something wrong with probabilities of past events.
Now, it often happens that you need to consider probabilities that ignore bits of knowledge you now have. Here’s a simple example.
I have a 6-sided die. I am going to roll the die, flip a number of coins equal to the number that comes up, and tell you how many heads I get. Let’s say the number of heads is 2. Now I ask you: how likely is it that I rolled each possible number on the die?
To answer that question (beyond the trivial observation that clearly I didn’t roll a 1) one part of the calculation you need to do is: how likely was it, given a particular die roll but not the further information you’ve gained since then, that I would get 2 heads? You will get completely wrong answers if you answer all those questions with “the probability is 1 because I know it was 2 heads”.
(Here’s how the actual calculation goes. If the result of the die roll was k, then Pr(exactly 2 heads) was (k choose 2) / 2^k, which for k=1..6 goes 0, 1⁄4, 3⁄8, 6⁄16, 10⁄32, 15⁄64; since all six die rolls were equiprobable to start with, your odds after learning how many heads are proportional to these or (taking a common denominator) to 0 : 16 : 24 : 24 : 20 : 15, so e.g. Pr(roll was 6 | two heads) is 15⁄99 = 5⁄33. Assuming I didn’t make any mistakes in the calculations, anyway.)
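(For anyone who wants to double-check that arithmetic, here is a minimal Python sketch of the same calculation, using exact fractions and a uniform prior over the six rolls. The variable names are mine, not anything from the problem statement.)

```python
# Posterior over die rolls given "exactly 2 heads", using
# Pr(2 heads | roll = k) = C(k, 2) / 2^k and a uniform prior on k = 1..6.
from fractions import Fraction
from math import comb

likelihood = {k: Fraction(comb(k, 2), 2**k) for k in range(1, 7)}
total = sum(likelihood.values())               # normalizing constant
posterior = {k: lh / total for k, lh in likelihood.items()}

print(posterior)      # proportional to 0 : 16 : 24 : 24 : 20 : 15
print(posterior[6])   # 5/33, matching the 15/99 above
```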
The SSA-based calculations work in a similar way.
Consider the possible different numbers of humans there could ever have been (like considering all the possible die rolls).
For each, see how probable it is that you’d have been human # 70 billion, or whatever the figure is (like considering how probable it was that you’d get two heads).
Your posterior odds are obtained from these probabilities, together with the probabilities of different numbers of human beings a priori.
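To make the analogy concrete, here is a hedged Python sketch of that SSA posterior. The candidate totals, the uniform prior, and the birth-rank figure are illustrative assumptions of mine, not numbers the argument itself commits to.

```python
# Doomsday-style SSA posterior, structured exactly like the die+coins
# problem: hypotheses about the total number of humans play the role of
# die rolls, and "I am human number R" plays the role of "two heads".
from fractions import Fraction

R = 70_000_000_000                  # assumed birth rank ("human #70 billion")
totals = [10**11, 10**12, 10**13]   # made-up candidate totals of humans ever
prior = {n: Fraction(1, len(totals)) for n in totals}

# Under SSA, Pr(rank R | total = n) = 1/n if R <= n, else 0.
likelihood = {n: Fraction(1, n) if R <= n else Fraction(0) for n in totals}

evidence = sum(prior[n] * likelihood[n] for n in totals)
posterior = {n: prior[n] * likelihood[n] / evidence for n in totals}

for n, p in posterior.items():
    print(f"Pr(total = {n:.1e} | rank {R}) = {float(p):.4f}")
```

With these made-up numbers the posterior comes out roughly 0.90 : 0.09 : 0.01, concentrating on the smallest total; that concentration on earlier doom is exactly the shift the doomsday argument describes.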
I am not claiming that you should agree with SSA. But the mere fact that it employs these backward-looking probabilities is not an argument against it; if you disagree, you should either explain why computations using “backward probabilities” correctly solve the die+coins problem (feel free to run a simulation to verify the odds I gave) despite the invalidity of “backward probabilities”, or else explain why the b.p.’s used in the doomsday argument are fundamentally different from the ones used in the die+coins problem.
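(For the simulation just mentioned, here is a quick Monte Carlo sketch in Python; the empirical Pr(roll was 6 | two heads) should land near 5/33, about 0.1515.)

```python
# Empirical check of the die+coins odds: estimate Pr(roll was 6 | 2 heads).
import random

two_heads, sixes = 0, 0
for _ in range(1_000_000):
    k = random.randint(1, 6)                              # roll the die
    heads = sum(random.random() < 0.5 for _ in range(k))  # flip k coins
    if heads == 2:
        two_heads += 1
        if k == 6:
            sixes += 1

print(sixes / two_heads)  # ~0.1515, i.e. 5/33
```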