If your plan says that you only launch an AGI when you know it’s an FAI, you can’t get there faster by omitting the FAI part. And if you do omit the FAI, you are just working toward destruction, and there is no point in getting there faster.
That idea seems to be based on a “binary” model—win or lose.
It seems unlikely to me that the world will work like that. The amount of modern information that is preserved into the far future is a continuous quantity. The probability of our descendants preserving humans instrumentally, out of historical interest, also looks like a continuous quantity to me. It looks more as though there will be a range of possible outcomes, of varying desirability to existing humans.
> That idea seems to be based on a “binary” model—win or lose.
Well, there are a thousand different ways to lose, but I label any future containing “six billion corpses” as a losing one.
And remember that in the space of all possible minds, the minds that take even a marginal interest in us occupy a vanishingly small region compared to those that would readily wipe us out.
> Well, there are a thousand different ways to lose, but I label any future containing “six billion corpses” as a losing one.
You don’t think humanity would ever willingly go for destructive uploading?
I don’t think win/lose is too useful here. The idea that there are many ways to lose is like saying that most arrangements of a 747’s components don’t fly. True—but not too relevant when planning a flight.
> You don’t think humanity would ever willingly go for destructive uploading?
Understand my meaning, do not cleave my words. I mean, of course, “six billion mind-state annihilations,” and I highly doubt you were unable to think of that interpretation.
> I don’t think win/lose is too useful here. The idea that there are many ways to lose is like saying that most arrangements of a 747’s components don’t fly. True—but not too relevant when planning a flight.
But there are any number of failure points and combinations thereof that would be mission-fatal during flight.