Most of this post, along with the previous posts in the series, is both beautiful and true—the best combination. It’s a pity it had to be mixed in with the meme about computers magically waking up with superpowers. I don’t think that meme is necessary here, any more than it’s necessary to believe the world was created in 4004 BC to appreciate Christmas. Taking it out—discussing it in separate posts if you wish to discuss it—is the major improvement I would suggest.
Good points, upvoted. But in fairness, I think the ink blot analogy is a decent one.
Imagine you put the ink blot question to a philosopher in ancient Greece; how might he answer? He might say there is no definite number. Or he might say there must be some underlying reality, even though he doesn’t know for sure what it is, and that the best guess is that it’s based on atoms; so he might reply that he doesn’t know the answer, but that it might in principle be possible to calculate it if you could count the atoms.
I think that’s about where we are right now regarding the Born probabilities and the number or measure of different worlds in MWI.
There is a wonderfully evocative term, Stand Alone Complex, from the anime series of the same name, which refers to actions taken by people behaving as though they were part of a conspiracy even though no actual conspiracy is present. It’s pretty much tailor-made for this case.
Mencius Moldbug calls this instance the Cathedral, in an insightful series of articles indexed here.
You could also trade off things that were more important in the ancestral environment than they are now. For example, social status (to which the neurotypical brain devotes much of its resources) is no longer the evolutionary advantage that it used to be.
Only if you take ‘ten times smarter’ to mean multiplying IQ score by ten. But since the mapping of the bell curve to numbers is arbitrary in the first place, that’s not a meaningful operation; it’s essentially a type error. The obvious interpretation of ‘ten times smarter’ within the domain of humans is by rarity, i.e. by percentile: if the author is at the 99th percentile, then it would refer to the 99.9th percentile.
And given that, his statement is true; it is a curious fact that IQ has diminishing returns, that is, being somewhat above average confers significant advantage in many domains, but being far above average seems to confer little or no additional advantage. (My guess at the explanation: first, beyond a certain point you have to start making trade-offs from areas of brain function that IQ doesn’t measure; second, Amdahl’s law.)
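To put a rough number on the percentile reading, here is a minimal sketch, assuming the conventional IQ scale (mean 100, standard deviation 15) and using only Python’s standard library; the figures are illustrative, not a claim about anyone’s actual score:

```python
from statistics import NormalDist

# Conventional IQ scale: mean 100, standard deviation 15 (an illustrative assumption).
iq = NormalDist(mu=100, sigma=15)

# "Ten times smarter" read as "ten times rarer": 99th percentile -> 99.9th percentile.
baseline = iq.inv_cdf(0.99)    # roughly IQ 135
ten_times = iq.inv_cdf(0.999)  # roughly IQ 146

print(f"99th percentile:   IQ {baseline:.0f}")
print(f"99.9th percentile: IQ {ten_times:.0f}")
# The jump is only about a dozen IQ points, which is why multiplying the
# IQ score itself by ten is a type error rather than a meaningful operation.
```

On this reading, ‘ten times smarter’ buys only about a dozen IQ points at the high end, which fits the diminishing-returns observation above.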
There is kidnapping for interrogation, slavery and torture today, so there is no reason to believe there won’t be such in the future. But I don’t believe it will make sense in the future to commit suicide at the mere thought, any more than it does today.
As for whether such a society will exist, I think it’s possible it may. It’s possible there may come a day when people don’t have to die. And there is a better chance of that happening if we refrain from poisoning our minds with scare stories optimized for appeal to primate brains over correspondence to external reality.
I’ve been snarky for this entire conversation—I find advocacy of death extremely irritating—but I am not just snarky by any means. The laws of physics as now understood allow no such thing, and even the author of the document to which you refer—a master of wishful thinking—now regards it as obsolete and wrong. And the point still holds—you cannot benefit today the way you could in a post-em world. If you’re prepared to throw away billions of years of life as a precaution against the possibility of billions of years of torture, you should be prepared to throw away decades of life as a precaution against the possibility of decades of torture. If you aren’t prepared to do the latter, you should reconsider the former.
An upload, at least in the early generations, is going to require a supercomputer the size of a rather large building to run; that is just one of the reasons why the analogy with playing a pirated MP3 is entirely spurious.
Warhammer 40K is one of those settings that are highly open to interpretation. My interpretation is that it depicts a situation where things could be better and could be worse, victory and defeat are both very much on the cards, and hope guided by cold realism is one of the main factors that might tip the balance towards the former. I consider it similar in that regard to the Cthulhu mythos, and for that matter to real life.
If you postulate ems that can run a million subjective years a minute (which is not at all scientifically plausible), the mainline copies can do that as well, which means talking about wall clock time at all is misleading; the new subjective timescale is the appropriate one to use across the board.
As for the rest, people are just as greedy today as they will be in the future. Organized criminals could torture you until you agree to sign over your property to them. Your girlfriend could pour petrol over you and set you on fire while you’re asleep. If you sign up for a delivery or service with Apple and give them your home address, you’re trusting them not to send thugs around to your house and kidnap you. Ever fly on an airliner? Very few people, perhaps none, have the engineering skill to fly without someone else’s assistance. When you’re on the plane, you’re trusting the airline not to deliver you to a torture camp. Is anyone worthy of that trust? And even if you get home safely, how will you stay safe while you’re asleep? And how will you protect yourself against criminals?
Does committing suicide today sound a more plausible idea now?
The comment holds regardless. In today’s world, you can only be tortured for a few decades, but by the same token you can only lose a few decades of lifespan by committing suicide. If in some future world you can be tortured for a billion years, then you will also be losing a billion years of happy healthy life by committing suicide. If you think the mere possibility of torture—with no evidence that it is at all likely—will be grounds for committing suicide in that future world, then you should think it equally good grounds for committing suicide today. If you agree with me that would be insanely irrational today, you should also agree it will be insanely irrational in that future world.
Also, in the absence of any evidence that this is at all unlikely to occur.
If you think the situation is that symmetrical, you should be indifferent on the question of whether to commit suicide today.
But notice the original poster does not dwell on the probability of this scenario, only on its mere possibility.
If it had been generated as part of an exhaustive listing of all possible scenarios, I would have refrained from comment. As it is, being raised in the context of a discussion on whether one should try for uploading in the unlikely event one lives that long, it’s obviously intended to be an argument for a negative answer, which means it constitutes privileging the hypothesis: http://lesswrong.com/lw/19m/privileging_the_hypothesis/
Advocacy of death.
With the possibility? Of course not. Anything that doesn’t involve a logical self-contradiction is possible. My disagreement is with the idea that it is sane or rational to base decisions on fantasies about being kidnapped and tortured in the absence of any evidence that this is at all likely to occur.
If you think that kind of argument holds water, you should commit suicide today lest a sadist kidnap you and torture you in real life.
No. The mainstream expectation has pretty much always been that locations conducive to life would be reasonably common; the results of the last couple of decades don’t overturn that expectation, they reinforce it with hard data. The controversy has always been on the biological side: whether going from the proverbial warm little pond to a technological civilization is probable (in which case much of the Great Filter must be in front of us) or improbable (in which case we can’t say anything about what’s in front of us one way or the other). For what it’s worth, I think the evidence is decisively in favor of the latter view.
I’m perfectly prepared to bite this bullet. Extending the life of an existing person by a hundred years and creating a new person who will live for a hundred years are both good deeds; they create approximately equal amounts of utility, and I believe we should try to do both.
Thanks for the link, yes, that does seem to be a different opinion (and some very interesting posts).
I agree with you about the publishing and music industries. I consider current rampant abuse of intellectual property law to be a bigger threat than the Singularity meme, sufficiently so that if your comparative advantage is in politics, opposing that abuse probably has the highest expected utility of anything you could be doing.
That’s awfully vague. “Whatever window of time we had”, what does that mean?
The current state of the world is unusually conducive to technological progress. We don’t know how long this state of affairs will last. Maybe a long time, maybe a short time. To fail to make progress as rapidly as we can is to gamble the entire future of intelligent life on it lasting a long time, without evidence that it will do so. I don’t think that’s a good gamble.
There’s one kind of “technological progress” that SIAI opposes as far as I can tell: working on AGI without an explicit focus on Friendliness.
I have seen claims to the contrary from a number of people, ranging from Eliezer himself some years ago to another reply to your comment just now. If SIAI were to officially endorse the position you just suggested, my assessment of their expected utility would significantly increase.
Or human communications may stop improving because they are good enough to no longer be a major bottleneck, in which case it may not greatly matter whether other possible minds could do better. Amdahl’s law: if something was already only ten percent of total cost, improving it by a factor of infinity would reduce total cost by only that ten percent.
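For concreteness, here is a minimal sketch of that arithmetic; the ten percent share of total cost is just the illustrative figure from the comment above, not a measurement:

```python
def total_cost_after_speedup(fraction_improved: float, speedup: float) -> float:
    """Amdahl's law: remaining total cost when only a fraction of the work is sped up."""
    return (1 - fraction_improved) + fraction_improved / speedup

# Communication is assumed, purely for illustration, to be 10% of total cost.
for s in (2, 10, 1_000_000):
    print(f"speedup {s:>9}x -> total cost {total_cost_after_speedup(0.1, s):.3f} of original")
# Even with an effectively infinite speedup, total cost only falls to 0.9 of the
# original, i.e. the overall saving is capped at that ten percent.
```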
“The price of freedom is eternal vigilance.”
It would be wonderful if defending freedom were a one-off job like proving Fermat’s Last Theorem. As it turns out, it’s an endlessly recurring job like fighting disease; unfortunate, but that’s the way it is. And yes, sometimes our efforts fail, and freedoms are lost or people get sick and die. But the answer to that is to work harder and smarter, not to give up.