Then he could give a guest lecture, and that’d be pretty cool.
In our club, we’ve decided to assume atheism (or, at minimum, deism) on the part of our membership. Our school has an extremely high percentage of atheists and agnostics, and we really don’t feel it’s worth arguing over that kind of inferential distance. We’d rather it be the ‘discuss cool things’ club than the ‘argue with people who don’t believe in evolution’ club.
This perspective looks deeply insane to me.
I would not kill a million humans to arrange for one billion babies to be born, even disregarding the practical considerations you mentioned, and I suspect neither would most other people. This perspective more or less requires anyone in a position of power to oppose the availability of birth control and to mandate breeding.
I would be about as happy with a human population of one billion as a hundred billion, not counting the number of people who’d have to die to get us down to a billion. I do not have strong preferences over the number of humans. The same does not go for the survival of the living.
There would be some number of digital people that could run simultaneously on whatever people-emulating hardware they have.
I expect this number to become unimaginably high in the foreseeable future, to the point that it is doubtful we’ll be able to generate enough novel cognitive structures to make optimal use of it. The tradeoff would be more like ‘bringing back dead people’ v. ‘running more parallel copies of current people.’ I’d also caution against treating future society as a monolithic Entity with Values that makes Decisions—it’s very probably still going to be capitalist. I expect the deciding factor regarding whether or not cryopatients are revived to be whether or not Alcor can pay for the revival while remaining solvent.
Also, I’m not at all certain about your value calculation there. Creating new people is much less valuable than preserving existing ones. It would be wrong to round up and exterminate a billion people in order to ensure that one billion and one babies are born.
Right, but (virtually) nobody is actually proposing doing that. Simulating a brain from chemical first principles is obviously a non-starter; simulating it cell by cell might be another story. That’s why we’re studying neurons and glial cells to improve our computational models of them. We’re pretty close to having adequate neuron models, though glia are probably still five to ten years off.
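For flavor, here’s the crudest end of that modeling spectrum: a leaky integrate-and-fire neuron, sketched in Python. This is my own toy illustration with made-up parameters, not any lab’s actual model; research-grade models (Hodgkin-Huxley and its descendants) track ion-channel dynamics in far more detail.

```python
# Toy leaky integrate-and-fire neuron. Illustrative parameters only;
# serious neuron models track ion channels, dendritic geometry, and more.

def simulate_lif(input_current, dt=0.1, tau_m=10.0, v_rest=-70.0,
                 v_reset=-75.0, v_threshold=-55.0, r_m=10.0):
    """Return spike times (ms) for a current trace (nA, one value per step).

    dt: time step (ms); tau_m: membrane time constant (ms);
    r_m: membrane resistance (MOhm); voltages in mV.
    """
    v = v_rest
    spike_times = []
    for step, i_inj in enumerate(input_current):
        # Voltage leaks toward rest and is driven up by injected current.
        v += dt * (-(v - v_rest) + r_m * i_inj) / tau_m
        if v >= v_threshold:               # threshold crossed: spike
            spike_times.append(step * dt)  # record the time in ms
            v = v_reset                    # and reset the membrane
    return spike_times

# A constant 2 nA input for 100 ms yields a regular spike train.
print(simulate_lif([2.0] * 1000))
```

Even this caricature reproduces the basic integrate-then-fire behavior; the open research questions are about how much more detail than this is “adequate.”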
I believe there’s at least one project working on exactly the experiment you describe. Unfortunately, C. elegans is a tough case study for a few reasons. If it turns out that they can’t do it, I’ll update then.
Which is obvious nonsense. PZ Myers thinks we need atom-scale accuracy in our preservation. Were that the case, a sharp blow to the head or a hot cup of coffee would render you information-theoretically dead. If you want to study living cell biology, frozen to nanosecond accuracy, then no, we can’t do that for large systems. If you want extremely accurate synaptic and glial structural preservation, with maintenance of gene expression and approximate internal chemical state (minus some cryoprotectant-induced denaturing), then we absolutely can do that, and there’s a very strong case to be made that that’s adequate for a full functional reconstruction of a human mind.
I propose that we continue to call them koans, on the grounds that changing the name involves a number of small costs, and the name really, fundamentally, does not matter in any meaningful sense.
So far, I’m twenty pages in, and getting close to being done with the basic epistemology stuff.
Lottery winners have different problems. Mostly that sharp changes in wealth are socially disruptive, and that lottery players are not the most fiscally responsible people on Earth. Together, that’s a recipe for failure.
My mistake.
In general, when something can be either tremendously clever or a bit foolish, the prior favors the latter, even when it comes from someone who’s generally a pretty smart cookie. You could run the experiment, but I’m willing to bet on the outcome now.
It’s important to remember that it isn’t particularly useful for this book to be The Sequences. The Sequences are The Sequences, and the book can direct people to them. What would be more useful is a condensed, rapid introduction to the field that tries to maximize insight-per-byte. Not a definitive work on rationality, but something that people can crank through in a day or two, rave about to their friends, and come away with a better idea of what rational thinking looks like. It’d also serve, for those who are very interested, as a less formidable introduction to the broader pool of work on the subject, including the Sequences. Dollar for sanity-waterline dollar, that’s a very heavily leveraged position.
Actually, if CFAR isn’t going to write that book, I will.
You could plug a baby’s nervous system into the output of a radium-decay random number generator. It’d probably disagree (disregarding how crazy it would end up) that its observations were best described by causal graphs.
It does not. Epiphenomenal consciousness could be real for the same reason the spaceship vanishing over the event horizon could still be real: unobservability alone doesn’t rule either out. It’s Occam’s Razor that knocks that one down.
1: If your cousin can demonstrate that ability using somebody else’s deck, under experimental conditions that I specify and that he is not aware of ahead of time, I will give him a thousand dollars. (For a sense of the bar he’d have to clear, see the back-of-the-envelope sketch after this list.)
2: In the counterfactual case where he accomplishes this, that does not mean his ability is outside the realm of science (well, probably it means the experiment was flawed, but we’ll assume otherwise). A wide range of once-inexplicable phenomena are now well understood by science. If your cousin’s psychic powers are real, then science can study them and break down the black box to find out what’s inside. There are certainly causal arrows there in any case; if there weren’t, you wouldn’t know about the ability at all.
3: If your strongest evidence that your partner loves you is psychic intuition, you should definitely get a prenup.
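Regarding the bar in (1): a quick back-of-the-envelope for why controlled conditions plus even a modest success count settles the question. The protocol and trial numbers below are hypothetical, just to show the arithmetic:

```python
# Hypothetical protocol: name the exact card (1 in 52) on each of 20
# shuffled draws. How often would chance alone produce k or more hits?
from math import comb

def p_at_least(k, n, p):
    """P(at least k successes in n independent trials of probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(p_at_least(5, 20, 1 / 52))  # ~3e-5: five hits already rules out luck
```

So even a weak, unreliable version of the ability would show up unmistakably in a short session; the thousand dollars is safe against chance, if not against sleight of hand with a familiar deck (hence somebody else’s).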
Oh, and somebody get Yudkowsky an editor. I love the Sequences, but they aren’t exactly short and to the point. Frankly, they ramble. Which is fine if you’re just trying to get your thoughts out there, but people don’t finish the majority of the books they pick up. You need something snappy and interesting that caters to a more typical attention span. Something maybe half the length we’re looking at now. The more of it readers get through, the more good you’re doing.
EDIT: Oh! And the whole thing needs a full jargon palette-swap. There’s a lot of LW-specific jargon that isn’t helpful. In many cases, there’s existing academic jargon that can take the place of the phrases Yudkowsky uses. Aside from lending the whole thing a superficial-but-useful veneer of credibility, it’ll make the academics happy and less likely to make snide comments about your book in public fora. If you guys aren’t already planning a print-on-demand run, you really should. Ebooks are wonderful, but the bulk of the population is still humping dead trees around. An audiobook or podcast might be useful as well.
If it were me, I’d split your list after reductionism, moving everything later into a separate ebook. Everything that’s controversial or hackles-raising is in the later sequences. A (shorter) book consisting solely of the sequences on cognitive biases, rationalism, and reductionism would be much more the sort of thing somebody without prior rationalist inclinations can pick up and take something valuable away from. The later sequences have their merits, but here they are absolutely counterproductive to raising the sanity waterline. They’ll get your book labeled kooky and weird, and they don’t, in themselves, improve their readers enough to justify the expense. People interested in the other stuff can get the companion volume.
You could label the pared-down volume something self-helpy like ‘Thinking Better: The Righter, Smarter You.’ For goodness’ sake, don’t have the word ‘sequences’ in the title. That doesn’t mean anything to anyone not already from LW, and it won’t help people figure out what the book is about.
EDIT: Other title suggestions—really just throwing stuff at the wall here
Rationality: Art and Practice
The Rational You
The Art of Human Rationality
Black Belt Bayesian: Building a Better Brain
The Science of Winning: Human Rationality and You
Science of Winning: The Art and Practice of Human Rationality (I quite like this one)
There will always be multiple centers of power.
What’s at stake is, at most, the future centuries of a solar-system civilization.
No assumption that individual humans can survive even for hundreds of years, or that they would want to.
You give no reason why we should consider these as more likely than the original assumptions.
Sure. I think we just have different definitions of the term. Not much to be gained here.
How about a cyborg whose arm unscrews? Is he not augmented? Most of a cochlear implant can be removed. Nothing about transhumanism says your augmentations have to be permanently attached to your body. You need only want to improve yourself and your abilities, and a robot suit of that caliber definitely does that.
And, yes, obviously transhumanism is defined relative to historical context. If everyone’s doing it, you don’t need a word for it. That we have a word implies that transhumanists are looking ahead, toward things that not everyone has yet. So, no, your car doesn’t make you a transhumanist, but a robotic exoskeleton might be evidence of that philosophy.
Nothing so drastic. Just a question of the focus of the club, really. Our advertising materials will push it as a skeptics / freethinkers club, as well as a rationality club, and the leadership will try to guide discussion away from heated debate over basics (evolution, old earth, etc.).