How do you get from “in the long run, we should expect things to be very different in some way” or “hypotheses or predictions predicated on materialism” or “Copernican/mediocrity principle” to cryonics, superhuman AIs, or foom-style singularity?
The human brain is made out of matter (materialism). Many people’s brains are largely intact at the time of their deaths. By preserving the brain, we give possible future advances in neuroscience and materials technology a chance at restoring the original person. There are certainly a number of good reasons to think that this probably won’t happen, but it doesn’t belong in the same reference class as “predictions promising eternal life,” because most previous predictions about eternal life didn’t propose technological means in a material universe. Cryonics isn’t about rapturing people’s souls up to heaven; it’s about reconstructing a damaged physical artifact. Conditional on continued scientific progress (which might or might not happen), it seems plausible. I do agree that “technology which isn’t even remotely here” is a good reference class. Similarly …
superhuman AIs
Intelligence doesn’t require ontologically fundamental things that we can’t create more of, only matter appropriately arranged (materialism). Humans are not the most powerful possible intelligences (mediocrity). Conditional on continued scientific progress, it’s plausible that we could create superhuman AIs.
foom-style singularity
Human minds are not the fastest-thinking or the fastest-improving possible intelligences (mediocrity). Faster processes outrun slower ones. Conditional on our creating AIs, some of them might think much faster than us, and faster minds probably have a greater share in determining the future.
These are fine arguments, but they all take the inside view—focusing on particulars of a situation, not finding big robust reference classes to which the situation belongs.
And in any case you seem to be arguing that such inventions are not prohibited by the laws of physics, rather than that they will happen with very high probability in the near future, as many here believe. As a reference class, things which are merely not prohibited by the laws of physics almost never happen anyway; this class is just too huge.
Things that aren’t prohibited by physics, and that humans want to happen, don’t happen even eventually? Very far from clear.