This is mostly just arguing over semantics. Just replace “philosophical zombie” with whatever your preferred term is for a physical human who lacks any qualia.
ImmortalRationalist
Why is it that philosophical zombies are unlikely to exist? In Eliezer’s article Zombies! Zombies?, the argument seemed to be mostly directed against epiphenomenalism. In other words, if a philosophical zombie existed, there would likely be evidence that it was a philosophical zombie, such as its not talking about qualia. However, there are individuals who outright deny the existence of qualia, such as Daniel Dennett. Is it not possible that individuals like Dennett are themselves philosophical zombies?
Also, what are LessWrong’s views on the idea of a continuous consciousness? CGPGrey brought up this issue in The Trouble with Transporters. Does a continuous self exist at all, or is our perception of being a continuous conscious entity existing throughout time just an illusion?
This video by CGPGrey is somewhat related to the idea of memetic tribes and the conflicts that arise between them.
This is a bit unrelated to the original post, but Ted Kaczynski has an interesting hypothesis on the Great Filter, mentioned in Anti-Tech Revolution: Why and How.
But once self-propagating systems have attained global scale, two crucial differences emerge. The first difference is in the number of individuals from among which the “fittest” are selected. Self-prop systems sufficiently big and powerful to be plausible contenders for global dominance will probably number in the dozens, or possibly in the hundreds; they certainly will not number in the millions. With so few individuals from among which to select the “fittest,” it seems safe to say that the process of natural selection will be inefficient in promoting the fitness for survival of the dominant global self-prop systems. It should also be noted that among biological organisms, species that consist of a relatively small number of large individuals are more vulnerable to extinction than species that consist of a large number of small individuals. Though the analogy between biological organisms and self-propagating systems of human beings is far from perfect, still the prospect for viability of a world-system based on the dominance of a few global self-prop systems does not look encouraging.
The second difference is that in the absence of rapid, worldwide transportation and communication, the breakdown or the destructive action of a small-scale self-prop system has only local repercussions. Outside the limited zone where such a self-prop system has been active there will be other self-prop systems among which the process of evolution through natural selection will continue. But where rapid, worldwide transportation and communication have led to the emergence of global self-prop systems, the breakdown or the destructive action of any one such system can shake the whole world-system. Consequently, in the process of trial and error that is evolution through natural selection, it is highly probable that after only a relatively small number of “trials” resulting in “errors,” the world-system will break down or will be so severely disrupted that none of the world’s larger or more complex self-prop systems will be able to survive. Thus, for such self-prop systems, the trial-and-error process comes to an end; evolution through natural selection cannot continue long enough to create global self-prop systems possessing the subtle and sophisticated mechanisms that prevent destructive internal competition within complex biological organisms.
Meanwhile, fierce competition among global self-prop systems will have led to such drastic and rapid alterations in the Earth’s climate, the composition of its atmosphere, the chemistry of its oceans, and so forth, that the effect on the biosphere will be devastating. In Part IV of the present chapter we will carry this line of inquiry further: We will argue that if the development of the technological world-system is allowed to proceed to its logical conclusion, then in all probability the Earth will be left a dead planet, a planet on which nothing will remain alive except, maybe, some of the simplest organisms (certain bacteria, algae, etc.) that are capable of surviving under extreme conditions.
The theory we’ve outlined here provides a plausible explanation for the so-called Fermi Paradox. It is believed that there should be numerous planets on which technologically advanced civilizations have evolved, and which are not so remote from us that we could not by this time have detected their radio transmissions. The Fermi Paradox consists in the fact that our astronomers have never yet been able to detect any radio signals that seem to have originated from an intelligent extraterrestrial source.
According to Ray Kurzweil, one common explanation of the Fermi Paradox is “that a civilization may obliterate itself once it reaches radio capability.” Kurzweil continues: “This explanation might be acceptable if we were talking about only a few such civilizations, but [if such civilizations have been numerous], it is not credible to believe that every one of them destroyed itself.” Kurzweil would be right if the self-destruction of a civilization were merely a matter of chance. But there is nothing implausible about the foregoing explanation of the Fermi Paradox if there is a process common to all technologically advanced civilizations that consistently leads them to self-destruction. Here we’ve been arguing that there is such a process.
One perspective on pain is that it is ultimately caused by less than ideal Darwinian design of the brain. Essentially, we experience pain and other forms of suffering for the same reason that we have backwards retinas. Other proposed systems, such as David Pearce’s gradients of bliss, would accomplish the same things as pain without any suffering involved.
Should the mind projection fallacy actually be considered a fallacy? It seems like being unable to imagine a scenario where something is possible is in fact Bayesian evidence that it is impossible, but only weak Bayesian evidence. Being unable to imagine a scenario where 2+2=5, for instance, could be considered weak evidence that 2+2 equaling 5 is impossible.
Here is a somewhat relevant video.
This LessWrong Survey had the lowest turnout since Scott’s original survey in 2009.
What is the average amount of turnout per survey, and what has the turnout been year by year?
Does anyone here know any ways of dealing with brain fog and sluggish cognitive tempo?
What is the probability that induction works?
On a related question, if Unfriendly Artificial Intelligence is developed, how “unfriendly” is it expected to be? The most plausible-sounding outcome may be human extinction. The worst-case scenario would be a UAI that actively tortures humanity, but I can’t think of many scenarios in which this would occur.
Eliezer Yudkowsky wrote this article a while ago, which basically states that all knowledge boils down to two premises: that “induction works” has a sufficiently large prior probability, and that there exists some single large ordinal that is well-ordered.
If you are young, healthy, and have a long life expectancy, why should you choose CI? In the event that you die young, would it not be better to go with the one that will give you the best chance of revival?
Not sure how relevant this is to your question, but Eliezer wrote this article on why philosophical zombies probably don’t exist.
Explain. Are you saying that since induction appears to work in your everyday life, this is Bayesian evidence that the statement “Induction works” is true? This has a few problems. The first problem is that if you make the prior probability sufficiently small, it cancels out any evidence you have for the statement being true. To show that “Induction works” has at least a 50% chance of being true, you would need to either show that the prior probability is sufficiently large, or come up with a new method of calculating probabilities that does not depend on priors. The second problem is that you also need to justify that your memories are reliable. This could be done using induction together with a sufficiently large prior probability that memory works, but that move has the same problems mentioned previously.
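The point about a sufficiently small prior cancelling out the evidence can be made concrete with a standard odds-form Bayesian update. This is a minimal sketch with illustrative, made-up numbers (the function name and the specific priors and Bayes factors are my own, not from the discussion):

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability after one Bayesian update.

    Works in odds form: posterior odds = prior odds * likelihood ratio
    (Bayes factor), then converts back to a probability.
    """
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Strong evidence (Bayes factor of 1000) against a modest prior of 1%
# pushes the posterior above 90%:
print(posterior(0.01, 1000))   # ~0.91

# The same strong evidence against a tiny prior of 1e-9 leaves the
# posterior around 1e-6 -- nowhere near the 50% threshold:
print(posterior(1e-9, 1000))
```

This illustrates why the debate turns on justifying the prior itself: no fixed amount of observed evidence gets “Induction works” past 50% if the prior is allowed to be arbitrarily small.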
For those in this thread signed up for cryonics, are you signed up with Alcor or the Cryonics Institute? And why did you choose that organization and not the other?
Eliezer Yudkowsky wrote this article about the two things that rationalists need faith to believe in: That the statement “Induction works” has a sufficiently large prior probability, and that some single large ordinal that is well-ordered exists. Are there any ways to justify belief in either of these two things yet that do not require faith?
Eliezer wrote this article a few years ago, about the two things that rationalists need faith to believe. Has any progress been made in finding justifications for either of these things that do not require faith?
We guess we are around the LW average.
What would you estimate to be the LW average?
What do you think of Avshalom Elitzur’s arguments for why he reluctantly thinks interactionist dualism is the correct metaphysical theory of consciousness?