I don’t see why Darwinian evolution would necessarily create humanoid aliens in other environments—sure, arguing that they are likely to have structures similar to eyes to take advantage of EM waves makes sense, and even arguing that they’ll have a structure similar to a head, where a centralized sensory and decision-making unit like a brain sits, makes sense, but walking on two legs? Even looking at the more intelligent life-forms on our own planet we find a great diversity of structure: from apes to dolphins to elephants to octopuses… All I’d say we can really gather from this argument is that aliens will look like creatures and not like flickering light rays or crystals or something incomprehensibly foreign.
Your argument is interesting, but I’m not sure if you arrived at your 1% estimate by specific reasoning about uploading/AI, or by simply arguing that paradigmatic ‘surprises’ occur frequently enough that we should never assign more than a 99% chance to something (theoretically possible) not happening.
I can conceive of many possible worlds (given AGI does not occur) in which the individual technologies needed to achieve uploading are all in place, and yet are never put together for that purpose due to general human revulsion. I can also conceive of global-political reasons that will throw a wrench in tech-development in general. Should I assign each of those a 1% probability just because they are possible?
Also, no offense meant to you or anyone else here, but I frequently wonder how much bias there is in this in-group of people who like to think about uploading/FAI towards believing that it will actually occur. It’s a difficult thing to gauge, since it seems the people best qualified to answer questions about these topics are the ones most excited by and invested in the positive outcomes. I mean, if someone looks at the evidence and becomes convinced that the situation is hopeless, they are much less likely to get involved in bringing about a positive outcome and more likely to rationalize all this away as either crazy or likely to occur so far in the future that it won’t bother them. Where do you go for an outside view?
I actually did reflect after posting that my probability estimate was ‘overconfident,’ but since I don’t mind being embarrassed if I’m wrong, I’m placing it at where I actually believe it to be. Many posts on this blog have been dedicated to explaining how difficult the task of FAI is and how few people are capable of making meaningful contributions to the problem. There seems to be a panoply of ways for even minute mistakes to make things go horribly wrong. I think 1 in 10,000, or even 1 in a million, is being generous enough with the odds that the problem is still worth looking at (given what’s at stake). Perhaps you have a problem with the mindset of low probabilities, like it’s pessimistic and self-defeating? Also, do you really believe uploading could occur before AI?
Interesting. I remember my brother saying, “I want to be frozen when I die, so I can be brought back to life in the future,” when he was a child (somewhere between ages 9 and 14, I would guess). He probably got the idea from a cartoon show. I think the idea lost favor with him when he realized how difficult a proposition reanimating a corpse really was (he never thought about the information-capture aspect of it).
Well, I look at it this way:
I place the odds of humans actually being able to revive a frozen corpse near zero.
Therefore, in order for cryonics to work, we would need some form of information-capture technology that would scan the intact frozen brain and model the synaptic information in a form that could be ‘played.’ This is equivalent to the technology needed for uploading.
Given the complicated nature of whole brain simulations, some form of ‘easier’ quick and dirty AI is vastly more likely to come into being before this could take place.
I place the odds of this AI being friendly near zero. This might be where our calculations diverge.
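To make the arithmetic behind that chain explicit, here is a minimal back-of-envelope sketch; every number in it is a hypothetical placeholder of my own, not a figure anyone in this thread has endorsed, and treating the steps as independent is itself a simplification.

```python
# Back-of-envelope estimate of P(cryonics works), treating the steps above as
# (roughly) independent conjuncts that all have to go right.
# Every number below is a hypothetical placeholder, not an endorsed estimate.

p_preserved = 0.5    # the frozen brain actually retains the relevant synaptic information
p_scan_tech = 0.3    # information-capture / uploading technology is ever developed
p_no_bad_ai = 0.05   # a 'quick and dirty' unfriendly AI does not arrive first and end the story
p_revived   = 0.5    # someone actually runs the scan and 'plays' the model

p_cryonics_works = p_preserved * p_scan_tech * p_no_bad_ai * p_revived
print(f"P(cryonics works) ~ {p_cryonics_works:.4f}")   # roughly 0.004 with these placeholders
```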
In terms of ‘Everett branches,’ one can never ‘experience’ being dead, so if we’re going to go that route, we might as well say that we all live on in some branch where FAI was developed in time to save us… needless to say, this gets a bit silly as an argument for real decisions.
A question for Eliezer and anyone else with an opinion: what is your probability estimate of cryonics working? Why? An actual number is important, since otherwise cryonics is an instance of Pascal’s mugging. “Well, it’s infinitely more than zero and you can multiply it by infinity if it does work” doesn’t cut it for me. Since I place the probability of a positive singularity vanishingly small (p<0.0001), I don’t see the point in wasting money I could be enjoying now on what amounts to a lottery ticket, or in spending the social capital and energy on something that will make me seem insane.
This is obviously true, but I’m not suggesting that all people will become heroin junkies. I’m using heroin addiction as an example of where neurochemical changes directly change preferences and therefore the utility function; i.e., the ‘utility function’ is not a static entity. Neurochemistry differences among people are vast, and heroin doesn’t come close to a true ‘wire-head,’ and yet some percentage of normal people are susceptible to having it alter their preferences to the point of death. After uploading/AI, interventions far more invasive and complete than heroin will be possible, and perhaps widely available. It is nice to think that humans will opt not to use them, and most people with their current preferences intact might not even try (as many have never tried heroin), but if preferences are constantly being changed (as we will be able to do), then it seems likely that people will eventually slide down a slippery slope towards wire-heading, since, well, it’s easy.
Combination of being broke, almost dying, mother-interference, naltrexone, and being institutionalized. I think there are many that do not quit though.
There’s a far worse problem with the concept of a ‘utility function’ as a static entity than that different generations have different preferences: the same person has very different preferences depending on his environment and neurochemistry. A heroin addict really does prefer heroin to a normal life (at least during his addiction). An ex-junkie friend of mine wistfully recalls how amazing heroin felt, and how he realized he was failing out of school and slowly wasting away to death, but none of that mattered as long as there was still junk. Now, it’s not hard to imagine how, in a few iterations of ‘maximizing changing utilities,’ we all end up wire-headed one way or another. I see no easy solution to this problem. If we say “the utility function is that of unaltered, non-digital humans living today,” then there will be no room for growth and change after the singularity. However, I don’t see an easy way of not falling into the local maximum of wire-heading one way or another at some point… Solutions welcome.
Also, the inverse seems to be true: watching someone fail to exert self-control makes it more difficult for you to do so. My husband’s laziness has proven dangerously contagious.
11) Status doesn’t make people stupid, rather traits other than intelligence determine status, making it unlikely that the highest status individual in a group will also be the most intelligent.
I.e.: do the most intelligent scientists really get the most grant money? Does the most intelligent candidate usually win the election? Are the most arrogant mofos (who perceive themselves as high-status) really the smartest?
That being said, I do see a positive correlation between intelligence and status, but it probably breaks down at the high levels of intelligence you (Eliezer) generally deal with.
Do you have a study that confirms your ‘melatonin subtracts an hour’ theory you could link to? My husband uses melatonin and can still easily spend 12 hours in bed. I’ve avoided using it, since I don’t have difficulty actually falling asleep and I didn’t want to sleep longer as a result of using it. You should probably argue that everyone should try using melatonin for a week or so, since the potential gains are large, not that everyone who doesn’t use it is being foolish. The whole argument falls apart if your base assertion is wrong, and you provide no evidence that the effect melatonin has on you generalizes to everyone. That being said, I am glad you shared this information.
What I think Mitchell is looking for (and he can correct me if I’m wrong) as an explanation of experience is some model that describes the elements necessary for experience and how they interact in some quantitative way. For example, let’s pretend that flesh brains are not the only modules capable of experience, and that we can build experiences out of other materials. A theory of experience would help to answer: what materials can be used, what processing speeds are acceptable (i.e., can experience exist in stasis), what CPUs/processors/algorithms must be implemented, and what outputs will convince us that experience is taking place (vs. creating a Chinese room). Now, I don’t think we will have any way of answering these questions before uploading/AI, but I can conceive of ways of testing many variables in experience once a mind has been uploaded. We could change one variable, ask the subject to describe the change, change it back, and ask the subject what his memory of the experience is, etc., etc. We could run simulations that are deliberately missing normal algorithms until we find which pieces of a mind are the bare-bones essentials of experience. To me this is just another question for the neuroscientists and information theorists, once our technology is advanced enough to actually experiment on it. It is only a ‘problem’ if you believe p-zombies are possible, and that we might create entities that describe experience without having it.
I agree with your interpretation of our current physical and experiential evidence. I believe the perceived dualistic problem arises from imperfections in our current modeling of brain states and control of our own. We cannot easily simulate experiential brain states, reconfigure our own brains to match, and try them out ourselves. We cannot make adjustments of these states on a continuum that would allow us to say physical state A corresponds exactly to experience B and here’s the math. We cannot create experience on a machine and have it tell us that it is experiencing. Without internal access to our source-code, our experiences come into our consciousness fully formed and appear magical.
That being said, the blunt tools we do have—descriptions of others’ experiences, drugs, brain stimulation, fMRI, and psychophysics—do seem to indicate that experience follows directly from physical states of the brain without the need for a dualist explanation. Perhaps the problem will dissolve itself once uploading is possible and individual experiences are more tradeable and malleable.
I had hoped that by asking him to write clearly, he would need to have a point to make clear. You are probably right that this is not the case.
I was interested to see what you had posted that got you expelled from the blog. I think your problem is two-fold: 1) Your comments are very unclearly phrased, such that it takes the reader a long time to figure out what you are trying to say, and 2) You have commented a lot in a very short period of time.
Try putting more time into a small number of well thought-out, well-phrased comments.
Well put! To add some anecdotal evidence to your model, an older psychiatrist friend of mine described an increasing prevalence of the delusion that the person was the only real thing in the world: that everything else was an illusion like the Matrix, or that everyone else was a type of p-zombie. I suggested that the movie The Matrix might be the source of these delusions, but he said the delusion seemed to be gaining in popularity even now, long after the movie’s release. I guess the idea is just generally floating around our culture, and it appeals to people as an explanation for their feelings of being an outsider. It’s much nicer to think you are privy to a deep truth no one else is capable of grasping than that you are mentally ill.
If that happens every year then I think that is strong evidence that the reasons you provide are correct. Surprising and interesting…
I wasn’t there, but it seems unlikely that the reason people bargained so hard as to almost refuse to give you anything (a 10–90 split), while the other side gave the other guy almost ALL of their money, was merely that they realized you had an unfair advantage. Unless this was brought up frequently as the reason, I find it highly dubious, and would guess that the other guy was just better liked in general than you were, such that the people in the class wanted him to win and didn’t care whether or not you won (or even wanted you to lose).
I’d be interested in seeing your reasoning written out in a top-level post. 2:1 seems beyond optimistic to me, especially if you give AI before uploading 9:1, but I’m sure you have your reasons. Explaining a few of these ‘personally credible stories,’ and what classes you place them in such that they sum to 10% total, may be helpful. This goes for why you think FAI has such a high chance of succeeding as well.
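For anyone unused to the odds notation, here is a minimal sketch of how figures like 2:1 and 9:1 translate into probabilities, and how disjoint ‘story’ classes could sum to a 10% total; the class names and numbers are invented purely for illustration, not drawn from the comment being replied to.

```python
# Convert odds of the form a:b (in favor of an outcome) into a probability,
# and show how disjoint 'credible story' classes might sum to an overall 10%.
# The class names and numbers below are invented for illustration only.

def odds_to_prob(a: float, b: float) -> float:
    """Probability of the favored outcome given odds a:b."""
    return a / (a + b)

print(odds_to_prob(2, 1))   # 2:1 -> ~0.667
print(odds_to_prob(9, 1))   # 9:1 -> 0.9

story_classes = {
    "gradual whole-brain emulation succeeds": 0.04,
    "incremental neural-prosthesis route":    0.03,
    "other, currently unforeseen routes":     0.03,
}
print(round(sum(story_classes.values()), 2))   # 0.1 -> the '10% total'
```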
Also, I believe I used the phrase ‘outside view’ incorrectly, since I didn’t mean reference classes. I was interested to know if there are people who are not part of your community that help you with number crunching on the tech-side. An ‘unbiased’ source of probabilities, if you will.