Robert Ettinger wrote a story in 1948 called “The Penultimate Trump” in which the main character, the first cryonicist, awakes in seeming triumph only to be sent to Hell as punishment for his crimes.
Robin Hanson notes that the existence of a stock market can also give rise to an incentive to e.g. bomb a company’s offices, yet such things very rarely actually happen.
Yes, of course, but people sometimes recover after they are dying, and even after they die, they don’t find out they are dead the next day.
Also, in practice this only happens when someone is procrastinating and the supposed additional steps are just ways of avoiding the more difficult steps, so a reasonable estimate of the remaining time to completion is that the person is simply not going to complete the task, ever.
It seems that you are expecting a situation somewhat like this:
Day 1: I expect to be done in 5 days.
Day 2: I expect to be done in 5 days.
Day 10: I expect to be done in 7 days.
Day 20: I expect to be done in 4 days.
Day 30: I expect to be done in 5 days.
Basically, this cannot happen if I am updating rationally. You say, “Worse, each additional step is novel; the additional five steps you discovered after completing step 6 didn’t add anything to predict the additional twelve steps you added after completing step 19.” But in fact, it does add something: namely that this task that I am trying to accomplish is very long and unpredictable, and the more such steps are added, the longer and more unpredictable I should assume it to be, even in the remaining portion of the task. So by day 30, I should be expecting about another month, not another 5 days. And if I do this, at some point it will become clear that it is not worth finishing the task, at least assuming that it is not simply the process itself that is worth doing.
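As a rough illustration of that kind of updating (my own sketch, not something from the original comment; the function name and the simple overrun-scaling rule are assumptions), one could correct the naive “5 more days” estimate by how badly earlier estimates have already overrun:

```python
# Minimal sketch: scale the naive remaining-time estimate by the observed
# overrun factor, instead of trusting "5 more days" at face value.
# The scaling rule here is just one simple way to formalize the idea above.

def updated_remaining_estimate(naive_remaining, elapsed, originally_predicted_total):
    """naive_remaining: days the inside view currently says are left (e.g. 5)
    elapsed: days actually spent so far (e.g. 30)
    originally_predicted_total: days the task was first expected to take (e.g. 5)
    """
    overrun_factor = elapsed / originally_predicted_total  # how wrong the estimates have been so far
    return naive_remaining * overrun_factor

# Day 30 of a task originally expected to take 5 days, with "5 days left":
print(updated_remaining_estimate(5, 30, 5))  # 30.0 days, i.e. roughly another month
```

On this rule the day-30 answer comes out to about a month rather than another 5 days, which is the point above.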
I would act as though he were wrong, for anthropic reasons. In other words, six months later (or 70 years later) I will certainly not wake up one day and notice that I am dead, since this can never happen to anyone.
I work with multiple screens and I estimate that I save between 20 minutes and one hour per day in comparison to using only one. I do financial work and examples would be: Quickbooks open on one screen and an internet bank account open on the other; or the account open on one screen and some financial PDF open on the other; or similar things.
Richard Loosemore has stated a number of times that he does not expect an AI to have goals at all in a sense which is relevant to this discussion, so in that way there is indeed disagreement about whether AIs “pursue their goals.”
Basically he is saying that AIs will not have goals in the same way that human beings do not have goals. No human being has a goal that he will pursue so rigidly that he would destroy the universe in order to achieve it, and AIs will behave similarly.
I don’t see how this study does any good unless they first measure the rate at which people actually match the stereotypical preconceptions and then compare this with the two average ratings. Otherwise it is possible the people were becoming less biased, not more.
This calls into question your claim that you won’t accept bets that would call into question your ability to pay if you lose.
What do you think is the probability (given the fixed assumption that Christianity is false) that sometime before 2045 you will have the psychological experience of a vision of Christ claiming to be risen from the dead?
If you “use the word that the person in question prefers,” then the word acquires a new meaning. From that moment on, the word “male” means “a human being who prefers to be called ‘male’” and the word “female” means “a human being who prefers to be called ‘female’”. These are surely not the original meanings of the words.
No, but you will.
Yes, to the degree that you accept the existence of a Big World, together with the usual assumptions about personal identity, you should expect never to die.
Even if there is no Big World, however, no one will ever experience dying anyway. Your total lifespan will be limited, but you will never notice it come to an end. So you might as well think of that limited span as a projection of an infinite lifespan onto an open finite interval. So again, one way or another you should expect never to die.
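To make the “projection onto an open finite interval” picture concrete (my own illustration, not part of the original comment), any strictly increasing map from the half-line onto a bounded interval will do, for example

$$f(t) = T\left(1 - e^{-t}\right), \qquad f\colon [0,\infty) \to [0,T),$$

which assigns to every moment of an unbounded subjective time $t$ a point strictly before the endpoint $T$; the endpoint itself is never reached from the inside.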
One obvious reason why we are able to choose to determine various parts of the map is that this ability contributed to survival. For example, the leader of the tribe hates you and makes a few insulting remarks. You can choose to interpret these in a fairly neutral way, or you can interpret them as they are. If you choose to interpret them as hateful and insulting, as they are, you may have a hard time not responding in a corresponding manner, and so you may end up dead. You will be better off if you can choose to interpret them in the neutral way. Or again, the leader of the tribe proclaims an obviously false religious dogma. If you can choose to accept it, things will go on as usual. If you cannot accept it, you may have a hard time pretending well enough to avoid getting killed as a heretic. Again you will be better off if you are in control of your map.
Also, I disagree that there is any rigid distinction between beliefs we can control and others we cannot (as I suggested in my post on belief in belief). We cannot generally change the visual sensation when we look at the sky. But whether or not we believe the statement, “the sky is blue,” is indeed up to us, and some people will e.g. deny that the sky is blue, since it is not really colored in the same way as other things. Or someone could indeed believe that the sky is fundamentally green, if that were e.g. a religious dogma.
Sometimes people will argue that if you would pay a lot to save your own life from a fatal illness, that means you don’t value lives equally but prefer your own, and therefore you should sign up for cryonics. But this argument seems a bit problematic to me, because it assumes my preference to save my life in the case of the fatal illness is ideal. In reality it might not be ideal at all. I am certainly not Zachary Baumkletterer, but it’s likely I would be a better person if I were. If this is the case, the problem is not that I am unwilling to sign up for cryonics, but that I would pay to save myself from the fatal illness instead of giving the money away. And this argument does not mean that if I don’t want to sign up for cryonics, I instead have to start donating all my money to charity. It just means I am doing the best I feel that I can, and if I signed up for cryonics I would be doing even worse (by doing less for others).
From thirty years ago. Amazingly boring.
Someone pointed out that the fact that TrueCrypt allows a hidden system gives an attacker an incentive to torture you until you reveal the second, hidden one. And if you don’t use that option, too bad for you: the attacker just tortures you until you die, since they don’t believe you when you deny that it exists.
There is no perfect decision procedure which is beneficial in all possible situations. If the situation is able to know the agent’s decision procedure, it can act in such a way as to “minimize the agent’s utility if the agent uses decision procedure X”, and in that situation decision procedure X, whatever it is, will be bad for the agent. So in order to have perfect knowledge of what decision procedure to use, you have to know what situations are going to actually happen to you.
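A minimal sketch of that point (my own illustration; the names and the toy payoff structure are assumptions, not anything from the original comment): an environment that can inspect the agent’s decision procedure can single that procedure out for punishment, so no single procedure does well in every possible situation.

```python
# Toy model: for any fixed decision procedure, one can construct an
# environment that inspects the procedure and gives it minimal utility.

def punish(procedure):
    """Build an environment that pays 0 exactly when the agent uses `procedure`,
    and 1 otherwise."""
    def environment(agents_procedure):
        return 0 if agents_procedure is procedure else 1
    return environment

def procedure_x(options):
    return max(options)  # some particular decision procedure

adversarial_situation = punish(procedure_x)
print(adversarial_situation(procedure_x))        # 0: procedure X does badly here
print(adversarial_situation(lambda o: min(o)))   # 1: a different procedure does fine
```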
I completely disagree with the position argued by some here that “happiness that comes from having mistaken beliefs isn’t valuable.” I think that such happiness is a good and valuable thing. I do not think it is merely “less bad” than other things; I think it is good. The false belief is bad, but the happiness that comes from it is good.
I do not think that my position about this is an unusual one for people to hold. It is fairly common for me not to correct someone’s false belief because I think they are happier and better off with the false belief than without it, and this is something that many other people do as well. Likewise, I know of a number of formerly religious people who explicitly envy their former, religious selves; if they could push a button to get back their false belief and the happiness it caused, they would push it. But they do not think there is such a button.
Ian Morris argues in Why the West Rules that people all over the world had the tendency to develop agriculture and the like, and started to do so with the start of the present interglacial period, but that people in the Middle East succeeded first simply because there were more plant and animal species there that could be usefully domesticated. According to him, people elsewhere would have done the same thing in the long run, perhaps in another one or two thousand years, but in many places this was cut short because the societies came into contact before it had a chance to happen. I found his account pretty plausible.