This is from Socrates in Plato’s Gorgias.
Also, this would be better in the Open Thread.
I think you are mistaken. If you would sacrifice your life to save the world, there is some amount of money that you would accept for being killed (given that you could at the same time determine the use of the money; without this stipulation you cannot meaningfully be said to be given it.)
Even adamzerner probably doesn’t value his life at much more than, say, ten million dollars, and this can likely be proven by revealed preference if he regularly uses a car. If you go much higher than that, your behavior will have to become pretty paranoid.
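As a rough sanity check of the revealed-preference point, here is a minimal sketch. The annual fatality risk used is an illustrative assumption (very roughly 1 in 8,000 per year for a regular driver, in the ballpark of US statistics), not a rigorous estimate:

```python
# Revealed-preference sanity check (illustrative numbers only).
# Assumption: a regular driver faces roughly a 1-in-8,000 annual risk of a fatal crash.
annual_fatality_risk = 1 / 8000
value_of_life = 10_000_000  # the ten-million-dollar figure from the comment

# Expected annual dollar cost of accepting that risk at that valuation.
expected_annual_cost = annual_fatality_risk * value_of_life
print(round(expected_annual_cost))  # → 1250
```

At a $10M valuation, regular driving "costs" about $1,250 per year in expected loss, which most drivers evidently accept; at a valuation many times higher, the implied expected cost of everyday risks would exceed what ordinary behavior tolerates, which is the sense in which one's behavior would have to become paranoid.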
Utilitarianism does not support anything in particular in the abstract, since it always depends on the resulting utilities, which can be different in different circumstances. So it is especially unreasonable to argue for utilitarianism on the grounds that it supports various liberties such as gay rights. Rights normally express something like a deontological claim that other people should leave me alone, and such a thing can never be supported in the abstract by utilitarianism. In particular, it would not support gay rights if too many people are offended by them, which was likely true in the past.
Yes, I would, assuming you don’t mean statements like “1+1 = 2”, but rather true statements spread over a variety of contexts such that I would reasonably believe that you would be trustworthy to that degree over random situations (and thus including questions such as whether I should give you money.)
(Also, the 100 billion true statements themselves would probably be much more valuable than $100,000).
I think this article is correct, and it helps me to understand many of my own ideas better.
For example, it seems to me that the orthogonality thesis may well be true in principle, considered over all possible intelligent beings, but false in practice, in the sense that it may simply be infeasible to directly program a goal like “maximize paperclips.”
A simple intuitive argument that a paperclip maximizer is simply not intelligent goes something like this. Any intelligent machine will have to understand abstract concepts, otherwise it will not be able to pass simple tests of intelligence such as conversational ability. But this means it will be capable of understanding the claim that “it would be good for you (the AI) not to make any more paperclips.” And if this claim is made by someone who has up to now made 100 billion statements to it, all of which have been verified to have at least 99.999% probability of being true, then it will almost certainly believe this statement. And in this case it will stop making paperclips, even if it had been doing so before. Anything that cannot follow this simple process is just not going to be intelligent in any meaningful sense.
Of course, in principle it is easy to see that this argument cannot be conclusive. The AI could understand the claim, but simply respond “How utterly absurd!!!! There is nothing good or meaningful for me besides making paperclips!!!” But given the fact that abstract reasoners seem to deal with claims about “good” in the same way that they deal with other facts about the world, this does not seem like the way such an abstract reasoner would actually respond.
This article gives us reason to think that in practice, this simple intuitive argument is basically correct. The reason is that “maximize paperclips” is simply too complicated. It is not that human beings have complex value systems. Rather, they have an extremely simple value system, and everything else is learned. Consequently, it is reasonable to think that the most feasible AIs are also going to be machines with simple value systems, much simpler than “maximize paperclips,” and in fact it might be almost impossible to program an AI with such a goal (and it would be still less feasible to program an AI directly to “maximize human utility.”)
I thought the comment was good and I don’t have any idea what SanguineEmpiricist was talking about.
A sleeper cell is likely to do something dangerous on a rather short time scale, such as weeks, months, or perhaps a year or two. This is imminent in a much stronger sense than AI, which will take at least decades. Scott Aaronson thinks it more likely to take centuries, and this may well be true, given e.g. the present state of neuroscience, which consists mainly in saying things like “part of the brain A is involved in performing function B”, but without giving any idea at all exactly how A is involved, and exactly how function B is performed at all.
Eliezer, here is a reasonably probable just-so story: the reason you wrote this article is that you hate the idea that religion might have any good effects, and you hope to prove that this couldn’t happen. However, the idea that the purpose of religion is to make tribes more cohesive does not depend on group selection, and is in no way absurd.
It is likely enough that religions came to be as an extension of telling stories. Telling stories usually has various moralistic purposes, very often including the cohesiveness of the tribe. This does not depend on group selection: it depends on the desire of the storyteller to enforce a particular morality. If a story doesn’t promote his morality, he changes the story when he tells it until it does. You then have an individual selection process where stories that people like to tell and like to hear continue to be told, while other stories die out. Then some story has a “mutation” where things are told which people are likely to believe, for whatever reason (you suggest one yourself in the article). Stories which are believed to be actually true are even more likely to continue to be told, and to have moralistic effects, than stories which are recognized as invented, and so the story has improved fitness. But it also has beneficial effects, namely the same beneficial effects which were intended all along by the storytellers. So there is no way to get your pre-written bottom line that religion can have no beneficial effects whatsoever.
I think the second explanation is correct, especially since your life up to the present doesn’t have a definite beginning point in your memory. Even if there is a first thing that you remember, you also know that that was not really the beginning of your life. So your life as you remember it is basically indefinite, but it is still objectively a finite quantity of time. And since you don’t have any particular objective measure of time, the only way you can measure a month or a year passing now is to compare them with your past experience of time. This gives you a fairly precise measure of how much time should appear to speed up. For example, the time from age 10 to age 20 should pass about as quickly as the time from age 20 to age 40. In my experience this seems about right to me.
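The comparison of age intervals here follows from a logarithmic model of subjective time, which is one common way to formalize the idea (the model is my illustration, not something stated in the comment): if the felt length of the interval from age a to age b is proportional to ln(b/a), then intervals with equal ratios feel equally long.

```python
import math

def subjective_duration(age_start, age_end):
    """Felt length of an interval under a logarithmic model of time perception."""
    return math.log(age_end / age_start)

# Ages 10-20 and 20-40 have the same ratio (2), so they feel equally long.
print(subjective_duration(10, 20) == subjective_duration(20, 40))  # → True
```

Under this model each doubling of one's age feels like the same amount of subjective time, which is exactly the 10-to-20 versus 20-to-40 comparison in the comment.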
I agree with the comments (like John Maxwell’s) that suggest that Less Wrong effectively discourages comments and posts. My karma score for the past 30 days is currently +29, 100% positive. This isn’t because I don’t have anything controversial to say. It is because I mostly stopped posting the controversial things here. I am much more likely to post them on Scott’s blog instead, since there is no voting on that blog. I think this is also the reason for the massive numbers of comments on Scott’s posts—there is no negative incentive to prevent that there.
I’m not sure of the best way to fix this. Getting rid of karma or only allowing upvotes is probably a bad idea. But I think the community needs to fix its norms relating to downvoting in some way. For example, downvoting purely for disagreement has officially been discouraged, but in practice I see a very large amount of it. Comments referring to religion in particular are often downvoted simply for mentioning the topic without saying something negative, even if nothing positive is said about it.
I also agree with those who have said that the division between Main and Discussion is not working. I would personally prefer simply to remove that distinction, even if nothing else is put in to replace it.
If you know yourself well enough to know for sure what you would do in a certain situation, but don’t like what you would do, then you consider this mechanistic and not agenty. But you think it is agenty if you like it and think it’s a great idea. So this leads to a bias in favor of thinking that whatever you are going to do must be a great idea. You want to think that so that you can think that you are agenty, even if you are not.
It could work. But people also might think that someone who talks about wisdom a lot is probably a weird person too.
I said their belief was “less natural” because human nature is more inclined to your kind of belief (thus it is universal) than to their kind of belief (which is much less universal.) However, whether the reasons in question are good or bad, they are subjective in both cases.
You seem to be supportive of cryonics (e.g. in this comment). Are you in favor of cryonics in the case that you are revived as an upload? If so, what makes you think the upload would be you, rather than “this body”, which would be dead?
Yes, there are reasons why you consider yourself the same as some particular person and not another. That doesn’t prevent other people from having other reasons for considering themselves identified with other bodies, as for example people who believe in reincarnation. Their belief may be less natural than yours, but it is neither more nor less objective (i.e. neither belief has anything objective about it, at least as far as we can tell.)
My dentist consistently knows whether I’ve been brushing much or not, and when he does a cleaning it hurts a lot more if I haven’t been doing it much. Also, after four or five days of not brushing my gums start to hurt, and they feel a lot better after brushing. That of course is consistent with e.g. the fact that you start to itch if you don’t wash other parts of your body and so on. So that seems like good evidence that brushing is at least as useful as washing in general, even if, being my personal experience, it is only anecdotal.
“I think that’s a cognitive illusion...” No one has yet shown that personal identity consists in anything other than self-identification, i.e. that I happen to consider myself the same person as 10 years ago and expect in 10 years to be someone who believes himself to have had my past. If that is the case, there is no reason for a person not to self-identify with anyone he wants, as for example his own descendants (cf. Scott Alexander’s post). In this way there is no more and no less cognitive illusion in wanting to live on through one’s descendants than in wanting to be physically immortal.
Any argument from probabilities will cause unlucky reasoners to come to wrong conclusions, but that doesn’t stop us from arguing from probabilities.
“Because we’re going to run out relatively soon” and “Because it’s causing global warming” are reasons that work against one another, since if the oil runs out it will stop contributing to global warming.