//Scott doesn’t understand why this works. Knowledge helps you achieve your goals. Since most humans already have some moral goals, such as minimizing the suffering of those around them, knowledge assists in achieving those goals and in noticing when you fail to achieve them.//
But I, like Scott, think that when one becomes smarter, they become more likely to adopt particular values. For example, someone who is more rational is more likely to be a utilitarian, and less likely to conclude that disgusting things are thereby immoral. As we get older and wiser, we learn, for example, that dark chocolate is less good than other things. But this isn’t a purely descriptive fact we learn: we learn facts about which things are worth having.
//Worth it relative to what? Worth is entirely relative. The entire concept of the paperclip maximizer is that it finds paperclips maximally worth it. It would value human suffering like you value money. A means to an end.//
This is wrong if you’re a moral realist, which I argue for in one of the linked posts.
//A selfish entity that only wants to maximize the number of paperclips (and keep itself around) is very much disastrous for you.//
But I don’t think it would just maximize the number of paperclips. It would maximize its own welfare, which I think would be bad for me but perhaps good for the world, all things considered.
If we want to prove whether or not 2+2 is 5, we could entertain a book’s worth of reasoning arguing for and against, or we could take 2 oranges, put 2 more oranges with them, and see what happens. You’re getting lost in long-form arguments (that article) about moral realism when it is equally trivial to disprove.
I provided an example of a program that predicts the consequences of actions + a program that sorts them + an implied body that takes the actions. This is basically how tons of modern AI already works, so this isn’t even hypothetical. That is more than enough of a proof of orthogonality, and if your argument doesn’t somehow explain why a specific one of these components can’t be built, this community isn’t going to entertain it.
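Concretely, the decomposition looks something like this sketch (the names are purely illustrative stand-ins, not any real system’s API; it only exists to show that the scoring function is an interchangeable part):

```python
# A minimal sketch of the three-part decomposition described above. The names
# (`world_state.apply`, `outcome.paperclip_count`) are illustrative stand-ins,
# not any real system's API. The point is only that the ranking criterion is a
# free parameter: swap `count_paperclips` for any other scoring function and
# the predictor and actuator work unchanged.

def predict_outcome(world_state, action):
    """Stand-in for a learned model that predicts the consequences of an action."""
    return world_state.apply(action)  # hypothetical world-model call

def count_paperclips(outcome):
    """The 'values' slot: any function from predicted outcomes to numbers will do."""
    return outcome.paperclip_count  # hypothetical attribute of the predicted outcome

def choose_action(world_state, candidate_actions, utility=count_paperclips):
    """Predict each action's consequences, rank them by the utility function,
    and return the top-ranked action for the 'body' to execute."""
    return max(candidate_actions,
               key=lambda a: utility(predict_outcome(world_state, a)))
```

Nothing about the predictor or the action-selection loop constrains what `count_paperclips` rewards, which is the orthogonality claim.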
You think that moral realism is trivial to disprove? That seems monumentally arrogant, when most philosophers are moral realists. The following principle is plausible:
If most philosophers believe some philosophical proposition, you should not think that it is trivial to disprove.
I agree that you could make a program that predicts actions and takes actions. The question is whether, in being able to predict lots of things about the world (generally through complex machine learning), it would generate generally intelligent capabilities. I think it would, and that these capabilities would make it care about various things.
“Trivial” was an overstatement on my part, but it’s certainly not hard.
There are a lot of very popular philosophers who would agree with you, but don’t mistake popularity for truthfulness. Don’t mistake popularity for expertise. Philosophy, like religion, makes tons of unfalsifiable statements, so the “experts” can sit around making claims that sound good but are useless or false. This is a really important point. Consider all the religious experts of the world. Would you take anything they have to say seriously? The very basic principles on which they have based all their subsequent reasoning are wrong. I trust scientists because they can manufacture a vaccine that works (sort of) and I couldn’t. The philosopher experts can sell millions of copies of books, so I trust them in that ability, but not much more.
Engineers don’t get to build massive structures out of nonsense, because they have to build actual physical structures, and you’d notice if they tried. Our theories actually have to be implemented, and when you try to build a rocket using theories involving [phlogiston](https://www.lesswrong.com/posts/RgkqLqkg8vLhsYpfh/fake-causality), you will quickly become not-an-engineer one way or another.
This website is primarily populated by various engineer types who are trying to tell the world that theories about the “inherent goodness of the universe” or “moral truth” or whatever the theory is are going to result in disaster, because they don’t work from an engineering perspective. It doesn’t matter if 7 billion people, experts and all, believe them.
The only analogy I can think to make is that 1200s Christians are about to build a 10-gigaton nuke (continent-destroying) with the trigger mechanism “every 10 seconds it flips a coin and goes off if it’s heads; God will perform a miracle to ensure it only goes off when He wants it to”. Are you going to object to this? How are you going to deal with the priests who are “experts” in the Lord?
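To put numbers on that trigger (my arithmetic, assuming independent fair flips every 10 seconds):

$$P(\text{no detonation after } t \text{ seconds}) = \left(\tfrac{1}{2}\right)^{t/10}$$

so surviving even one hour is a $2^{-360}$ shot; the only thing standing between you and detonation is whether the priests are actually right about the miracles.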
It is true that the experts can be wrong. But they are not wrong in obvious ways. When they are, other smart people write posts arguing that they are wrong. I do not think the non-existence of God is trivial, though I think it is highly likely, and that’s partly based on private evidence. https://benthams.substack.com/p/why-are-there-so-few-utilitarians
I’m not saying “the experts can be wrong”; I’m saying these aren’t even experts.
Pick any major ideology/religion you think is false. One way or another (they can’t all be right!), the “experts” in these areas aren’t experts; they are basically insane, babbling on at length about things that aren’t at all real, which is what I think most philosophy experts are doing. Making sure you aren’t one of them is the work of epistemology, which The Sequences are great at covering. In other words, I view the philosopher experts you are citing as largely [phlogiston](https://www.lesswrong.com/posts/RgkqLqkg8vLhsYpfh/fake-causality) experts.
Your link looks broken; here’s a working version.
(Note: your formatting looks correct to me, so I suspect the issue is that you’re not using the Markdown version of the LW editor. If so, you can switch to that using the dropdown menu directly below the text input box.)