In case you haven’t realized it, you’re being downvoted because your post reads like this is the first thing you’ve read on this site. Just FYI.
“Universally Preferable Behavior” by Stefan Molyneux, “Argumentation Ethics” by Hans-Hermann Hoppe, and of course Objectivism, to name the most famous ones. Generally, the ones I’m referring to all try to deduce some sort of Objective Ethics, and (surprise) it turns out that property rights are an inherent property of the universe and capitalism is a moral imperative.
Forgive me if you’re thinking of some other libertarians who don’t have crazy ethical theories. I didn’t mean to make gross generalizations. I’ve just observed that libertarian philosophers who consciously promote their theories of ethics tend to be of this flavor.
Why is the discrimination problem “unfair”? It seems like in any situation where decision theories are actually put into practice, that type of reasoning is likely to be popular. In fact I thought the whole point of advanced decision theories was to deal with that sort of self-referencing reasoning. Am I misunderstanding something?
Maybe “progress” doesn’t refer to equality, but to autonomy. The progression of social organization does seem to lead toward individual autonomy and equality of opportunity. Egalitarianism is a nice talking point for politicians, but when we say “progress” we really mean individual autonomy.
Austrian-minded people definitely have some pretty crazy methods, but their economic conclusions seem pretty sound to me. The problem arises when they apply those methods to areas other than economics (see any libertarian theory of ethics; crazy stuff).
I think the correct comparison would be, “since no one can agree on the nature of Earth/Earth’s existence, Earth must not exist” but this is ridiculous since everyone agrees on at least one fact about Earth: we live on it. The original argument still stands. Denying the existence of god(s) doesn’t lead to any ridiculous contradictions of universally experienced observations. Denying Earth’s geometry does.
You are merely objecting to Eliezer’s choice of scale. The distances between “intelligences” are pretty arbitrary. Plus he’s using a linear scale, so there’s no room for intelligence curves.
I think the DRH quote is pretty out of context, and Eliezer’s commentary on it is pretty unfair. DRH has a deeply personal respect for human intelligence. He doesn’t look forward to the singularity because he (correctly) points out that it will be the end of humanity. Most SI/LessWrong people accept that and look forward to it anyway, but for Hofstadter the standard picture of the singularity is an extremely pessimistic vision of the future. Note that this is simply a result of his personal beliefs. He never claims that people are wrong to look forward to superintelligence, brain emulation and things like that, just that he doesn’t. See this interview for his thoughts on the subject.
Congratulations, you have just discovered the difference between art and design. If Azkaban had been designed to be a commentary on muggle prisons, the connection would have had to be made explicit within the text. The fact that Eliezer pointed out the connection does not mean he consciously tried to make it explicit in the text. Since the connection is implicit rather than explicit, the commentary is an artistic interpretation of the text. You don’t need to feel justified in an artistic interpretation.
You should collect data on time spent using the app and on your success rate. Do Science and stuff.
By the way, I spent a good amount of time using it yesterday and I just finished an entire Hershey’s bar. Apparently it’s not working for me.
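If anyone actually wants to try this, here’s roughly what the logging-and-analysis step could look like. To be clear, everything in the sketch (the file name, the columns, counting candy bars as the outcome measure) is a made-up placeholder, not anything from the app or from real data.

```python
# A rough sketch of the "collect data, Do Science" suggestion above.
# The file name, column names, and outcome measure are all hypothetical.
import csv
from statistics import correlation  # requires Python 3.10+

with open("app_log.csv", newline="") as f:
    rows = list(csv.DictReader(f))  # e.g. columns: date, minutes_used, candy_bars_eaten

minutes = [float(r["minutes_used"]) for r in rows]
candy = [float(r["candy_bars_eaten"]) for r in rows]

# Naive first look: does more time in the app go with less candy eaten?
print("correlation(minutes_used, candy_bars_eaten) =", correlation(minutes, candy))
```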
In any decision involving an Omega-like entity that can run perfect simulations of you, there would be no way to tell whether you were inside the simulation or in the real universe. Therefore, in situations where the outcome depends on the results of the simulation, you should act as though you are in the simulation. For example, in the counterfactual mugging, you should pay the smaller amount, because if you’re in Omega’s simulation you guarantee your “real life” counterpart the larger sum.
Of course this only applies if the entity you’re dealing with happens to be able to run perfect simulations of reality.
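To make the arithmetic concrete, here’s a minimal sketch of the expected-value comparison behind this. The $100 / $10,000 figures and the fair coin are just the usual illustrative numbers for counterfactual mugging, not anything specified above.

```python
# Minimal sketch of the counterfactual-mugging arithmetic, assuming Omega's
# simulation is perfect, so the simulated and real "you" decide identically.
# The $100 / $10,000 / fair-coin figures are illustrative assumptions.

COST_IF_ASKED = 100           # what you hand over when the coin goes against you
REWARD_IF_PREDICTED = 10_000  # what Omega pays in the other branch if it predicts you'd pay
P_WIN = 0.5                   # fair coin

def expected_value(pays_when_asked: bool) -> float:
    winning_branch = REWARD_IF_PREDICTED if pays_when_asked else 0.0
    losing_branch = -COST_IF_ASKED if pays_when_asked else 0.0
    return P_WIN * winning_branch + (1 - P_WIN) * losing_branch

print("policy 'pay up':", expected_value(True))   # 0.5 * 10000 - 0.5 * 100 = 4950.0
print("policy 'refuse':", expected_value(False))  # 0.0
```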
What makes “Science vs. Bayes” a dichotomy? The scientific method is just a special case of Bayesian reasoning. I mean, I understand the point of the article, but it seems like way less of a dilemma in practice.
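To make the “special case” claim concrete: a classic falsifying experiment is just a Bayesian update in which one hypothesis assigns the observed result a likelihood near zero. A toy example (the hypotheses and numbers are made up for illustration):

```python
# Toy illustration (made-up numbers): a "falsifying" observation is just an
# ordinary Bayesian update where one hypothesis nearly forbids the result.

prior = {"H1": 0.5, "H2": 0.5}
likelihood = {"H1": 0.95, "H2": 0.001}  # P(observed result | hypothesis)

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior)  # H1 ~ 0.998, H2 ~ 0.002: falsification as an extreme update
```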
I know this is an old post, I just wanted to write down my answers to the “morality as preference” questions.
Why do people seem to mean different things by “I want the pie” and “It is right that I should get the pie”? Why are the two propositions argued in different ways?
Do the statements “I liked that movie” and “That movie was good” sound different? The latter is phrased as a statement of fact, while the former is obviously a statement of preference. But unless the latter is said by a movie critic or film professor, no one takes it as a real statement of fact. It’s just a quirk of English that we don’t always indicate why we believe the words we say: it’s always optional to state whether a claim is a self-evident fact, the word of a trusted expert, or merely a statement of opinion.
When and why do people change their terminal values? Do the concepts of “moral error” and “moral progress” have referents? Why would anyone want to change what they want?
“Moral progress” doesn’t really refer to individuals. The entities we describe as making “moral progress” tend to be at the community level, like societies, so I don’t really get the first and last questions. As for the concept itself, it refers to the number of people who have their moral preferences met. The reason democracy is a “more ethical” society than totalitarianism is that more people have a chance to express their preferences and have them met. If I think a particular war is immoral, I can vote for the candidate or law that will end that war. If I think a law is immoral, I can vote to change it. I think this theory lines up pretty well with the concept of moral progress.
Why and how does anyone ever “do something they know they shouldn’t”, or “want something they know is wrong”? Does the notion of morality-as-preference really add up to moral normality?
Usually people who do something they “know is wrong” are just doing something that most other people don’t like. The only reason it feels wrong to steal is that society has developed, culturally and evolutionarily, in such a way that most people think stealing is wrong. That’s really all it is. There’s nothing in physics that encodes what belongs to whom. Most people just want stuff to belong to them because of various psychological factors.
This is an awesome article. But I’ve always been bothered by people’s expectations when it comes to arriving on time for things. In my experience, people are less annoyed at the person who leaves early than the person who arrives late, even if they miss the same amount of the meeting. The usual reasons people give for avoiding being late (missing content, disrupting the meeting) apply just as much to leaving early. Why the double standard? Also, people are generally more understanding if you have to miss something than if you are an hour late, for some reason.
This is all completely anecdotal, obviously.
I suppose you’re right. Although it’s pretty easy for me to imagine something that is “conscious” but isn’t an “observer,” i.e., a mind without sensory capabilities. I guess I was just wondering whether our common (non-rigorous) definitions of the two concepts are independent.
It occurred to me that I have no idea what people mean by the word “observer”. Rather, I don’t know if a solid reductionist definition for observation exists. The best I can come up with is “an optimization process that models its environment”. This is vague enough to include everything we associate with the word, but it would also include non-conscious systems. Is that okay? I don’t really know.
I had no idea. That is really interesting. What are some artificial languages that have evidential grammar? I knew lojban had evidentials, but I think they’re optional.
I understand the concept of Tegmarkian multiverses, but could you explain how they “reduce to themselves”?
Behavior is very different from thought. It’s easier to think of animals as machines because we have never experienced an animal’s thoughts; to us, animals look exactly as you described, like behavior-outputting machines.
It’s not meant to be “serious philosophy”. He’s not presenting the ideas in the book as literally true; he’s just provoking the reader to look at the issues in the book in a different light. Forcing the reader to consider alternative hypotheses, if you will.