This is a deprecated account. See JessRiedel (https://www.lesswrong.com/users/jessriedel) instead.
Jess_Riedel
Careful. The term “graph theory” is usually used to refer to a specific branch of mathematics which I don’t think you’re referring to.
I think the problem is much more profound than you suggest. It is not something that rationalists can simply take on with a non-infinitesimal confidence that progress will be made. Certainly not amateur rationalists doing philosophy in their spare time (not that this isn’t healthy). I don’t mean to say that rationalists should give up, but we have to choose how to act in the meantime.
Personally, I find the situation so desperate that I am prepared to simply assume moral realism when I am deciding how to act, with the knowledge that this assumption is very implausible. I don’t believe this makes me irrational. In fact, given our current understanding of the problem, I don’t know of any other reasonable approaches.
Incidentally, this position is reminiscent both of Pascal’s wager and of an attitude toward morality and AI that Eliezer claimed to have previously held but now rejects as flawed.
I’ve read it before. Though I have much respect for Eliezer, I think his excursions into moral philosophy are very poor. They show a lack of awareness that all the issues he raises have been hashed out decades or centuries ago at a much higher level by philosophers, both moral realists and otherwise. I’m sure he believes that he brings some new insights, but I would disagree.
Moral skepticism is not particularly impressive as it’s the simplest hypothesis. Certainly, it seems extremely hard to square moral realism with our immensely successful scientific picture of a material universe.
The problem is that we still must choose how to act. Without a morality, all we can say is that we prefer to act in some arbitrary way, much as we might arbitrarily prefer one food to another. And... that’s it. We can make no criticism whatsoever of the actions of others, not even that they should act rationally. We cannot say that striving for truth is any better than killing babies (or following a religion?) any more than we can say green is a better color than red.
At best we can make empirical statements of the form “A person should act in such-and-such manner in order to achieve some outcome”.
Some people are prepared to bite this bullet. Yet most who say they do continue to behave as if they believed their actions were more than arbitrary preferences.
The quotation refers to punitive damages in civil cases. What evidence is there that this phenomenon exists with criminal penalties? (I don’t deny that it exists, but it is probably suppressed. That is, criminal penalties are more likely than punitive damages to reflect the probability of detection.)
For instance, there are road signs in northern Virginia warning of a $10,000 fine for littering. The severity of the fine is surely due to the difficulty in catching someone in the act.
Should we be worried that people will vote stuff up just because it is already popular? There is currently no penalty for voting against the crowd, so wouldn’t people (rightly) want to do this?
(Of course, we assume people are voting based on their personal impressions. It’s clear that votes based on Bayesian beliefs are not as useful here.)
Exactly. It seems unlikely that prestigious researchers will be unable to publish a brilliant but unconventional idea merely because they cannot fully leverage their fame to sway editors. In fact, prestigious researchers have exactly what is needed to ensure their idea will take hold if it has merit: job security. They have plenty of time to nurture and develop their idea until it is accepted.
That’s exactly the point: voting is supposed to put comments in order according to quality, so that you can read the worthwhile comments in a reasonable time. My claim is that the current voting system will not do this well at all and that a dual voting system will be better. (That second bit is just a guess). The opinion poll information is just a nice side effect.
Yep, what I wrote is just based on my best guess. A usability study would be great.
Also, I am going with the crowd and changing to a user name with an underscore.
This is one place where Caplan seems to go off the deep end. I think it illustrates what happens if you take the Cynic’s view to its logical conclusion. In his “gun to the head” analogy, Caplan suggests that OCD isn’t really a disease! After all, if we put a gun to the head of someone doing (say) repetitive hand washing, we could convince them to stop. Instead, Caplan thinks it’s better to say that the person just really likes doing those repetitive behaviors.
As one commenter points out, this is equivalent to saying a person with a broken foot isn’t really injured because they could walk up a flight of stairs if we put a gun to their head. They just prefer not to walk up the stairs.
Reducing the brain to a single, unified organ and determining its “true” desires from revealed preferences is an incredibly simplistic technique. Minds are far more complex and conflicted than that. Whatever people mean by “myself”, it is surely not just the combined output of their brain.