“If a nation expects to be both ignorant and free in a state of civilization, it expects what never was and never will be” -- Thomas Jefferson
JamesCole
its score would be the number of karma points to be awarded for implementing it.
Upon reflection, a poll might be better, along the lines of:
How many points is the implementation of this feature worth?
10
20
50
100
150
I wonder—would it be useful for people to receive karma points for programming contributions to the LW community? It sounds reasonable to me.
An interesting question is, how do you determine the number of karma points the work deserves? One approach would be that one of the site admins could assign it a value. Another would be that it could be voted upon.
Essentially the description of the ‘feature’ to be added would be a post, and its score would be the number of karma points to be awarded for implementing it. Vote up if you think that score is too little, vote down if you think it is too much. This would also give you a way to rank the ‘feature requests’ - those with the highest scores are the ones the community cares about most (of course that may not matter much if there’s only the occasional bit of programming work to be done).
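Just to make the mechanics concrete, here’s a rough sketch in Python of how that could work: a vote up nudges the bounty higher, a vote down nudges it lower, and feature requests are ranked by their current bounty. All the names here (FeatureRequest, bounty, vote, rank_requests) are made up for illustration; this isn’t meant as actual site code.

```python
# Rough sketch (hypothetical names throughout) of the scheme described above:
# each feature-request "post" carries a karma bounty, votes nudge that bounty
# up or down, and requests are ranked by their current bounty.

from dataclasses import dataclass


@dataclass
class FeatureRequest:
    title: str
    bounty: int = 10   # karma points currently on offer for implementing it
    step: int = 10     # how far a single vote moves the bounty

    def vote(self, too_little: bool) -> None:
        """Vote up if the bounty seems too small, down if it seems too large."""
        self.bounty += self.step if too_little else -self.step
        self.bounty = max(self.bounty, 0)  # a bounty can't go negative


def rank_requests(requests):
    """Highest bounty first: the features the community cares about most."""
    return sorted(requests, key=lambda r: r.bounty, reverse=True)


if __name__ == "__main__":
    reqs = [FeatureRequest("Tag search"), FeatureRequest("Comment RSS feeds", bounty=50)]
    reqs[0].vote(too_little=True)   # someone thinks 10 karma undervalues this one
    for r in rank_requests(reqs):
        print(r.title, r.bounty)
```

The poll variant suggested in the comment above could instead collect each voter’s preferred value and award something like the median of the responses.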
I realise that there’d be costs and effort required to get any system like this going. E.g. you’d probably want such feature-request ‘posts’ on a different part of the site, and you’d have to explain the scheme to people, etc.
This idea of providing karma points like this wouldn’t have to apply to just programming tasks—it could be anything else that isn’t a post or a comment but which is nonetheless a contribution to the community.
Here’s an example of such external referencing of Less Wrong posts:
http://www.37signals.com/svn/posts/1750-the-planning-fallacy
[edit: included quote]
Any ‘figuring out’ is almost certainly going to produce an ad hoc Just-So Story.
That implies that the only correct intuition is one you can immediately rationally justify. How could progress in science happen if this were true?
Science is basically a means to determine whether initial intuitions are true.
So it seems possible to me that I have an oversensitivity to noise and Bill has an undersensitivity to it.
That seems to imply that the typical case is the “correct” one, and that somehow your (or Bill’s) case is invalid because it’s non-typical.
If noise means that you can’t sleep, study or concentrate, and you can’t really help this, then this is a valid factor that should be taken into account.
[edit] Though after reading further down I can see that you appreciate that.
That is exactly what you can’t assume if you want to explain the basis of representation.
...because they ask for a moral intuition about a case where the outcome is predefined.
One thing I found a bit dodgy about that example is that it just asserts that the outcomes were positive.
I would bet that, for the respondents, simply being told that the outcomes were positive would still have left them feeling that in a real brother-sister situation like that there would likely have been some negative consequences.
Greene does not seem to take this into account when he interprets their responses.
I don’t think there’s anything that comes close to giving a theoretical account of how mathematical statements are able to, in some sense, represent things in reality.
Perhaps I should have phrased it as ‘...stand by your intuition for a while—even if you can’t reason it out initially—to give yourself an adequate chance to figure it out’.
Forgot—there was another observation I had… this one is just quick sketching:
Regarding the idea that ‘moral properties’ are projected onto reality.
As our moral views are about things in reality, they are—amongst other things—forms of representation.
I think we need a solid understanding of what representations are, how they work, and thus exactly what it is they “refer” to in the world (and in what sense they do so), before we’ll really even have adequate language for talking about such issues in a precise, unambiguous fashion.
We don’t have such an understanding of representations at the moment.
I made a similar point in another comment on a post here dealing with the foundations of mathematics—that we’ll never properly understand what mathematical statements are, and what in the world they are ‘about’, until we have a proper theory of representation.
I.e. I think that in both cases it is essentially the same thing holding us back.
No one said anything in response to that other comment, so I’m not sure what people think of such a position—I’d be quite curious to hear your opinion...
Ok, I skimmed that a bit because it was fairly long, but here are a few observations...
I think the default human behavior is to treat what we perceive as simply being what is out there (some people end up learning better, but most seem not to). This is true for everything we perceive, regardless of the subject matter—i.e. it is nothing specific to morality.
I think it can—sometimes—be reasonable to stand by your intuition even if you can’t reason it out. Sometimes it takes time to figure out and articulate the reasoning. I am not trying to justify obstinacy and “blind faith” here! Just saying that sometimes you can’t be expected to understand it straight away.
I don’t see any justification given, in what you quote from Greene, for the claim that there’s essentially no justification for morality.
See also:
“How Obama Is Using the Science of Change” http://www.time.com/time/printout/0,8816,1889153,00.html
I think there’s some misunderstanding here. I said don’t assume. If you have some reason to think what you’re doing is reasonable or ok, then you’re not assuming.
Rich enough that, if you’re going to make these sorts of calculations, you’ll get reasonable results (rather than misleading or wildly misleading ones).
A lot of this probably comes down to:
Don’t assume – that you have a rich enough picture of yourself, a rich enough picture of the rest of reality, or that your ability to mentally trace through the consequences of actions comes anywhere near the richness of reality’s ability to do so.
The problem is language. If you use a concept frequently, you pretty much need a shorthand way of referring to it.
But I would ask, do you need that concept – a concept for labeling this type of person – in the first place?
“Mate selection for the male who values the use of a properly weighted Bayesian model in the evaluation of the probability of phenomena” would not make a very effective post title, [unlike] “Mate selection for the male rationalist”.
I don’t think that’s the only other option. Maybe it could’ve been called “Mate selection for the rational male” or “Mate selection for males interested in rationality”.
I don’t see why it even has to make any mention of rationality. Presumably anything posted on Less Wrong is going to be targeted at those with an interest in rationality. Perhaps it could have been “Finding a mate with a similar outlook” or “Looking for a relationship?”
I’m not suggesting that any of these alternatives are great titles, I’m just using them to suggest that there are alternatives.
I agree that identifying yourself with the label rationality … But it still seems useful to have some sort of terminology to talk about clear thinking, and I can’t think of a better candidate term than rationality.
‘Rationality’ is a perfectly fine term to talk about clear thinking, but that is quite a different matter to using ‘rationalist’ or any other term as a label to identify with.
I must say that I can’t help but find it odd that you link to “Keep Your Identity Small” in discussing this problem. Did you read the footnotes? Graham lists that which we would call rationality as one of the few things you should keep in your identity:
He doesn’t quite say it’s a label you should keep in your identity; he lists it as an example of something that might be good to keep in your personal identity. I think the argument he outlines in the essay applies to what’s in that footnote: that it’d be better to just want to “[follow] evidence wherever it leads” than to identify too strongly as a scientist.
Heavily paraphrasing:
For local purposes [“rationalists” seems suitable]. For outside purposes [I use a description, not a label].
I think it’s pretty much impossible for us to have any sort of private label for ourselves. Even if we were to use a label for ourselves within this site and never use that outside of the site, that use of it within the site is still going to be projecting that label to the wider world.
Anyone from outside the community who looks at the site is going to see whatever label(s) we employ. And even if we employ a label just on this site, it’s still likely to be part of the site’s “reputation” in outside circles—i.e. the label is still likely to reach people who’ve never seen the site.
A lot of the content on Less Wrong is describing various types of mental mistakes (biases and whatnot). In terms of this aspect of the site, Less Wrong is like a kind of Wikipedia for mental mistakes.
As with Wikipedia, it’s something that could be linked to from elsewhere – like if you wanted to use it to help explain a type of mistake to someone. There’s a lot of potential for using the site in this way, considering that the internet consists in large part of discussions, and discussions always involve some component of reasoning.
Seen in this way, the site is not just a community (who could have their own private terminology) but also an internet-wide resource. So we should think of any label as global, and I think that’s more of a reason to consider having no label at all.
I doubt those kings can be killed. I think victory against them comes more from inserting layers of suppression between them and action, to modulate and reduce their power. You might be able to think of those layers as governmental machinery.