What if people don’t believe in ‘duty’ - eg certain sorts of consequentialists?
tog
SSC discussion: growth mindset
Upvotes/downvotes on LW might take care of the quality worry.
How about moral realist consequentialism? Or a moral realist deontology with defeasible rules, like a prohibition on murder? These can certainly be coherent. I’m not sure what you’d require for them to count as non-arbitrary, but one case for consequentialism’s being non-arbitrary would be that it is based on a direct acquaintance with, or perception of, the badness of pain and the goodness of happiness. (I find this case plausible.) For a paper on this, see http://philpapers.org/archive/SINTEA-3.pdf
Are you good to do these posts in the future? If not, is anyone else?
I largely agree with the post. Saying Robertson’s thought experiment was off limits and he was fantasising about beheading and raping atheists is silly. I think many people’s reaction was explained by their being frustrated with his faulty assumption that all atheists are necessarily (implicitly or explicitly) nihilists of the sort who’d say there’s nothing wrong with murder.
One amendment I’d make to the post is that many error theorists and non-cognitivists wouldn’t be on board with what the murderer is saying in the thought experiment. For example, they could be quasi-realists. I say this as someone who personally leans moral realist.
The latest from Scott:
I’m fine with anyone who wants reposting things for comments on LW, except for posts where I specifically say otherwise or tag them with “things i will regret writing”
In this thread some have also argued for not posting the most hot-button political writings.
Would anyone be up for doing this? Ataxerxes started with “Extremism in Thought Experiments is No Vice”
On fragmentation, I find Raemon’s comment fairly convincing:
2) Maybe it’ll split the comments? Sure, but the comments there are already huge and unwieldy (possibly more than Dunbar’s number worth of commenters), so I’m actually fine with that. Discussion over there is already pretty split up among comment threads in a hard-to-follow fashion.
To be clear, I don’t have the time to do it personally, I’d just do it for any posts I’d particularly enjoy reading discussion on or discussing. So if someone else feels it’s a good idea and Scott’s cool with it, their doing it would be the best way to make it happen.
I would be more in favour of pushing SSC to have up/downvotes
That doesn’t look like a goer given Scott’s response that I quoted.
I would certainly be against linking every single post here given that some of them would be decisively off topic.
Noting that it may be best to exclude some posts as off topic.
I’m not sure those topics are outside the norms of LW, outside the puns. Cf. this discussion: http://lesswrong.com/r/discussion/lw/lj4/what_topics_are_appropriate_for_lesswrong/
There’s discussion of this on the LW Facebook group: https://www.facebook.com/groups/144017955332/permalink/10155300261480333/
It includes this comment from Scott:
I’ve unofficially polled readers about upvotes for comments and there’s been what looks like a strong consensus against it on some of the grounds Benjamin brings up. I’m willing to listen to other proposals for changing the comments, although if it’s not do-able via an easy WordPress plugin someone else will have to do it for me.
Slate Star Codex: alternative comment threads on LessWrong?
SCI used them in some previous years.
Yes, LBTL actually doesn’t have any GiveWell charities this year, and also charges the charities a 10% fee plus thousands up front; we don’t take any cut. We’re officially partnered with SCI on this and are their preferred venue.
Join a major effective altruist fundraiser: get sponsored to eat for under $2.50 a day
Very sad. I enjoyed his books—I’d particularly recommend Small Gods for LessWrongers (it’s also the one I enjoyed most in general).
Has anyone seen anything on how he died?
What gets more viewership, an unpromoted post in main or a discussion post? Also, are there any LessWrong traffic stats available?
Great job! Evan is also creating an effective altruism digest: https://www.facebook.com/groups/dotimpact/permalink/415596685274654/
“Tit-for-tat is a better strategy than Cooperate-Bot.”
Can you use this premise in an explicit argument that expected reciprocation should be a factor in your decision to be nice toward others? How big a factor, relative to other considerations (e.g. what maximises utility)? If there’s an easy link to such an argument, all the better!
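The premise can at least be made concrete with a toy simulation. Below is a minimal sketch (not from the original discussion; payoff values and function names are my own illustrative choices) of an iterated prisoner's dilemma in which Tit-for-tat limits its losses against a defector while Cooperate-Bot is fully exploited, which is the standard reason the first is called the better strategy.

```python
# Toy iterated prisoner's dilemma with conventional payoffs:
# T=5 (temptation), R=3 (mutual cooperation), P=1 (mutual defection), S=0 (sucker).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other; each sees only the opponent's history."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def cooperate_bot(opponent_history):
    return "C"  # always cooperates, regardless of treatment

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

# Against a pure defector, Cooperate-Bot scores 0 over 10 rounds,
# while Tit-for-tat loses only the first round and then scores P each round.
coop_score, _ = play(cooperate_bot, always_defect)
tft_score, _ = play(tit_for_tat, always_defect)
print(coop_score, tft_score)  # 0 9
```

This only shows that expected reciprocation matters instrumentally against exploiters; it doesn't by itself settle how heavily to weight it against direct utility-maximising considerations.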