An aspiring rationalist who has been involved in the Columbus Rationality community since January 2016.
J Thomas Moros
Book Review: Life 3.0: Being Human in the Age of Artificial Intelligence
Not going to sign up with some random site. If you are the author, post a copy that doesn’t require signup.
I think moving to frontpage might have broken it. I’ve put the link back on.
How Popper killed Particle Physics
I’m not sure I agree. Sure, there are lots of problems of the “papercut” kind, but I feel like the problems that concern me the most are much more of the “dragon” kind. For example:
1. There are lots of jobs in my career field in my city, but there don’t seem to be any that do even one of the following: involve truly quality work, use the latest technology where I think the whole field is headed, or produce a product/service that I care about. I’m not saying I can’t get those jobs; I’m saying that in 15+ years working in this city I’ve never even heard of one. I could move across the country, and that might solve the job problem, but leaving my family and friends is a “dragon”.
2. Meeting women I want to date seems to be a dragon problem. Of all the women I’ve ever met, only two meet my criteria.
3. I have projects I’d like to accomplish that will take many thousands of hours each. Given the constraints of work, socializing, self-care, and trying to meet a girlfriend (see item 2), I’m looking at a really, really long time before any of these projects nears completion, even if I were able to dedicate a couple of hours a day to them, which I have not been able to do.
What is going on here? Copy me
Copy me
[Yes](http://hangouts.google.com)
*hello*
Can I write a link here? [Yes](http://hangouts.google.com)
You should probably clarify that your solution assumes the variant where the god’s head explodes when given an unanswerable question. If I understand correctly, you are also assuming that the god will act to prevent their head from exploding if possible. That doesn’t have to be the case. The god could be suicidal, but perhaps unable to die in any other way, so when your question gives them the opportunity to have their head explode, they will take it.
Additionally, I think it would be clearer if you offered a final English-sentence statement of the complete question that doesn’t involve self-referential variables. The variable formulation is helpful for seeing the structure, but confusing in other ways.
Oh, sorry
A couple typos:
The date you give is “(11/30)”; it should be “(10/30)”
“smedium” should be “medium”
I feel strongly that link posts are an important feature that needs to be kept. There will always be significant and interesting content created on non-rationalist or mainstream sites that we will want to be able to link to and discuss on LessWrong. Additionally, while we might hope that all rationalist bloggers would be okay with cross-posting their content to LessWrong, there will likely always be those who don’t want to, and yet we may still want to include their posts in the discussion here.
A comment of mine
What you label “implicit utility function” sounds like instrumental goals to me. Some of that is also covered under Basic AI Drives.
I’m not familiar with the pig that wants to be eaten, but I’m not sure I would describe that as a conflicted utility function. If one has a utility function that places maximum utility on an outcome that requires one’s own death, then there is no conflict; that is the optimal choice. Though I think humans who believe they have such a utility function are usually mistaken, that is a much more involved discussion.
I’m not sure what the point of a dynamic utility function is. Your values really shouldn’t change. I feel like you may be focusing on instrumental goals, which can and should change, and thinking those are part of the utility function when they are not.
I’m not opposed to downvote limits, but I think they shouldn’t be too low. There are situations where I am more likely to downvote many things simply because I am moderating more heavily. For example, on comments on my own post I care more and am more likely to both upvote and downvote, whereas other times I might just not care that much.
I have completed the survey and upvoted everyone else on this thread.
At least as applied to most people, I agree with your claim that “in practice, and to a short-term, first-order approximation, moral realists and moral anti-realists seem very similar.” As a moral anti-realist myself, I suspect the likely explanation is that both are engaging in the kind of moral reasoning that evolution wired into them. The realist and the anti-realist are then each offering post hoc explanations for their behavior.
With any broad claim about humans like this, there are bound to be exceptions; thus all the qualifications you put into your statement. I think I am one of those exceptions among moral anti-realists, though I don’t believe that in any way invalidates your “Argument A.” If you’re interested in hearing about a different kind of moral anti-realist, read on.
I’m known in my friend circle for advocating that rationalists should completely eschew the use of moral language (except as necessary to communicate with or manipulate people who do use it). I often find it difficult to have discussions of morality with both moral realists and anti-realists, and I don’t often find that I “can continue to have conversations and debates that are not immediately pointless.” I often find people who claim to be moral anti-realists engaging in behavior and argument that seem antithetical to an anti-realist position: for example, exhibiting intense moral outrage and thinking it justified/proper (especially when they will never express that outrage to the offender, but only to disinterested third parties). If someone engages in a behavior that you would prefer they not engage in, the question is how you can modify their behavior. You shouldn’t get angry when others do what they want and it happens to differ from what you want. Likewise, it doesn’t make sense to get mad at others for not behaving according to your moral intuitions (except possibly in their presence, as a strategy for changing their behavior).
To a great extent, I have embraced the fact that my moral intuitions are an irrational set of preferences that don’t have to be, and never will be, made consistent. Why should I expect my moral intuitions to be any more consistent than my preferences for food or for whom I find physically attractive? I won’t claim I never engage in “moral learning,” but it is significantly reduced and more often takes the form of learning that I had mistaken beliefs about the world rather than changing moral categories. When debating the torture vs. dust specks problem with friends, I came to the following answer: I prefer dust specks. Why? Because my moral intuitions are fundamentally irrational, but I predict I would be happier with the dust specks outcome. I fully recognize that this is inconsistent with my other intuition that harms are somehow additive, and with the clear math that any unbounded, strictly increasing function for combining the harm from dust specks admits of a number of people receiving dust specks in their eyes that tallies to significantly more harm than the torture. (Though there are other functions for calculating total utility that can lead to the dust specks answer.)
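To spell out the additive version of that math (a sketch; the symbols are mine for illustration: T for the total harm of the torture, ε > 0 for the harm of a single dust speck, n for the number of people specked):

$$
n \cdot \epsilon > T \quad \text{whenever} \quad n > \frac{T}{\epsilon},
$$

so under additive aggregation, some finite number of speck recipients always tallies to more harm than the torture. Avoiding that conclusion requires an aggregation function that stays bounded below the torture’s harm, which is the kind of “other function” the parenthetical above refers to.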