https://ninapanickssery.com/
Views purely my own unless clearly stated otherwise
I assume by “health-optimizing genetic manipulation” you mean embryo selection (seeing as gene editing is not possible yet). Indeed, Rationalists are more likely to be interested in embryo selection. And indeed, it is costly. But I’d say this is different from costly parenting—it’s a one-time upfront cost to improve your child’s genetics.
I ~never hear the 2nd thing among rationalists (“improve your kid’s life outcomes by doing a lot of research and going through complicated procedures!”).
Homeschooling is often preferred not because it substantially improves life outcomes but because it’s nicer for the children (and often the parents). School involves a lot of wasted time and effort, and is frustrating and boring for many children. So by homeschooling you can make their childhood nicer irrespective of life outcomes.
I was actually thinking to make a follow-up post like this. I basically agree.
Let’s talk about two kinds of choice:
(1) choice in the moment
(2) choice of what kind of agent to be
I think this is the main insight—depending on what you consider the goal of decision theory, you’re thinking about either (1) or (2) and they lead to conflicting conclusions. My implicit claim in the linked post is that when describing thought experiments like Newcomb’s Problem, or discussing decision theory in general, people appear to be referring to (1), at least in classical decision theory circles. But on LessWrong people often switch to discussing (2) in a confusing way.
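To make the conflict concrete, here is a minimal sketch of the standard Newcomb setup (assuming the conventional payoffs: $1,000,000 in the opaque box iff the predictor expected one-boxing, $1,000 in the transparent box, and a near-perfect predictor):

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% (1) Choice in the moment: the prediction is already fixed, so taking both
% boxes dominates -- whatever is in the opaque box, two-boxing adds $1{,}000$:
\[
U(\text{two-box}) = U(\text{one-box}) + 1000
\quad \text{(prediction held fixed)}
\]

% (2) Choice of what kind of agent to be: with predictor accuracy $p \approx 1$,
\[
\mathbb{E}[U \mid \text{one-boxing agent}] \approx 1{,}000{,}000,
\qquad
\mathbb{E}[U \mid \text{two-boxing agent}] \approx 1{,}000.
\]
% So framing (1) recommends two-boxing in the moment, while framing (2) favors
% being (or becoming) the kind of agent that one-boxes.

\end{document}
```

The numbers are just the conventional ones from the thought experiment; the point is only that the two framings yield incompatible recommendations.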
the core problem in decision theory is reconciling these various cases and finding a theory which works generally
I don’t think this is a core problem because in this case it doesn’t make sense to look for a single theory that does best at two different goals.
I think those other types of startups also benefit from expertise and deep understanding of the relevant topics (for example, for advocacy, what are you advocating for and why, how well do you understand the surrounding arguments and thinking...). You don’t want someone who doesn’t understand the “field” working on “field-building”.
My bad, I read you as disagreeing with Neel’s point that it’s good to gain experience in the field or otherwise become very competent at the type of thing your org is tackling before founding an AI safety org.
That is, I read “I think that founding, like research, is best learned by doing” as “go straight into founding and learn as you go along”.
I naively expect the process of startup ideation and experimentation, aided by VC money
It’s very difficult to come up with AI safety startup ideas that are VC-fundable. This seems like a recipe for coming up with nice-sounding but ultimately useless ideas, or wasting a lot of effort on stuff that looks good to VCs but doesn’t advance AI safety in any way.
I disagree with this frame. Founders should deeply understand the area they are founding an organization to deal with. It’s not enough to be “good at founding”.
This makes sense as a strategic choice, and thank you for explaining it clearly, but I think it’s bad for discussion norms because readers won’t automatically understand your intent as you’ve explained it here. Would it work to substitute the term “alignment target” or “developer’s goal”?
When I say “human values” without reference I mean “type of things that human-like mind can want and their extrapolations”
This is a reasonable concept, but should have a different handle from “human values”. (Because it makes common phrases like “we should optimize for human values” nonsensical. For example, human-like minds can want chocolate cake but that tells us nothing about the relative importance of chocolate cake and avoiding disease, which is relevant for decision making.)
What “human values” gesture at is distinction from values-in-general, while “preferences” might be about arbitrary values.
I don’t understand what this means.
Taking current wishes/wants/beliefs as the meaning of “preferences” or “values” (denying further development of values/preferences as part of the concept) is similarly misleading as taking “moral goodness” as meaning anything in particular that’s currently legible, because the things that are currently legible are not where potential development of values/preferences would end up in the limit.
Is your point here that “values” and “preferences” are based on what you would decide to prefer after some amount of thinking/reflection? If yes, my point is that this should be stated explicitly in discussions, for example: “here I am discussing the preferences you, the reader, would have after thinking for many hours.”
If you want to additionally claim that these preferences are tied to moral obligation, this should also be stated explicitly.
Yeah that’s fair. I didn’t follow the “In other words” sentence (it doesn’t seem to be restating the rest of the comment in other words, but rather making a whole new (flawed) point).
Has this train of thought caused you to update away from “Human Values” as a useful construct?
I was curious so I read this comment thread, and am genuinely confused why Tsvi is so annoyed by the interaction (maybe I am being dumb and missing something). My interpretation of Wei Dai’s point is the following:
Tsvi is saying something like:
People have a tendency to defer too much (though deferring sometimes is necessary). They should consider deferring less and thinking for themselves more.
When one does defer, it’s good to be explicit about that fact, both to oneself and others.
As an example to illustrate his point, Tsvi mentions a case where he deferred to Yudkowsky. This is used as an example because Yudkowsky is considered a particularly good thinker on the topic Tsvi (and many others) deferred on, but nevertheless there was too much deference.
Wei Dai points out that he thinks the example is misleading, because to him it looks more like being wrong about who it’s worth deferring to, rather than deferring too much. The more general version of his point is “You, Tsvi, are noticing problems that occur from people deferring. However, I think these problems may be at least partially due to them deferring to the wrong people, rather than deferring at all.”
(If this is indeed the point Wei Dai is making, I happen to think Tsvi is more correct, but I don’t think WD’s contribution is meaningless or in bad faith.)
That’s a decision whose emotional motivation is usually mainly oxytocin IIUC.
I strongly doubt this, especially in men. I suspect it plays a role in promoting attachment to already-born kids but not in deciding to have them.
Oxytocin is one huge value-component which drives people to sink a large fraction of their attention and resources into local things which don’t pay off in anything much greater. It’s an easier alternative outlet to ambition. People can feel basically-satisfied with their mediocre performance in life so long as they feel that loving connection with the people around them, so they’re not very driven to move beyond mediocrity.
I know you are posting on LW which is a skewed audience, but most people are mediocre at most things and are unlikely to achieve great feats by your standards, even with more ambition. Having a happy family is quite a reasonable ambition for most people. In fact, it is one of the few things an everyday guy can do that “pays off in anything much greater” (i.e. the potential for a long generational line and family legacy).
(Also consider that stereotypically, women are the ones who spend the most effort on domestic and child-related matters, and are also less likely to be on the far right of bell curves.)
At risk of committing a Bulverism, I’ve noticed a tendency for people to see ethical bullet-biting as epistemically virtuous, like a demonstration of how rational/unswayed by emotion you are (biasing them to overconfidently bullet-bite). However, this makes less sense in ethics, where intuitions like repugnance are a large proportion of what everything is based on in the first place.
Maybe I will make a (somewhat lazy) LessWrong post with my favorite quotes
Edit: I did it: https://www.lesswrong.com/posts/jAH4dYhbw3CkpoHz5/favorite-quotes-from-high-output-management
I think the only possible tension here is re. embryo selection. And it’s not a real tension. The claim is something like “if what’s giving you pause is the high demand on parents, just wing it and have kids anyway and anyhow” + “if you already know you want to have a kid and want to optimize their genes/happiness here are some ways to do it”. I think most Rationalists would agree that the life of an additional non-embryo-selected, ordinary-parented child is still worth creating. Or in other words, one set of claims is about the floor of how much effort you can put in per child and it still be a good idea to have the child. The other set of claims is about effective ways to put more effort in if you want to (mainly what’s discussed is embryo selection for health/intelligence).