Just noting the risk that the AIs could learn verifiable cooperation/coordination rather than kindness. This would probably be incentivized by the training (“you don’t profit from being nice to a cooperate-rock”), and could easily cut humans out of the trades that AIs make with one another.
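To make the cooperate-rock point concrete, here’s a minimal iterated-prisoner’s-dilemma sketch in Python (the payoffs follow the standard T > R > P > S ordering, and the strategy names are my own illustration, not anything from the original discussion):

```python
# A minimal iterated-prisoner's-dilemma sketch. Payoffs use the standard
# T > R > P > S ordering; the strategies are my own illustration.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # R: mutual cooperation
    ("C", "D"): 0,  # S: exploited
    ("D", "C"): 5,  # T: exploiting
    ("D", "D"): 1,  # P: mutual defection
}

def cooperate_rock(partner_history):
    """Cooperates unconditionally, no matter what the partner does."""
    return "C"

def conditional(partner_history):
    """Cooperates only if the partner cooperated last round (tit-for-tat)."""
    return "C" if not partner_history or partner_history[-1] == "C" else "D"

def exploiter(partner_history):
    """Defects unconditionally."""
    return "D"

def total_payoff(me, them, rounds=10):
    my_moves, their_moves, score = [], [], 0
    for _ in range(rounds):
        a, b = me(their_moves), them(my_moves)
        score += PAYOFF[(a, b)]
        my_moves.append(a)
        their_moves.append(b)
    return score

# Against a cooperate-rock, niceness is strictly dominated:
print(total_payoff(exploiter, cooperate_rock))    # 50
print(total_payoff(conditional, cooperate_rock))  # 30
# Cooperation only pays against partners who condition on your behavior:
print(total_payoff(conditional, conditional))     # 30
print(total_payoff(exploiter, conditional))       # 14
```

Training that selects on payoff would reinforce the conditional and exploiting policies rather than unconditional niceness, and nothing in that gradient privileges trading with humans in particular.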
ahartell
Does this create a shortform feed?
It does! Alright, I’ll leave this here because I think it will make me more likely to start using this feature.
ahartell’s Shortform
I’ve enjoyed listening to Nonlinear Library for recent posts.
Something that would really improve my experience would be for links to the original to be included in the description section of each “episode”.
Indeed, before dismissing it entirely, one would presumably want an account of why it features so prominently in our mental and social lives.
One aspect of this seems to be that clinging is a mechanism by which a portion of the network maintains its own activation. Given evolutionary dynamics, it’s unsurprising to see widespread greediness and self-recommendation among neurons/neural structures (cf. Neurons Gone Wild).
Thanks, fixed!
Endorsed.
In addition to safety and contact, another dynamic was that I was generally not S1 expecting much value to come out of Dragon Army, so chafing more within the system seemed like pain, effort, and time spent for little expected gain.
Stag hunts, anyone?
Edit: Though, I will note that it can be hard to find the space between “I’m damaging the group by excluding my optimization power from the process” and “I’m being a Red Knight here and should be game for whatever the commander decides.” It may seem like the obvious split is “expressive in discussion and game in the field”, but discussion time is actually really valuable. So it seems like the actual thing is “be game until the cost to you becomes great enough that something needs to change”. But if you lower that threshold of misfit enough, it becomes intractable to deal with everyone’s needs. And then you have to figure out whether a recent failure was a result of things being seriously broken or just a sign that you need to Be Better in some operationalized and “doable” way. When do you bring up the problem? It’s hard.
Many of the Dragons who stepped into the role of the Ghost for a time did so softly and gradually, and it never felt like this level of absence was Notably Different from the previous level, in a paradox-of-the-heap sort of way. Set a bar, and set a gradient around that bar, and stay in contact.
As the person who fell most heavily into this role, the above resonates a lot. Below are some other thoughts on my experience.
I had the sense early on that I wasn’t getting very much value out of group activities, and felt not very connected to the house. In this way I think “Black Knight”-style considerations were major contributors to my Ghost behavior. Competing commitments and general depression were also relevant. I didn’t really feel like there was much the house could do to help me with that, but I don’t know whether that’s true. If it weren’t for the Black Knight dynamic, I think I would have prioritized DA over other commitments, but depression may have been sufficient for me to end up as a Ghost anyway.
Not Getting Value Out of Group Activities
The things that the whole house can do (or even a large subset) are unlikely to be on the capability frontier of the individual in an area of serious interest for that individual. Everyone needs to be able to do the thing, and there will be more variance in skill in areas that are a major focus of some but not all of the group. Programming ability is an example.
Because of something like this, DA group activities rarely felt like they were on a growth-edge that I cared about. In particular, group exercise usually felt costly with little benefit, and I never managed to get EE to be especially valuable for me. Social things like our weekly house dinner (a substantial fraction of Dragon Army hours) felt less fun or less growthy than the likely alternatives, but I probably put unusually low value on this kind of bonding.
Now when I imagine a group that is striving for excellence, it seems like there are two ways it can work:
1) The members share a common major project and can work together towards that goal. Here it makes sense for the group to ask for a high time commitment from its members, since time put towards the group directly advances a major goal of the individual.
2) The members have different goals. In this case it seems like the group should ask for a smaller time commitment. Members can mutually draw inspiration from each other and can coordinate when there is a shared goal, but generally the group should offer affordances, not impose requirements.
Counter-evidence: I think I would have gotten a lot of value out of covering the bases on dimensions I care about. Exercise was supposed to do this, and would do it along Duncan’s version of the “capable well-rounded human” dimension. We discussed doing something like this for rationality skills, but we didn’t follow through.
In this case, all members share a common goal of reaching a minimum bar in some area. Still, this can be boring for those who are already above the bar, and for me this sort of “catching up”/“covering the bases” is much less exciting than pushing forward on a main area of interest. (Which means group-time still ends up as less-fun-than-the-alternative by default.)
There were experiments intended to incentivize Dragons to do solo work on things they considered high priority, but my impression was that there was little encouragement/accountability/useful structure. Things I was originally excited about turned into homework I had to do for DA.
[These don’t seem like cruxes to me, but are places where our models differ.]
[...]
a crux for some belief B is another belief C such that, if one changed one’s mind about C, one would change one’s mind about B.
[...]
A double crux is a particular case where two people disagree over B and have the same crux, albeit going in opposite directions. Say if Xenia believes B (because she believes C) and Yevgeny disbelieves B (because he does not believe C), then if Xenia stopped believing C, she would stop believing B (and thus agree with Yevgeny) and vice-versa.
[...]
Across most reasonable people on most recondite topics, ‘cruxes’ are rare, and ‘double cruxes’ (roughly) exponentially rarer.
It seems like your model might be missing a class of double cruxes:
It doesn’t have to be the case that, if my interlocutor and I drew up belief maps, we would both find a load-bearing belief C about which we disagree. Rather, it’s often the case that my interlocutor has some ‘crucial’ argument or belief which isn’t on my radar at all, but would indeed change my mind about B if I were convinced it were true. In another framing, I have an implicit crux for most beliefs that there is no extremely strong argument/evidence to the contrary, which can match up against any load-bearing belief the other person has. In this light, it seems to me that one should not be very surprised to find double cruxes pretty regularly.
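A toy numerical sketch of that implicit-crux structure (the evidence weights and the tallying rule are my own invention, just to show the shape of the argument):

```python
# Toy sketch of the "implicit crux" point above. The evidence weights and
# the tallying rule are my own invention, purely for illustration.

def believes_B(evidence_weights):
    """Crude tally: positive weights support B, negative weights oppose it."""
    return sum(evidence_weights) > 0

# My explicit belief map for B never mentions the other person's crux C:
my_evidence = {"E1": 2.0, "E2": 1.5}      # modest net support for B
print(believes_B(my_evidence.values()))   # True: I believe B

# Their load-bearing belief C, if I became convinced of it, enters as a
# strong counterweight -- defeating my implicit crux that "no extremely
# strong argument/evidence to the contrary exists":
my_evidence["C"] = -5.0
print(believes_B(my_evidence.values()))   # False: I change my mind about B
```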
Further, even when you have a belief map where the main belief rests on many small pieces of evidence, it is usually possible to move up a level of abstraction and summarize all of that evidence in a higher-level claim, which can serve as a crux. This doesn’t address your point about relatively unimportant shifts around 49%/51%, but in practice the move seems meaningful.
[Note: This comment seems pretty pedantic in retrospect. Posting anyway to gauge reception, and because I’d still prefer clarity.]
On honest businesses, I’d expect successful ones to involve overconfidence on average because of winner’s curse.
I’m having trouble understanding this application of winner’s curse.
Are you saying something like the following:
1) People put in more resources and generally try harder when they estimate a higher chance of success. (Analogous to people bidding more in an auction when they estimate a higher value.)
2) These actions increase the chance of success, so overconfident people are overrepresented among successes.
3) This overrepresentation holds even if the “true chance of success” is the main factor; founders’ overconfidence just needs to shift the distribution of successes a bit for “successful ones to involve overconfidence on average”. (A toy simulation of this mechanism is sketched below.)
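If that’s the right reading, here’s a toy simulation of the mechanism (the distributions and the effort-to-success mapping are assumptions I made up; only the direction of the effect matters):

```python
import random

# Toy model of the selection effect described above. The distributions and
# the effort->success mapping are my own assumptions, purely illustrative.

random.seed(0)
N = 100_000

founders = []
for _ in range(N):
    true_p = random.uniform(0.0, 0.5)        # true base chance of success
    overconfidence = random.gauss(0.0, 0.1)  # error in self-estimate
    believed_p = min(max(true_p + overconfidence, 0.0), 1.0)
    # Effort scales with believed odds and nudges the real odds upward:
    effective_p = min(true_p + 0.2 * believed_p, 1.0)
    success = random.random() < effective_p
    founders.append((overconfidence, success))

winners = [oc for oc, s in founders if s]
everyone = [oc for oc, _ in founders]
print(f"mean overconfidence, all founders:   {sum(everyone)/len(everyone):+.4f}")
print(f"mean overconfidence, successes only: {sum(winners)/len(winners):+.4f}")
# Successes skew overconfident even though true_p does most of the work.
```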
First, this seems weird to me because I got the impression that you were arguing against overconfidence being useful.
Second, are you implying that successful businesses have on average “overpaid” for their successes in effort/resources? That is central to my understanding of winner’s curse, but maybe not yours.
Sorry if I’m totally missing your point.
Likewise.
This was a test comment for something very important.
Just finished. I’m sure my calibration was terrible though.
Hello Internet is a fun “two guys talking” podcast made by two popular YouTubers, including CGP Grey, the guy who made this great video about the future of automation and employment. Low (almost no) informational content, but really enjoyable, and CGP Grey will often say things that make it sound as if he’s read at least some of LessWrong/Overcoming Bias. At the very least he’s a transhumanist.
Completed!
There is also the possibility that sex would not have happened anyway, but that bringing up that that was your intention made them want to distance themselves from the situation. And the possibility that it would have happened if you hadn’t asked, but only because the flirty/touchy behavior was leading them toward wanting to have sex and asking interrupted the process (this is distinct from the original claim in that the problem wasn’t asking, but asking too soon).
Awesome.
Shouldn’t the last one refer to the one above it rather than the one two places above it, though? I think it should be “and I love being able to recognize the costs and benefits of this uncertainty” rather than “and I love just what this drive to dispel uncertainty can do.”
I don’t know if they’re sure. Mostly I was just responding to “who are they supposed to have learned that from?”. I think there are a lot of social, gender-expectation-y things that would lead women to think they were “supposed” to be less assertive.
Who are they supposed to have learned that from? They sure as hell didn’t learn that from me. And every man I know wishes women were more to the point. The stereotype criticism is “blah blah blah”, not abruptness. If you’re in charge, make decisions, and give orders. I’ll salute, and we’ll get something done.
No citations, but I’ve heard a lot of times that women in business positions are punished for being assertive or aggressive in situations where men are expected to do the same. I don’t know if this is true (I think it probably is), but either way I’ve definitely heard it enough times that it doesn’t surprise me that women would think they should try not to seem abrupt or bossy.
Thank you for this post and your work on the Jones Act!
Sorry this is my only further comment, but below, are you conflating the reduction in cost for a barrel of oil with that for a gallon of gas? There are 42 gallons in a barrel of crude oil, which Google tells me yields around 20 gallons of gasoline once refined.
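For concreteness, here’s the conversion I have in mind (the $5/barrel drop is a hypothetical figure for illustration, not a number from your post):

```python
# Worked example of the barrel-vs-gallon distinction. The $5/barrel price
# drop is a hypothetical number for illustration, not from the post.

GALLONS_PER_BARREL = 42      # gallons of crude in one barrel
GASOLINE_YIELD = 20          # approx. gallons of gasoline refined per barrel

crude_drop_per_barrel = 5.00  # hypothetical: crude falls $5/barrel

per_gallon_of_crude = crude_drop_per_barrel / GALLONS_PER_BARREL
per_gallon_of_gas = crude_drop_per_barrel / GASOLINE_YIELD

print(f"per gallon of crude:    ${per_gallon_of_crude:.3f}")  # ~$0.119
print(f"per gallon of gasoline: ${per_gallon_of_gas:.3f}")    # ~$0.250
# Conflating the two roughly doubles the implied savings per gallon of gas.
```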