Jobs, Relationships, and Other Cults
For years I (Elizabeth) have been trying to write out my grand unified theory of [good/bad/high-variance/high-investment] [jobs/relationships/religions/social groups]. In this dialogue Ruby and I throw a bunch of component models back and forth and get all the way to better defining the question.
About a year ago someone published Common Knowledge About Leverage Research, which IIRC had some information that was concerning but not devastating. You showed me a draft of a reply you wrote to that post, which pointed out lots of similar things Lightcone/LessWrong did and how, yeah, they could look bad, but they could also be part of a fine trade-off. Before you could publish that, an ex-employee of Leverage published a much more damning account.
This feels to me like it encapsulates part of a larger system of trade-offs. Accomplishing big things sometimes requires weirdness, and sometimes sacrifice, but places telling you "well we're weird and high sacrifice but it's worth it" are usually covering something up. But they're also not wrong that certain extremely useful things can't get done within standard 9-5 norms. Which makes me think that improving social tech to make the trade-offs clearer and better implemented would be valuable.
Seems right.
I don't remember the details of all the exchanges with the initial Leverage accusations. Not sure if it was me or someone else who'd drafted the list of things that sounded equally bad, though I do remember something like that. My current vague recollection was feeling kind of mindkilled on the topic. There was external pressure regarding the anonymous post, maybe others internally were calling it bad and I felt I had to agree? I suppose there's the topic of handling accusations and surfacing info, but that's a somewhat different topic.
I think it's possible to make Lightcone/LessWrong sound bad, but I also feel like there are meaningful differences between Lightcone and Leverage or Nonlinear. It'd be interesting to me to figure out the diagnostic questions which get at that.
One differentiating guess is that while Lightcone is a high-commitment org that generally asks for a piece of your soul [1], and if you're around there's pressure to give more, my felt sense is we will not make it "hard to get off the train". I could imagine that if the org did decide we were moving to the Bahamas, we might have offered six months' severance to whoever didn't want to join, or something like that. There have been asks that Oli was very reluctant to make of the team (getting into community politics stuff) because that felt beyond the scope of what people signed up for. Things like that meant that although there were large asks, I haven't felt trapped by them even if I've felt socially pressured.
Sorry, just rambling some on my own initial thoughts. Happy to focus on helping you articulate points from your blog posts that you'd most like to get out. Thoughts on the tradeoffs you'd most like to get out there?
(One last stray thought: I do think there are lots of ways regular 9-5 jobs end up being absolutely awful for people without even trying to do ambitious or weird things, and Lightcone is bad in some of those ways, and generally I think they're a different term in the equation worth giving thought to and separating out.)
[1] Although actually the last few months have felt particularly un-soul-asky relative to my five years with the team.
I think it's possible to make Lightcone/LessWrong sound bad, but I also feel like there are meaningful differences between Lightcone and Leverage or Nonlinear. It'd be interesting to me to figure out the diagnostic questions which get at that.
I think that's true, and diagnostic questions are useful, because a lot of the solution to this dilemma is proper screening. But less than you'd hope, because goodness-vs-cost is not static. I think a major difficulty is that bad-cult is an attractor state, and once you have left the protections of being a boring 9-5 organization you have to actively fight the pull or you will be sucked into it.
To use an extreme example: if the Reverend Jim Jones had died in a car accident before moving to California, he'd be remembered as a regional champion of civil rights, at a time when that was a very brave thing to be. His church spearheaded local integration; he and his wife were the first white couple in their city to adopt a black child, whom they named James Jones Jr. I really think they were sincere in their beliefs here. And if he'd had that car accident while in CA, he'd be remembered as a cult leader who did some serious harms, but was also a safe harbor for a lot of people fleeing the really fucked up cults. It wasn't until they moved to Jonestown in Guyana that things went truly apocalyptic.
I think you're right that lots of normal jobs with reasonable hours and HR departments are harmful in the same way bad-cults are harmful. I also think grad school is clearly a cult, and the military is so obviously the biggest cult that it shouldn't be a discussion (and this would be true even if it were used for 100% moral goals; the part about bombing people who don't deserve it is an almost entirely separate bad thing). But most cult checklists specifically exempt the military, and conventional clergy, and don't even think to talk about academia. I think of cult (negative valence) as kind of a pointer for "abusive relationships that scale", and you can have shitty jobs that are harmful in the same way as abusive jobs (but less so), in the same way you can have shitty relationships that are harmful in the same ways as abusive relationships (but less so). It's easier to talk about the strongest versions, but the milder ones are useful to talk about in part because the problems are ubiquitous.
People often want to make this about intent, but I think one of the most important lessons is that intent is irrelevant in most respects. Someone who is a demanding and unhelpful partner because they were severely injured in a car accident and are on enough painkillers to remove their inhibitions but not their pain is importantly morally different from someone who is demanding and unhelpful because they are an asshole. And you have more hope they will get better later. But if they're screaming insults at their partner, I still expect it to hurt a lot.
To take this further: I think there's a good argument that jobs are coercive by default, especially if you live paycheck to paycheck and finding another job isn't instantaneous. They control your ability to feed and house your family, they're your social environment for more hours than anything else including your family, they greatly influence your ability to get your next job. That's a lot of power.
But it would be insane to say "no one is allowed to work until they have six months' expenses saved up". Because it's not really the job that's coercive, it's reality, and the job just happens to benefit from it.
But they really can abuse that vulnerability.
it's not really the job that's coercive, it's reality
I had that thought as you began typing that jobs are coercive. I think this is a good point.
I think a major difficulty is that bad-cult is an attractor state, and once you have left the protections of being a boring 9-5 organization you have to actively fight the pull or you will be sucked into it
I don't currently see that it is an attractor state. I don't have a model otherwise, but I also don't have a model or evidence that makes me see why you believe this. (Possible I can be convinced easily.)
I really think they were sincere in their beliefs here.
This makes me wonder about Player vs Character. I think the general point about intent not mattering all that much is key (I know my own good intent has not saved me from being a shitty friend/partner).
I am interested in the question of at least "why do some people who seem to have good intentions slide towards very abusive/coercive behavior?" Thinking about when I caused the most harm, it was that my own needs and wants were very salient to me and were occluding my sense of other people's much less salient wants/needs, and that led to rationalization, and that led downhill.
I'm currently entertaining the thought that motivated cognition is vastly more pernicious than we treat it, and that human reasoning ends up biased in any case where there are any degrees of freedom in belief (like when the empirical evidence isn't smacking you in the face). And this explains a lot. Possibly here too: "good people" struggle to reason clearly in the face of incentives. The 9-5 job shapes incentives to avoid harm; remove those safeguards, and the self-serving reasoning has nothing to check it.
I think Player vs. Character is a useful frame, but it's also not necessary to my case for "intent is irrelevant", and there's something important in that strong form.
Let's say a good friend emotionally abuses you to get you to donate a painful amount to their cause. Yelling, crying, calling you a terrible person. Morally, it matters a lot if that cause is their own bank account or an Alzheimer's research charity, and if they're going to somehow get credit for your donation. But I contend that someone who is devastated by their grandmother's Alzheimer's (and is pushing you to donate to a long-term charity, so it's clearly not even going to help their grandmother; they just want to spare other people this devastation), and who is never going to tell anyone about your donation or their influence, is still going to do a lot of damage.
Certainly their motivation isn't going to change the financial impact at all. If you donated next month's rent, you are equally fucked if it goes to Alzheimer's research or your friend's heroin habit or their European vacation. And the emotional impact of being cried at and insulted is similar. Maybe you take it better because you empathize with their pain, but that kind of thing will still leave a mark.
I guess you could argue this kind of emotional abuse only comes from unmet emotional needs, and so is always attempting to fulfill a selfish goal, but I feel like that pushes the word selfish beyond all meaning.
This seems right.
I'm wondering whether an important variable, when seeking something from someone else, is how much and how well you are modeling and optimizing for the other person's wants and needs.
If you are using very tame means of persuasion (e.g. a salesperson on the sales floor just touting the benefits of this new vacuum cleaner and appealing to how much you probably hate vacuuming but this makes it easy), then you don't need to model the other person much because you're unlikely to steamroll their needs.
But if you have more leverage or are doing more extreme stuff (e.g. you're their employer or partner, or you're crying/yelling), then whether this is good for the person will depend on how much you are actually, practically [1] caring about what they want/need and are weighing this against what you want. I suppose the check might be "would your actions change if the other person's needs changed".
Reflecting back again on my own past cases of causing harm, my model is that I generally believed I cared about the person, but my own needs were so great that I rationalized, with little thought, that what I was pushing for was what was best for them too, and didn't give much genuine thought to the alternative.
I think it's also easy to underestimate the amount of leverage or sway you have over others that might cause them to placate you instead of advocating for their own needs. This can be true for employers and partners. An employer might think that if the employee really doesn't like it, they'll push back, not occupying the mental headspace of how soul-crushing and insecurity-producing job hunting is, or for that matter, the raw need for money.
In my case, I'm noticing that although I'm senior, skilled, well-paid and could likely get another job, I feel more locked in these days because of the large mortgage I've taken on. I could likely get paid the same amount elsewhere, but probably not the same amount plus an equivalent amount of "meaning", which has made me realize I'm going to put up with quite a lot to stay at Lightcone.
[1] By "practically caring", I mean to differentiate from "felt caring", where you emotionally care but this fails to connect to your reasoning and actions.
I'm interested in getting to any ideas you have for navigating this reality well, improving the trade-off, etc.
Man, it's hard.
One of the easiest things you can do is fire or not hire people who (will) find the job too challenging. That's really all a manager owes the employee: hiring someone doesn't give you a lifelong responsibility for their emotions. And you can even give them a generous severance, which removes a great deal of the financial risk.
But maybe they really want the job, in ways severance pay doesn't change. That can transform "I only want people here if it's genuinely good for them" into "convince me you're happy here or I'll fire you", which is fertile ground for a lot of harm.
One big reason this can happen is that the employee is getting paid in meaning, impact, or future job opportunities, and doesn't see another path to getting these things. On a very mundane level: I've put up with conditions at impactful jobs I wouldn't have tolerated for bigtech bullshit jobs. And you can argue about whether I was right in those specific cases[1], but I 100% stand by the general equation that I should accept more pain while fighting against poverty or for vaccines than I should for Google advertising project #19357356. And I would be mad at anyone who pushed me to quit just because I was over my stress line for less important projects.
I've known a number of members of a number of ~cults. Some of those cults had huge variance in how members turned out. Of those, the members who did well were the ones who went in with their own plans. They might have changed them once they got involved with the group, and the group might have been integral to their new plans, but if they lost the group they knew they'd be able to form and execute a new plan. This gave them both the ability to assess their groups' plans and to push to change them, secure in the knowledge of their BATNA.
The people who got really hurt in those ~cults were the ones who were looking to the group to provide them meaning/purpose. And if you'd told them "you have to prove you're happy here or we'll kick you out", they would have faked happiness harder and been even more fucked up by the experience, even if that rule was 100% sincerely meant for their benefit.
"If you're not happy we'll work very hard to make you happy" is better in some ways but worse in others. Participating in that process makes you quite vulnerable to the employer, and they need to be not just virtuous but skilled to make that pay off.
And this is for the simplest solution to the simplest case. It gets much thornier from there.
One concept I'd love to popularize for reasoning about jobs in particular is a vulnerability budget. When people take on vulnerability to an org (in the form of a low paycheck, or living with co-workers, or working long hours on little sleep...) the impact isn't linear. I think for most people things get slightly worse and slightly worse until you cross the event horizon and get sucked into the black hole of a toxic work environment you cannot escape from or fix. In part because once things are that bad you lose the ability to notice or act on your preferences to change the environment. So no matter how good the trade looks locally, you shouldn't push people or let yourself be pushed to take on more vulnerability than your budget allows.
[Different people have different budgets and different costs for a given condition]
Given that, the question is "how do you want to spend you(r employees') vulnerability budget?" You can pay them poorly or have a group live-work house, but not both. You can ask them for insane hours, but then you'd better be paying luxuriously.
This is tricky of course, because you don't know what your employees' budget is and what a given request costs them. But I think it's a useful frame for individuals, and employers should be cooperative in working with it (which sometimes means giving people a clean no).
One of the easiest things you can do is fire or not hire people who are finding the job too challenging.
I don't think this is easy at all. Back to my previous point about motivated cognition: people already find it hard to fire people, even difficult, disruptive, and unproductive employees. It usually takes far too long. To get someone to conclude that a productive, albeit unhappy, employee should be fired? That's too hard. I expect a bunch of effort to go into solving the causes of unhappiness, but rationalized and possibly unable to accept deep intractable causes of the unhappiness that mean the person should leave.
And you list good reasons on the employee's end for not wanting to quit even if it's bad.
I'm more hopeful about approaches that just cause conditions to generally be better. I like this "vulnerability budget" idea. That seems more promising.
One thought is that the 9-5 ordinary professional workplace norms are just a set of norms about what's reasonable and acceptable. Perhaps what we need is just an expanded set of norms that's more permissive without being too permissive. Employers don't need to do complicated modeling, they just follow the norms. Employees know that if the norms are being broken, they'll get listened to and supported in having them upheld.
And maybe you'll say that the whole point is we want our orgs to be able to go be agentically consequentialist about all the things, and maybe, but my guess is there's room for norms like vulnerability budgets.
An alternative norm I'd like to see in place is a broader cultural one around impact status. I think many people are harmed by the feeling that they need a job with impact status at all times, just to be admitted to "society", and that it wouldn't be at all okay to go work at a nice big tech company if there's no good opening in an impact org that actually works for you. This means people feel they have very few BATNA organizations to go to if they leave their current one (I feel it!). If we can say "hey, I respect taking care of yourself and hence leaving a job that's not good for you even if it means less impact right now", then maybe people will feel a bit more willing to leave roles that aren't good for them.
Pedantic note: I think we might need better language than "happy". Many people, myself included, don't think trying to be happy all the time (for some definition) is the right goal. I'd rather pursue "satisfaction" and "meaning". And once you're okay with feeling unhappy for the right cause, I think it takes extra work to know that some kinds of unhappy-but-meaningful are healthy and some are not, and that's trickier to tell than happy vs not.
I know one of the difficulties for me personally is that there are certain trades of happiness for other goals I would want to make if they were instantaneous, but that aren't safe if the unhappiness is prolonged, specifically because I'm so good at tanking misery for a cause. Once I'm ~unhappy (which isn't quite the right word), I become very bad at evaluating whether I'm actually getting what I wanted or was promised, or what could be done on the margin to make things easier. So misery-for-impact is a high-risk maneuver for me, one I will usually only make when the deal has a very sharp end date.
Which ties in to something else I think is important: feedback loops.
Perhaps what we need is just an expanded set of norms that's more permissive without being too permissive
My guess is this won't work in all cases, because norm enforcement is usually yes/no, and needs to be judged by people with little information. They can't handle "you can do any 2 of these 5 things, but no more" or "you can do this but only if you implement it really skillfully". So either everyone is allowed to impose 80-hour weeks, or no one can work 80-hour weeks, and I don't like either of those options.
I think there are missions that can't be handled well with a set of work norms. What I would want from them is a much greater responsiveness to employee feelings, beyond what would be reasonable to expect from a 9-5 org. Either everyone agrees the job is a comfortable square hole and you will handle fitting yourself into it, or it's a complicated jigsaw and the employer commits to adapting to you as you adapt to them, and that this will be an iterative process, because you have left behind the protection of knowing you're doing a basically okay thing.
You could call this a norm, but it's not one that can be casually enforced. It takes a lot of context for someone to judge if an employer is upholding their end of the bargain. Or an employee, for that matter.
I would dearly love for people to chill out about status, and separately, for people to assess status over a longer time horizon. I have a toy hypothesis that a lot of the hunt for status is a pica for deeper social connection, so all we need to do to make status less important is make everyone feel safe and connected.
On the latter issue: right now much of EA has a "you're only as good as your last picture" vibe. Even people who have attained fairly high status feel insecure if they're not doing something legibly impressive right this second[1]. This is bad for so many reasons. On the margin I expect it leads more people to do legible projects other people approve of, instead of going off on their own weird thing. It pushes people to stay in high-status in-group jobs that are bad for them rather than chill at big tech somewhere (and I have to assume this pressure is much worse for people whose worst case scenario isn't six figures at Google). I taught at Atlas this summer, and was forever telling kids about my friends who worked for high-status orgs and chose to stop.
[1] "High" probably means "mid" here. I think this is especially strong with borrowed status, where strangers light up when you say your employer's name but you don't have a personal reputation.
My guess is this won't work in all cases, because norm enforcement is usually yes/no, and needs to be judgeable by people with little information. They can't handle "you can do any 2 of these 5 things, but no more" or "you can do this but only if you implement it really skillfully"
Maybe, maybe not. I think our social/professional bubble might be able to do something at that level of sophistication. Like if there were a very landmark post saying "here be the norms" and it got lots of attention and discussion, I think after that we might see people scrutinize orgs who have 80-hour work weeks for the 2-out-of-5 thing.
What I would want from them is a much greater responsiveness to employee feelings
The voice of Habryka in my head doesn't like this at all and is deeply concerned about the incentives it creates. I'm trying to remember his exact words in a conversation we had, but it was definitely something along the lines that if you reward people for suffering (e.g. giving them resources, etc.), people will want to signal suffering. (Also something like suffering correlating with low productivity too.)
I do want to be sensitive to people's wellbeing, but grant that there's a failure mode here.
This pushes me in the direction of clearly-labeled "join at your own risk" workplaces, where it's stated upfront that this work does not come with strong safeguards on your wellbeing, does not necessarily take pains to respect your boundaries, etc.: only join if you are a person who thinks they will reliably leave if it becomes unacceptable. An employer applying such a label to themselves should expect fewer applicants, but those applicants should expect less support if they ever complain.
Possibly we want this with indications of how strong the warning is, e.g. Lightcone level vs a "live remotely with just your coworkers" level.
I have a toy hypothesis that a lot of the hunt for status is a pica for deeper social connection, so all we need to do to make status less important is make everyone feel safe and connected at all times.
Oh, definitely a very good hypothesis. My model is our community is much higher on social insecurity and lower on deeper social community than many other social groups. I confess to having held at times an "if I am impressive/impactful enough, they will love me" belief. Also an "if I am good enough/moral enough/dedicated enough" belief.
all we need to do to make status less important is make everyone feel safe and connected
Absolutely. I'm free next Tuesday if you wanted to knock that out together.
One challenge that seems hard to me with that is the non-fixed boundary of the community, i.e., we're a social group that continues to grow. And you can't just go handing out feelings of safety and connection to everyone [1], so you've got to gate them on something.
However, that might not be the core problem. Core problem might be that most people actually don't even know what this deeper connection is?
I've heard it complained that people don't want to be friends in this community, they only want to be [poly] partners, and that might be related to this. People learn that the only context you get deep connection and intimacy (or for that matter friendship) is within romantic relationships. This is a digression, but feels related to the lack of deep social connection and the related status pica.
[1] That's how you get community centers of notoriety.
What I would want from them is a much greater responsiveness to employee feelings
The voice of Habryka in my head doesn't like this at all and is deeply concerned about the incentives it creates [...] something along the lines that if you reward people for suffering (e.g. giving them resources, etc.), people will want to signal suffering
I don't know what to tell you here. I agree suffering olympics are bad for everyone involved, but if you're going to do weird experimental work shit you need to listen to how it affects people and be prepared to change in response to that data.
I think it's fine to tell employees "look, this is how we do this and you can adjust or leave", but there does need to be a feedback loop incorporating how things affect workers.
To give you an example: I had a gig that I took with the understanding that it had long hours and an early start. I would have preferred otherwise, but was willing to work with it. Once I started, it turned out that the position required more extroversion than either of us expected, and that I was physically incapable of those hours and that level of extroversion.
Their real choices here were to change the job or let me go. The right one depends on many things: if every staff member said the conditions were unsustainable, they should probably change the entire work environment. If I was the only one who couldn't hack it and there was a good replacement available, they should let me go, without a stain on anyone's honor. But often people will try to create a third option. "Can't you just..." when the employee has already said that they can't. It's that fake third option I object to.
And that includes situations where the constraint is weaker than physical impossibility. Bosses do not need to keep employing people who refuse to work long hours, but they're not allowed to harass them, or keep manufacturing emergencies to increase their hours.
nod
Harassment definitely seems like a no-no, but also seems like a thing people would fail to notice themselves doing. No one thinks "I'll just harass them a bit to do what I want."
Meta: at this point in the convo, I'm not sure what we're aiming at. Maybe good to pick something.
Possibly it's drawing out more of "and this is what we think employers should do / how norms get upheld".
With regards to employee responsiveness, I think there are things that one might say are required, such as having periodic 1:1s with your staff so you know how they're doing, a clear exit policy/process, perhaps having severance even for someone leaving voluntarily. I'm not sure if this is interesting vs we should retrace our steps along the stack.
What do you think of going into "why bad-cult is an attractor state"? I feel like that's going to underlie a lot of what actions I think are useful.
Ah yes, I'm quite interested in that.
An important piece of my model here is that ~everything bad[1] is driven by self-reinforcing cycles around bad attractors.
I've known senior software engineers with modest expenses and enormous savings, living in a software hub, who were too drained by their jobs to go out and job hunt. The fact that the job is shitty is what kept them in it. And SSEWMEAESLIASF is among the best possible job hunting situations any human being has ever experienced.
Earlier I said "bad-cult is an attractor state". I want to spell that out now.
Let's say a group or company starts out basically good and slightly weird.
There is probably a leader. Someone can be a great, insightful leader when surrounded by people who are skeptical of them but prepared to listen, and terrible when surrounded by people too afraid or reverent to push back. Like the George Lucas effect but for group leadership.
Some of this is because the leaders become more sure of themselves and more pushy, but some is because members treat them differently, sometimes against the leader's will. A lot of people want a parent figure to tell them what to do, and they're the most likely people to seek out emotional wisdom gurus.
I was going to say that this was less applicable to jobs than to emotional wisdom groups, and that's true in general, but I think not for EA. Certainly someone who ~wanted to draw from that pool and didn't care that much about skills would find no lack of candidates who could be motivated by pseudoparental approval or the promise of meaning/impact.
I think Eliezer has avoided this by aggressively refusing to be a parent figure, or community leader, or hit on anyone. I would call this a success, but he still ends up with a lot of people mad at him for refusing to be the savior/daddy they wanted. So some of this has to be a property of followers rather than leaders.
I've talked to a number of people who joined the in-person LessWrong community very early, and did shit that makes Nonlinear sound tame. Things like flying to a different country and sleeping on the floor to work for free. A decade later, they're all extremely happy with their choices and think they'd be much worse off if someone had "protected" them from that option.
There's a big selection effect here, of course: the people who weren't happy left. But they still left, instead of staying and writing callout posts.
My model is that the very early rationalist risk takers were motivated by something different than later risk takers. At the beginning there was no social group to get approval from; you would only go if you personally thought it was a good idea. I get the sense a lot of people in modern EA are seeking social approval/connection/status first, and the impact goals are secondary. I think these people exist in rationality, but it mostly shows up in being on the periphery and hoping for more. There isn't the same job ecosystem in which to seek approval.
When I look at, uh, let's say "groups with high variance in outcomes", the people who do well are usually the ones who had goals and plans before ever joining the group. They might change their plans upon contact, but they have some security in their ability to adapt if the group disappears. The people who do the worst are the ones who are dependent on the group to provide meaning and direction.
On a very mundane level: when I was a software engineer I job hopped more than any of my engineer friends. I wonder how much of that was downstream of getting fired fairly early in my career and being forcefully exposed to how easy job hunting was, and then contracting for a while to ingrain the skill. Interviewing was a leisure activity for me.
Another part is that emotional-wisdom cults are often about helping people grow, and people who just had a growth spurt are very vulnerable and easy to manipulate.
My archetype here is Neo immediately after he was brought out of the Matrix. He's suddenly found out everything he knows is wrong, and is dependent on this group of strangers to feed and protect him; most people become more emotionally pliant in that situation.
EA doesn't have this directly, but it does create something like a "moral growth spurt", which may have similar problems.
Once a group gets labeled as cultish or untrustworthy, people treat members badly, worsening isolation, which is used to support the accusation of culthood.
I saw this happen with both Leverage and MAPLE. Members would go to parties, be asked normal, hard-to-avoid questions like "what do you do?", answer honestly, and the person talking to them would suddenly become less friendly and more suspicious. They might start interrogating the person, implicitly demanding they either condemn or wholly defend their org. Over time the group member goes to fewer and fewer non-group events, because they're tired of being harassed.
Even if you think the leadership of the group is malicious, this is an awful way to treat people you allegedly think are victims of abuse.
The longer you're in weird EA/x-risk jobs, the harder it is to go back to regular jobs.
If an org or group does something weird that violates one of your boundaries, and they say "oh, we do things differently because X", that erodes a little bit of your ability to set boundaries, even if X is correct and the weird thing is highly useful with no real costs. The more this happens, the worse people become at noticing when they're uncomfortable and acting on it.
Especially if you have few outgroup friends to tell you "wow, that is fucked", or you've gotten used to tuning them out.
And of course, evaporative cooling.
If you want to offer a job that will be positive EV for employees, and you're forgoing normal business norms, you need to actively fight the bad-cult attractor state. So I'd love to start debating specific norms, but one of my frames is going to be "how does this interact with the attractor state?"
[1] Please don't make me write out the caveats for this.
Some of these dynamics sound right, but something feels like it's missing.
First, I don't really want to spend much time defining "cult"; seems worth tabooing the word slightly just to be on the same page. The relevant part of cult to me is something like: "a group situation where members end up violating their own boundaries/wellbeing for the sake of the group (or group's leader)".
Your claim is then that there's an attractor whereby, when there's a group, there are forces that push it towards becoming a group where people's boundaries end up violated, presumably in order to extract more of something from them.
Your list so far feels more like it explains mechanisms that facilitate this rather than the core underlying cause. I'll summarize/paraphrase some of what I'm getting from you:
1. Once in a state of violated boundaries/decreased wellbeing, a person can lose the ability to get themselves to leave, thus perpetuating the state.
2. Dynamics between leaders and followers allow for boundary violation, either because leaders are pushy or because followers can be people who are just very vulnerable to it.
3. People can also be vulnerable to suggestion if they've just had a personal growth spurt. (I think this is quite applicable to EA, etc., where there's the spurt of "I can do so much good if I approach it analytically / holy shit, the world is on fire", followed by "okay, wise person who gave me this insight, how do I do it?")
4. Once people are in a state of boundary violation (or circumstances predictive of it), this can cause isolation that makes it further harder to get out.
5. Evaporative cooling.
(1), (4), and (5) are ways that, once you're in a cultish (boundary-violating/wellbeing-degrading situation), it's hard to leave. (2) and (3) just point at why people are vulnerable to it to start with.
These, plus what I didn't quite paraphrase, don't seem to get at the attraction of these states, though. But I think it's pretty simple, and worth stating:
People are often working to get what they value. Company founders want their company to succeed. Emotional-wisdom group leaders want their groups to succeed (followers, money, respect, etc.). And so on. What people want is often in conflict, and often zero-sum (I can't have your time and money while you also have it). But the default is that I try to get as much of what I want as I can, and there has to be something that stops me from taking stuff you've got that I want (money, time, your body, etc.). Options include:
I care about what you want as part of what I want, so I wouldn't take from you in a way that harms you (empathy).
Societal rules and enforcement that prevent me from harming you.
Societal rules and enforcement that prevent me from doing things that heuristically might harm you.
You being able to resist me harming you. This can be via avoiding the harmful situation in the first place, or via being able to leave if you find yourself in it.
(Societal rules often focus on preserving this.)
I think empathy is a shaky mechanism on its own, because of motivated cognition. Even if I do literally care about your wellbeing, my own desires can crowd out that awareness and rationalize that what I want is good for you too.
But yeah, according to this picture, it's the default that I extract from you everything I can to get what I want, unless there's some force preventing me. One of the default forces preventing this is standard societal norms about workplace operation (these seem to be about preventing too much dependence, which makes leaving harder).
And if you remove these counter-forces, then the default (me taking all I can get) prevails. And that's why "cultishness" is an attractor. It's something you've got to pump against.
To rephrase, for the audience, where we started: the tradeoff is that this protection is restrictive. There can be "false positives": things prevented that wouldn't have caused harm but would be very beneficial, or that at least have a benefit-to-cost ratio that all parties were okay with.
Maybe a good question for all orgs is "how easy is it for your employees to resist things that would be very counter to their preferences/wellbeing?" or, alternatively, "what mechanisms ensure that employee wellbeing isn't sacrificed to the goals of the org/company/leader/group?"
An important personal observation/insight for me this year was realizing that although I'm often a pretty empathetic person, i.e., I dislike the suffering of others and like their joy/preference satisfaction, my empathy naturally becomes weak whenever there's a conflict between what I want and what others want (which is often with the people around me!). Or if I'm having strong emotions about something, I become less able to be aware of other people's emotions, even though I would have been quite attentive and caring about them if I myself weren't having strong feelings.
I predict this isn't uncommon, and is at play when otherwise empathetic and caring people violate the boundaries and degrade the wellbeing of others.
I want to refine it a bit more.
Please do!
The relevant part of cult to me is something like: "a group situation where members end up violating their own boundaries/wellbeing for the sake of the group (or group's leader)".
Perhaps to better reflect that we're talking about multiple social situations, I should rephrase this to say that an abusive social situation is one where one person ends up violating their own boundaries/wellbeing for the sake of the "other".
Second, there is the question of "violating their boundaries/wellbeing/values when".
I think the "when" question becomes weaker if we elaborate on all the things that cults/abusive social situations erode:
1. Conscious boundaries: e.g., boundaries a person usually maintains for their own wellbeing, values, etc. (this could be not working too many hours, or being vegan).
2. "Inherent boundaries": the things that a person requires for their healthy functioning and wellbeing, whether or not they'd have listed them as boundaries, e.g. sleep.
3. "Freedom from coercive incentives": the uncontroversial notion of an abusive relationship is one where I threaten to hurt you if you leave, but it can be more subtle, such as when you are dependent on the relationship for any number of reasons, and this forces you to stay despite it being bad.
4. Resources to leave: e.g., stuff we discussed above.
5. Moral beliefs/"moral boundaries". Added by Elizabeth: Example: having gay sex. If you grow up the wrong kind of Christian, having gay sex violates a moral boundary. From my (Elizabeth's) perspective, that's incorrect, and helping people realize it is incorrect so they can have the consensual sex they want is a moral good. From their culture of origin's POV, that's corrupting them. And if they ever converted back to Christianity, I'd expect them to feel I corrupted them during their time in the godless liberal/libertarian world.
My guess is only 2, 3, and 4 of these hit the "when" problem; the others you can judge about a person while they are in the cult/abusive relationship, even if they're angrily defending their desire to be in it.
The moral belief/"moral boundary" one doesn't feel core to cults, or something? Like, from the original culture's perspective, some social entity has caused someone to end up with a false [moral] belief, but to me that's not the hallmark of a cult. People believe wrong things and try to convince others of wrong things. The problem seems to lie in the methods used to get people to believe things, and keep them believing them. I can definitely conceptualize a social entity that takes people from a situation where they have false beliefs and causes them to have correct beliefs, but in an abusive way, and I'd still consider it a cult.
5 definitionally hits the "when" problem (are they violating their own values? depends on when you ask), so I assume that's an edit issue.
I don't think it's obvious that 1 doesn't hit "when" problems unless you define it very narrowly (in which case your list is incomplete), but I also don't think it matters. Not every harmful situation needs to hit every issue. If a leader is skillful enough (or a situation unlucky enough) to manipulate you entirely by making you think you want something, enough that you will protest loudly if anyone tries to take it away from you or criticize the leader, do we declare it definitely harmless? Do we define it as harmful only if the ~victim changes their mind and declares it harmful?
Feels like we're getting technical here, and I'm not sure if it's crucial to the overall topic, but maybe.
Here's a thought: we can say that these are two kinds of "boundary violations":
(a) I cause you to change your beliefs via illegitimate means. For example, I apply strong incentives to your believing something, like group acceptance or getting to have a job. Or I use drugs or sleep deprivation to make you amenable to a belief I want you to hold. Conversely, the legitimate means of belief change are arguments and evidence that I myself find persuasive. (Kinda illegitimate, but less so, are arguments and evidence that are knowingly biased or misleading.)
(b) I coerce you into contravening a current boundary that you have.
I then claim that we solve the "when" problem by saying all boundary violations are of boundaries you have in the moment. If I changed your boundaries (e.g., made you think gay sex was okay) such that now you actually think something is okay, then your boundary is not being violated when you do it. However, the way in which your belief was changed may or may not have been a boundary violation.
If someone convinces you of something, you take actions because of it, and you later decide they were wrong and the action was wrong: I don't think that's inherently a boundary violation, even though the you before and after think the action was wrong and wish you hadn't done it. It's important to me that convincing someone of something false is not inherently a boundary violation. Only the means make it so.
I think in practice we see blends of the above. I half-convince you of something via illegitimate means, e.g. group pressure, and then further pressure you into taking the corresponding actions, e.g. by making group acceptance depend on it after I've separately made you burn all your external social bridges. The boundary violation is a mix of shifting your beliefs illicitly and pressuring you to do something against what remains of your resistance to it.
I like that definition; it captures most if not all of what I'm pointing at, and highlights our crux.
I think it makes sense to treat (b) as much more serious. And it seems fine not to care much about (a) on an individual level; can't make an omelette, etc. But it is shockingly easy to make people report views they would previously, and will in the future, find repugnant and against their own values and CEV. If you encounter one person reporting a terrible time with a group, that doesn't necessarily mean the group is at fault, and even if it is, it might be a small enough problem for a big enough gain that you don't want to fix it. But if a group is wounding people at scale, it's a problem regardless of intentionality. Society can only carry so many of those without collapsing. And I think (a) is the more interesting and useful case to discuss, specifically because it doesn't have any bright lines to distinguish it from desirable entities.
I think the seriousness of (a) depends a lot on the severity of the means employed, and the same for (b), really. Like, if I lock you in a cellar, starve you, and sleep-deprive you until you express a certain belief, that's hella serious.
But if I've understood your meaning, I agree that it's easy to get (a) at scale, and it can often be done more subtly than (b). Also, if you're good enough at (a), you don't really need to do (b) that much?
Habryka has an essay (not published, I think?) about how you get people to do crazy stuff, including people in EA/rationality, and it's really just the extent that people will go to for social conformity/acceptance. Arguably, people do (a) to themselves a lot.
If I model an EA startup getting you to do a lot of stuff you previously wouldn't have agreed to, a major component is that they convinced you that violating those previous boundaries was correct, even to the extent that you'd be angry at someone trying to convince you otherwise again.
I can imagine an EA startup offering a generous severance even for people who choose to leave, giving you good references to wherever you want to go, etc., but having convinced employees that the Right thing to do is to work hours too long for them and take a low salary, such that employees end up trapped by the beliefs. Maybe this is what you call the dependence on meaning.
Yeah exactly.
One complication is that people are pretty variable. You and I both worked jobs with longer hours and lower salaries because the jobs were high on Meaning, and neither of us regrets it. I'm >5 years out of that org, and wasn't even that productive there, but I still think my reasoning was correct, and I believe you feel the same. I don't want us to lose access to actions like that just because other companies bully other employees, or because EA as a whole memed people into accepting trades they don't actually want.
Another complication is that people rarely have all the information. If I found out my former job was secretly kicking puppies or cheating customers, I would retroactively feel angry and violated about going the extra mile for them. That's an extreme example, but it's not that hard to think of ways my job could have pivoted that I would feel betrayed by, or at least wish I hadn't put so much work in.
I'm thinking there might be a hierarchy to be constructed here.
Something like:
1. coercively violating people's boundaries
2. failing to save people from themselves
3. ensuring people don't hurt themselves
And then we have the question: is your obligation (as an employer, partner, guru, etc.) just to not do (1), or is the requirement that you're responsible for (3)?
And a challenge is that once someone has been hurt, it's tricky to answer whether you actively hurt them vs. failed to save them (or some mixture).
Perhaps the answer is that we want (3), but only to some reasonable, finite level, and the question is what that level is, after which we're like "you did the reasonable stuff; if they got hurt, it's more on them". Classically that's the 9-5, separation of work and personal, etc.
Above, you were calling for "check in with employees on how they're doing, fire employees if they seem to be taking too much damage" and such, which seems more like aiming for (3).
This is all an explorative frame here; not sure if this is quite right.
I think those are three spots, but there are important spots between them, especially between 1 and 2. Like last time, I think the most interesting and important part lies in the area of greatest complexity, so I'm going to zoom in on that.
If you force someone to drink poisoned koolaid at gunpoint, that's clearly #1. If you lie to them about what it is, still #1. If someone brings poisoned koolaid onto your property with the express purpose of drinking it, that's clearly #2.
But what if you leave the koolaid out, and they sneak into your yard and drink it? What if you properly labeled it "poison", but in a language they can't read? What if you leave out cans of alcohol that look and taste like soda, and a sober alcoholic drinks one by accident, ruining their sobriety and putting them into a downward spiral that takes years to recover from? What if the can was labeled as alcoholic if you paid attention, but you placed it with the soda cans and it blended in? What if you properly placed the alcoholic can, but sometime during the 8-hour party someone, possibly a child, moved it in with the sodas? What if the sober alcoholic showed up in a terrible mood and you're pretty sure they were looking for an excuse to drink, or picked up a can that could be confused with soda so they could hide their consumption?
What if it's a work event and you're up for the same promotion as the alcoholic? You benefit from him getting drunk. You'd never force booze down his throat, but is it your job to stop him from making a mistake? To keep the drinks sorted? At what point are you actively participating in creating a situation that hurts them and benefits you, vs. merely not taking responsibility for their actions?
I think the fundamental set of constraints is:
We don't want people to benefit from harming others, even when it's not a conscious plan/their character's plan.
...but that obligation can't be infinite, or nothing will ever happen.
A given activity can have wildly different impacts on different people.
Not everyone can be trusted to know how an activity will affect them, or to be resilient if they are wrong.
A lot of obvious fixes make the problem worse. Telling people "hey, this is highly valuable, but only if you're extremely special; otherwise it will be harmful" has a terrible track record.
I think #3 mostly doesn't belong on this scale, because it requires:
choosing someone's values for them
coercing them into doing the right thing by the values you chose.
Doing this to adults is, I think, net-harmful even when you mean well, and it's a huge exploit for anyone who is less than totally beneficent.
With #3, I am also thinking of the kind of thing you advocated, or at least raised as a possibility, early on in this dialogue:
What I would want from them is a much greater responsiveness to employee feelings, beyond what would be reasonable to expect from a 9-5 org. Either everyone agrees the job is a comfortable square hole and you will handle fitting yourself into it, or it's a complicated jigsaw and the employer commits to adapting to you as you adapt to them, and that this will be an iterative process, because you have left behind the protection of knowing you're doing a basically okay thing.
One of the easiest things you can do is fire, or not hire, people who (will) find the job too challenging. That's really all a manager owes the employee: hiring someone doesn't give you a lifelong responsibility for their emotions. And you can even give them a generous severance, which removes a great deal of the financial risk.
Firing someone if they seem sufficiently unhappy feels like a "save them from themselves" kind of thing. I think it's iffy and tricky, and maybe we only ever want level 2.5 on the scale, but then it's useful to point to this kind of thing and say that your obligation is "not that".
But what if you leave the koolaid out, and they sneak into your yard and drink it? What if you properly labeled it "poison", but....
I agree these are complex questions; I don't have thoughts yet. But I am curious whether you think this is a good frame to build on when it comes to figuring out norms/requirements for jobs, relationships, and other cults.
Fwiw, my preferred relationship style (especially romantic, but also friendship) does have a lot of #3. I want to save those I'm close with from themselves, and to be saved myself from myself. I don't know that choice of values is especially complicated there, though.
I think there is a big difference between "take responsibility for all harm that befell strangers" and "refuse to be used to harm people you love, even if they say they want it."
I do agree that on a practical level, telling employees "if you're not happy, we'll fire you" becomes a demand for emotional labor if someone needs the job (for money or meaning or anything else). You can try to handle this by only hiring people who don't need the job, but somehow my "jobs only for the independently wealthy" platform has failed to take off.
More moderate examples:
Retreats for meditation/rationality/hallucinogens are fantastically valuable for some people and life-ruining for others. How do you screen to make sure most participants find it beneficial? What's the acceptable false-positive rate?
1099s and gig work are often used to transfer value from employees to employers. But I love freelancing, and get angry every time a rule designed to protect me makes it harder to work the way I want to.
Lots of people enjoy BDSM in a healthy way. But predators do use it as camouflage, and victims do use it to retraumatize themselves. How should organized groups filter around this? How should individuals think about this when picking partners? What are the acceptable false-positive and false-negative rates?
People vary a lot in how they want married life to look. Forcing your wife to be a SAHM against her will is awful; dividing home and paid labor in ways that leave both husband and wife happier is great. What if you told your then-gf you were career military, so if you had children she'd have to solo-parent a lot of the time, and she accepted but is now unhappy with the deal?
I interviewed an ex-cult member here TODO LINK. His take was that it was strongly beneficial to him on net, and as soon as that stopped being true, he left. He did pay a heavy price for his membership, but was glad he'd done it. OTOH, there are enough unhappy ex-members that there are articles about this awful cult. What's an acceptable success ratio?
Digging beneath the examples and everything we've been discussing, perhaps the core questions are:
What restrictions do we place on people in order to prevent harm? This includes restrictions on voluntary agreements that people are allowed to enter into.
The restrictions might be "you can't do X" or "you can do X only so long as you also do Y".
Restrictions can be softer, like "norms" that people will judge you for violating, or actual legal rules that can get you imprisoned, fined, etc.
Who gets to make the judgments about what is and isn't okay?
In making restrictions, we are saying that individuals are not always capable of deciding for themselves. Or sometimes they are and sometimes they aren't, so do we want to assist the versions of themselves they somehow overall ~endorse?
In terms of restrictions, given that we're making them, the big tradeoff is the benefit of the many vs. the harms of the few. Are we going to restrict the freedom of the many to protect some number of people from [possibly] great harm?
I guess we've got two angles here: (i) you have to do something like pick an exchange rate between benefits/freedoms and restricting freedom to prevent harm; (ii) you can try to improve the Pareto frontier of the tradeoffs.
Getting back to our particular examples: professional norms are these soft restrictions, there at least in part to prevent harm. We might then say "but to do great things we must forego these protections", and then we can argue whether it's really worth it, or whether there are things that allow the benefit without immense cost if we do some other thing, e.g. you have to be paying attention to your employees and being responsive to them (do Y in order to do X), or we reduce the restriction a bit but not fully (you can do 2 or 3 out of these 5 things, but not all of them).
Providing warnings and making things much more clearly opt-in feel like Pareto-frontier-pushing attempts. Allow things, but give people more info. I'd also include things like "get everyone to chill out about status" as an attempt to change incentives so people are less inclined to do things that are bad for them.
I'm starting to brew some thoughts on interventions here, but need to think some more.
I guess we've got two angles here: (i) you have to do something like pick an exchange rate between benefits/freedoms and restricting freedom to prevent harm; (ii) you can try to improve the Pareto frontier of the tradeoffs.
I think this is the heart of it, with the complication that the trade-offs and mitigation math vary a lot by person. Which is one reason large companies turn into legible bureaucracies. But for small orgs asking extraordinary things from specific people, it does feel possible to do much better.
Not to ignore the fundamental challenge, but I'm curious what can be accomplished with Pareto-frontier-expanding things like memes/education. One idea, for example: maybe small orgs are free to make their extraordinary asks, but we've broadcast a bunch of memes to people about understanding and defending their boundaries, and about what abuse looks like (akin to "this is what drowning looks like").
As discussed in our side-channel, we maybe ought to conclude this document here and possibly continue elsewhere at some point.
In my ideal world I'd go back through everything and pull out the parts that were most interesting to me, but unfortunately I don't have the Slack right now and don't want to block publication any longer.
This has been very interesting, though. I appreciate the opportunity to help you express your models, and the chance to develop more of my own on the topic. Cheers! Thanks, Elizabeth.
I think this is a good topic to discuss, and the post has many good insights. But I kinda see the whole topic from a different angle. Worker well-being can't depend on the goodness of employers, because employers are gonna be bad if they can get away with it. The true cause of worker well-being is supply/demand changes that favor workers. Examples: 1) unionizing was a supply control which led to the 9-5 and the weekend, 2) big tech jobs became nice because good engineers were rare, 3) UBI would lead to fewer people seeking jobs and therefore make employers behave better.
To me these examples show that, apart from market luck, the way to improve worker well-being is coordinated action. So I mostly agree with banning 80-hour workweeks, regulating gig work, and the like. We need more such restrictions, not fewer. The 32-hour workweek seems like an especially good proposal: it would both make people spend less time at work and make jobs easier to find. (And also make people much happier, as trials have shown.)
Seems to me that debates about (de)regulation often conflate two different things, which probably are not clearly separated but exist on a continuum. One is that people are different. Another is cooperation vs. defection in the Prisoner's Dilemma (also known as sacrifice to Moloch).
From the "people are different" perspective, the theoretical ideal would be to let everyone do their own thing, unless the advantages of cooperation clearly outweigh the benefits of freedom.
From the "Moloch" perspective, it would be best for the players if defection were banned/punished.
As an example, should it be okay for an employee to have a sexual relationship with their boss? From the "people are different" perspective: hey, if two people genuinely desire to have sex with each other, why should they be forbidden to do so, if they are both consenting adults? From the "Moloch" perspective, we have just added "provide sexual services to your boss and pretend that you like it" to the list of things that desperate poor people have to do in order to get a job.
And both these perspectives are legitimate, for different people in different situations, and it is easy to forget that the other situation exists (and to have this blind spot supported by your bubble).
Simply asking people about their genuine preferences is not enough, because of possible preference falsification. Imagine the person who desperately needs the job: if you asked them whether they are genuinely okay with having sex with their boss, they might conclude that saying "no" means not getting the job. People could lie even if a specific job is not on the line, simply because taking a certain position sends various social signals, such as "I feel economically (in)secure".
But if we cannot reliably find out people's preferences, it is not possible to have a policy of "it is OK only if it is really OK for you", and without an anonymous survey we can't even figure out which solution would be preferable for most people. (In the near future, an AI will probably compile a list of your publicly stated opinions for HR before the job interview.) So we are left guessing.
Interesting, your comment follows the frame of the OP rather than the economic frame that I proposed. In the economic frame, it almost doesn't matter whether you ban sexual relations at work or not. If the labor market is a seller's market, workers will just leave bad employers and flock to better ones, and the problem will solve itself. And if the labor market is a buyer's market, employers will find a way to extract X value from workers, either by extorting sex or in other ways; you're never going to plug all the loopholes. The buyer's-market vs. seller's-market distinction is all that matters, and all that's worth changing. The great success of the union movement was that it actually shifted one side of the market, forcing the other side to shift as well.
I agree that in the long term, a seller's market is the answer (and in the era of AGI, keeping it so will probably require some kind of UBI). But the market is not perfect, so the ban is useful to address those cases. Sometimes people are inflexible: I have seen people tolerate more than they should have, considering a market position they apparently were not aware/sure of. Transaction costs, imperfect information, etc.
One missing piece of context from this response is that a central case under discussion is the one where the employer is hypothetically aligned with the goals of its employees (as is often the case for small non-profits hiring heavily mission-aligned employees).
By "hypothetically", I just mean that the employees (and likely the employer) both think they are basically aligned in this way, but there are remaining concerns around mistakes, deception, bad general norms, etc.
Sure, but there's an important economic subtlety here: to the extent that work is goal-aligned, it doesn't need to be paid. You could do it independently, or as partners, or something. Whereas every hour worked doing the employer's bidding, and every dollar paid for it, must be due to goals that aren't aligned or are differently weighted (for example, because the worker cares comparatively more about feeding their family). So it makes more sense to me to view every employment relationship, to the extent it exists, as transactional: the employer wants one thing, the worker another, and they exchange labor for money. I think it's a simpler and more grounded way to think about work, at least when you're a worker.
I mean, this is certainly not the relationship I have with my employer.
Here is an alternative approach you could use which would get you closer to this:
(Non-profit and values-~aligned) employers pay competitive wages or (ideally) pay in impact equity.
Employees adopt the norm of maximizing (expected) profit. They can donate this to a charity of interest. (Including donating it back to the charity they work at, but this isn't an expectation.)
This seems like a good approach naively, but unfortunately, I think there are a number of inefficiencies with wages and impact assessment that imply the costs here aren't worth the benefits in clarity.
I agree with you that UBI is the solution to 98% of labor-condition issues, and that's a major reason I support it. But some fields pay primarily in some other currency (impact, social status, connections), so you'd also need UBsocialsupport, UBfeelingImattertotheworld, etc.
I think this might be wrong. For example, my understanding is that there are some kinds of jobs where it's considered normal for people to work 80-hour weeks, and other kinds where it isn't. Maybe the issue is that the kind of "job category" that norms can easily operate on lets you pick out things like "finance" but not "jobs that have already made one costly vulnerability bid"?