Indeed, including the people who willingly caused it. But profiting from a problem is not the same as fixing it.
Emiya
Since I wrote my comment I had lots of chances to prod at the apathy of people to act against imminent horrible doom.
I do believe that a large obstacle is that going “well, maybe I should do something about it, then. Let’s actually do that” requires a sudden level of mental effort and responsibility that’s… well, not quite as unlikely as oxygen turning into gold, but you shouldn’t just expect people to do it (it took me a ridiculous amount of time before I started).
People are going to require a lot of prodding, or an environment where taking personal responsibility for a collective crisis is the social norm, to get moving. Ten million would count as a lot of prodding, yeah. 100k… eh, I’d guess lots of people would still jump at that, but not many of those who are already paid the same amount or more.
So a calculation like “I can enjoy my life more by doing nothing, lots of other people can try to save the world in my place” might be involved, even if not explicitly. It’s a mixture of the Tragedy of the Commons and of Bystander Apathy, two psychological mechanisms with plenty of literature.
She gave me the answer of someone who had recently stopped liking fritos through an act of will. Her answer went something like this: “Just start noticing how greasy they are, and how the grease gets all over your fingers and coats the inside of the bag. Notice that you don’t want to eat things soaked in that much grease. Become repulsed by it, and then you won’t like them either.”
This woman’s technique stuck with me. She picked out a very specific property of a thing she wanted to stop enjoying and convinced herself that it repulsed her.
I completely stopped smoking four years ago with the exact same method. It’s pretty powerful, I’m definitely making a technique out of this.
I think I was able to make outstanding progress last year in improving my rationality and starting to work on real problems mostly because of megalomaniac beliefs that were somewhat compartmentalised, but that I was able to feel at a gut level each time I had to start working.
Lately, as a result of that progress, I’ve started slowing down: I was able to come to terms with these megalomaniac beliefs and realise at a gut level that they weren’t accurate, so a huge chunk of my drive faded, and my predictions about my goals updated on what I felt I could achieve with the drive I was left with. This is even though I knew that destroying these beliefs was a sign I had really improved and was learning how hard it actually is to do world-changing stuff...
I’ll definitely give this a trial run, trying to chain down those beliefs and pull them out as fuel when I need to.
Mh… I guess “holy madman” is too vague a definition to have a rational debate about? I had interpreted it as “sacrifice everything that won’t negatively affect your utility function later on”. So the interpretation I imagined was someone who won’t leave himself an inch of comfort more than what’s needed to keep the quality of his work constant.
I see slack as leaving yourself enough comfort that you’d be ready to use your free energy in ways you can’t see at the moment, so I guess I was automatically assuming a “holy madman” would optimise for outputting the best effort he can currently sustain in the long term, rather than sacrificing some current effort to bet on future chances to improve his future output.
I’d define someone who leaves this level of slack as someone who’s making a serious or full effort, but not a holy madman, though I guess this doesn’t mean much.
If I were to try to summarise my thoughts on what would happen in reality if someone were to try these options… I think the slack one would work better in general, both by managing to avoid pitfalls and to better exploit your potential for growth.
I still feel there’s a lot of danger to oneself in trying to take ideas seriously, though. If you start trying to act like it’s your responsibility to solve a problem that’s killing people, the moment you lose your grip on your thoughts is the moment you cut yourself badly, at least in my experience.
Recently I’ve managed to reduce the harm some recurrent thoughts were doing by focusing on distinguishing between 1) legitimately wanting A and planning/acting to achieve A, and 2) my worries about not being able to get A, or my distress over things currently not being A. I tell myself that 2) doesn’t help me get what I want in the least, and that I can still make a full effort on 1), likely a better one, without paying much attention to 2).
(I’m afraid I’ve started to slightly rant from this point. I’m leaving it because I still feel it might be useful)
This strategy worked for my gender transition.
I’m not sure how I’d react if I tried telling myself I shouldn’t care/feel bad/worry when people die because I’m not managing to fix the problem. Even if I KNOW that worrying about people dying hinders my effort to fix the problem, since feeling sick and worried and tired doesn’t in any way help me actually work on it, I still don’t trust my corrupted hardware not to start running some guilt trip against me for trying to be, in a sense that’s not utilitarian at all, callous, for trying not to care/feel bad/worry about something like that.
Also, as a personal anecdote of possible pitfalls: trying to take personal responsibility for a global problem drained my resources in ways I couldn’t easily have foreseen. When I got jumped by an unrelated problem about my gender, I found myself without the emotional resources to deal with both stresses at once, so some recurrent thoughts started blaming me for letting a personal problem (which was in no way as bad as being dead, and didn’t register at all next to a large number of deaths) interfere with my attempt to work on something that was actually relevant. I realised immediately that this was a stupid and unhealthy thing to think, but that didn’t do much to stop it, and climbing out of that pit of stress and guilt took a while.

In short, my emotional hardware is stupid and buggy, and it irritates me to no end how it can just go ahead and ignore my attempts to think sanely about stuff.
I’m not sure if I’m just particularly bad at this, or if my expectations are too high. An external view would likely tell me that it’s ridiculous to expect to go from “lazy and detached” to “saving the world (read: reducing x-risk) while effortlessly holding at bay emotional problems that would trip up most people”. I’d surely tell anyone else that. On the other hand, it just feels like a stupid thing to fail at.
(end of the rant)
(in contrast to me; I’m closer to the standard 40 hours)
Can I ask if you have some sort of external force that makes you do these hours? If not, any advice on how to do that?
I’m coming from a really long tradition of not doing any work whatsoever, and so far I’m struggling to meet my current goal of 24 hours (also because the only deadlines are the ones I manage to give myself… and for reasons I guess I have explained above).
Getting to this was a massive improvement, but again, I feel like I’m exceptionally bad at working hard.
I think that approaches based on being a holy madman greatly underestimate the difficulty of being a value maximiser running on corrupted, basic human hardware.
I’d be extremely skeptical of anyone who claims to have found a way to truly maximise their utility function, even if they claim to have avoided all the obvious pitfalls like burning out and so on.
It would be extremely hard to reconcile “put forth your full effort” with staying rational enough to notice you’re burning out, or to notice you’re stuck on some suboptimal route because you haven’t left yourself enough slack to spot better opportunities.
“Detached academic” seems to me an odd way to describe Scott Alexander, who makes a really effective effort to spread his values and live his life rationally. Most of the issues he talks about seem pretty practical and relevant to him, even if he often follows what makes him curious and isn’t dropping everything to work on AI or to maximise the number of competent people who would work on AI.
I’m currently nine months into an attempt to move from detached-lazy-academic to making an extraordinary effort.
So far every attempt to accurately predict how much of a full effort I can make without getting backlash that makes me worse at it in the next period has failed.
Lots of my plans have failed, so going along with plans that required me to make sacrifices, as taking an idea Seriously would require, would have left me at a serious loss.
What worked best and produced the most results was keeping a curious attitude toward plans and subjects related to my goal, studying to increase my competence in related areas even when I don’t see any immediate way it could help, and monitoring how much “weight” I’m putting on the activities that produce the results I need.
I feel I started out being unbelievably bad at working seriously at something, but in nine months I got more results than in a lifetime (in a broad sense, not just related to my goal) and I feel like I went up a couple levels.
I try to avoid any state that resembles a “holy madman” for fear of crashing hard, and I notice that what I’m doing already makes me pass as one to even my most informed friends on related subjects, when I don’t censor myself to look normally modest and uninterested.
I might be just at such a low level in the skill of “actually working” that anything that would work great for a functional adult with a good work ethic is deadly to me.
But I’d strongly advise anyone trying the holy madman path to actively pump for as much “anti-holy-madman-ness” as they can, since making a full effort to maximise for something seems to me the surest way for your ambition to burn through any defence your naive, optimistic plans think they have put in place to protect your rationality and your mental health.
Cults are bad, becoming a one-man-cult is entirely possible and slightly worse.
The review seems pretty balanced and interesting, but the bit about Bailey struck me as really misguided.
I’ll try to explain why. I apologise if at times I come off as angry, but the whole autogynephilia issue annoys me both on a personal level, as a trans person, and on a professional level, as a psychology graduate and scientist. Alice Dreger seems to have massively botched this part of her work.
In 2006, Dreger decided to investigate the controversy around J. Michael Bailey’s book The Man Who Would be Queen. The book is a popularized account of research on transgenderism, including a typology of transsexualism developed by Ray Blanchard. This typology differentiates between homosexual transsexuals, who are very feminine boys who grow up into gay men or straight trans women, and autogynephiles, men who are sexually aroused by imagining themselves as women and become transvestites or lesbian trans women.
Bailey’s position is that all transgender people deserve love and respect, and that sexual desire is as good a reason as any to transition. This position is so progressive that it could only cause outrage from self-proclaimed progressives.
Bailey’s position caused outrage in nearly every trans woman who read the book or heard the theory, and in many other trans people who felt delegitimised and misrepresented by its implications.
If you are transgender, you are suffering from gender dysphoria, and you aren’t transitioning for sexual reasons at all, though your sexual health does often improve. You are doing what science shows to be the one thing that resolves the symptoms that are ruining your life and making you miserable.
But then, someone who’s not trans comes along and says “no, it’s really a sex thing” based on a single paper that presented no evidence whatsoever.
This person, rather than rigorously testing the theory with careful research (which is what everyone should do, especially someone who doesn’t feel what trans women feel and is thus extremely clueless about the subject, since it’s really easy to misunderstand a sensation your own brain can’t produce), bases one of the book’s two clusters mostly on a single case study of a trans woman whose sex life isn’t at all representative of the average trans woman, but who makes for a very vivid, very peculiar account of sexual practices. The rest of the “evidence” is just unstructured observations and interviews.
The book doesn’t talk at all about how most trans people (men, women, and non-binary) discover they are trans, and doesn’t accurately describe their internal experience at all. It instead presents all trans women as being motivated by sex, and half of them by sexual tendencies that psychology depicts as pathological.
And then, somehow, this completely unfounded theory becomes one of the best-known theories about trans women.
So, if you are a trans woman, the best case is that your extremely progressive friends and family come to you and say “oh, we didn’t know it was just a sex thing, you could have told us you had these very weird sexual tendencies rather than make up all that stuff about how your body and society’s way of treating you like a man make you feel horrible; it’s fine, we understand and love you anyway”.
In the worse and more common case, your friends, family, work associates and so on aren’t extremely progressive. They still believe Blanchard’s and Bailey’s theory about you, though.
And then, when the trans community starts yelling more or less in unison “what the hell?!” at what Bailey wrote in his book, the best response he can come up with is saying that the trans women attacking him are in a narcissistic rage, because they are narcissists whose pride has been wounded by the truths he wrote, and that they are autogynephiles in denial.

Bailey attracted the ire of three prominent transgender activists who proceeded to falsely accuse him of a whole slew of crimes and research ethics violations. The three also threatened and harassed anyone who would defend Bailey; this group included mostly a lot of trans women who were grateful for Bailey’s work, and Alice Dreger.
I’m not aware whether some transgender women tried to defend the book, but “a lot of transgender women” seems a more accurate description of the book’s detractors than of its supporters.
I’m aware of the fact that the three activists mentioned went way too far to be justified in any way. But presenting those as the only critics he received is completely wrong, because there was a huge number of wounded people who saw their lives get worse because of the book.
Autogynephilia was popularised as a theory mostly by Bailey’s book, and trans-exclusionary radical feminist groups, which are currently doing huge damage to trans rights and healthcare, use it as one of their main arguments to delegitimise trans women and routinely attack trans women with it. Even if Bailey’s intentions were good, he failed miserably and produced far more harm than anything else.
I’ll try my best to express it, even if I feel it makes me look stupid:
Short version:
Trying to improve how activism is done, figuring out ways to maximise the positive impact activists and activism organisations can have to advance their cause and that can be reasonably taught.
Reasoning
Activism organisations that are composed of volunteers and don’t hire professionals are limited in what they can learn about their craft. Typically, activists can figure out by trial and error, and by looking at others, what seems to work or not, but only when there is feedback one can correctly eyeball.
So there is no reason to believe that the efficiency of these activists and organisations can’t be improved.
An individual studying communication and organisation likely couldn’t push the efficiency frontier of marketing firms or professional communication organisations, but even bringing the efficiency of volunteers closer to the current efficiency of professionals would be a huge improvement, able to produce a lot of positive value for the world, if one chooses the right organisations to boost.
Currently I’m focusing on the communication of mainstream causes that deal with x-risk-related issues; the second step would be to use the strategies learned to boost non-mainstream causes that address stuff even more relevant to x-risks (if anyone is involved in a similar attempt or cause already, they are welcome to contact me; I’d love a chance to talk about this and see if cooperation is possible).
First steps I should currently be doing
Essentially, I think I should be developing a system in Excel that would let one classify posts on social media according to their characteristics, and then investigate at a statistical level what works and what doesn’t.
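To make the idea concrete, the kind of analysis that spreadsheet would support can be sketched in code. This is a minimal, hypothetical illustration (all post data, tag names, and engagement numbers are invented): tag each post with its characteristics, then compare average engagement between posts that have a given characteristic and posts that don’t.

```python
# Sketch of the post-classification analysis: each post gets a set of
# characteristic tags, and we compare average engagement for posts
# with vs. without a given tag. The data below is purely illustrative.

from statistics import mean

posts = [
    {"tags": {"image", "question"}, "engagement": 120},
    {"tags": {"image"}, "engagement": 80},
    {"tags": {"question"}, "engagement": 60},
    {"tags": set(), "engagement": 30},
]

def engagement_by_tag(posts, tag):
    """Return (avg engagement with tag, avg engagement without tag)."""
    with_tag = [p["engagement"] for p in posts if tag in p["tags"]]
    without = [p["engagement"] for p in posts if tag not in p["tags"]]
    return (mean(with_tag) if with_tag else 0.0,
            mean(without) if without else 0.0)

with_img, without_img = engagement_by_tag(posts, "image")
print(with_img, without_img)  # 100 vs 45 on this toy data
```

In Excel this would correspond to one row per post, one column per characteristic, and AVERAGEIF-style comparisons per column; a real analysis would of course need far more posts and some care about confounders.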
I’ve started but continuing slowly, because it’s hard, it’s something I’m really not familiar with and I have an unhealthy attitude of flinching away from anything that’s hard to do in a way that makes me feel stupid and out of my league.
The second thing I should be doing is an “inadequacy analysis” of the current processes in the organisation I’m in, to see all the low-hanging fruit one could pick to improve performance.

So far I’ve failed to identify any fruit except two (the statistical analysis is one; the second is how work is distributed to volunteers, which seems an easier fix), because I’m likely overly worried about “shooting my foot off and falling flat on my face in a way that makes me look stupid”, so I’m flinching away again.
I did manage to correct some other major procrastination problems and I’m now able to reliably get hours of work done for this project, but so far I have spread this work in too many directions (like trying to study negotiation tactics for the future, rationality, persuasion strategy, and communication strategies on social media, all at once), so I couldn’t really focus enough effort to make significant progress on any one thing.
I’m trying to fix the problem by creating habits and incentives that orient me toward the most important things I should be doing, rather than the most “interesting” things I could be doing that are somehow related to the project.
I might also need to learn more efficient ways to study and practice stuff, so far I’m still studying as if to pass a written exam on it.
I’m not 100% sure I understood the first paragraph, could you clarify it for me if I got it wrong?
Essentially, the “efficient-markets-as-high-status-authorities” mindset I was trying to describe seems to me to work like this:
Given a problem A, say providing life-saving medicine to the maximum number of people, it assumes that letting agents motivated by profit act freely, unrestricted by regulations or policies (even ones aimed at fixing problem A), would provide said medicine to more people than an intentional government policy that tries to provide said medicine to the maximum number of people.
The market doesn’t seem to have a utility function in this model, but every agent in this market (at least, every one able to survive in it) is motivated by a utility function that just maximises profit.
Part of the reason for the assumption that a “free market of agents motivated by profit” should be so good at producing solutions to problem A (save lives with medicine) is that the “free market” is awesomely good at pricing actions and at finding ways to make profits, because a lot of agents are trying different things at their best to get profit, and everything that works gets copied. (If anyone holds a roughly related theory and feels I butchered or misstated the reasoning involved, you are welcome to state it correctly; I’m genuinely interested.)
My main objection to this is that I fail to see how this is different by asking an unaligned AI that’s not super intelligent, but still a lot smarter than you, to get your mother out of a burning building so you’d press the reward button the AI wants you to press.
If I understood your first paragraph correctly, we are both generally skeptical that a market of agents set on maximising profit would be good, on average across many different possible cases, at generating value other than maximised profit.
Thank you for the clarification between unregulated and free.
I was aware of how one wouldn’t lead to the other, but I’m now unsure about how many of the people I talked to about this had this distinction in mind.
I saw a lot of arguments for deregulation in the political press that appealed to the idea of the “free market”, so I think I usually assumed that someone arguing for one of these positions would take a free market to be an unregulated one, and not foresee this obvious problem.
I actually can’t recall seeing anyone make the mistake of treating efficient markets like high-status authorities in a social pecking order.
I’ve seen often enough, or at least I think I’ve seen often enough, people treating efficient markets or just “free, deregulated market” as some kind of benevolent godly being that is able to fix just any problem.
I admit that I came from the opposite corner and that I flinched at the first paragraphs of the explanation of efficient markets, but I still feel that a lot of bright people aren’t asking the questions:
“Is it more profit-efficient to fix the problem or to just cheat?”
“Can actors get more profit by causing damages worse than the benefits they provide?”
“Is the share of actors that, seeing that the cheaters niche of the market is already filled when they get there, would go on to do okayish profits by trying to genuinely fix the problem able to produce more public value than the damage cheaters produce?”
Asking an unregulated free market to fix a problem in exchange for rewards is like asking an unaligned human intelligence with thousands of brains to do it.
I have seen more blatant examples of this toward the concept of the free market, but a lot of people still seem to interpret the notion of an “efficient market” as “and given the wisdom of the efficient market, the economy will improve and produce more value for everyone”, and I feel the two views are related, though I might be wrong about how many people have a clear distinction between the two concepts in their heads.
“If these investments really are bogus and will horribly crash the economy when they collapse, surely someone in the efficient market would have seen it coming” is the mindset I’m trying to describe, though this mindset seems to have a blurry idea of what an efficient market is about.
A journalist thinks that a candidate who talks about ending the War on Drugs isn’t a “serious candidate.” And the newspaper won’t cover that candidate because the newspaper itself wants to look serious… or they think voters won’t be interested because everyone knows that candidate can’t win, or something? Maybe in a US-style system, only contrarians and other people who lack the social skill of getting along with the System are voting for Carol, so Carol is uncool the same way Velcro is uncool and so are all her policies and ideas? I’m not sure exactly what the journalists are thinking subjectively, since I’m not a journalist. But if an existing politician talks about a policy outside of what journalists think is appealing to voters, the journalists think the politician has committed a gaffe, and they write about this sports blunder by the politician, and the actual voters take their cues from that. So no politician talks about things that a journalist believes it would be a blunder for a politician to talk about. The space of what it isn’t a “blunder” for a politician to talk about is conventionally termed the “Overton window.”
I’d agree with Simplicio that voters’ “stupidity” (as in ignorance and inability to judge correctly even issues where a scientific consensus has been reached, and where it really feels like a good, intuitive idea to do a ten-minute internet search and check what the most accredited institutions say on the matter) would interact a lot with the border of the Overton window.
If 90% of voters were able to mock any “stupid” idea suggested, moving out of the Overton window by lowering the quality of the ideas discussed would be plain suicide, while moving up would sometimes be rewarded. Attempts to shift the Overton window downward, such as “hey, let’s completely go against what (insert science field) says about (insert important issue), and let’s (choose between: prohibiting a particular subgroup’s therapies even if science says it’s really a good idea to provide them / arguing against preventing a key crisis that will produce unbelievable damage in the short-term future / proposing a completely unfounded model of how social issue X works, along with a solution unrelated to any actual finding on the matter and with a track record of failures)”, would be harshly punished by the voters, while right now these seem to make up roughly 30% of the politics discussed.
Still, I guess Cecie’s theory can explain the source of this “stupidity” with systemic failures that happen in other parts of society such as information and education, while if we just ascribe this to widespread individual “stupidity” and “sheepness” we are not less confused, but perhaps more so.
I wonder about that.
I’d expect we’d first see a huge number of newspaper articles and websites trying to create health scares about “lab meat”, an ungodly amount of memes about “real men eating real meat” or “only real meat has real taste”, and then governments ramping up subsidies to traditional farms because “cultural activities” and whatever. Oh, and a lot of jokes about the synthetic meat that many sci-fi dystopias feature.
Old, powerful lobbies don’t like the free market regulating itself, at all, and making harmful/obsolete stuff a cultural/identity/political tribes battle is the first strategy to hinder it.
I’d agree it will eventually become the solution, but I expect it to go slightly worse than the energy transition.
Computer game characters also exhibit “intentions” and such, but there’s nobody home a lot of the time, unless you’re playing against another person.
Yes, but what we know about the structure of a computer program is greatly different from what we know about the structure of an animal brain. More complex brains seem to share a lot of our own architecture; mammal brains are ridiculously complex, and mammals show a lot of behaviour that isn’t purely directed at acquiring food, reproducing, and running from predators.
For animals such as frogs and bugs, which seem to be built more like “sensory input goes in, reflex goes out”, I’d accept more doubt about whether the “somebody’s home” metaphor holds; for mammals and other smarter animals, the doubts are a lot less believable.
It seems cows might be smarter than dogs and highly intelligent, and dogs are currently discussed as possibly having self-recognition, since they pass olfactory tests that require it (from what I saw, the tests are a bit more complex than just requiring the dog to have a “this-is-your-urine-mark-for-your-territory.exe” in its brain).
Generally speaking, cows show long-term social relations with each other, good problem-solving skills, and long-term effects on their emotional range from negative experiences. I haven’t been able to find information on cows passing or failing self-recognition tests, visual or otherwise, but given the intelligence they show I’d put them pretty high on moral meaningfulness.
Pigs are notoriously smart and have passed the self-recognition test, as Pattern commented.
Though I think my main point is that even simpler animals would have some scaled-down moral weight, as long as their brain architecture allows for doubt about whether our experience of “being home”, feeling pain, and so on is in some way generalisable to theirs.
If I were to lose my higher cognitive functions and be reduced to animal levels of intelligence, I wouldn’t really be okay with agreeing to be subjected to significant pain in exchange for a trivial benefit, on the grounds that I wouldn’t be sapient.
Note: this isn’t really aimed at turning LessWrongers vegan. There are convincing reasons to be vegan based on the impact on humans, but if you are already trying to be an effective altruist by doing a hard job, I can accept the need to conserve willpower and efficiency, though I guess one could consider whether one could reduce consumption without risk.
I think the issue of the moral weight of animals should be considered independently from the consequences it might hold for one’s diet or behaviour, or we’re just back to plain rationalisation.
I do agree with everything you said.
Right now farming animals seems to be a huge zoonosis risk. If I remember correctly, Covid-19 could have spread from exotic animals being sold in high numbers, and it jumped from humans to minks in farms, spread like wildfire in the packed environment, gathered all sorts of mutations, and then jumped back to humans.
Farming animals is also not at all sustainable at our current levels of tech, resources, and consumption. I’d expect the impact of farming to kill at least some tens of millions of people in a moderately bad global warming scenario; it’s already producing humanitarian crises now, and I’m afraid global warming increases extinction risk by making us more likely to botch AGI.
I had just suggested the rule for an entirely hypothetical scenario where we are asked to trade human lives against animal lives, because I was trying to discuss the moral situation “trade animal lives and suffering against human convenience” on its own.
I generally avoid commenting only if I feel I have nothing relevant to say. The only thing that makes me delete a comment mid-writing is realising that I’m writing something that’s wrong.
If I notice I made a mistake mid-discussion, or after I’ve already posted a comment people read, I admit it, and I’ve seen that usually up-votes show it’s appreciated.
Usually when I comment it’s because I have… let’s call them “political beliefs”, though they are always about concrete things and decisions, that are a lot more “left-leaning” than the average position here. As long as I’m confident in my reasons for holding such beliefs, I don’t seem to worry about my reputation at all, even if I think I’m about to say something “unpopular”. As long as I’m willing to explain myself and change my mind if I’m wrong, I think that holding back on expressing such ideas makes the site weaker and betrays its spirit (I do try to keep the discussion as apolitical as possible). I don’t comment unpopular opinions if I don’t think I can put in the effort to explain them well.
Often commenting on LessWrong is a useful test for my belief in something; the thought of having to justify your disagreement with the “smart kids club” makes me check my reasons for believing things more carefully, and put in some research work.
The reputation system seems to work fine for me, since it gets me to improve. The few times I tried discussing something in PMs, though, it turned out less confrontational and more productive, so I think that’s a good approach (and it’s much more enjoyable).
I also try to remember making short comments of agreement to make our kind cooperate.
I do feel stupid and irritated each time I get down-voted, since I try not to comment on stuff I don’t know about or to write shallow statements. I can’t help but think “wow, whoever this person was is very biased against my idea”, which… likely isn’t a mature reaction. I’d like to know why I get down-voted, though.
I’m a hundred times more self-conscious about making posts, though. I feel the stress of having a post come under the scrutiny of the community would make me obsessively edit and quadruple-check everything, so at least four ideas for posts have died this way without any good reason (so far I’ve managed to post just two questions).
Using an anonymous account or something like that wouldn’t work at all: I’m not concerned about lesswrongers writing off Emiya as an idiot, I’m afraid of me thinking I’m an idiot because my ideas got shredded apart, which… is not a way of thinking about this that’s in any way good or useful, and it’s hindering my progress, so I should really try to break through it.
“Meaningfully conscious” seems a tricky definition, and consciousness a rather slippery word.
Animals clearly aren’t sapient, but saying they aren’t conscious seems to also sneak in the connotation that there’s “nobody home” to feel the pain and the experience, like a philosophical zombie.
It’s pretty clear that animals act like there’s somebody home, feeling sensations and emotions and having intentions, and what we know about neurology also suggests that.
Given how some animals even pass self-recognition tests, sapience seems the only hard cut-off we can trace between animals and humans.
I’d certainly agree that we should value life based on how “complex” its mental life is (perhaps with a ceiling that we reach when we hit sapience, which I’d like to introduce for our convenience), and it certainly makes sense that we shouldn’t concern ourselves with the well-being of stuff that has no mind at all, but it doesn’t seem intuitive that the lack of sapience should mean that whatever suffering strikes a mind has zero moral weight.
If we agree that the suffering of a mind has a certain weight, then yeah, the “flesh eating monster hell” is a quantitatively reduced version of doing the same thing to human beings (measuring in total moral wrongness, some consequences of doing it to humans would be totally absent and others wouldn’t be scaled down at all). We can of course discuss how much the moral wrongness is reduced.
One might argue that it’s certainly preferable to slaughter a cow than to have a human die of hunger, or to slaughter a cow (one with exactly the meat of a human, for the convenience of our example) to feed two humans and save them from starvation rather than to slaughter a human to save two humans, and I’d agree.
I’d even agree that one might have much more urgent things to do for the wellbeing of others than becoming vegan.
But the fact that we value human lives more than animal lives, because of sapience, doesn’t imply that animal lives and suffering have no value whatsoever, and as long as animal lives have some value, there are some trade-offs of animal pain for human convenience we should refuse, or we’re not thinking quantitatively about morals.
Deontological rules such as “let’s let any number of animals die to save even a single human life” might be considered a temporary placeholder to separate the issue of human lives from the issue of human convenience; I think it might make discussing the issue easier.
Ziz adheres to a moral principle which classifies all life which has even the potential to be sentient as people, and believes that all beings with enough of a mind to possess some semblance of selfhood should have the same rights that are afforded to humans. To her, carnism is a literal holocaust, an ongoing and perpetual nightmare of torture, rape, and murder being conducted on a horrifyingly vast scale by a race of flesh-eating monsters. If you’ve read Three Worlds Collide, Ziz seems to view most of humanity the way the humans view the babyeaters.
… well.
… I mean, leaving aside the holocaust comparison, which is just asking to have the whole discourse pulled astray, can you really make rational arguments that it’s not at least as bad as a quantitatively reduced version of doing the same things to human beings?
Having said this, I’m just puzzled on why she seems to think that the “flesh-eating monster hell” would survive a positive singularity with a human-aligned AI.
I can’t really imagine a future with a positive singularity where there’s just no more convenient way to have meat than actually growing and butchering a live animal. Humans, save perhaps a handful of sadistic psychopaths or a few people really wanting to cling to fringe stuff like “the moral value of the hunt” or barbaric recipes that supposedly improve taste, would choose to have their meat sans suffering if that was an option. You’d have to model people worse than Quirrelmort does, because they wouldn’t even be able to role-play a good person in an act as simple as answering yes to that question.
Or should I interpret her utopia as making all life immortal as well, protecting animals from accidental deaths, from each other, etc.? I’d say it still seems like a trivial fix and not something to threaten or sabotage people working on the singularity over.
I honestly felt a full cognitive assault reading the links with her writing and had to commit to never open links to her blog again, but I think this is mostly my own issues resonating hard.
Two months ago I kicked open the lid of my gender dysphoria, after having repressed it for some 15 years. I quickly found out that rationality plus a kind of distress that doesn’t get better if you can think clearly about it (not to say that rationality can’t help make it go away in other ways) can quickly degenerate into paranoia, since you can’t seem to manage to push a stop button on whatever search process your mind is attempting in order to solve your pain.
I had overthought what I was feeling a dozen different ways, and the way she seems to model other people’s thoughts struck a lot of resonating chords with me about what people I talked to could actually be thinking about me.
I’m going to focus most of the post on the theme of trans people; I think it exemplifies the first of the two main problems behind a “social conservatism” approach.
1. Conservatives generally don’t provide good arguments for their worries.
The model they’ll present will rarely stand as a coherent argument or have moving parts you can examine: “gay marriage → loss of societal cohesion”, with a not-really-explained “loss of validity for traditional marriage” in between.
When there are detailed models, scientific literature will usually prove most of the concerns wrong. Progressives are usually the ones aligned with the science, at least in the recent struggles (if one can provide counterexamples they are free to do so; currently the strongest one I could think of is transgender athletes in sport, where both sides are misaligned with the studies: transgender athletes seem to retain an advantage under the current guidelines, but it’s not enough to make women’s sport a one-sided battle dominated by trans athletes, or to be reasonably certain the advantage is there at all, and treating all sports as being influenced in the same way is nonsense).
2. The current situation might well be running at full speed toward a crash.
A cautionary approach that says “don’t change anything, you don’t know what you could break in our society” would be tenable only if we seemed to be at a really good and stable point.
But broken stuff in our society can carry costs and problems that compound, which seems to be pretty much the situation we are in at the moment, given the number of crises our society is facing, up to extinction risks, so stasis doesn’t seem an option.
I should add that none of this is hypothetical. Right now, as we speak, young people are being actively encouraged by progressive parents, teachers and activists to ask themselves the question if maybe they’ve been born in the wrong body. And while progressives insist that this can’t possibly do any harm because all sex-related matters are unique in being the only human traits that are fully genetic and on which environment has zero effect, my counterargument is that that’s horseshit.
People are being actively encouraged to know that there are some people born in the wrong body, not to ask whether they themselves are really cis. I’ve yet to see an activist write something that would encourage random people to question their gender identity, save in the kind of internet places you go to look at if you are questioning your gender identity, and those generally say stuff like “if you think these experiences match your own, you might want to keep questioning your gender identity; here are other experiences that aren’t related to that, to help you differentiate”.
There is very strong evidence that people can’t change their sexual orientation or their gender identity on command, such as the staggering, complete failure rate of every sort of therapy that ever tried to achieve that.
My (more or less informed) guess is that a lot of people have a sexual orientation that’s more or less on the bisexual spectrum, and so their environment could influence whether they acknowledge and/or act on it or not. But the environment can only work on cases where the innate preference isn’t too marked. If you’re bisexual with a 50⁄50 preference you’d have a hard time not noticing it. If you have a 90⁄10 preference, you might believe you are straight (or gay, if the preference is same-sex) if you grow up in one environment, or notice that 10% if you grow up in another.
Similarly, gender identity also seems to follow a continuous scale. Some people might go either way, and since transitioning isn’t exactly easy, their environment would likely influence their decisions on the matter. But a lot of the people who will “pop out” as trans depending on the environment will likely be people who are very much trans, and who will notice they are because they are informed on the subject.
I do agree that some harm might come of it. The number of people who transition and then de-transition will rise, and they will suffer the social stigma, huge hassles, and other problems associated with transitioning.
But, given that the numbers seem to be hundreds of de-transitioners who would suffer from it versus millions of trans people who would greatly reduce their suffering, it seems that for now we should floor it on “more education on transgenderism” and check on what follows. Being trans and not noticing it, or not being able to act on it, is a real harm.
Even if you insist that the number of trans people is kept constant across time and space by some kind of universal law, their suicide rates are still some factor ~18 higher than the rest of society, and you cannot possibly expect me to believe that this has nothing to do with them being constantly told by trans activists that the world hates them and that there is nothing they can do about it (by the way, I don’t hate you.) So from my point of view, progressives are only making impressionable young people more miserable by convincing them that their current reality is intolerable and evil.
As a trans person this is not my experience at all. Trans activists usually provide more support to trans people than depressing content. My worries about how the world would treat me if I were trans decreased as trans activism became more prevalent, because you get a sense that a growing number of people would just accept you.
Transphobia is, by far, the most likely suspect for the suicide rates of trans people who have transitioned. Trans people are still shown to be heavily discriminated against and at a much higher risk of assault or unemployment, and discrimination correlates strongly with these increased suicide rates.
The locker-room talk you hear in high school, transphobic media of all kinds, stuff like that will convince you that the current reality is intolerable and evil, even if you never hear about assaults on and discrimination against trans people in the news. In high school I went straight to “nope, not worth even considering the question” because of these things, and trans visibility was basically zero (and by the way, thank you for expressing support!).
Activism will tell you “it’s bad, but it’s getting better, there are people and places that will accept you right now, and we can make this even better”. So, in this sense, it will lead to more trans people being out, and more trans people realising they’re trans, which is not a bad thing, because gender dysphoria and all that follows from it take the lion’s share of the suicide rates before transitioning. Even despite transphobia and discrimination, trans people are much more at risk of suicide before transitioning; gender dysphoria is just that bad.
The trans problem wouldn’t get worse if society tried to go the stasis route, but stasis would still mean paying a huge amount of suffering and death for no good reason.
Alright, I don’t think I have any problem talking a bit about it in private with you, for the time being I’d rather avoid sharing more in public though.
If anyone else thinks information on this could be helpful, they can contact me, but please only do so if you think it’s really relevant for you to know.
My working theory is that Putin could be worried about some kind of internal threat to himself and his power.
He’s betting a lot on his image as a strong, dangerous leader to stay afloat. However, the constant Russian propaganda that keeps up that image was becoming more and more widely recognised, and less effective.
Europe has also been trying to get rid of the Russian influence exerted through gas for a while, and would likely have managed in a few more years. Then it would have been free to be less accepting of his anti-human-rights antics.
Ukraine joining Nato would have made him look extremely weak, and it would have made it easier to make him look weak in the future.
Once his strong image faded, he might have been worried that reforming forces within Russia would manage to oust him from office through an actual election and mass protests if the ball got rolling enough, or he might have been worried about someone taking a more direct approach to eliminating him (he has killed enough people to be extremely worried about being murdered, I think).
So this is his extreme move to deny weakness: better to be seen as the tyrant who’s willing to do anything if provoked than as the ex-strong leader who can be taken out of office.