I need some help convincing the people in my “bubble” that it is now safe. Does anyone have any resources that explain—credibly, with citations to data, something that would convince a smart and skeptical person—why it is okay for fully vaccinated people to interact with potentially infected individuals?
So... I happen to have converged upon the same insight, and have actually tried to use this exact phrase in the wild.
Unfortunately, since I'm an immigrant, people understandably often assume I was talking about nation-level differences involving my country of birth, rather than my particular family and the specialized microcosm of friends that I surround myself with. Any ideas for making the wording more precise so as to avoid this?
(I’ve tried modifications like “in my family” or “the way I grew up” or “how I was raised” but more or less the same problem occurs. “Among my friends and I” sort of works, sometimes? But mostly I’ve just given up on trying to reference culture in navigating misunderstandings.)
Here’s an instrumental rationality problem:
Wisdom teeth—preemptively remove them or not?
(risks of surgery / risks of having wisdom teeth / potential benefits of retaining them?)
Attempting to resuscitate a child, failing, and then going about one’s day is neither ruthless nor cruel, but I think I understand what you mean. It can be jarring for some people when doctors are seemingly unaffected by the high intensity situations they experience.
Doing good does sometimes require overriding instincts designed to prevent evil. For instance, a surgeon must overcome the natural instinct against causing harm when she cuts into a patient's flesh and blood pours out. The instinct says this is cruelty; the rational mind knows it will save the patient's life.
There are hazards involved in overriding natural instincts, because instincts exist for good reason: think of Crime and Punishment, where the protagonist overrides the natural instinct against murder because he has convinced himself it serves the greater good. There are also hazards involved in following natural instincts. Humans have the capacity for both.
Following instincts and overriding them are each appropriate at different times. What matters is placing correctly proportioned trust in reasoning versus instinct: you need to consider when instincts mislead, but also when reasoning misleads.
It would be a mistake to take a relatively clear-cut case of a doctor overriding natural sympathetic instinct (for which there is a great deal of training and precedent establishing that it is a good idea) and turn it into a generalized principle of "trust reason over moral instinct" under uncertainty. There is no uncertainty in the doctor's case; the correct path is obvious. Just because doctors are allowed to override instincts like "don't cut into flesh" and "grieve when witnessing death" in a case where it has already been decided that this is a good idea doesn't mean they get free license to override willy-nilly whenever they've convinced themselves it's for a greater good. They still have to undergo the deliberative process of asking whether they've rationalized themselves into something bad.
That sounds like a meaningful experience. Can you be more specific about the paradigm shift it caused and the questions you have about “upholding rationality”?
Another important thing that romance does is cause love.
Being loved (you know, that thing where you get to inject your utility function into another agent's system, such that they now have a desire to fulfill your preferences) has many obvious instrumental uses, in addition to the inherent value of loving another person.
Wait a few months to a year. It usually goes away.
Leaving aside the bloody obvious things (universal basic income or other form of care, global internet access, etc)
A prediction market. It's been tried, but it's dead due to gambling laws. Someone should give it a second try.
So basically it is Eternal September, then. It's just that LessWrong's "September" took the form of excessively/inappropriately contrarian people.
Thanks for the update! It’s hopeful / helpful to know that the quick recovery was indeed fairly permanent. Wish I could say my process was going that well!
Those examples of departing from left-canon (libertarianism, "feminism-isn't-perfect", and "PUA is often questionable in practice but not fundamentally bad from first principles") are okay by me. I depart from the left-canon on those points myself and find the leftie moral-outrage tactics on some of those fronts pretty annoying. All those things are still fundamentally egalitarian in values, just different in implementation. The homogeneity I was referring to was in egalitarianism and a certain type of emotional stance: a certain agreement concerning which first principles are valid and which goals are worthy, despite diversity in implementation.
(But, as ChristianKl pointed out, Moldbug himself was a commenter, and that predates me, so it's true that the seed has always been there.)
Huh. Oh right. I knew about the Moldbug thing, and I still said that.
I’m wrong. Mind changed. Good catch.
I mean, I still value diversity by default. Valuing homogeneity is something I’ve kind of come around to slowly and suspiciously (whereas before I just assumed it was bad by default.)
I suppose it could be so. It doesn't matter really, since the end result is the same. Still, I doubt it, because LessWrong is overwhelmingly left wing (and continues to be according to the polls; the right-wing and NRx voices belong to just a few very prolific accounts). And pretty much all the founding members of LessWrong, and, going back further, of transhumanism in general, were of a certain sort which I hesitate to call "left" or "liberal": socialists, libertarians, and anarchists were all represented, and certainly many early users were hostile to social justice's extremities, which is to be expected among smart people who are exposed to leftie stupidity much more often than other kinds of stupidity. But those were differences in implementation. We all essentially agreed on the core principles of egalitarianism and not hurting people, agreed that prejudice against race and gender expression is bad (which was an entirely separate topic from whether they're equal in aptitude), and agreed that conservatives, nationalists, and those sorts of people were fundamentally wrongheaded in some way. It wasn't controversial; it was just taken for granted that anyone who had penetrated this far into the dialogue believed these things to be true, in the same sense that we continue to take for granted that no one here believes in a literal theist God. (And right now, I know many former users have retreated into other, more obscure spin-off forums, and everything I said here pretty much remains true in those forums and blogs.)
But I'm less interested in who broke the walled garden / started Eternal September / whatever you want to call it (after all, I'm not mad that they came here; I got to learn about an interesting philosophy) and more interested in the meta-level principle: per my understanding of Neoreactionary philosophy, when one finds oneself in the powerful majority, one ought to just go ahead and exert that power and not worry about the underdog (which I still don't agree with, though I'm not sure why). And homogeneity is often more valuable than diversity in many cases; that's something I've actually kind of accepted.
Right, but you're not literally disregarding the consequences. Krishna was very much in favor of consequentialism over deontological constraints (in this scenario, the deontological constraint was "thou shalt not murder" and Krishna said "except for the greater good"), at least within that particular dialogue. The consequences are all that matter.
What you're doing is not being attached to the consequences. To put it in effective-altruist terms, disregarding the ego makes you favor utility over warm fuzzies: warm fuzzies appeal to your ego, which is tied to the visceral sensation that helping gives you, rather than to the actual, external, objective measures of helping.
(Ultimately, of course, squeezing philosophy out of thousand-year-old texts is a little like reading tea leaves, and the chosen interpretation generally says more about the reader than the writer. It's not a coincidence that my interpretation happens to line up with what I think anyway.)
The cultural meme of non-violence is pretty strong among Vedic cultures. As far as I know, it's the only culture for which vegetarianism is a traditional moral value (though I suppose the availability of lentils might have contributed to making that a more feasible option).
Recently I have realized that the underlying cause runs much deeper: what is taught by the Sequences is a form of flawed truth-seeking (thought experiments favored over real-world experiments) which inevitably results in errors, and the errors I take issue with in the Sequences are merely examples of this phenomenon.
I guess I’m not sure how these concerns could possibly be addressed by any platform meant for promoting ideas. You cannot run a lab in your pocket. You can have citations to evidence found by people who do run labs...but that’s really all you can do. Everything else must necessarily be a thought experiment.
So my question is, can you envision a better version, and what would be some of the ways that it would be different? (Because if you can, it ought to be created.)
Sure! But I think theism is irrelevant in this case. And this isn't mine, just the standard folksy Hinduism, the sort of wisdom you might get from a religious old lady. (And non-Abrahamic religions often do not map well onto "atheism/theism" dichotomies. You won't really capture the way Indians think about differences in beliefs by using those terms; it's often not an important distinction to them.)
Now, keep in mind that a lot of what I'm saying is modern Hindu exegesis of the Gita. As in, this is what the Gita means to many Hindus; I can't speak to whether this interpretation actually reflects what people in ~5 BC would have read in it.
In the story Arjun doesn’t want to kill his cousins (they’re at war over who will rule) because he loves them and violence is wrong. We have to assume for the sake of argument that Arjun should kill his cousins.
After the Upanishads and the spread of Buddhism, a recurring theme in Vedic religion is duty vs. detachment.
Arjun first argues that he’s emotionally attached to his cousins and therefore can’t fight, and Krishna shoots it down with all the usual arguments against attachment that you’re likely quite familiar with.
Then Arjun argues that it’s his duty not to kill, that it would be a sin. Krishna replies with some arguments which could fairly be called consequentialist.
Finally Arjun argues that he’s detached from the world and therefore he has no need to do bloody things, because he doesn’t care about the outcome of the stupid war in the first place. To that, Krishna says “You do your duty without being attached to the consequences.”
That phrase, "do your duty without being attached to the consequences," is taken as the central, abstract principle of the Gita. It's the part people cherry-pick, and we like to ignore or minimize the fact that it was originally spoken in support of violence. If you are feeling sad about a failure, an elderly person might come and try to console you with this aphorism. The idea has a life bigger than the Gita itself; growing up, I heard it from people who've never read the Gita. (Just as many Christians don't actually read the Bible, but have various notions about what it says.)
Which is how it relates to our discussion: you can be very driven and truth+outcome oriented without actually tying your ego to the outcome. Loss of ego need not imply loss of drive.
I don't really think this is a principled solution. I care more about elephants and dolphins than I do about, I dunno, pottos or something. This is gonna get really awkward if we ever meet any sentient aliens. And from the perspective of rhetoric, I doubt a geneticist with a graph is going to be what finally hammers in the universal-brotherhood message.
And what are you gonna do about HeLa cells?
We have very similar threads running in parallel right now. We both converged on the important thing being that ego is tied to something outside of oneself, rather than to a self-referential self-conception. I called it "truth+outcome orientation" and you called it "external". Do you have thoughts on my conceptualization of it?
Unlike you, I think ego size is irrelevant. A person with a small ego doesn't care what others think, nor do they really care what they think of themselves, and thus they live free from pain and guilt but also from pride. However, they can still care about underlying reality a lot, in a consequentialist sense.
Whereas a person with a very large ego might have virtue-ethics-style self-perceptions tied to how they behaved in a certain scenario, which comes out to the same thing if they're philosophically consequentialist. This essentially renders ego size irrelevant, except as a personality difference that will manifest in social presentation and emotions.
Your external vs. internal dichotomy means "self-opinion vs. others' opinions."
But truth+outcome orientation with low ego means "focusing primarily on the effect you have on reality, disregarding both the opinions of others and your self-perceptions,"
and truth+outcome orientation with high ego means "tying your self-perception to the effect you have on reality, disregarding the opinions of others, and not trying to trick your own self-perception but still being emotionally driven by it."
Best reference materials for calculating children's risk ratio? (Interested both in the risk to children themselves in terms of symptoms, and in some measure of the risk of children being carriers who spread it, such as rates of testing positive.)
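(For clarity, by "risk ratio" I mean the standard epidemiological relative risk, sketched here with hypothetical counts rather than real data: if a out of n_c children and b out of n_a adults have the outcome of interest,

RR = (a / n_c) / (b / n_a)

so RR < 1 would mean children face, or transmit, less risk than the adult comparison group.)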