My comment may be considered low effort, but this is a fascinating article. Thank you for posting it.
SomeoneYouOnceKnew
While I find the Socrates analogy vivid and effective, I propose putting critics on posts in the same bucket as lawyers. Socrates had a certain set of so-called principles, choosing to die for arbitrary reasons; I find that most people are not half as dogmatic as Socrates, so the analogy/metaphor falls short.
While my post is sitting at negative two with no comments or feedback… modeling commenters as if they were lawyers might be better? When the rules lawyers have to follow show up, lawyers (usually) do change their behavior, though they naturally poke and prod as far as they can within the bounds of the social game that is the court system.
But also, everyone who is sane hates lawyers.
Part of the problem with verifying this is the number of machine learning people who got into machine learning because of LessWrong. We need more machine learning people who came to doom conclusions of their own accord, independent of HPMOR etc., as a control group.
As far as I can tell, the people worried about doom overlap 1:1 with LessWrong posters/readers, and if it were such a threat, we'd expect some number of people to come to those conclusions independently, of their own accord.
This post was inspired by parasitic language games.
Lawyers And World-Models
That framing makes sense to me.
Is knowing someone’s being an asshole an aspect of hyperstition?
I met an ex-Oracle sales guy turned medium-bigwig at other companies once.
He justified it by calling it "selling ahead", and it started because the reality is that if you tell customers no, you don't get the deal. They told customers they would have the features they requested. The devs would only get notice later, when the deal was signed; no one in management ever complained, and everyone else on his "team" was doing it.
How do we measure intent?
Unless you mean to say a person who actively and verbally attempts to shun the truth?
Any preferred critiques of Graham’s Hierarchy of Disagreements?
https://en.m.wikipedia.org/wiki/File:Graham's_Hierarchy_of_Disagreement-en.svg
Extra layers or reorderings?
Does the data note whether the shift is among new machine learning researchers? Among those who have a p(Doom) > 5%, I wonder how many would come to that conclusion without having read lesswrong or the associated rationalist fiction.
I’m avoiding terms like “epistemic” and “consequential” and such in this answer, and instead attempting to give a colloquial one, to what I think is the spiritual question.
(I'm also deliberately avoiding iterating over the harms of blind traditionalism and religious thinking. I'm assuming that, since you're an atheist, you don't reject most of the criticisms of religion.)
(Also also, I am being brief. For more detail I would point you to the library, to read up on Christianity's role in the rise of the working and uneducated classes in the 1600s-1800s, and perhaps some anthropologists' works for more modern iterations.)
Feel free to delete/downvote if this is unwanted.
It's hard to say "all religion is bad" when, without Christianity, Gregor Mendel's pea studies, for example, might have come about a decade or more later. In the absence of strong institutions, Christian religion often provided structure and basic education where there was none, long before the government began to provide schooling.
Sect leaders needed you to be able to read the Bible, and would often teach you how to write as well. Because of this, it's hard to refute the usefulness of Christianity as an easy cultural through-line: keeping its constituents culturally updated and locally connected to, and invested in, the people around them.
The various sects of Christianity benefited greatly when their local populace was well-read and understood the Bible, so religious leaders, pastors, etc. were incentivized to educate and build up the people around them. Whatever one might think about said leaders being unethical, they did provide a service, and they often taught people skills and information they did not have before, because they were naturally invested in their local communities.
Constituents who were wealthier, happier, better connected, and better socialized were more able to coordinate. If you confessed your concerns to your pastor, they, as coordination-problem-overcomers, would often put you in contact with people in your local area who had the means and ability to help with your problem, from rebuilding a burned-down barn to putting in a wheelchair ramp for disabled people in trailer parks.
That is… I have no qualms with: “if it feels good, and doesn’t harm others or impinge on their rights, it’s okay to do it, with caveats*.”
When the Platonic ideal of the communal Christian fellowship operates, it is well worth the time and energy spent. One need only listen to the song being sung to tell whether it is from Eru Ilúvatar or from Morgoth's discord.
Perhaps “term” is the wrong, ahem, term.
Maybe you want "metrics"? There are lots of non-GDP metrics that could be used to track AI's impact on the world.
Instead of the failure mode of saying “well, GDP didn’t track typists being replaced with computers,” maybe the flipside question is “what metrics would have shown typists being replaced?”
Have you tried CGP Grey's themes?
What material policy changes are being advocated for, here? I am having trouble imagining how this won’t turn into a witch-hunt.
If you find yourself leaning into conspiracy theories, you should consider whether you're stuck in a particular genre and need to artificially inject more variety into your intellectual, media, and audio diets.
Confirmation bias leads to feeling like every song has the same melody, but there are many other modes of thought, and, imo, sliding into a slot that checks more of <ingroup>'s boxes is an indicator that our information feeds/sensors are bad, more than that we are stumbling on truth.
Not gonna lie, I lost track of the argument on this line of comments, but pushing back on word-bloat is good.
Thanks! Though, hm.
Now I’m noodling how one would measure goodharting.
What kind of information would you look out for, that would make you change your mind about alignment-by-default?
What information would cause you to inverse again? What information would cause you to adjust 50% down? 25%?
I know that most probability mass is some measure of gut feel, and I don't want to nickel-and-dime you; I'm more trying to get a feel for what information you're looking for.
This was an extremely enjoyable read.
Good fun.