Strong downvoted. This seems useful at a naive level, but knowing someone had a CRM for our friendship would make me feel quite uncomfortable, objectified, and annoyed; I would likely stop being friends with that person, and I'm confident that most people who aren't fairly deep into the rationalist community would feel similarly.
(I promised I’d publish this last night no matter what state it was in, and then didn’t get very far before the deadline. I will go back and edit and improve it later.)
I feel like I keep, over and over, hearing a complaint from people who get most of their information about college admissions from WhatsApp groups or their parents' friends or a certain extraordinarily pervasive subreddit (you all know what I'm talking about). Something like "College admissions is ridiculous! Look at this person, who was top of his math class and took 10 AP classes and started lots of clubs, and he didn't get into a single Ivy, he's going to UCLA!" The closest analogy I can find for this is something like "look at this guy, he's 7 feet tall, and he didn't even make it to the NBA!" Both complaints miss something important: they confuse one small, legible slice of the overall metric with the reality it's supposed to measure.
This list is quite good: https://mecfsroadmap.altervista.org/ (feel free to DM me if you want to chat more).
Epistemic Status: Rant. Written very rapidly, and on reflection I'm not sure I fully endorse it; Cunningham's Law says that this is the best way to get good takes quickly.
Rationalists should win. If you have contorted yourself into alternative decision theories that leave you vulnerable to Roko's Basilisk or whatever, and normal CDT or whatever actual humans implement in real life wouldn't leave you vulnerable to stuff like this, then you have failed, and you need to go back to trying to be a normal person using normal decision procedures instead of mathing your way into being "forever acausally tortured by a powerful intelligent robot."
If the average Joe on the street would not succumb to their mind being hacked by Eliezer Yudkowsky, or hell, by a late 2022 chatbot, and you potentially would (by virtue of being part of the reference class of LessWrong users or whatever), then you have failed, and it is not obvious you can make an expected positive contribution to the field of AI risk reduction at all without becoming far more, for lack of a better word, normal. I don't understand how spending your time working on increasingly elaborate pseudophilosophical things that you then call "AI alignment" is supposed to work if you are also the type of person who is highly vulnerable to getting mindhacked by ChatGPT; perhaps this is a bucket error or I'm attacking a strawman? I don't think Eliezer or Nate or whoever would fall to this failure mode, but in general the more philosophical parts of alignment feel worrying to me (specifically I mean the MIRI-CFAR-sphere, though again I may be attacking a strawman), because of the potential negatives of having people close to alignment solutions be unusually vulnerable to being hacked by AI.
Yeah, this is basically the thing I’m terrified about. If someone has been convinced of AI risk with arguments which do not track truth, then I find it incredibly hard to believe that they’d ever be able to contribute useful alignment research, not to mention the general fact that if you recruit using techniques that select for people with bad epistemics you will end up with a community with shitty epistemics and wonder what went wrong.
Cool, I feel a lot more comfortable with your elaboration; thank you!
I feel pretty scared by the tone and implication of this comment. I'm extremely worried about whether we're selecting our arguments here for truth or merely for convincingness, and mentioning a type of propaganda and then talking about how we should use it to make people listen to our arguments feels incredibly symmetric. If the strength of our arguments for why AI risk is real does not hinge on whether those arguments are centrally true, we should burn them with fire.
FWIW, for most people who are smart enough to get into MIT, it’s reasonably trivial to get good grades in high school (I went to an unusually difficult high school, took the hardest possible courseload, and was able to shunt this to <5 hours of Actual Work a week / spent most of my class time doing more useful things).
Most people are disconnected from reality, most of the time. This is most noticeable to me when it manifests itself in scope insensitivity, but it appears in other ways too. In this case, you choosing to spend two hours walking to save costs is not a “keep in touch with reality” measure, it is a “lsusr is wasting his time” measure. Two hours of your time could be spent on things that really matter to you. Don’t quit Robotics Club if you like Robotics Club, but recognize that you do it for fuzzies and not for utils.
The average person in a developed country is probably net-neutral or even slightly net-positive to humans as a whole. I agree with you that evil happens when you are separated from the pain you inflict on other people. But your opportunity costs are real, actual costs too. If you make decisions (like quitting a project) that affect lots of people because you don't have enough hours in the day, and then waste some of the hours you do have on a misguided idea of "staying in touch with reality," you have failed to stay in touch with reality.
Still, I think parts of your core message are really important. Evil does happen when you separate yourself from the pain you inflict, because it’s very easy to abstract it away. This is how child slavery and other moral atrocities continue. Also, it’s actually important to stay in touch with reality and not become the “longtermist Chad” or something. You stay in touch with reality by being careful about the decisions you make, being cognizant of what you’re giving up and trading off against, and yes, by being willing to be the boots on the ground whenever it’s needed. But you gain no points by doing it when it’s not, when it’s actively harmful, when your time is limited and you have more valuable things to do.
Continuing the metaphor, what the authors are saying looks somewhat similar to stochastic gradient descent (which would be the real way you minimize the distance to the finish in the maze analogy).
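To make the comparison concrete, here's a minimal toy sketch of what "SGD on distance-to-finish" could look like. This is my own framing, not the authors', and it ignores walls entirely, treating the maze as an open 2D plane with a noisy gradient:

```python
import random

goal = (10.0, 10.0)  # the "finish" of the toy maze

def noisy_grad(pos):
    # Gradient of squared distance to the goal, plus noise standing in
    # for the stochasticity (only seeing part of the signal each step).
    gx = 2 * (pos[0] - goal[0]) + random.gauss(0, 1)
    gy = 2 * (pos[1] - goal[1]) + random.gauss(0, 1)
    return gx, gy

pos, lr = (0.0, 0.0), 0.05
for _ in range(500):
    gx, gy = noisy_grad(pos)
    pos = (pos[0] - lr * gx, pos[1] - lr * gy)

print(pos)  # ends up close to (10, 10) despite the noise
```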
The concept of “world war” doesn’t need to mean “most of the world’s population is involved in this war,” not when nuclear weapons are at stake. A nuclear exchange between NATO and Russia is world-shifting in a way that a nuclear exchange between Pakistan and India is not. Calling nuclear war between major Western powers (which will almost certainly have devastating economic and physical effects on the entire world) a “world war” seems perfectly reasonable at that point, even if most of the world is not directly involved.
Every submission must be a 26-letter combination of random lowercase letters with no spaces. The entry that is closest to a randomly generated submission wins.
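For what it's worth, here's a rough sketch of how I'm reading that rule. The actual closeness metric isn't specified, so the per-position alphabetical distance below is purely my own assumption:

```python
import random
import string

def make_submission():
    # A valid entry: 26 random lowercase letters, no spaces.
    return "".join(random.choice(string.ascii_lowercase) for _ in range(26))

def distance(entry, target):
    # Assumed metric (not given in the rules): sum of per-position
    # alphabetical distances between the entry and the target string.
    return sum(abs(ord(a) - ord(b)) for a, b in zip(entry, target))

target = make_submission()                       # the randomly generated target
entries = [make_submission() for _ in range(5)]  # hypothetical entries
winner = min(entries, key=lambda e: distance(e, target))
print(winner, distance(winner, target))
```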
Gain = (Benefits − Costs) × Probability

It would be more like Gain = Benefits × P(benefits) − Costs × P(costs), especially if there are failure modes. I'd also try to avoid framing it as "benefits are almost unlimited while costs are finite"; while an IAL is great, the benefits of an IAL are just as finite as the costs are.
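To see how the two framings come apart, here's a quick sketch with made-up numbers (purely illustrative, not estimates of an IAL's actual costs or benefits):

```python
# Purely illustrative numbers, not estimates from the post or from me.
p_benefit, benefits = 0.10, 1000   # chance the IAL takes off, value if it does
p_cost, costs = 1.00, 50           # development costs get paid either way

naive_gain = (benefits - costs) * p_benefit             # (1000 - 50) * 0.1 = 95
adjusted_gain = benefits * p_benefit - costs * p_cost   # 100 - 50 = 50
print(naive_gain, adjusted_gain)
```

The first formula quietly discounts the costs by the probability of success; the second charges them in full, which matters exactly when failure modes exist.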
That being said, I think that if you can make an IAL that is exceptionally good on many dimensions and get enough interest/funding behind it, it would be an extremely worthwhile project.
Not a direct answer, so I'm leaving this as a comment, but the United States has for some time now been able to control the vast majority of its populace through military force if it wanted to. The idea that citizens can stop a coup or revolt with guns seems fairly absurd given the gap between "the weaponry citizens have, like rifles" and "the weaponry the military has, like tanks," although I'd be happy to have someone prove me wrong.
A few things that were touched on, but that I'd like to see further discussion of:
If Omicron is importantly less severe than Delta, does it continue to pose any sort of humanity-wide threat other than the obvious potential overreaction and politicians doing things to seem like they have a handle on the situation? Conditional on Omicron being more vaccine avoidant and less severe, is there any good reason not to simply continue reopening and work on better booster/Paxlovid distribution systems, instead of trying to use mask mandates/lockdowns?
Moreover, how much of the immune evasion could be due to just… erosion? We know that vaccines are getting less effective over time, and that’s doubly true for non-mRNA vaccines like J&J/AZ. How much stock can we put in the hypothesis that people with boosters will get infected with Omicron at a ~similar rate to which people with two doses of the vaccine got infected with Delta?
Until new information comes out which clarifies the infectivity and severity of Omicron, especially against the vaccinated, I’m potentially more worried about outsized and concerning responses to the new variant than I am about Omicron itself. To be clear, this isn’t diminishing the potential bad results of Omicron—but in terms of actual infectivity and severity, I don’t expect it to be a lot worse than Delta. Vaccine resistance is more concerning, especially considering original antigenic sin.
That being said, my school (and California) has required masking throughout the pandemic, and we closed schools for more than a year. I'm deeply concerned about the potential mental health effects of going back to virtual school, both on myself and on people I care about. The current masking requirements for schools (even ones that are 80+ percent vaccinated) being more stringent than the masking requirements for basically anything else, including bars, is absolutely ridiculous to me. This is only going to get worse, as parents in my district will do absolutely anything to make sure their whims are satisfied. I'm fairly confident that if Omicron looks to be a general threat, schools will close en masse regardless of the actual danger to students.
I think your model is largely accurate; the only point I would disagree with is the last, where I'd put the chances, considering vaccines, Paxlovid, and Fluvoxamine, at <10%. I'd add one final chance at ~40% that schools from kindergarten through college widely close for >3 months.
Done! This is a very cool opportunity.
I’m a Davidson YS and have access to the general email list. Is there a somewhat standard intro to EA that I could modify and post there without seeming like I’m proselytizing?
There's no way world governments would coordinate around this, especially since a) it's a problem that most people barely understand and b) the policy would completely cut off all human technological progress. No one would support it. Hell, even if ridiculously powerful aliens à la God came and told us that we weren't allowed to build AGI on threat of eternal suffering, I'm not sure world governments would coordinate around this.
If alignment was impossible, we might just be doomed.
Does buying shorter-term OTM derivatives each year not work here?