My view of Paul Christiano's P(doom) came from (IIRC) Scott Alexander's posts on Christiano vs. Yudkowsky, where I remember a Christiano quote saying that although he imagines there will be multiple AIs competing, rather than a single one emerging through a singularity, this could be a worse outcome because it would be much harder to control. From that, I concluded "Christiano thinks P(doom) > 50%", which I realize is pretty sloppy reasoning.
I will go back to those articles to check whether I misrepresented his views. For now I’ll remove his name from the post 👌🏻
Hey! Thanks for sharing the debate with LeCun; I found it very interesting, and I'll do more research on his views.
Thanks for pointing out that even a 1% existential risk is worth worrying about. I imagine that's true even in my moral system, once I notice that a 1% probability of humanity being wiped out amounts to 70 million expected deaths (1% of 7 billion), plus all the expected humans who would never come to be.
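Spelled out, that expected-value arithmetic is just the following (a rough sketch, using the ~7 billion figure above and ignoring the future people):

$$
\underbrace{0.01}_{P(\text{doom})} \times \underbrace{7 \times 10^{9}}_{\text{people alive}} = 7 \times 10^{7} = 70 \text{ million expected deaths.}
$$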
That's the logical side.
Emotionally, I find it WAY harder to care about a 1% x-risk. Scope insensitivity. I want to think about where else in my thinking this is causing output errors.
Yes… except that it was only given once.
I was catching up at the time, so I didn't want to get Discord access and see all the spoilers. And now I can't find the link.
Unrelated, but I don't know where else to ask:
Could somebody here provide me with the link to the Mad Investor Chaos Discord, please?
I'm looking for the Discord link! It was linked at some point, but I was catching up then, so I didn't want to see spoilers and didn't click or save it.
But now I’d like to find it, and so far all my attempts have failed.
TL;DR + question:
I appreciate you for writing that article. Humans seem bad at choosing what to work on. Is there a sub-field in AI alignment where a group of researchers focuses solely on finding the most relevant questions to work on and maintaining a list that others pick from?
• • •
(I don't think this is an original take.) Man, I genuinely think you make me smarter and better at pursuing my goals. I love reading your writing. I appreciate you for writing this.
I notice how easily I jump straight into "let's think about this question" / "let's do this thing" instead of first asking "Is this relevant? How do I know that? How does it serve my goals better than something else?" and then dropping it if it's not relevant.
In (one of) Eliezer's doom articles, he says that he needs to literally stand behind people while they work for them to do anything sensible (a misquote, and maybe hyperbolic on his end). From that, I judge that people in AI (and me, and probably people generally), even people who are very rigorous about execution, do NOT have sufficient rigor when it comes to choosing which task to do.

From what EY says and from what you say, I make the judgment that AI researchers (and people in general, and me) choose tasks more in terms of "what has keywords in common with my goals" + "what sounds cool" + "what sounds reasonable" + "what excites me", which is definitely NOT the same as "what optimizes my goals given my resources", and there aren't many excuses for doing that.
Except maybe this: it's a bias we only recently realized humans have. You didn't know. Now that you know, stop doing it.
(I understand that researchers walk a difficult line where they may ALSO need to optimize for "projects that will get funded", which may involve "sounds cool to grant-givers". But I believe the point still holds.)

There may be the additional problem of misplaced humility, whereby people assume that already-selected problems must be relevant "because my smart colleagues wouldn't be working on them otherwise", instead of it simply being the policy that the reasons for doing a project are known and challengeable.
Hi,
Somewhat unrelated: my question is about dissolution. What is the empirical evidence behind it? Could someone point me to it, preferably something short about brain structures?
Otherwise, it would seem too subject to hindsight bias: you've seen people make a mistake, and you build a brain model that makes that mistake. But it could be a different brain model; you just don't know, because your dissolution is unfalsifiable.
Thank you!
This is an ideal format for beginning rationalists; thank you so much for it. I am reading it every day and going to the full articles when I want more depth. It's also helped me "recruit" new rationalists among my friends. I think this work may have wide and long-lasting effects.
It would be extra nice (though I don't have the skills to do it myself) to have the links go to LW 2.0. Maybe you have reasons against that which I haven't considered?
Thank you :))