[Note: I am writing from my personal epistemic point of view, from which pretty much all the content of the OP reads as Obviousness 101.]
The reason people don’t know this is not that it’s hard to know. There is a common fallacy here: “if I say true things that people apparently don’t know, they will be shocked and turn their lives around”. In fact, most people around here have more than enough theoretical capacity to figure this out, and much more, without any help. The real bottleneck is human psychology, which cannot take certain beliefs seriously without a great deal of difficult work at the fundamentals. So “fact” posts about x-risk are mostly preaching to the choir. At best, you get some people acting out of scrupulosity and social pressure, which is pretty useless.
Of course I still like your post a lot, and I think it’s doing some good on the margin. It just seems like you’re spending energy fighting the wrong battle.
Note to everyone else: the least you can do is share this post until everyone you know is sick of it.
I would feel averse to this post being shared much outside LW circles, given its claim that AGI in the near future is plausible. I agree with the claim, but not really for the reasons provided in the post; I think it’s reasonable to put some probability (say 10–20%) on AGI in the next couple of decades, due to the possibility of unexpectedly fast progress and the fact that we don’t actually know what would be needed for AGI. But that isn’t really spelled out in the post, and the general impression one gets is that “recent machine learning advances suggest that AGI will be here within a few decades with high probability”.
This is a fairly radical claim which many relevant experts would disagree with, but which is not really supported or argued for in the post. I would expect many experts who saw this post to lower their credence in AI risk as a result: they would see a view they strongly disagreed with, find no supporting arguments they’d consider credible, and conclude that Raemon (and by extension AI risk people) didn’t know what they were talking about.
I do mostly agree with not sharing this as a public-facing document. This post is designed to be read after you’ve read the Sequences and/or Superintelligence and are already mostly on board.
I’m sympathetic to this. I do think there’s something important about making all of this stuff common knowledge in addition to making it psychologically palatable to take seriously.
Generally, yeah.
But I know that I got something very valuable out of the conversations in question, which wasn’t about social pressure or scrupulosity, but… just actually taking the thing seriously. This depended on my psychological state in the past year, and at least somewhat on the psychological effects of having a serious conversation about x-risk with a serious x-risk person. My hope is that at least some of the benefits of that can be captured in written form.
If that turns out to just not be possible, well, fair. But I think if at least a couple of people in the right life circumstances get 25% of the value I got from the original conversation(s) from reading this, it’ll have been a good use of time.
I also disagree slightly with “the reason people don’t know this isn’t that it’s hard to know.” It’s definitely achievable to figure out most of the content here. But there’s a large search space of things worth figuring out, and not all of it is obvious.
All sounds sensible.
Also, reminds me of the 2nd Law of Owen: