I’d like to distinguish between two things. (Bear with me on the vocabulary. The right terms probably exist, but I’m not quite hitting the nail on the head with the ones I’m using.)
1. Understanding why something is true, e.g. at the gears level, or somewhat close to the gears level.
2. Having good reason to believe that something is true.
Consider this example. I believe that the Big Bang was real. Why do I believe this? Well, there are other people who believe it and seem to have a very good grasp on the gears-level reasons. These people seem to be reliable. Many others also judge that they are reliable. Yada yada yada. So then, I myself adopt the belief that the Big Bang is real, and I am quite confident in it.
But despite having watched the Cosmos episode at some point in the past, I really have no clue how it works at the gears level. The knowledge isn’t Truly A Part of Me.
The situation with AI is very similar. Despite having hung out on LessWrong for so long, I really don’t have much of a gears-level understanding at all. But there are people for whom I have very high epistemic (and moral) respect who do seem to have a grasp on things at the gears level, and who are claiming to be highly confident about things like short timelines and us being very far from solving the alignment problem and not on pace to do so. Furthermore, lots of other people who I respect have also adopted this belief, e.g. other LessWrongers who are in a similar boat as me in not having expertise in AI. And as a cherry on top, I spoke the other day with a friend who isn’t a LessWronger but for whom I have a very high amount of epistemic respect. I explained the situation to him, and he judged all the grim talk to be, for lack of a better term, legit. It’s nice to get an “outsider’s” perspective as a guard against things like groupthink.
So in short, I’m in the boat of having 2 but not 1. And more generally, it seems appropriate to me to be able to have 2 without 1. It’d be hard to get along in life if you always required 1 to go hand in hand with 2. (Not to discourage anyone from also pursuing 1; I just don’t think it should be a requirement.)
Coming back to the OP, it seems to be mostly asking about 1, but kinda conflating it with 2. My claim is that these are different things that should be talked about separately, and that, assuming you too have a good amount of epistemic trust in Eliezer and all of the other people making these claims, you should probably adopt their beliefs as well.
Thanks for the reminder that belief and understanding are two separate (but related) concepts. I’ll try to keep that in mind for the future.
I don’t think I can fully agree with you on that one. I do place high epistemic trust in many members of the rationalist community, but I also place high epistemic trust in many people who are not members of this community. For example, I place extremely high value on the insights of Roger Penrose, based on his incredible, pioneering work across multiple scientific, mathematical, and artistic subjects. At the same time, Penrose argues in his book The Emperor’s New Mind that consciousness is not “algorithmic,” which for obvious reasons I find myself doubting. Likewise, I tend to trust the CDC, but when push came to shove during the pandemic, I found myself agreeing with people’s analysis here.
I don’t think that argument from authority is a meaningful response here, because there are more authorities than just those in the rationalist community, and even if there weren’t, sometimes authorities can be wrong. To blindly follow whatever Eliezer says would, I think, be antithetical to following what Eliezer teaches.
Agreed fully. I didn’t mean to imply otherwise in my OP, even though I see now that I did.
I think a good understanding of 1 would be really helpful for advocacy. If I don’t understand why AI alignment is a big issue, I can’t explain it to anybody else, and they won’t be convinced by me saying that I trust the people who say AI alignment is a big issue.
Agreed. It’s just a separate question.
and I sloppily merged the two together in 8, which, thanks to FinalFormal2 and others’ comments, I no longer believe needs to be a necessary belief of AGI pessimists.