idk, I feel like Lightcone team talks about concrete things all the time?
I did think specifically of teammates when I said next-to-none.
I think you should potentially question your own epistemics if they lead you to the conclusion that you and your friends are some of the only competent-at-living-on-the-object-level people in the world, especially when what you’re describing is such an obviously-valuable skill that would be instrumentally useful for basically all real world impact. (If that’s not what you were saying, feel free to ignore this.)
People in your social circles are right about AI risk. Others are wrong. I understand the desire to try to find explanations for that. There are lots of explanations that don’t require beliefs like “we’re better at thinking than everyone else”. For example, you can believe that human civilization incentivizes lots of smart people searching thought-space in different directions, and you happen to have been adjacent to a fruitful vein that others have thus far failed to recognize. Believing instead that you have been successful “because nobody else really thinks on the object level anymore” is going to make it impossible to cooperate with or learn from the other smart serious people who do in fact exist. Whether or not you were planning to participate in that external cooperation, it’s a bad communal norm to be dismissive of potential allies.
I do question my own epistemics? Not sure about your argument regarding why I should, but I do.
Your second paragraph reads to me as “don’t have these beliefs because it would be socially costly”.
Yeah, you’re right, that is what my point boils down to. I think it’s a bad viewpoint to advocate that one’s tribe publicly endorse, independent of whether one believes it’s true.
Maybe you can consider LW a non-public space as far as “speaking candid thoughts” goes, and you’d have better data on that than I do. But for example, I can promise you that if I try to send this post to the average persuadable ML person, they will basically check out when they read something like that. And that’s a real concrete cost, one that shouldn’t just be waved away with “but I think it’s true, and thus to promote good communication norms I should let that belief be public.”
Oh. I do. Why don’t you?
EDIT: Nvm, I misunderstood the point. I thought the parent comment was arguing that people were good at being concrete, but apparently that was not the point; see the followup thread with Ben.
Hmm, it seems like the story (to which I am quite sympathetic) is “people are very competent at being concrete in domains where they have tons of feedback from reality, but stop being concrete as soon as they move to a domain where that’s not the case”.
This story has people being good at the skill when it is actually important for their jobs, so it’s no longer subject to the critique “but this skill is so instrumentally useful that everyone would use it”.
I definitely think Eliezer’s claim is very hyperbolic in its implications[1], but I do think it is pointing at some real phenomenon where many people don’t particularly try to be concrete in domains they don’t have lived experience in.
[1] Though who knows if it is literally false—what does it mean to be in a “lineage”? How many is implied by “one of the last”? I didn’t learn concreteness from Feynman; I can remember using it in random philosophical conversations in high school, long before I knew who Feynman was or what EA / rationality were. Does that mean I wouldn’t count as “one of the last of the lineage”, even if I have the skill?