There are a lot of smart people outside of “the community” (AI, rationality, EA, etc.). To throw out a name, say Warren Buffett. It seems that an incredibly small number of them are even remotely as concerned about AI as we are. Why is that?
I suspect that a good number of people, both inside and outside of our community, observe that the Warren Buffetts of the world aren’t panicking, and then adopt that position themselves.
Most high-status people, including Warren Buffett, straightforwardly haven’t considered these issues much. However, among the ones I’ve heard of who have bothered to weigh in on the issue, like Stephen Hawking, Bill Gates, Demis Hassabis, etc., they do seem to come down on the side of “this is a serious problem”. On the other hand, some of them get tripped up on one of the many intellectual land mines, like Yann LeCun.
I don’t think that’s unexpected. Intellectual land mines exist, and complicated arguments like the ones supporting AGI risk prevention are bound to cause people to make wrong decisions.
Most high-status people, including Warren Buffett, straightforwardly haven’t considered these issues much.
Not that I think you’re wrong, but what are you basing this off of and how confident are you?
However, among the ones I’ve heard of who have bothered to weigh in on the issue, like Stephen Hawking, Bill Gates, Demis Hassabis, etc., they do seem to come down on the side of “this is a serious problem”.
I’ve heard this too, but at the same time I don’t see any of them spending even a small fraction of their wealth on working on it, in which case I think we’re back to the original question: why the lack of concern?
On the other hand, some of them get tripped up on one of the many intellectual land mines, like Yann LeCun. I don’t think that’s unexpected. Intellectual land mines exist, and complicated arguments like the ones supporting AGI risk prevention are bound to cause people to make wrong decisions.
Yeah, agreed. I’m just confused about the extent of it. I’d expect a lot, perhaps even a majority, of “outsider” smart people to get tripped up by intellectual land mines, but instead of 60% of these people it feels like it’s 99.99%.
For the specific example of Warren Buffett, I suspect that he probably hasn’t spent much time thinking about it, nor does he feel much compulsion to understand the topic, as he doesn’t currently see it as a threat. I know he doesn’t really invest in tech, because he doesn’t feel that he understands it sufficiently, so I wouldn’t be surprised if his position were along the lines of “I don’t really understand it; let those who can understand it think about it”.
People like Warren Buffett have made their fortune by assuming that we will continue to operate with “business as usual”. Warren Buffett is a particularly bad person to list as an example for AGI risk, because he is famously technology-averse; as an investor, he missed most of the internet revolution (Google/Amazon/Facebook/Netflix) as well.
But in general, most people, even very smart people, naturally assume that the world will continue to operate the way it always has, unless they have a very good reason to believe otherwise. One cannot expect non-technically-minded people who have not examined the risks of AGI in detail to be concerned.
By analogy, the risks of climate change have been very well established scientifically (much more so than AGI), those risks are relatively severe, the risks have been described in detail every 5 years in IPCC reports, there is massive worldwide scientific consensus, lots and LOTS of smart people are extremely worried, and yet the Warren Buffetts of the world still continue with business as usual anyway. There’s a lot of social inertia.
When I say smart people, I am trying to point to intelligence that is general rather than narrow. Some people are really good at, e.g., investing but not actually good at other things. That would be a narrow intelligence. A general intelligence, to me, involves more broadly applicable skills.
Regarding Warren Buffett, I’m not actually sure whether he is a good example or not. I don’t know too much about him. Ray Dalio is probably a good example.
One reason might be that AGIs are really not that concerning, and the EA/rationality community has developed a mistaken model of the world that assigns a much higher probability to doom by AGI than it should, and those smart people outside the group do not hold the same beliefs.
Generally speaking, they haven’t really thought about these risks in detail, so the fact that they don’t hold “the MIRI position” is not really as much evidence as you’d think.
Can you be more specific about what you mean by “intellectual landmines”?