Yeah, maybe it’s not really EMH-based but rather EMH-inspired or EMH-adjacent. The core idea is that if AGI is close, lots of big corporations are really messing up big time; it’s in their self-interest (at least, given their lack of concern for AI risk) to pursue it aggressively. And the other part of the core idea is that that’s implausible.
And the other part of the core idea is that that’s implausible.
I don’t see why that’s implausible. The condition I gave is also my explanation for why the EMH holds (in the markets where it does hold), and it doesn’t explain why big corporations should be good at predicting AGI.
it’s in their self-interest (at least, given their lack of concern for AI risk) to pursue it aggressively
So the questions I’m curious about here are:
What mechanism is supposed to cause big corporations to be good at predicting AGI?
Why doesn’t that mechanism also cause big corporations to understand the existential risk concerns?
I think the idea is that, in general, they are good at doing things that are in their self-interest. Since they don’t currently think AI is an existential threat, they should think it’s in their self-interest to build AGI if that’s possible; and if it is possible, they should be able to recognise that, since the relevant expertise in AI and AI forecasting is something they can acquire.
To be honest, I don’t put much stock in this argument, which is why I’m asking this question.