(I’m not an economist, but my understanding is that...) The EMH works in markets that satisfy the following condition: if Alice is way better than the market at predicting future prices, she can use her superior prediction capability to gain more and more control over the market, up to the point where her trading makes market prices reflect her predictions.
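To make that condition concrete, here is a toy simulation (my own sketch, not anything standard from the finance literature; the traders, beliefs, and parameters are all made up for illustration). Traders repeatedly stake their wealth on a binary event in proportion to their beliefs, the market price is the wealth-weighted average belief, and the best-calibrated trader ("Alice") ends up holding nearly all the wealth, dragging the price toward her probability:

```python
import random

random.seed(0)

TRUE_P = 0.7                 # true probability of the event each round
beliefs = [0.7, 0.5, 0.9]    # Alice (index 0) is calibrated; the others are off
wealth = [1.0, 1.0, 1.0]

def market_price(wealth, beliefs):
    """Price of the 'yes' share: wealth-weighted average belief."""
    total = sum(wealth)
    return sum(w * b for w, b in zip(wealth, beliefs)) / total

for t in range(1, 5001):
    price = market_price(wealth, beliefs)
    outcome = random.random() < TRUE_P
    # Parimutuel settlement: trader i stakes fraction b_i of her wealth on
    # 'yes' and 1 - b_i on 'no'; the winning side splits the whole pool
    # in proportion to their stakes, so total wealth is conserved.
    pool = sum(wealth)
    if outcome:
        stakes = [w * b for w, b in zip(wealth, beliefs)]
    else:
        stakes = [w * (1 - b) for w, b in zip(wealth, beliefs)]
    winning_total = sum(stakes)
    wealth = [pool * s / winning_total for s in stakes]
    if t % 1000 == 0:
        share = wealth[0] / sum(wealth)
        print(f"round {t}: price={price:.3f}, Alice's wealth share={share:.3f}")
```

Running this, the price starts at the naive average of the three beliefs and converges toward 0.7 as Alice's wealth share approaches 1: her relative wealth grows each round in expectation because her belief matches the true probability, which is the mechanism I mean by "superior prediction capability buys control over prices".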
If Alice is way better than anyone else at predicting AGI, how can she use her superior prediction capability to gain more control over big corporations? I don’t see how an EMH-based argument applies here.
Yeah, maybe it’s not really EMH-based but rather EMH-inspired or EMH-adjacent. The core idea is that if AI is close, lots of big corporations are really messing up big time: it’s in their self-interest (at least, given their lack of concern for AI risk) to pursue it aggressively. And the other part of the core idea is that that’s implausible.
I don’t see why that’s implausible. The condition I gave is also my explanation for why the EMH holds (in markets where it does), and that condition doesn’t explain why big corporations should be good at predicting AGI.
“it’s in their self-interest (at least, given their lack of concern for AI risk) to pursue it aggressively”
So the questions I’m curious about here are:
What mechanism is supposed to cause big corporations to be good at predicting AGI?
How come that mechanism doesn’t also cause big corporations to understand the existential risk concerns?
I think the idea is that, in general, they are good at doing things that are in their self-interest. Since they don’t currently think AI is an existential threat, they should think it’s in their self-interest to make AGI if possible; and if it is possible, they should be able to recognise that, since the relevant expertise in AI and AI forecasting is something they can acquire.
To be honest, I don’t put much stock in this argument, which is why I’m asking this question.