“Like if we increased yearly economic growth by 5% (for example 2% to 2.1%), what effect would you expect that to have?”
In my personal experience, academics tend to prefer working on problems that appear beneficial on their face; Manhattan Projects and AI alignment groups both exist (detrimental and non-obviously beneficial, respectively), but for the most part we get projects like eco-friendly technology and efficient resource allocation in specific domains.
Because of this, greater economic growth means more resources to bring to bear on other scientific/engineering problems, by way of research on apparently beneficial subjects like power generation, efficiency, quantum computing, etc. As noted in my previous comment, that economic growth (and these increased resources) will also lead to a larger number of researchers and engineers.
Fields of study associated with X-risks are often popular enough that development to dangerous levels is a genuinely urgent possibility. As such, I would expect them to be bounded by academic progress rather than by resource availability. (Increased hardware capability might be a bottleneck for AGI development, but at this point I doubt it: at least one analysis I've encountered, which I have not vetted, suggests that, assuming perfectly efficient computation using parallel graph-based operations, modern supercomputers are only one or two orders of magnitude away from the raw computational ability of the human brain.) (Increased personnel is beneficial to these fields, but that's addressed below and in the second part of this comment.)
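To put that orders-of-magnitude comparison in concrete terms, here is a back-of-the-envelope sketch. Both figures are rough assumptions for illustration only (the brain number is an upper-end estimate of the kind that analysis relies on; the supercomputer number is roughly the scale of a current exascale machine), not vetted measurements:

```python
import math

# Illustrative figures only, not vetted measurements.
brain_ops_per_second = 1e20   # assumed upper-end estimate of the brain's raw computation
supercomputer_flops = 1e18    # roughly a current top (exascale) supercomputer

gap = math.log10(brain_ops_per_second / supercomputer_flops)
print(f"gap of about {gap:.0f} orders of magnitude")  # prints roughly 2
```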
So the changes caused by these increased resources would mostly occur in other fields, which are generally geared towards either longer and higher-quality lives (which encourages less ‘practical’ pursuits like philosophy and unusual worldviews such as Effective Altruism, potentially increasing deviation from the economic incentives promoting dangerous technology, and which also feeds back into economic growth) or a better general understanding of the world (which accelerates dangerous, non-dangerous, and anti-X-risk research, such as alignment work, to a similar degree).
Regarding that second category, many conventional fields are actually working directly on possible solutions to X-risk problems, whether or not their researchers believe in the dangers. Climate change, resource shortages, and asteroid risk are all partly addressed by space research, and the first two are also partly addressed by ecological research. Progress in fields like psychology/neurology and sociology/game theory is potentially applicable to AI alignment, and can also help encourage large-scale coordination between organizations. The benefits from these partially counterbalance whatever impact economic growth does have on more dangerous fields like directed AGI research.
And on a separate note, I would consider “dying with dignity” to also mean “not giving up on improving people’s lives just because we’re eventually all going to die”. This is likely not what Eliezer meant in his post, but I doubt he (or most people) would be actively opposed to the idea. From this perspective, many conventional research directions (which economic growth tends to help) are useful for dying with dignity, even the ones that don’t directly apply to X-risk.
“I suspect the impact is net-negative because increasing both amounts of researchers shortens the timelines and longer timelines increase our odds as EA and AI safety are becoming much more established.”
This is heading into more speculative territory, since I doubt either of us is an experienced professional sociologist. Still, to my knowledge, paradigm changes in a field rarely come from convincing its current members; they usually involve new entrants, without predefined biases and frameworks, leaning towards the new way of looking at things.
So the rate at which EA & AI safety become established would also increase significantly if there were a large influx of new academics with an interest in altruistic academic efforts (because their own communities had been helped by such efforts), meaning the growth in research population should be weighted more towards safety/alignment than the current population is.
Whether this shift in proportions is large enough to counteract the accelerated progress of technologies like AGI is difficult to judge. For one thing, due to threshold effects I'd expect research progress as a function of research population to look something like an irregular step function with sigmoid-shaped transitions between steps, showing up either in the progress curve itself or in one of its lower-order derivatives, so population doesn't bear a direct relation to progress levels (a toy sketch of the shape I mean is below). For another, as you mentioned, other talented individuals in this influx would be pushed towards these fields by the challenges and income they offer; while that seems at first glance to be the weaker of the two incentives, it may well be the greater, and would then falsify my assumption that EA/alignment comes out ahead in population growth.
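To gesture at that shape, here is a toy sketch: the thresholds, step sizes, and sharpness are all made up, and the `progress` function is purely illustrative rather than a model of any real field.

```python
import numpy as np

def progress(population, thresholds=(1e3, 1e4, 1e5),
             step_sizes=(1.0, 2.5, 4.0), sharpness=5.0):
    """Toy model: progress as an irregular staircase over (log) researcher count.

    Each threshold is a hypothetical head-count at which a breakthrough becomes
    feasible; crossing it adds that step's worth of progress, smoothed by a
    logistic (sigmoid) transition. All numbers are made up for illustration.
    """
    log_pop = np.log10(population)
    total = 0.0
    for threshold, size in zip(thresholds, step_sizes):
        total += size / (1.0 + np.exp(-sharpness * (log_pop - np.log10(threshold))))
    return total

# Adding researchers matters a lot near a threshold and very little between them:
for pop in (500, 1_000, 2_000, 50_000, 100_000):
    print(f"{pop:>7} researchers -> progress {progress(pop):.2f}")
```

The only point of the sketch is that doubling the researcher population can produce either a large jump or almost nothing, depending on where the field sits relative to the next threshold.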
In a surface-level analysis like this, I generally assume equivalence in the important respects (research progress, in this case) for such ambiguous situations, but you are correct that it might be weighted towards the less desirable outcome.