I haven't read your book, so I'm not sure if you have already answered this.
What is your assessment of MIRI's current opinion that increasing the global economic growth rate is a source of existential risk?
How much risk is increased for what increase in growth?
Are there safe paths? (Maybe catch-up growth in India and China is safe?)
Greater economic growth means more money for AI research from companies and governments, and if you think that AI will probably go wrong, then this is a source of trouble. But there are benefits as well, including increased charitable contributions to organizations that reduce existential risk, and better educational systems in India and China, which might produce people who end up helping MIRI. Overall, I'm not sure how this nets out.
Catch-up growth is not necessarily safe, because it will increase the demand for products that use AI and so increase the amount of resources that companies such as Google devote to AI.
The only safe path is someone developing a mathematically sound theory of friendly AI, and this will be easier if we get intelligence enhancement through eugenics (probably via China).