If I understand correctly, discussions of superintelligence imply that a ‘friendly’ AGI would deliver exponentially increasing TFP growth even while the effective number of researchers remained flat or declined.
Additionally, the number of researchers as a share of the total human population could be flat or declining, because the AGI would do all the thinking, and do it better than any human or group of humans could.
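To spell out the first claim in the standard semi-endogenous growth framing (this is my own gloss, not something from the discussions I'm summarizing): idea production is usually written as

\dot{A}/A = \alpha S^{\lambda} A^{-\beta}

where A is TFP, S is the effective number of researchers, and \beta > 0 captures ideas getting harder to find. With only humans, keeping TFP growth from falling requires S to keep growing; if AGI can substitute for human researchers and its own research capacity compounds, S effectively grows without bound, so TFP growth can accelerate even while the human researcher count is flat or shrinking.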
If the AGI were to conclude that physics does not permit overcoming energy scarcity, and that space travel/colonization is not viable for humans because the engineering challenges really are insurmountable, then an engineered population crash would be the logical way to prolong human existence.
So in that environment, a friendly AGI would end up presiding over a shrinking mass of ignorant humans, assisted by a small elite of AI technicians who keep the machine running.
I don’t think my first two paragraphs are correct, but I suspect that puts me in a minority position here.