In the 18th century, it was conscious beings competing against conscious beings. There is no reason that needs to continue to hold. Once all the problems that require general intelligence have been solved, only something on the level of expert systems will be needed; anything more will be wasteful. Maybe the most efficient things will be minds, but they will be cold, dark minds: they could easily lack love, curiosity, boredom, play, or other fundamental human values. They will have no surplus, and so no leisure.
Shades of Blindsight there.
That does raise the question of “efficient for what?”, though. If we discount the idea that nonsapient agents are likely to end up as better fully general optimizers than sapient ones (and thus destined to inherit the universe whether we like it or not), we’re left with a mess of sapient agents that presumably want to further their own interests. They could potentially outcompete themselves in limited domains by introducing subsapient, domain-specific optimizers, but in most cases I can’t think of a reason why they’d want to.
Is Blindsight some kind of cool science fiction thing that I should know about?
Reproducing and taking resources from other agents are enough to cause the apocalypse. If sapients band together to notice and eliminate resource-takers, they can prevent this sort of apocalypse. To do so they will essentially create a singleton, finally ending evolution / banishing the blind idiot God.
Ah, sorry. Novel by Peter Watts, available here. Touches on some of the issues you introduced.
These are all good points.