The ancient prophecies of paperclip maximizers seem to point towards #1.
But it seems to me that #4 has the greatest incentive to be general, to work across many different domains, because the stock market contains many different kinds of companies.

For example, if we look at nanotechnology, #2, #3, and #5 need to be able to compose a short interesting story about nanotechnology. But that’s about human psychology; the story has to be interesting, not realistic. On the other hand, #4 needs to be able to look at a company that claims to produce nanotechnology and evaluate whether its projects are realistic or just nice-sounding nonsense. (#6 and #7 also feel too narrow.)

Of course, “having an incentive” is not the same as “having the problem solved”.
The battle between #4 (or another general machine) and #8 will probably depend on the state of hardware, our knowledge of neurobiology, and our knowledge of intelligent algorithms. If we understand “the essence of intelligence” formally enough, we may be able to write intelligent code directly. However, if we don’t get much closer to useful formal definitions, but we do have insanely powerful hardware and we know which parts of human brain physiology are important, uploads may come first.

Note that brain uploads may not be recursively self-improving, so we may get uploads first, but some de novo AGI may still surpass them later.