Fly,
Anders Sandberg has a great articulation of this: he implies that developing the right large institutions that check and balance each other (presumably powered by superintelligent AI to do what follows) may allow us humans to survive coexisting with superintelligent AI, just as we survive, and even enjoy a decent quality of life, in a world of markets, governments, religions, and corporations, all of which can check each other against abuse and degradation of human quality of life. I like the analogy, because I think it’s possible that subsets of the aforementioned may already be entities functionally more intelligent than the smartest individual humans, just as subsets of humans (competent scientists, for example) may be functionally smarter than the most effectively survivalist unicellular organism.
So we may already be surviving in a world of things smarter than us, through their own checks and balances on each other.
Of course, we could just be in a transitional period, rather than a permanently good or better one, or the analogy may not hold. I wouldn’t be surprised if the substrate jump to digital happens at the level of governments, corporations, or markets, rather than at the level of human minds first. In fact, with regard to markets, it arguably has already occurred. Similarly to Eliezer’s AI-in-a-box, markets could be described as using incentives to get us to engage in nano-manufacturing. We’ll see whether that ends in a cure for aging, or in a reassembly of the species (and the planet, solar system, etc.) into something that will more efficiently maximize the persistence odds of the most effective market algorithms.
But not, of course, if they adopt your morality.