Favorable? I don’t know why you’d think that. Seems to me the charitable interpretation of Hanson’s view has him thinking of ems as naturally Friendly, or near-Friendly. (My analysis didn’t mention the chance of us getting FAI without working for it.)
If we get two unFriendly AIs that individually have the power to kill humanity, and if acting quickly means they don’t have to negotiate with anyone else from this planet, they’ll divide Earth between them. If we somehow get trillions of uFAIs with practically different goals, then of course the expected value of killing humanity goes way down. But it still sounds greater than the expected value of cooperating with us, by Hanson’s analysis. And if we get one FAI out of a trillion AGIs, I think that leads to either war or a compromise like the one the Super-Happies offered the Babyeaters. We might get a one-trillionth slice of the available matter, with no more thought given to (say) the aesthetics of the Moon than we give to any random person who’d like to see his name there in green neon every night. (Still a better deal than we’d offer any one upload according to Hanson. But maybe I’ve misunderstood him?)
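To make the comparison I have in mind explicit, here is a toy sketch in Python. Every number in it (the share offered to humans, the cost of a fight, the chance we later interfere) is a hypothetical placeholder I made up for illustration, not anything Hanson or anyone else has proposed.

```python
# Toy expected-value comparison, from the point of view of an unFriendly AI
# (or a coalition of them) deciding whether to eliminate humanity or cooperate.
# Every parameter below is a made-up placeholder for illustration only.

total_matter = 1.0            # normalize the reachable resources to 1
human_share = 1e-12           # the "one-trillionth slice" style of compromise
cost_of_conflict = 1e-6       # resources spent wiping us out (placeholder)
p_human_interference = 1e-3   # chance sparing us later costs the AI its goals (placeholder)

ev_kill = total_matter - cost_of_conflict
ev_cooperate = (total_matter - human_share) * (1 - p_human_interference)

print(f"EV(kill humanity): {ev_kill:.9f}")
print(f"EV(cooperate):     {ev_cooperate:.9f}")
# With these placeholders killing comes out ahead; cooperation only wins if
# the cost of conflict or humanity's bargaining power is far larger than assumed.
```

The point of the sketch is just the structure of the argument: adding more AIs shrinks the pie each one gets, but it doesn't obviously change which option wins.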
I also don’t understand how you get that much memory and processing power without some designer that seems awfully close to an AI-programming AI. But as a layman I may be thinking of that in the wrong way.
Oh, and I lean towards P(genocide) somewhat under .4 without FAI theory. Right now I’m just arguing that it exceeds .05 per XiXiDu’s comment. You may have misread his “5%” there.
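In symbols, the only claim I'm defending in this thread is the lower bound; the .4 figure is just where I lean, not something I'm arguing for here:

$$0.05 < P(\text{genocide} \mid \text{no FAI theory}) \lesssim 0.4$$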
Favorable? I don’t know why you’d think that. Seems to me the charitable interpretation of Hanson’s view has him thinking of ems as naturally Friendly, or near-Friendly.
You would have to tell me what friendly and unfriendly mean in this context. Hanson expects ems to be very numerous and very poor. I doubt he expects any one of them to have the resources available to what’s usually called an FAI. Is a human being running at human speeds F or UF?
If we somehow get trillions of uFAIs with practically different goals, then of course the expected value of killing humanity goes way down. But it still sounds greater than the expected value of cooperating with us, by Hanson’s analysis.
I don’t think the notion of “cooperating with us” is coherent. Just as the trillions of ems might have practically different goals, so might the billions of live humans.
I also don’t understand how you get that much memory and processing power without some designer that seems awfully close to an AI-programming AI.
Possibly, being poor, they would not have that much memory and processing power.
Taking the last part first for context: this layman thinks that just simulating a conscious brain (one experiencing something other than pure terror or slow insanity) would take a lot of resources under the copy-an-airplane-with-bullet-holes approach, where you don’t know what the parts actually do, at least not well enough to build a self-reflective programming AI from scratch.
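For a sense of scale, a rough back-of-envelope sketch in Python: the neuron and synapse counts are commonly cited order-of-magnitude figures, while the firing rate and the flops-per-synaptic-event number are assumptions I picked just to make the arithmetic concrete.

```python
# Back-of-envelope estimate of the compute a brute-force, "copy it without
# understanding it" brain emulation might need. All figures are rough
# order-of-magnitude assumptions for illustration, not established requirements.

neurons = 1e11                     # ~10^11 neurons (commonly cited estimate)
synapses_per_neuron = 1e4          # ~10^4 synapses per neuron (rough estimate)
avg_firing_rate_hz = 10.0          # average spikes per second (assumption)
flops_per_synaptic_event = 100.0   # cost to update one synapse per spike (assumption)

synapses = neurons * synapses_per_neuron
flops_per_second = synapses * avg_firing_rate_hz * flops_per_synaptic_event

print(f"Synapses: {synapses:.0e}")
print(f"Real-time emulation at this granularity: {flops_per_second:.0e} FLOP/s")
# ~1e18 FLOP/s under these assumptions, before any overhead for chemistry,
# glia, or whatever finer detail a faithful copy turns out to require.
```

The exact figures matter less than the order of magnitude they all point to: a blind copy looks expensive no matter how you slice it.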
As to the rest, I’m assuming my previous claims hold for the case of a single AGI because you seemed to argue that simply introducing a lot more AGIs changes the argument. (“Cooperating with us” therefore means not killing us all.) I started out by granting that the nature of the AIs could make a big difference. The number seems almost irrelevant. It seems like you’re arguing for the possibility that no single em would have enough resources to produce super-intelligence (making assumptions about what that requires), since they might find themselves sharing a medium with trillions of competing ems before they get that far. But this appears to mean that giving more resources to any one of them (or any group with consistent goals) could easily produce super-intelligence. Someone would eventually do this. Indeed, Hanson seems to argue that a workforce of ems would help produce better technology and thus better ems.
I do have to address the possibility that the normal ems themselves could stop a self-modifying AI because they would think faster than ordinary humans. That situation would certainly decrease the risk of killing humanity. But again, for that to make sense you have to assume that effective self-modification requires vast resources (or you just get trillions of self-modifiers by assumption). You may also need to assume that a super-intelligence needs even more resources to work out a plan for killing us; otherwise the rest of the ems would seemingly have no way to discover the plan before it went into motion, except by chance. (A superior intelligence would try to include their later actions in the plan.) Note that even these assumptions do not yield near-certainty of survival given uFAI, not with the observed stupidity of humanity. Seems like you’d at least need the additional assumption that no biological human whom the uFAI can reach and fool has the power to trigger our demise.
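Loosely, if surviving a hidden self-modifier really does require both timely detection by the other ems and the absence of any biological human who can be fooled into triggering our demise, then as an upper bound:

$$P(\text{humanity survives} \mid \text{uFAI among the ems}) \;\le\; \min\bigl(P(\text{the plan is detected in time}),\; P(\text{no reachable human gets fooled into triggering it})\bigr)$$

Neither factor looks close to 1 to me, which is all I mean by "not near-certainty."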
And then of course we have the relative difficulty of emulation versus new designs for reflective AI. It took no time at all to find someone arguing for the impossibility of the former, on the grounds that normal emulation requires knowing what the original does well enough to copy one part and not another. If we get that knowledge, it increases the likelihood of new AGI; indeed, it almost seems to require making ‘narrow AIs’ along the way, and by assumption this happens before we know what each one can do.
My main reason for doubting this part, however, is that it suggests we can avoid otherwise difficult work, and the underlying belief seems to have grown in popularity along with our grasp of said difficulty.