Please give an example of why the AGI should co-operate with something that cannot do anything the AGI itself cannot.
2) zero
Right. E. coli don’t offer us anything we can’t do for ourselves, or that we couldn’t just whip up a batch of E. coli for, on demand.
The AGI is missing out on tremendous opportunities if it bypasses positive-sum games of potentially infinite length and utility for a short-term finite gain.
If I’m a god, what would I need a human for? If I need humans, I can just make some. Better still, I could replace them with something more efficient that doesn’t complain or rebel.
The fundamental flaw in your reasoning here is that you keep trying to construct paths through probability space that could support your hypothesis, when those paths would only count as support if you had presented some evidence for singling out that hypothesis in the first place!
It’s like you’re a murder investigator opening up the phonebook to a random place and saying, “well, we haven’t ruled out the possibility that this guy did it”, and when people quite reasonably point out that there is no connection between that random guy and the murder, you reply, “yeah, but I just called this guy, and he has no alibi.” (That is, you’re ignoring the fact that a huge number of people in that phonebook will also have no alibi, so your “evidence” isn’t actually increasing the expected probability that that guy did it.)
And that’s why you’re getting so many downvotes: in LW terms, you are failing basic reasoning.
But that is not a shameful thing: any normal human being fails basic reasoning, by default, in exactly the same way. Our brains simply aren’t built to do reasoning: they’re built to argue, by finding the most persuasive evidence that supports our pre-existing beliefs and hypotheses, rather than trying to find out what is true.
When I first got here, I argued for some of my pet hypotheses in the exact same way, although I was righteously certain that I was not doing such a thing. It took a long time before I really “got” Bayesian reasoning sufficiently to understand what I was doing wrong, and before that, I couldn’t have said here what you were doing wrong either.
Please give an example of why the AGI should co-operate with something that cannot do anything the AGI itself cannot.
If the overall cost of co-operation (including time, acquiring the requisite knowledge, etc.) is lower than the cost of the AGI doing it itself, the AGI should co-operate. No?
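A minimal sketch of the decision rule being proposed here, with entirely made-up numbers; the function and figures are illustrative placeholders, not claims about what the real costs would be:

```python
# Hypothetical cost comparison: should the AGI delegate a task to a human,
# or do the task itself? All numbers are illustrative placeholders.

def should_cooperate(cost_do_it_itself, cost_of_human_labour, transaction_cost):
    """Co-operate iff delegating (including the overhead of the trade itself)
    is cheaper than doing the task in-house."""
    return cost_of_human_labour + transaction_cost < cost_do_it_itself

# Delegation wins only while the overhead of dealing with
# slow-thinking trading partners stays small.
print(should_cooperate(cost_do_it_itself=10, cost_of_human_labour=4, transaction_cost=2))   # True
print(should_cooperate(cost_do_it_itself=10, cost_of_human_labour=4, transaction_cost=20))  # False
```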
If I’m a god, what would I need a human for? If I need humans, I can just make some. Better still, I could replace them with something more efficient that doesn’t complain or rebel.
How expensive is making humans vs. their utility? Is there something markedly more efficient that won’t complain or rebel if you treat it poorly? How efficient/useful could a human be if you treated it well?
There are also useful pseudo-moral arguments, such as pre-committing to a benevolent strategy so that others (bigger than you) will be benevolent to you.
The fundamental flaw in your reasoning here is that you keep trying to construct paths through probability space that could support your hypothesis, when those paths would only count as support if you had presented some evidence for singling out that hypothesis in the first place!
Agreed. So your argument is that I’m not adequately presenting evidence for singling out that hypothesis. That’s a useful criticism. Thanks!
And that’s why you’re getting so many downvotes: in LW terms, you are failing basic reasoning.
I disagree. I believe that I am failing to successfully communicate my reasoning. I understand your arguments perfectly well (and appreciate them), and I would agree with them if that were what I was trying to do. Since it is not what I’m trying to do (although it apparently is what I AM doing), I’m assuming (yes, ASS-U-ME) that I’m failing elsewhere, and am currently placing the blame on my communication skills.
Are you willing to accept that premise and see if you can draw any helpful conclusions or give any helpful advice?
And, once again, thank you for already taking the time to give such a detailed, thoughtful response.
How expensive is making humans vs. their utility? Is there something markedly more efficient that won’t complain or rebel if you treat it poorly?
Yes. The nanobots that you could build out of my dismantled raw materials. There is something humbling in realising that my complete submission and wholehearted support are worth less to a non-friendly AI than my spleen.
Oh, worth much, much less than your spleen. It might be a fun exercise to take the numbers from Seth Lloyd and figure out how many molecules (optimistically, the volume of a cell or two) your brain is worth.
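A rough, loudly hedged version of that back-of-envelope exercise: it takes Seth Lloyd’s widely quoted bound of roughly 5×10^50 operations per second per kilogram of optimally arranged matter, and a commonly cited (and very rough) estimate of about 10^16 operations per second for a human brain. Treat both as order-of-magnitude placeholders:

```python
# Back-of-envelope: how much optimally-computing matter would match a brain?
# Both inputs are rough, contestable estimates, used only for illustration.

ULTIMATE_OPS_PER_KG = 5e50   # Seth Lloyd's "ultimate laptop" bound, ops/sec per kg
BRAIN_OPS_ESTIMATE  = 1e16   # a commonly cited rough estimate of brain ops/sec

matter_needed_kg = BRAIN_OPS_ESTIMATE / ULTIMATE_OPS_PER_KG
print(f"{matter_needed_kg:.1e} kg")  # ~2e-35 kg, far less than a single molecule
```

Even if the brain estimate is off by several orders of magnitude, the answer stays far below the mass of a single cell, which is why “a cell or two” is the optimistic reading.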
Utility for what purpose? If we’re talking about, say, a paperclip maximizer, then the utility of human beings to it will be measured in paperclip production.
Is there something markedly more efficient that won’t complain or rebel if you treat it poorly? How efficient/useful could a human be if you treated it well?
A well-treated human still won’t be as efficient at producing paperclips as specialized paperclip-production machines will be.
Are you willing to accept that premise and see if you can draw any helpful conclusions or give any helpful advice?
Yes, but you’re unlikely to be happy with it: read the sequences, or at least the parts of them that deal with reasoning, the use of words, and inferential distances. (For now at least, you can skip the quantum mechanics, AI, and Fun Theory parts.)
At minimum, this will help you understand LW’s standards for basic reasoning, and how much higher a bar they are than what constitutes “reasoning” pretty much anywhere else.
If you’re reasoning as well as you say, then the material will be a breeze, and you’ll be able to make your arguments in terms that the rest of us can understand. Or, if you’re not, then you’ll probably learn that along the way.
Comparative advantage explains how to make use of inefficient agents, so that ignoring them is the worse option. But if you can convert them into something else, you are no longer comparing the gain from trading with them to the indifference of ignoring them; you are comparing the gain from trading with them to the gain from converting them. And if they can be cheaply converted into something much more efficient than they are, converting them is the winning move. This move is largely not available to present society, so its absence is a reasonable assumption for now, but one that breaks down when you consider an indifferent smart AGI.
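A sketch of the comparison being described, with purely illustrative payoffs; the only point is which baseline the trade gets compared against once conversion is on the table:

```python
# Illustrative payoffs only: the point is the change of baseline.

gain_from_ignoring   = 0.0    # comparative advantage compares trade against this
gain_from_trading    = 5.0    # trade beats ignoring, as the law predicts
gain_from_converting = 500.0  # but if conversion is an option, it dominates both

best_option = max(
    ("ignore", gain_from_ignoring),
    ("trade", gain_from_trading),
    ("convert", gain_from_converting),
    key=lambda pair: pair[1],
)
print(best_option)  # ('convert', 500.0) under these made-up numbers
```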
The law of comparative advantage relies on some implicit assumptions that are not likely to hold between a superintelligence and humans:
The transaction costs must be small enough not to negate the gains from trade. A superintelligence may need more resources to issue a trade request to slow-thinking humans and wait for the result, possibly leaving processes idle in the meantime, than to simply do the task itself.
Your trading partner must not have the option of building a more desirable trading partner out of your component parts. A superintelligence could get more productivity out of atoms arranged as an extension of itself than out of atoms arranged as humans. (ETA: See Nesov’s comment.)
A sufficiently clever AI should understand Comparative Advantage
And a sufficiently clever human should realize that clever humans can and do routinely increase the efficiencies of their industry enough to shift the comparative advantage.
It really doesn’t take that much human-level intelligence to change how things are done—all it takes is a lack of attachment to the current ways.
And that’s perhaps the biggest “natural resource” an AI has: the lack of status quo bias.
And a sufficiently clever human should realize that clever humans can and do routinely increase the efficiencies of their industry enough to shift the comparative advantage.
I don’t understand what you are arguing for. That people become better off doing something different doesn’t necessarily imply that they become obsolete, or even that they can’t continue doing the less-efficient thing.
And a sufficiently clever human should realize that clever humans can and do routinely increase the efficiencies of their industry enough to shift the comparative advantage.
I’m not sure I understand what “shift the comparative advantage” could mean, and I have no idea why this is supposed to be a response to my point.
Maybe I didn’t make my point clearly enough. My contention is that even if an AI is better at absolutely everything than a human being, it could still be better off trading with human beings for certain goods, for the simple reason that it can’t do everything at once, and in such a scenario both human beings and the AI would get gains from trade.
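A standard Ricardo-style sketch of that contention, with made-up productivities: the AI is assumed better at both goods, yet total output still rises when the humans take over the good they are relatively less bad at, because the AI’s hours are the scarce resource:

```python
# Made-up productivities (units per hour). The AI is better at both goods
# (absolute advantage), but humans are relatively less bad at food, so the
# humans' comparative advantage is in food.
ai    = {"chips": 100.0, "food": 50.0}
human = {"chips": 1.0,   "food": 10.0}
hours = 10.0

# No trade: each party splits its time evenly between the two goods.
no_trade = {g: (ai[g] + human[g]) * hours / 2 for g in ai}

# With trade: humans specialise fully in food; the AI produces only the
# remaining food needed to keep total food unchanged, and spends the rest
# of its time on chips.
food_target   = no_trade["food"]
ai_food_hours = (food_target - human["food"] * hours) / ai["food"]  # 4.0 hours
with_trade = {
    "chips": ai["chips"] * (hours - ai_food_hours),  # 600.0
    "food":  food_target,                            # 300.0, unchanged
}

print(no_trade)    # {'chips': 505.0, 'food': 300.0}
print(with_trade)  # {'chips': 600.0, 'food': 300.0} -- more chips, same food
```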
As Nesov points out, if the AI has the option of, say, converting human beings into computational substrate and using them to simulate new versions of itself, then this ceases to be relevant.