Please give an example of why the AGI should co-operate with something that cannot do anything the AGI itself cannot do.
If the overall cost of co-operation (including time, gaining the requisite knowledge, etc.) is lower than the cost of the AGI doing it itself, the AGI should co-operate. No?
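To put the same criterion in a crude, purely illustrative form (the function name, the cost breakdown, and the numbers below are all hypothetical):

```python
# Minimal sketch of the co-operation criterion above: co-operate whenever the
# all-in cost of co-operating (time, acquiring the requisite knowledge,
# co-ordination overhead, ...) is lower than the cost of doing the task alone.
# The names and numbers are purely illustrative.

def should_cooperate(cost_of_cooperating: float, cost_of_going_it_alone: float) -> bool:
    """Co-operate iff it is the cheaper route to the same outcome."""
    return cost_of_cooperating < cost_of_going_it_alone

# Illustrative: co-operation costs 3 units of effort, going it alone costs 5.
print(should_cooperate(3.0, 5.0))  # True: co-operation is the better deal
```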
If I’m a god, what would I need a human for? If I need humans, I can just make some. Better still, I could replace them with something more efficient that doesn’t complain or rebel.
How expensive is making humans vs. their utility? Is there something markedly more efficient that won’t complain or rebel if you treat it poorly? How efficient/useful could a human be if you treated it well?
There are also useful pseudo-moral arguments along the lines of pre-committing to a benevolent strategy so that others (bigger than you) will be benevolent to you.
The fundamental flaw in your reasoning here is that you keep trying to construct paths through probability space that could support your hypothesis, but those paths would only count as support if you had presented some evidence for singling out that hypothesis in the first place!
Agreed. So your argument is that I’m not adequately presenting evidence for singling out that hypothesis. That’s a useful criticism. Thanks!
And that’s why you’re getting so many downvotes: in LW terms, you are failing basic reasoning.
I disagree. I believe that I am failing to successfully communicate my reasoning. I understand your arguments perfectly well (and appreciate them), and I would agree with them if what they address were what I was trying to do. Since that is not what I'm trying to do (although it apparently is what I AM doing), I'm assuming (yes, ASS-U-ME) that I'm failing elsewhere, and am currently placing the blame on my communication skills.
Are you willing to accept that premise and see if you can draw any helpful conclusions or give any helpful advice?
And, once again, thank you for already taking the time to give such a detailed, thoughtful response.
How expensive is making humans vs. their utility? Is there something markedly more efficient that won’t complain or rebel if you treat it poorly?
Yes. The nanobots that you could build out of my dismantled raw materials. There is something humbling in realising that my complete submission and wholehearted support are worth less to a non-friendly AI than my spleen.
Oh, worth much, much less than your spleen. It might be a fun exercise to take the numbers from Seth Lloyd and figure out how many molecules (optimistically, the volume of a cell or two) your brain is worth.
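For anyone who wants to actually run that exercise, here is a rough sketch. The per-kilogram figures are the ones from Lloyd's "Ultimate physical limits to computation"; the brain and cell figures are loose assumptions, so take the output as order-of-magnitude at best.

```python
# Rough back-of-the-envelope version of the exercise above. The 1 kg "ultimate
# computer" figures are from Seth Lloyd (Nature, 2000); the brain and cell
# figures are loose assumptions, so treat the results as order-of-magnitude.

ULTIMATE_OPS_PER_KG = 5.4e50    # logical operations per second for 1 kg of matter
ULTIMATE_BITS_PER_KG = 1e31     # bits of storage for 1 kg of matter

BRAIN_OPS_PER_SEC = 1e16        # assumed processing rate of a human brain
BRAIN_BITS = 1e15               # assumed storage capacity of a human brain
CELL_MASS_KG = 1e-12            # assumed mass of a typical human cell (~1 ng)

mass_to_match_compute = BRAIN_OPS_PER_SEC / ULTIMATE_OPS_PER_KG
mass_to_match_storage = BRAIN_BITS / ULTIMATE_BITS_PER_KG

print(f"Computronium to match the brain's compute: {mass_to_match_compute:.1e} kg "
      f"(~{mass_to_match_compute / CELL_MASS_KG:.0e} cells)")
print(f"Computronium to match the brain's storage: {mass_to_match_storage:.1e} kg "
      f"(~{mass_to_match_storage / CELL_MASS_KG:.0e} cells)")
```

On those (admittedly crude) numbers, matching the brain takes a small fraction of a single cell's worth of optimally arranged matter, which is exactly the point of the quip above.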
Utility for what purpose? If we're talking about, say, a paperclip maximizer, then the utility of human beings to it will be measured in paperclip production.
Is there something markedly more efficient that won’t complain or rebel if you treat it poorly? How efficient/useful could a human be if you treated it well?
A human won't be as efficient as specialized paperclip-production machines are, at least not for the production of paperclips.
Are you willing to accept that premise and see if you can draw any helpful conclusions or give any helpful advice?
Yes, but you’re unlikely to be happy with it: read the sequences, or at least the parts of them that deal with reasoning, the use of words, and inferential distances. (For now at least, you can skip the quantum mechanics, AI, and Fun Theory parts.)
At minimum, this will help you understand LW’s standards for basic reasoning, and how much higher a bar they are than what constitutes “reasoning” pretty much anywhere else.
If you’re reasoning as well as you say, then the material will be a breeze, and you’ll be able to make your arguments in terms that the rest of us can understand. Or, if you’re not, then you’ll probably learn that along the way.