Your comment seemed to be equating Xyrik’s scenario with the Soviet system, implying that for that reason it’s not desirable. I’m pointing out that the two systems cannot be equated.
My point is that the Soviet system wanted to be like Xyrik’s scenario and tried to get as close to it as it could.
aspired =/= achieved.
The assertion that an AI would make everything hunky-dory is not falsifiable. It’s just a different term for elven magic.
Huh? Of course it’s falsifiable. The entire premise of MIRI and CFAR is that this assertion is going to be falsified unless we take action.
The entire premise of Xyrik’s scenario is that everything will be hunky-dory. Xyrik is just making a wish, and not thinking about how anything will actually work. He might as well call it “elven magic” as “an AGI” or “everyone decides to do the right thing”. There are no moving parts in his conception. It is like trying to solve a problem by suggesting that one should solve the problem.
I tried to ask him about mechanism here, but the only response so far has been a downvote.
Well, to be fair, I never claimed to have any ideas for how to actually achieve a scenario with a flawless AGI, and I don’t think I even said I thought it would be a good idea; although, in the case that we DID have a flawless AGI, I would be open to an argument that it was.
But all I was asking was what potential downsides this could have, and people have risen to the occasion.
Demonstrate, please.
You know, this seems amusingly analogous to the scene in the seventh Harry Potter novel in which Xenophilius Lovegood challenges Hermione to prove that the Resurrection Stone doesn’t exist.