Yeah, part of what I was intending in the scenario would be that everyone realizes that we could make much faster technological advances (at least, that’s the theory) if we didn’t bother with keeping track of who owes whom. We need resources such as metals, we get them, make the MacGuffin, and continue.
I suppose the real problem with this is some form of a game-plan, determining who needs what. So I guess what I’m thinking is a system that would require some flawless AGI to determine what group needs what resource at what time, to further the general human endeavor, rather than people getting what they want/need based on how much money they can amass. That is, as we know, a flawed system, or people like Donald Trump would not exist while people starve to death in Third World countries.
But the idea would be to use some system like this to vastly accelerate our speed of technological advancement, so that we can colonize the galaxy, become immortal, and eventually figure out how the world works. That’s not to say that I’m trying to really come UP with a system myself; I’m sure such systems have already been postulated, but they just don’t work, because of the whole ‘greed’ thing. My query was mainly whether there could be problems not in developing the system, but in actually enacting such a system, even if it worked as intended.
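For concreteness, here is a minimal toy sketch of what that “determine what group needs what resource at what time” step reduces to, assuming the planner already knows every project’s needs and priority; all project names and numbers are invented for illustration, not anything actually proposed in the scenario.

```python
# Toy sketch only: a "planner" with perfect knowledge of every project's needs
# and priority, allocating scarce resources greedily. All names and numbers
# are invented for illustration.

resources = {"steel_tons": 100, "engineer_hours": 500}

projects = [
    {"name": "fusion_macguffin", "needs": {"steel_tons": 70, "engineer_hours": 300}, "priority": 0.9},
    {"name": "bridge_repair",    "needs": {"steel_tons": 50, "engineer_hours": 100}, "priority": 0.6},
    {"name": "water_plant",      "needs": {"steel_tons": 50, "engineer_hours": 150}, "priority": 0.5},
]

def allocate(pool, projects):
    """Fund the highest-priority projects first, while resources last."""
    funded = []
    for project in sorted(projects, key=lambda p: p["priority"], reverse=True):
        if all(pool[r] >= amount for r, amount in project["needs"].items()):
            for r, amount in project["needs"].items():
                pool[r] -= amount
            funded.append(project["name"])
    return funded

print(allocate(dict(resources), projects))
# ['fusion_macguffin'] -- although bridge_repair plus water_plant would fit together and score higher
```

Even in this three-project toy the greedy rule funds one project when two compatible ones would have done more good, and getting the needs and priorities right in the first place, for millions of goods and groups, is exactly the part the scenario hands to the flawless AGI.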
So the AGI gives out the instructions, and the humans, or some of them, say “screw that”. What happens next?
You’re re-inventing Soviet central planning.
I had no idea that the Communist Party was a flawless AGI...
The flawless AGI under the name of Gosplan was the limit to which the Soviet Union aspired.
aspired =/= achieved.
Your comment seemed to be equating Xyrik’s scenario with the Soviet system, implying that for that reason it’s not desirable. I’m pointing out that the two systems cannot be equated.
My point is that the Soviet system wanted to be like Xyrik’s scenario and tried to get as close to it as it could.
The assertion that an AI would make everything hunky-dory is not falsifiable. It’s just a different term for elven magic.
Huh? Of course it’s falsifiable. The entire premise of MIRI and CFAR is that this assertion is going to be falsified unless we take action.
The entire premise of Xyrik’s scenario is that everything will be hunky-dory. Xyrik is just making a wish, and not thinking about how anything will actually work. He might as well call it elven magic as call it an AGI or “everyone decides to do the right thing”. There are no moving parts in his conception. It is like trying to solve a problem by suggesting that one should solve the problem.
I tried to ask him about mechanism here, but the only response so far has been a downvote.
Well, to be fair, I never claimed that I had any ideas for how to actually achieve a scenario with a flawless AGI, and I don’t think I even said I was under the impression that this would be a good idea; although if we DID have a flawless AGI, I would be open to an argument that it was.
But all I was asking was what potential downsides this could have, and people have risen to the occasion.
Demonstrate, please.
You know, this seems amusingly analogous to the scene in the seventh Harry Potter novel in which Xenophilius Lovegood asks Hermione to falsify the existence of the Resurrection Stone.
No, even apart from greed no one has postulated such a system. Money isn’t a perfect way to allocate resources, but no one yet has invented a better one, even assuming that people are perfectly altruistic.
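For what it’s worth, a minimal sketch of the thing prices actually do, with made-up demand and supply curves: one number gets pushed around until what people want matches what is available, and nobody has to tabulate anyone else’s needs directly.

```python
# Toy price-adjustment sketch: a single price moves until demand meets supply.
# The demand and supply curves are invented for illustration; the point is
# that the price is the only number the participants need to share.

def demand(price):
    # Total amount buyers want at a given price (made-up curve).
    return max(0.0, 100.0 - 8.0 * price)

def supply(price):
    # Total amount producers offer at a given price (made-up curve).
    return 12.0 * price

price = 1.0
for _ in range(1000):
    excess = demand(price) - supply(price)
    if abs(excess) < 1e-6:
        break
    price += 0.001 * excess  # raise price when demand exceeds supply, lower it otherwise

print(round(price, 3), round(demand(price), 2), round(supply(price), 2))
# prints roughly: 5.0 60.0 60.0 -- the price settles where demand meets supply
```

A planner doing the same job has to learn those curves outright, for every good and every group at every time, instead of letting one number summarize them; that, roughly, is the gap being pointed at when the comment says nobody has bettered money even assuming perfect altruism.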
Except you need to keep track of who (or which algorithm, if we want to be sufficiently abstract) is contributing the most and being the most efficient, so that their success can be repeated in other parts of the system.
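A trivial sketch of the bookkeeping that survives even without money, with invented teams and figures: some record of output per unit of input is still needed to know whose process is worth copying.

```python
# Toy illustration: even without money, a ledger of input vs. output per team
# is needed to know whose process is worth replicating. All figures invented.

teams = {
    "team_a": {"steel_in": 40, "macguffins_out": 10},
    "team_b": {"steel_in": 25, "macguffins_out": 9},
    "team_c": {"steel_in": 50, "macguffins_out": 8},
}

def efficiency(record):
    return record["macguffins_out"] / record["steel_in"]

best = max(teams, key=lambda name: efficiency(teams[name]))
print(best, round(efficiency(teams[best]), 3))
# team_b 0.36 -- the process worth copying elsewhere
```

The ledger here is three numbers; at scale it starts to look a lot like the accounting the scenario hoped to drop.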