Yes, I’m not suggesting that everything he wrote in the Sequences is just signaling to persuade people to trust him. I’m only saying that, considering what people will do for far less than the chance to shape the whole universe to their liking, some sort of public or third-party examination might be warranted before anyone is allowed to launch a fooming AI.
It will probably never come to that anyway. Not because the SIAI is not going to succeed, but because if it told anyone it was even close to implementing something like CEV, the whole might of the world would crush it (unless the world had turned rational by then). Announcing that you are going to run a fooming AI will be interpreted as an attempt to seize all power and rule the universe. I suppose this is also the most likely reason for the SIAI to fail. The idea is out, and once people notice that fooming AI isn’t just science fiction, they will do everything they can either to stop anyone from implementing one at all, or to run their own before anyone else does. And who will be the first competitor to take out in the race to take over the universe? The SIAI, of course; just search Google. It would probably have been a better idea to make this a stealth project from day one, but that train has left.
Anyway, if the SIAI does succeed, one can only hope that Yudkowsky is not Dr. Evil in disguise. Even that, though, would still be better than a paperclip maximizer: I assign more utility to a universe adjusted to Yudkowsky’s (or the SIAI’s) volition than to paperclips, even if that means I won’t “like” what happens to me.
I’m only saying that, considering what people will do for far less than the chance to shape the whole universe to their liking, some sort of public or third-party examination might be warranted before anyone is allowed to launch a fooming AI.
I don’t see who is going to enforce that. Probably nobody.
What we are fairly likely to see instead is open-source projects getting more of the limelight. It is hard to gather mindshare if your strategy is “trust the code to us”; relatively few programmers are likely to buy into such a project unless you pay them to.
Yes, I’m not suggesting that everything he wrote in the Sequences is just signaling to persuade people to trust him. I’m only saying that, considering what people will do for far less than the chance to shape the whole universe to their liking, some sort of public or third-party examination might be warranted before anyone is allowed to launch a fooming AI.
The hard part there is determining who’s qualified to perform that examination.