(Parenthetical about how changing your mind, admitting you were wrong, oops, etc., is a good thing.)
Or so you hope.

Yes, I agree. I don’t really believe that he only learnt how to disguise his true goals. But I’m curious: would you be satisfied with his word alone if he could run a fooming AI next week, but only if you gave your OK?
He has; this is made abundantly clear in the Metaethics sequence and particularly the “coming of age” sequence. That passage appears to be a reflection of the big embarrassing mistake he talked about, when he thought that he knew nothing about true morality (see “Could Anything Be Right?”) and that a superintelligence with a sufficiently “unconstrained” goal system (or what he’d currently refer to as “a rock”) would necessarily discover the ultimate true morality, so that whatever this superintelligence ended up doing would necessarily be the right thing, whether that turned out to consist of giving everyone a volcano lair full of catgirls/boys or wiping out humanity and reshaping the galaxy for its own purposes.
Needless to say, that is not his view anymore; there isn’t even any “Us or Them” to speak of anymore. Friendly AIs aren’t (necessarily) people, and certainly won’t be a distinct race of people with their own goals and ambitions.
Yes, I’m not suggesting that he is just signaling all that he wrote in the sequences to persuade people to trust him. I’m just saying that when you consider what people are doing for much less than shaping the whole universe to their liking, one might consider some sort of public or third-party examination before anyone is allowed to launch a fooming AI.
The hard part there is determining who’s qualified to perform that examination.

It will probably never come to it anyway. Not because the SIAI is not going to succeed, but because if it told anyone that it was even close to implementing something like CEV, the whole might of the world would crush it (unless the world has become rational by then). Saying that you are going to run a fooming AI will be interpreted as trying to take over all power and rule the universe. I suppose this is also the most likely reason for the SIAI to fail. The idea is out, and once people notice that fooming AI isn’t just science fiction, they will do everything to stop anyone else from implementing one, or try to run their own before anyone else does. And who’ll be the first competitor to take out in the race to take over the universe? The SIAI, of course; just search Google. I guess it would have been a better idea to make this a stealth project from day one, but that train has left.
Anyway, if the SIAI does succeed, one can only hope that Yudkowsky is not Dr. Evil in disguise. But even that would still be better than a paperclip maximizer. I assign more utility to a universe adjusted to Yudkowsky’s volition (or the SIAI’s) than to paperclips (even if, I suppose, that means I won’t “like” what happens to me).
I’m just saying that when you consider what people are doing for much less than shaping the whole universe to their liking, one might consider some sort of public or third-party examination before anyone is allowed to launch a fooming AI.
I don’t see who is going to enforce that. Probably nobody.
What we are fairly likely to see is open-source projects getting more limelight. It is hard to gather mindshare if your strategy is: trust the code to us. Relatively few programmers are likely to buy into such projects—unless you pay them to do so.
Yes on the question of humans vs Singularity.

(His word alone would not be enough to convince me he’s gotten the fooming AI friendly, though, so I would not give the OK for prudential reasons.)

So you take him at his word that he’s working in your best interest. You don’t think it is necessary to supervise the SIAI while working towards friendly AI. But once they have finished their work and it is ready to go, you are in favor of some sort of examination before they can implement it. Is that correct?
I don’t think human selfishness vs. public interest is much of a problem with FAI; everyone’s interests with respect to FAI are well correlated, and making an FAI which specifically favors its creator doesn’t give enough extra benefit over an FAI which treats everyone equally to justify the risks (that the extra term will be discovered, or that the extra term introduces a bug). Not even for a purely selfish creator; FAI scenarios just don’t leave enough room for improvement to motivate implementing something else.
On the matter of inspecting AIs before launch, however, I’m conflicted. On one hand, the risk of bugs is very serious, and the only way to mitigate it is to have lots of qualified people look at it closely. On the other hand, if the knowledge that a powerful AI was close to completion became public, it would be subject to meddling by various entities that don’t understand what they’re doing, and it would also become a major target for espionage by groups of questionable motives and sanity who might create UFAIs. These risks are difficult to balance, but I think secrecy is the safer choice, and should be the default.
If your first paragraph turns out to be true, does that change anything with respect to the problem of human and political irrationality? My worry is that even if there is only one rational solution that everyone should favor, how likely is it that people will understand and accept this? That might be no problem given the current perception: if the possibility of fooming AI is still being ignored by the time it becomes possible to implement friendliness (CEV etc.), then there will be no opposition, and a few quick quantum leaps towards AGI would likely allow the SIAI to follow through on it. But my worry is that if the general public or governments notice this possibility and take it seriously, it will turn into a political mess never seen before. The world would have to be dramatically different for the big powers to agree on something like CEV. I still think this is the most likely failure mode in case the SIAI succeeds in defining friendliness before someone else runs a fooming AI. Politics.
These risks are difficult to balance, but I think secrecy is the safer choice, and should be the default.
I agree. But is that still possible? After all, we’re writing about it in public. Although to my knowledge the SIAI never suggested that it would actually create a fooming AI, only that it would come up with a way to guarantee its friendliness. But what you said in your second paragraph suggests that the SIAI would also have to implement friendliness itself, or otherwise people would take advantage of it or simply mess it up.
Although to my knowledge the SIAI never suggested that it would actually create a fooming AI, only that it would come up with a way to guarantee its friendliness.
This?
“The Singularity Institute was founded on the theory that in order to get a Friendly artificial intelligence, someone has got to build one. So, we’re just going to have an organization whose mission is: build a Friendly AI. That’s us.” (http://www.acceleratingfuture.com/people-blog/?p=196)
You don’t think it is necessary to supervise the SIAI while working towards friendly AI. But once they have finished their work and it is ready to go, you are in favor of some sort of examination before they can implement it.
Probably it would be easier to run the examination during the SIAI’s work, rather than after. Certainly it would save more lives. So, supervise them, so that your examination is faster and more thorough. I am not in favour of pausing the project, once complete, to examine it if it’s possible to examine it in operation.