Let’s say “we” are the good guys in the race for AI. Define
W = we win the race to create an AI powerful enough to protect humanity from any subsequent AIs
G = our AI can be used to achieve a good outcome
F = we go the “formalize friendliness” route
O = we go a promising route other than formalizing friendliness
At issue is which of the following is higher:
P(G|WF)P(W|F) or P(G|WO)P(W|O)
From what I know of SIAI’s approach to F, I estimate P(W|F) to be many orders of magnitude smaller than P(W|O). I estimate P(G|WO) to be more than 1% for a good choice of O (this is a lower bound; my actual estimate of P(G|WO) is much higher, but you needn’t agree with that to agree with my conclusion). Therefore the right side wins.
There are two points here that one could conceivably dispute, but it sounds like the “SIAI logic” is to dispute my estimate of P(G|WO) and say that P(G|WO) is in fact tiny. I haven’t seen SIAI give a convincing argument for that.
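To make the comparison concrete, here is a minimal Python sketch with purely illustrative numbers. None of these values are anyone's endorsed estimates; they just show how the inequality plays out when P(W|F) is many orders of magnitude below P(W|O).

```python
# Minimal sketch of the comparison, with purely illustrative numbers.
# None of these values are endorsed estimates; they only show how the
# argument goes through when P(W|F) is many orders of magnitude below P(W|O).

p_G_given_WF = 0.99    # assume formalized friendliness almost guarantees a good outcome
p_W_given_F  = 1e-6    # but winning the race via the formal route is very unlikely
p_G_given_WO = 0.01    # the 1% lower bound used in the argument
p_W_given_O  = 1e-2    # winning via some other promising route

formal_route = p_G_given_WF * p_W_given_F
other_route  = p_G_given_WO * p_W_given_O

print(f"formalize-friendliness route: {formal_route:.2e}")
print(f"other promising route:        {other_route:.2e}")
print("right side wins" if other_route > formal_route else "left side wins")
```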
I’d start here to get an overview.
My summary would be: there are huge numbers of types of minds and motivations, so if we pick one at random from the space of minds it is likely to be contrary to our values, because it will have a different sense of what is good or worthwhile. This moderately relies on the speed/singleton issue, because evolutionary pressure between AIs might force them in the same direction as us. If we rely on competition between AIs, though, we would likely be out-competed before that happens.
I think various people associated with SIAI mean different things by formalizing friendliness. I recall Vladimir Nesov taking it to mean getting better than a 50% probability of a good outcome.
Edited to add my own overview.
It doesn’t matter what happens when we sample a mind at random. We only care about the sorts of minds we might build, whether by designing them or evolving them. Either way, they’ll be far from random.
Consider my “at random” shorthand for “at random from the space of possible minds built by humans”.
The Eliezer-approved example of humans not getting a simple system to do what they want is the classic machine-learning story in which a neural net was trained to distinguish two different sorts of tanks. It happened that the photographs of the two types of tanks had been taken at different times of day, so the classifier keyed on that rather than on the tanks themselves. We didn’t build a tank classifier but a day/night classifier. More here.
While I may not agree with Eliezer on everything, I do agree with him that it is damn hard to get a computer to do what you want once you stop programming it explicitly.
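To illustrate the failure mode, here is a toy sketch in Python. The features, noise levels, and the simple one-rule learner are all invented for illustration (this is not the original tank experiment): a learner free to pick whichever feature best separates the training data latches onto the brightness cue, then falls apart once that correlation is removed.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, brightness_correlated):
    """Two features per 'photo': a noisy tank-shape cue and a brightness cue."""
    labels = rng.integers(0, 2, n)               # 0 = tank type A, 1 = tank type B
    shape = labels + rng.normal(0, 1.5, n)        # weak, noisy signal about the tank itself
    if brightness_correlated:
        brightness = labels + rng.normal(0, 0.1, n)   # type B photos all taken in daylight
    else:
        brightness = rng.integers(0, 2, n) + rng.normal(0, 0.1, n)  # correlation broken
    return np.column_stack([shape, brightness]), labels

def train_one_rule(X, y):
    """Pick the single feature whose midpoint threshold best separates the classes."""
    best = None
    for j in range(X.shape[1]):
        thresh = X[:, j].mean()
        acc = ((X[:, j] > thresh) == y).mean()
        if best is None or acc > best[2]:
            best = (j, thresh, acc)
    return best

X_train, y_train = make_data(500, brightness_correlated=True)
feature, thresh, train_acc = train_one_rule(X_train, y_train)
print(f"learned feature {feature} (0=shape, 1=brightness), train accuracy {train_acc:.2f}")

X_test, y_test = make_data(500, brightness_correlated=False)
test_acc = ((X_test[:, feature] > thresh) == y_test).mean()
print(f"accuracy once the day/night correlation is removed: {test_acc:.2f}")
```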
Obviously AI is hard, and obviously software has bugs.
To counter my argument, you need to make a case that the bugs will be so fundamental and severe, and go undetected for so long, that despite any safeguards we take, they will lead to catastrophic results with probability greater than 99%.
How do you consider “formalizing friendliness” to be different from “building safeguards”?
Things like AI boxing or “emergency stop buttons” would be instances of safeguards. Basically any form of human supervision that can keep the AI in check even if it’s not safe to let it roam free.
Are you really suggesting a trial-and-error approach where we stick evolved and human-created AIs in boxes and then eyeball them to see what they are like? Then pick the nicest-looking one, on a hunch, to have control over our light cone?
I’ve never seen the appeal of AI boxing.
This is why we need to create friendliness before AGI: a lot of people who are loosely familiar with the subject think those options will work!
A goal directed intelligence will work around any obstacles in front of it. It’ll make damn sure that it prevents anyone from pressing emergency stop buttons.
Better than chance? What chance?
Sorry, “better than chance” is an English phrase that tends to mean more than 50%.
It assumes an even chance of each outcome, i.e. doing better than selecting randomly.
It’s not appropriate in this context; my brain didn’t think of the wider implications as it wrote it.
It’s easy to do better than random. *Pours himself a cup of tea.*
Programmers do not operate by “picking programs at random”, though.
The idea that “picking programs at random” has anything to do with the issue seems just confused to me.
The first AI will be determined by the first programmer, sure. But I wasn’t talking about that level; the biases and concern for the ethics of the AI of that programmer will be random from the space of humans. Or at least I can’t see any reason why I should expect people who care about ethics to be more likely to make AI than those who think economics will constrain AI to be nice.
That is now a completely different argument to the original “there are huge numbers of types of minds and motivations, so if we pick one at random from the space of minds”.
Re: “the biases and concern for the ethics of the AI of that programmer will be random from the space of humans”
Those concerned probably have to be expert programmers, able to build a company or research group and attract talented assistance, as well as, probably, customers. They will probably be far from what you would get if you chose at “random”.
Do we pick a side of a coin “at random” from the two possibilities when we flip it?
Epistemically, yes, we don’t have sufficient information to predict it*. However, if we do the same thing twice it has the same outcome, so it is not physically random.
So while the process that decides what the first AI is like is not physically random, it is epistemically random until we have a good idea of which AIs produce good outcomes and get humans to follow those theories. For this we need something that looks, to some degree, like a theory of friendliness.
Considering we might use evolutionary methods for part of the AI creation process, randomness doesn’t look like too bad a model.
*With a few caveats. I think a coin is biased to land the same way up as it started, due to the chance of it spinning without flipping.
Edit: Oh and no open source AI then?
We do have an extensive body of knowledge about how to write computer programs that do useful things. The word “random” seems like a terrible mis-summary of that body of information to me.
As for equating “evolution” with “randomness”: isn’t that one of the points creationists make all the time? Evolution has two motors, variation and selection. The first may have some random elements, but it is only one part of the overall process.
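A toy sketch of the two motors, with invented parameters: random variation on its own just drifts, but the same variation combined with selection reliably climbs toward a fitness peak, which is why “random” is a poor summary of the overall process.

```python
import random

random.seed(0)
TARGET = [1] * 30          # an arbitrary 'fitness peak' for illustration

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Variation: each bit flips with small probability (the random motor).
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(30)] for _ in range(50)]
for generation in range(100):
    # Selection: keep the fitter half (the decidedly non-random motor).
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(g) for g in survivors]

best = max(population, key=fitness)
print(f"best fitness with variation + selection: {fitness(best)}/30")

# Variation alone, with no selection, just wanders randomly.
drifting = [random.randint(0, 1) for _ in range(30)]
for _ in range(100):
    drifting = mutate(drifting)
print(f"fitness under pure random variation: {fitness(drifting)}/30")
```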
I think we have a disconnect on how much we believe proper scary AIs will be like previous computer programs.
My conception of current computer programs is that they are crystallised thoughts plucked from our own minds: easily controllable and essentially unchanging. When we get interesting AI, the programs will be constantly morphing and far less controllable without a good theory of how to control the change.
I shudder every time people say “the AI’s source code” as if it were some unchangeable thing that remains informative about the AI’s behaviour after the first few days of its existence.
I’m not sure how to resolve that difference.