That is now a completely different argument to the original “there are huge numbers of types of minds and motivations, so if we pick one at random from the space of minds”.
Re: “the biases and concern for the ethics of the AI of that programmer will be random from the space of humans”
Those concerned probably have to be expert programmers, able to build a company or research group and attract talented assistance, and probably customers as well. They will probably be far from what you would get if you chose at “random”.
Do we pick a side of a coin “at random” from the two possibilities when we flip it?
Epistemically, yes: we don’t have sufficient information to predict it.* However, if we flip it in exactly the same way twice we get the same outcome, so it is not physically random.
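To make the distinction concrete, here is a toy sketch. The physical model and all the numbers are purely illustrative, but the point holds: the outcome is a deterministic function of the initial conditions, yet to an observer who cannot measure those conditions precisely it is indistinguishable from randomness.

```python
def flip(launch_speed, spin_rate):
    # Toy deterministic model: the face showing depends only on how
    # many half-rotations the coin completes before landing.
    airtime = 2 * launch_speed / 9.81          # seconds aloft
    half_turns = int(spin_rate * airtime * 2)  # completed half-rotations
    return "heads" if half_turns % 2 == 0 else "tails"

# Same inputs, same outcome: physically deterministic.
assert flip(2.5, 38.0) == flip(2.5, 38.0)

# But changes in the inputs too small to measure can flip the result,
# so to the observer the outcome is epistemically random.
outcomes = {flip(2.5, r) for r in (38.0, 38.1, 38.2, 38.3)}
assert outcomes == {"heads", "tails"}
```

The same shape of argument applies to AI development: a deterministic process whose inputs we cannot yet characterise well enough to predict.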
So while the process that decides what the first AI is like is not physically random, it is epistemically random until we have a good idea of which AI designs produce good outcomes and can get humans to follow that theory. For this we need something that looks, to some degree, like a theory of friendliness.
Considering we might use evolutionary methods for part of the AI creation process, randomness doesn’t look like too bad a model.
*With a few caveats. I think a coin is biased to land the same way up as it started, because some tosses make it spin rather than flip.
Edit: Oh, and no open-source AI then?
We do have an extensive body of knowledge about how to write computer programs that do useful things. The word “random” seems like a terrible mis-summary of that body of information to me.
As for “evolution” being equated to “randomness”: isn’t that one of the points creationists make all the time? Evolution has two motors, variation and selection. The first may have some random elements, but it is only one part of the overall process.
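The variation-plus-selection point can be shown in a few lines. This is a minimal sketch, not anyone's actual method; the target value, population size, and mutation step are all hypothetical. Even though variation is random, selection drags the population toward the optimum, so the outcome is very far from a uniform draw over possibilities.

```python
import random

random.seed(0)  # fixed seed: the entire run is deterministic

TARGET = 42  # hypothetical optimum the "environment" selects for

def fitness(x):
    return -abs(x - TARGET)

pop = [random.randint(0, 100) for _ in range(20)]
initial_best = max(pop, key=fitness)

for _ in range(100):
    # Variation: each individual gets a small random mutation.
    mutated = [x + random.choice([-1, 0, 1]) for x in pop]
    # Selection: keep the fittest of the mutants plus the old best (elitism).
    pop = sorted(mutated + [max(pop, key=fitness)],
                 key=fitness, reverse=True)[:20]

final_best = max(pop, key=fitness)
# Selection makes the result anything but random: fitness never degrades.
assert fitness(final_best) >= fitness(initial_best)
```

Because the previous best is always carried forward, the best fitness is monotonically non-decreasing regardless of what the random mutations do.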
I think we have a disconnect on how much we believe proper scary AIs will be like previous computer programs.
My conception of current computer programs is that they are crystallised thoughts plucked from our own minds: easily controllable and unchanging. When we get interesting AI, the programs will be morphing and far less controllable unless we have a good theory of how to control the change.
I shudder every time people say “the AI’s source code” as if it were some unchangeable thing, informative about the AI’s behaviour after the first few days of the AI’s existence.
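A minimal illustration of that worry (entirely hypothetical names and behaviour): a “program” whose behaviour lives in mutable state accumulated at runtime, so that reading its source tells you almost nothing about what it will do after it has had some experience.

```python
def make_agent():
    # The source below never changes, but the weights do.
    weights = {"greet": 1.0}

    def act(feedback=None):
        if feedback is not None:
            weights["greet"] += feedback  # runtime self-modification
        return "hello" if weights["greet"] > 0 else "silence"

    return act

agent = make_agent()
assert agent() == "hello"      # behaviour predicted by the source...
agent(-2.0)                    # ...until experience changes the state
assert agent() == "silence"    # now the source alone no longer predicts it
```

Real learning systems are this on a massive scale: the fixed source is a tiny fraction of what determines behaviour.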
I’m not sure how to resolve that difference.