Making it more accurate is not the same as making it more intelligent. The question is: How does making something “more intelligent” change the nature of the inaccuracies? In translation especially, there can be a bias without any real inaccuracy.
Goallessness at the level of the program is not what makes translators safe. They are safe because neither they nor any component is intelligent.
Most professional computer scientists and programmers I know routinely talk about “smart”, “dumb”, or “intelligent” algorithms. In context, a smarter algorithm exploits more properties of the input or the problem. I think this is a reasonable use of language, and it’s the one I had in mind.
(I am open to using some other definition of algorithmic intelligence, if you care to supply one.)
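To make concrete the sense in which I mean “smarter”, here is a minimal sketch (my own illustration, not anything from this thread): two ways to look up an item, where the second is smarter only because it exploits an extra property of the input, namely that the list is sorted.

```python
def find_linear(xs, target):
    """'Dumb': assumes nothing about xs; checks every element in turn."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def find_binary(xs, target):
    """'Smarter': exploits the fact that xs is sorted to discard half
    of the remaining candidates at every step."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Nothing about the second function involves goals or agency; it simply uses more of the problem’s structure.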
I don’t see why making an algorithm smarter or more general would make it dangerous, so long as it stays fundamentally a (non-self-modifying) translation algorithm. There certainly will be biases in a smart algorithm. But dumb algorithms and humans have biases too.
I generally go with cross-domain optimization power. http://wiki.lesswrong.com/wiki/Optimization_process Note that an optimization target is not the same thing as a goal, and the process doesn’t need to exist within obvious boundaries. Evolution is goalless and disembodied.
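To give a rough sense of what that page means by optimization power (the function below is my own sketch, not the wiki’s formalism): measure, in bits, how small a target a process hits relative to the space of possible outcomes, using a preference ordering only to evaluate the result, not as anything the process “wants”.

```python
import math

def optimization_power_bits(outcomes, achieved, rank):
    """Bits of optimization: -log2 of the fraction of possible outcomes
    that score at least as well (under `rank`) as the outcome actually
    achieved. `rank` is used only to evaluate; the process being
    measured need not have it as a goal."""
    at_least_as_good = sum(1 for o in outcomes if rank(o) >= rank(achieved))
    return -math.log2(at_least_as_good / len(outcomes))

# Hitting the single best of 1024 equally likely outcomes is 10 bits,
# whether the thing that hit it was an agent, evolution, or a subroutine.
print(optimization_power_bits(range(1024), 1023, rank=lambda o: o))
```

On this measure, evolution scores high despite having no goal and no boundary you could draw around it.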
If an algorithm is smart because a programmer has encoded everything that needs to be known to solve a problem, great. That probably reduces the potential for error, especially in well-defined environments. But this is not what’s going on in translation programs, or even the voting system here (based on reddit’s). As systems like this creep up in complexity, their errors and biases become more subtle (especially since we ‘fix’ them so that they usually work well). If an algorithm happens to be powerful in multiple domains, then the errors themselves might be optimized for something entirely different, and perhaps unrecognizable.
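As a small illustration of how biases hide in a system like that, here is a simplified reddit-style “hot” score (the constants are assumptions on my part, and whatever this site actually runs may differ):

```python
import math
from datetime import datetime, timezone

# Arbitrary epoch; only used to turn timestamps into a growing number.
EPOCH = datetime(2005, 12, 8, tzinfo=timezone.utc)

def hot(ups, downs, posted_at):
    """Simplified reddit-style ranking score.

    The log term means the first 10 net votes count as much as the
    next 90; the time term gives newer posts a constant head start.
    Neither was anyone's 'goal', but both are biases baked in."""
    score = ups - downs
    order = math.log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (posted_at - EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)
```

Because the system usually ranks things sensibly, these built-in biases rarely surface as recognizable errors.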
By your definition I would tend to agree that they are not dangerous, so long as their generalized capabilities are below human level (which seems to be the case for everything so far), with some complex caveats. For example, ‘non-self-modifying’ likely gives a false sense of security: if an AI has access to a medium which can be used to do computations, and the AI is good at making algorithms, then it could build a powerful, if not superintelligent, program in that medium.
Also, my concern in this thread has never been about the translation algorithm, the tax program, or even the paperclipper. It’s about some sub-process which happens to be a powerful optimizer (in a hypothetical situation where we do more AI research on the premise that it is safe as long as it sits inside a goalless program).