I think my reply to Lurker, above, might clarify some things. To answer your question:
Making an operating system is easy. Deciding which operating system should be used is harder. This is true despite the fact that an operating system’s performance on most criteria can easily be assessed. Assessing whether an operating system is fast is easier than assessing whether a universe is “just”, for example. Also, choosing one operating system from a set of preexisting options is much easier than choosing one future out of all the possibilities that could be created (unimaginably many). Finally, there are far fewer tradeoffs involved in the choice of an operating system than in the choice of satisficing criteria. You might trade money for speed, or speed for usability, and be able to guess correctly a reasonably high percentage of the time. Few other tradeoffs exist, so the situation is comparatively simple. If you compared two AIs with different satisficing protocols, it would be very hard to guess which one did a better job just by looking at the results they created, unless one failed spectacularly. But the fact that it’s difficult to judge a tradeoff doesn’t mean it doesn’t exist. Because I think human values are complicated, I think such tradeoffs are very likely to exist.
On LW, we are used to imagining spectacular failures that result from tiny mistakes. An AI designed to make people smile creates a monstrous stitching of fleshy carpet. But we are not used to imagining failures that are less obvious, yet still awful. I think that’s a problem.
I think people understand that this is a danger, but by its nature it isn’t one you can spend much time imagining in concrete detail. Also, UGC has little to do with this problem.