The post has five bullet points at the end, and this does not respond to any of them. The post explores the nature of values that humans have, and values in general; Vladimir’s comment is to the effect that we can’t investigate values, and must design a Friendly AI without understanding the problem domain it will face.
> Vladimir’s comment is to the effect that we can’t investigate values, and must design a Friendly AI without understanding the problem domain it will face.
We can’t investigate the content of human values in a way that is useful for constructing Friendly AI, and we can’t investigate what specifically Friendly AI will do. We can investigate values for the purpose of choosing better human-designed policies.
> We can’t investigate the content of human values in a way that is useful for constructing Friendly AI
Do you want to qualify that some way? I interpret it as meaning that learning about values has no relevance to constructing an AI whose purpose is to preserve values. It’s almost an anti-tautology.
> I interpret it as meaning that learning about values has no relevance to constructing an AI whose purpose is to preserve values. It’s almost an anti-tautology.
The classical analogy is that if you need to run another instance of a given program on a faster computer, figuring out what the program does is of no relevance; you only need to correctly copy its machine code and correctly interpret it on the new machine.
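To make the analogy concrete, here is a minimal illustrative sketch in Python (the opcode names and the example program are invented for illustration): a tiny stack-machine interpreter that executes each instruction faithfully while encoding nothing about what the program as a whole computes.

```python
# Illustrative sketch: a tiny stack-machine interpreter. The "new machine"
# runs the program by copying its instructions and executing each opcode
# faithfully; nothing here encodes what the program as a whole is for.

def run(program):
    """Interpret a list of (opcode, argument) pairs on a fresh stack."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack

# The interpreter reproduces the program's behaviour without any notion of
# its purpose -- the analogue of preserving values without a theory of them.
program = [("PUSH", 3), ("PUSH", 4), ("ADD", None), ("PUSH", 2), ("MUL", None), ("HALT", None)]
print(run(program))  # -> [14]
```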
If you need to run another instance of a given program on a faster computer, but you don’t know what an algorithm is, or what part of the thing in front of you is a “computer” and what part is a “computer program”, and you have not as of yet discovered the concept of universal computation, nor are certain whether the computer hardware, or even arithmetic itself, operates deterministically --
-- then you should take some time to study the thing in front of you and figure out what you’re talking about.
You’d probably need to study how these “computers” work in general, not how to change the background color in documents opened with a word processor that runs on the thing. A better analogy in the direction you took is uploading: we need to study neurons, not beliefs that a brain holds.
You seem to think that values are just a content problem, and that we can build a mechanism now and fill the content in later. But the whole endeavor is full of unjustified assumptions about what values are, and what values we should pursue. We have to learn a lot more about what values are, what values are possible, what values humans have, and why they have them, before we can decide what we ought to try to do in the first place.
> We have to learn a lot more about what values are, what values are possible, what values humans have, and why they have them, before we can decide what we ought to try to do in the first place.
Of course. Only the finer detail is a content problem.
> But the whole endeavor is full of unjustified assumptions about what values are, and what values we should pursue.
Not that I know of. On the contrary, the assumption is that one shouldn’t posit statements about which values humans actually have, and what kind of mathematical structure values are remains an open problem.