You are conflating at least four senses of the word “value” into one concept and then assuming that those four senses must coincide.
What we call “human value” is the actual, abstract, difficult-to-know set of things that human beings want, and the things that we don’t know that we want but we would want if we knew about them, and maybe even the things that we wouldn’t want but that we might be glad we had after we got them. Discovering the content of “human value” is an enormous, unsolved philosophical problem.
What you might call “economic value” is purely defined in terms of trades between agents and can only be grounded in relative comparison between commodities.
In your post you throw in a reference to “value to society”. Something might be very valuable to society even though individuals fail to coordinate and pay for it; this is the familiar public-goods problem, a close relative of the tragedy of the commons.
Individual humans are known to value weird, idiosyncratic things. You might call these “individual values”. Perhaps it would be valuable to society for some person to be put to death, but this would be contrary to that person’s individual values.
“Value to society” is not identical to “value to an individual” is not identical to “economic value” is not identical to “human value”, and none of these is identical to “Ideal Currency value”. There is no apparent reason why Ideal Currency would converge on “human value”, which is the one I would say should be optimized. Swapping these various senses of the word in your rhetoric as though they were interchangeable is leading you into large errors, and assuming that an AI will just know instinctively which definition you mean is crippling to your thesis.