We have many objective values that come from cultural history: mythologies, concepts, and other "legacy" structures built on top of them. When we call these values objective, we mean that we receive them as given and cannot change them very much. In general, they are sprawling mythologies full of rules that "help" people do the right thing "like in the past" and achieve their goals "in the end."
We also have objectively programmed values: our biological nature, the genes that drive us toward reproduction.
When something really scary happens, like bombings, wars, or other threats to survival, simple values (whether they are biological, religious, or national) take charge. These observations confirm a certain hierarchy of values and needs.
Many of the values we talk about reflect our altruistic, cosmopolitan hopes for the future, and they are not real values for most people. They are a kind of philosophical luxury that people usually take up only after succeeding at other values: biological, religious, or national. It is also an illusion that every smart person can grasp basic philosophical or ethical constructions. For many tech-savvy people it is easier to put on a comfortable political and social point of view; they have no time to study concepts like "do not do to another what you would not want done to yourself" or "treat humanity, whether in your own person or in the person of anyone else, always as an end and never merely as a means."
These concepts are too complex for most people, even tech-savvy ones with big egos. People on the outskirts of humanity who might also build AI may not understand notions like "philosophy," "terminal value," "axiom," or "epistemology." To a basically utilitarian brain, these could be just words you use to explain why you think you deserve their goods, or why they should betray the ideas of their nation for yours.
Many people live in societies where violence, nepotism, and elitism are the basis of social existence, and judging by the stability of these regimes, that arrangement is not without some foundation. People in highly competitive environments may not have time to study the humanities, may not have enough information, and may carry basic "ideology blocks." In other words, their worldview is like a pair of comfortable shoes that happen to fit well.
If you asked people, "You have a button that kills someone you don't know. Nobody will ever learn it was you, and you will get a million dollars. Will you press it?", then for many of them, perhaps 10% to 50%, the answer would be yes, or even "How many times can I press it?" Many AI creators could be equally blind to cosmopolitan needs and values. They may not see the dilemma in building such buttons when they are only responsible for a small part of the button, or for a small part of the instruction to press it.
Maybe it is necessary to build moral and value monitoring into AI products so that people cannot easily use them to harm others (perhaps even into open-source products, made so advanced that AI builders would have no reason to use alternative, unmonitored tools). Some defense against building such systems independently could also be put in place: if someone assembles a large GPU cluster or something similar, they would have to seek help from advanced AI developers who apply basic precautions against existential threats. And some kind of "red map" should be drawn up so that the creators of an AI, or those who observe its creation, can clearly recognize the signs that something is going completely wrong.
Of course, we cannot know exactly how to solve AGI safety, because we do not know what to expect. But maybe we could find approaches that are, with some probability, good, and identify what is definitely wrong. Could we at least have a red map? What could each of us do to be less wrong within it?