Thanks for the comment! Yeah, I guess I was having a bit too much fun in writing my post to explicitly define all the terms I used. You say you “don’t think ethics is something you can discover.” But perhaps I should’ve been more clear about what I meant by “figuring out ethics.” According to merriam-webster.com, ethics is “a set of moral principles : a theory or system of moral values.” So I take “figuring out ethics” to basically be figuring out a system by which to make decisions based on a minimum agreeable set of moral values of humans. Whether such a “minimum agreeable set” exists or not is of course debatable, but that’s what I’m currently trying to “discover.”
Towards that end, I’m working on a system by which to calculate the ethics of a decision in a given situation. The system recommends that we maximize net “positive experiences.” In my view, what we consider to be “positive” is highly dependent on our self-esteem level, which in turn depends on how much personal responsibility we take and how much we follow our conscience. In this way, the system effectively takes into account “no pain, no gain” (following one’s conscience can be painful, and so can building responsibility).
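To make the idea concrete, here’s a toy sketch in Python of the kind of scoring rule described above. Everything in it is an illustrative assumption on my part (the function names, the linear weights, and the capped self-esteem model are all invented for the example), not the actual calculator:

```python
# Toy illustration of the scoring rule sketched above. All names and
# weights here are invented for the example; the real calculator is
# still a work in progress.

def self_esteem(responsibility: float, conscience: float) -> float:
    """Hypothetical model: self-esteem in [0, 1], rising with how much
    responsibility one takes and how much one follows one's conscience."""
    return min(1.0, 0.5 * responsibility + 0.5 * conscience)

def experience_value(raw_positivity: float, esteem: float) -> float:
    """Hypothetical weighting: how 'positive' an experience counts as
    is scaled by the experiencer's self-esteem ('no pain, no gain')."""
    return raw_positivity * esteem

def decision_score(affected_people) -> float:
    """Net positive experience across everyone a decision affects.
    affected_people: list of (raw_positivity, responsibility, conscience)."""
    return sum(
        experience_value(p, self_esteem(r, c))
        for p, r, c in affected_people
    )

# Two people affected by a candidate decision:
score = decision_score([(1.0, 0.8, 0.6), (0.5, 0.2, 0.4)])  # ≈ 0.85
```

The point of the sketch is only the shape of the computation: experiences are aggregated across everyone affected, and each person’s contribution is modulated by factors like responsibility and conscience rather than raw pleasure alone.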
I agree that I’d like us to retain our humanity.
Regarding AI promoting certain political values, I don’t know if there’s any way around that happening. People pretty much always want to push their views on others, so if they have control of an AI, they’ll likely use it as a tool for this purpose. Personally, I’m a Libertarian, although not an absolutist about it. I’m trying to design my ethics calculator to leave room for people to have as many options as they can without infringing unnecessarily on others’ rights. Having options, including the option to make mistakes and even to not always “naively” maximize value, is necessary to raise one’s self-esteem, at least the way I see it. Thanks again for the comment!
(I likely wrote too much. Don’t feel pressured to read all of it.) Everything this community is trying to do (like saving the world) is extremely difficult, but we try anyway, and it’s sort of interesting/fun. I’m in over my head myself; I just think that psychological (rather than logical or biological) insights about morality are rare despite being important for solving the problem.
I believe that you can make a system of moral values, but a mathematical formalization of it would probably be rather vulgar (and either based on human nature or constructed from absolutely nothing). Being honest about moral values is itself immoral, for the same reason that saying “Hi, I want money” at a job interview is considered rude. I believe that morality is largely aesthetic, but exposing and breaking illusions, and pointing out all the elephants in the room, just gets really ugly. The Tao Te Ching says something like “The great person doesn’t know that he is virtuous, therefore he is virtuous.”
Why do we hate cockroaches, wasps and rats, but love butterflies and bees? They differ a little in how useful they are, and some have histories of causing problems for humanity, but I think the bigger factor is that we like beautiful and cute things. Think about that: we have no empathy for bugs unless they’re cute, and we call ourselves ethical? In the anime community they like to say “cute is justice”, but I can’t help but take this sentence literally. The punishment people face is inversely proportional to how cute they are (leading to racial and gender bias in criminal sentencing). We also like people who are beautiful (the exceptions are when beautiful people have ugly personalities, but that too is based on aesthetics). We consider people guilty when we know that they know what they did wrong. This makes many act less mature and intelligent than they are (consider the Japanese derogatory colloquialism “burikko”: a woman who acts cute by playing innocent and helpless. Thought of as a self-defense mechanism formed in one’s childhood, it makes a lot of sense. Some people hate this, either out of cute-aggression or as an antidote to the deception inherent in the social strategy of feigning weakness.)
Exposing these things is a problem, since most people who talk about morality do so in beautiful ways (“Oh how wonderful it would be if everyone could prosper together!”), which still exists within the pretty social illusions we have made. And while I have no intention of supporting the incel worldview, they’re at least a little bit correct about some of their claims, which are rooted in evolutionary psychology. Mainstream psychology doesn’t take them seriously, but that’s not because they’re wrong; it’s because they’re ugly parts of reality. Looks, height, intelligence and personality traits follow a normal distribution, and some are simply dealt better cards than others. We want the world to be fair to the extent that we ignore evidence of unfairness.
The way I solved this for myself, and made my own world beautiful again, was to realize that this is all just our instincts. Discriminating is how life works; it’s “natural selection” every step of the way. Those who complain the most about this are themselves sick people who hate this part of themselves and project it onto others, “exposing” their human behaviour. In short: we’re innocent, like animals are innocent. If you interfere with this process of selection, it’s likely that society will collapse because it stops selecting for healthy and functional parts. This will sound harsh, but we need to deny parasitic behaviour in order to motivate people to develop agency and responsibility for themselves.
Anyway, just by bringing up “responsibility”, you take a non-hedonistic view on things, which is much more healthy than the angle of most moralizers (only a healthy person can design a healthy morality). If you create a simple system which doesn’t expose all the variables, I believe it’s possible. Inequality could be justified partly as a meritocracy in which one is rewarded for responsibility. You can always climb the ladder if you want, but you’d realize that there’s a sacrifice behind every privilege, which would reduce the jealousy/general hatred against those of higher standing.
they’ll likely use it as a tool for this purpose
Yes, agreed entirely. I also lean libertarian, but I think this is a privileged (or in my eyes, healthy) worldview for people who have developed themselves as individuals and therefore have a certain level of self-esteem. People like us tend to be pro-freedom, but we can also handle freedom. The conservatives lock everything down with rules; they think that freedom results in degeneracy. The progressives are pro-freedom in some sense, but they’re also terrified of my freedom of speech and want to restrict it, and the freedom they give society is being used to excuse sick and hedonic indulgence, which is basically degeneracy. The truth about freedom is this: if you don’t want to be controlled, you need to control yourself, and you can do anything as long as you remain functional. Can you handle drugs/sexual freedom/alcohol/gambling? Then restricting you would be insulting you: you know best! But if you’re a hedonist, prone to addiction, prone to running away from your problems and responsibilities… then giving you freedom would be a vice. Another insight: we can’t agree because different people need different rules. Some groups think “Freedom would destroy me, so it would destroy others”, others think “I can handle freedom, I don’t want a nanny state!”, and others think “Everyone is so unfair, if only I had more freedom!”. These three groups would be: self-aware degenerates, self-aware healthy people, and degenerates lacking the self-awareness that they’re degenerate.
Necessary to raise one’s self-esteem
Nice intuition again! Lacking self-esteem is likely the driving force behind the rising mental illness in the modern world. Ted Kaczynski warned that this would happen, because:
1: Technology is taking away people’s autonomy.
2: We’re comparing ourselves to too many other people. If you were born in a small village you could be the best at something, but in the modern world, even a genius will feel like a drop in the ocean.
3: We’re being controlled by people far away, whom we can’t even reach with our complaints.
All of these produce a feeling of powerlessness/helplessness and undermine the need for agency, which harms our self-esteem, which in turn breaks our spirits. This is one of the reasons I think globalization is psychologically unhealthy: simpler lives are easier to make work. Even communism can work as long as the scale is small enough (say, 100 or 200 people).
My worldview is influenced by Nietzsche. If you want something less brutal, I suggest you visit qri.org; they explore consciousness and various ways of maximizing valence without creating a hedonistic society. Basically following a reward model rather than a punishment model, or creating blissful/spiritual states of mind which maximize productivity (unlike weed, etc.) without blunting emotional depth (as stimulants tend to do). Such people would naturally care about the well-being of others.
Thanks for the comment. I do find that a helpful way to think about other people’s behavior is that they’re innocent, like you said, and they’re just trying to feel good. I fully expect that the majority of people are going to hate at least some aspect of the ethics calculator I’m putting together, in large part because they’ll see it as a threat to them feeling good in some way. But I think it’s necessary to have something consistent to align AI to, i.e., it’s better than the alternative.