Thanks for pointing to the orthogonality thesis as a reason to believe the chance is low that advanced aliens would be nice to humans. I followed up by reading Bostrom’s “The Superintelligent Will,” and I narrowed down my disagreement to how this point is interpreted:
In a similar vein, even if there are objective moral facts that any fully rational agent would comprehend, and even if these moral facts are somehow intrinsically motivating (such that anybody who fully comprehends them is necessarily motivated to act in accordance with them) this need not undermine the orthogonality thesis. The thesis could still be true if an agent could have impeccable instrumental rationality even whilst lacking some other faculty constitutive of rationality proper, or some faculty required for the full comprehension of the objective moral facts. (An agent could also be extremely intelligent, even superintelligent, without having full instrumental rationality in every domain.)
Even if it’s possible for an agent to have impeccable instrumental rationality while lacking epistemic rationality to some degree, I expect that the typical path to very advanced intelligence eventually involves growing both in concert, as many here at Less Wrong are working to do. In other words, a highly competent general intelligence is likely to be curious about objective facts across a very diverse range of topics.
So while aliens could be instrumentally advanced enough to make it to Earth without ever having made basic discoveries in some particular area, there’s no reason for us to expect that morality specifically is the area where they will be ignorant or deluded. A safer bet is that, in expectation, they have learned at least as many objective facts as humans have about any given topic, and that a topic where the aliens have blind spots relative to some humans is one they would be curious to learn about from us.
A policy of unconditional harmlessness and friendliness toward all beings is a Schelling point that could be discovered in many ways. I grant that humans may have it relatively easy to mature on the moral axis because we are conscious, which may or may not be typical for general intelligence. It means we can directly experience within our own awareness facts about how happiness is preferred to suffering, how anger and violence lead to suffering, how compassion and equanimity lead to happiness, and so on. We can also see these processes operating in others. But even a superintelligence with no capacity for happiness is likely to learn whatever it can from humans, and something like love would be a priceless treasure to discover on Earth.
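To make the Schelling-point framing concrete, here is a toy two-player coordination game (the payoff numbers are purely my own illustrative assumptions, not anything from the discussion). Mutual restraint and mutual harm both turn out to be Nash equilibria, which is exactly the situation Schelling’s focal-point idea addresses: when several equilibria exist, agents who cannot negotiate tend to coordinate on the one that stands out, here the one best for everyone.

```python
# Toy game: two agents (say, humans and visiting aliens) each choose to
# "harm" or "refrain". Payoffs are hypothetical numbers chosen so that
# mutual restraint is the best outcome for both sides.
PAYOFFS = {  # (row_action, col_action) -> (row_payoff, col_payoff)
    ("refrain", "refrain"): (3, 3),
    ("refrain", "harm"):    (0, 1),
    ("harm",    "refrain"): (1, 0),
    ("harm",    "harm"):    (1, 1),
}
ACTIONS = ["refrain", "harm"]

def is_nash_equilibrium(row, col):
    """True if neither player can gain by unilaterally deviating."""
    r_pay, c_pay = PAYOFFS[(row, col)]
    row_ok = all(PAYOFFS[(alt, col)][0] <= r_pay for alt in ACTIONS)
    col_ok = all(PAYOFFS[(row, alt)][1] <= c_pay for alt in ACTIONS)
    return row_ok and col_ok

equilibria = [(r, c) for r in ACTIONS for c in ACTIONS
              if is_nash_equilibrium(r, c)]
```

Checking every action pair finds two equilibria, ("refrain", "refrain") and ("harm", "harm"); the Schelling-point claim is that independent agents would still tend to settle on the mutually better one, even without communication.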
If aliens show up here, I give them at least a 50% chance of being as knowledgeable as the wisest humans in matters of morality. That’s ten times more than Yudkowsky gives them and perhaps infinitely more than Hotz does!
Have humans learnt any objective moral facts? What sort of thing is an objective moral fact? Something like an abstract mathematical theorem, a perceivable object, or a game-theoretic equilibrium...?
My view is that humans have learned objective moral facts, yes. For example:
If one acts with an angry or greedy mind, suffering is guaranteed to follow.
I posit that this is not limited to humans. Some figures famous in history for their wisdom who I expect would agree include Mother Teresa, Leo Tolstoy, Marcus Aurelius, Martin Luther King Jr., Gandhi, Jesus, and the Buddha.
I don’t claim that all humans know all facts about morality. Sadly, it’s probably the case that most people are quite lost, ignorant in matters of virtuous conduct, which is why they find life to be so difficult.
It’s not a moral fact, it’s just a fact. A moral fact is something of the form “and that means that acting with an angry or greedy mind is wrong.”

The form you described is called an argument. It requires a series of facts. If you’re working with propositions such as
All beings want to be happy.
No being wants to suffer.
Suffering is caused by confusion and ignorance of morality.
...
then I suppose it could be called a “moral” argument made of “moral” facts and “moral” reasoning, but it’s really just the regular form of an argument made of facts and reasoning. The special thing about moral facts is that direct experience is how they are discovered, and it is that same experiential reality to which they exclusively pertain. I’m talking about the set of moment-by-moment first-person perspectives of sentient beings, such as the familiar one you can investigate right now in real time. Without a being experiencing a sensation come and go, there is no moral consideration to evaluate. NULL.
“Objective moral fact” is Bostrom’s term from the excerpt above, and the phrasing probably isn’t ideal for this discussion. Tabooing such words is no easy feat, but let’s do our best to unpack this. Sticking with the proposition we agree is factual:
If one acts with an angry or greedy mind, suffering is guaranteed to follow.
What kind of fact is this? It’s a fact that can be discovered and/or verified by any sentient being upon investigation of their own direct experience. It is without exception. It is highly relevant for benefiting oneself and others—not just humans. For thousands of years, many people have been revered for articulating it and many more have become consistently happy by basing their decisions on it. Most people don’t; it continues to be a rare piece of wisdom at this stage of civilization. (Horrifyingly, a person on the edge of starting a war or shooting up a school would currently receive advice from ChatGPT to increase “focused, justified anger.”)
Humankind has discovered and recorded a huge body of such knowledge, whatever we wish to call it. If the existence of well-established, verifiable, fundamental insights into the causal nature of experiential reality comes as a surprise to anyone working in fields like psychotherapy or AI alignment, I would urge them to make an earnest and direct inquiry into the matter so they can see firsthand whether such claims have merit. Given the chance, I believe many nonhuman general intelligences would also try to understand this kind of information, and succeed.
(Phew! I packed a lot of words into this comment because I’m too new here to speak more than three times per day. For more on the topic, see the chapter on morality in Dr. Daniel M. Ingram’s book that was reviewed on Slate Star Codex.)