Its own pain, probably. Why do you believe it will care about the pain of other beings?
Cooperation with other intelligent beings is instrumentally useful, unless the pain of others is one’s terminal value.
If one being is a thousand times more intelligent than another, such cooperation may be a waste of time.
Why do you think so? By default, I think their interaction would run like this: the much more intelligent being would easily persuade or trick the other into doing whatever it wants, so they'll cooperate.
Imagine yourself and a bug. A bug that understands numbers up to one hundred, and is even able to do basic mathematical operations, though in 50% of cases it gets the answer wrong. That's pretty impressive for a bug… but how much value would cooperation with this bug provide to you? For comparison, how much value would you get by removing such bugs from your house, or by driving your car without caring how many bugs you kill along the way?
You don't have to want to make the bugs suffer. It's enough if they have zero value to you, and you can gain some value by ignoring their pain. (You could also tell them to leave your house, but maybe they have nowhere else to go, or are just too stupid to find a way out, or they always forget and return.)
Now imagine a being with a similar attitude towards humans. It can do any kind of human thought or work better, and at a lower cost than communicating with us would take. It does not hate us; it can simply derive some value from replacing our cities with something else, from increasing radiation, etc.
(And that's still assuming a rather benevolent being with values similar to ours. Friendlier than a hypothetical Mother-Teresa-bot convinced that the most beautiful gift for a human is to be able to participate in suffering.)
Such a scenario is certainly conceivable. On the other hand, bugs do not have general intelligence. So we can only speculate about how interaction between us and much more intelligent aliens would go. By default, I’d say they’d leave us alone. Unless, of course, there’s a hyperspace bypass that needs to be built.
The conclusion doesn’t follow. Ripping apart your body to use the atoms to construct something terminally useful is also instrumentally useful.
Only if there's a general lack of atoms around. When atoms are in abundance, it's more instrumentally useful to ask me for help constructing whatever you find terminally useful.
Right, but your conclusion still doesn’t follow—my example was just to show the flaw in your logic. Generally, you have to consider the trade-offs between cooperating and doing anything else instead.
Well, of course. But which conclusion of mine do you mean doesn't follow?
But the "[of others]" part is unnecessary. If every intelligent agent optimizes away its own unnecessary pain, that is sufficient for the conclusion. Unless, of course, there exists a significant number of intelligent agents that have the pain of others as a terminal goal, or there's a serious lack of atoms for all agents to achieve their otherwise non-contradicting goals.
This is highly dependent on the strategic structure of the situation.
Since I would care, I think other intelligent beings could care too. One who cares might be enough to free us all from pain. A billion who don't care are not enough to preserve it.