Ah, but Kawoomba doesn’t expect ethics to regulate other people, because he thinks everyone has incompatible goals. Thus ethics serves purely to define your goals.
Which, honestly, should simply be called “goals”, not “ethics”, but there you go.
Yea, honestly I’ve never seen the exact distinction between goals which have an ethics-rating, and goals which do not. I understand that humans share many ethical intuitions, which isn’t surprising given our similar hardware. Also, that it may be possible to define some axioms for “medieval Han Chinese ethics” (or some subset thereof), and then say we have an objectively correct model of their specific ethical code. About the shared intuitions amongst most humans, those could be e.g. “murdering your parents is wrong” (not even “murder is wrong”, since that varies across cultures and circumstances). I’d still call those systems different, just as different cars can have the same type of engine.
Also, I understand that different alien cultures, using different “ethical axioms”, or whatever they base their goals on, do not invalidate the medieval Han Chinese axioms, they merely use different ones.
My problem with “objectively correct ethics for all rational agents” is, you could say, where the compellingness of any particular system comes in. There is reason to believe an agent such as Clippy could exist, and its very existence would contradict some “‘rational’ corresponds to a fixed set of ethics” rule. If someone would say “well, Clippy isn’t really rational then”, that would just be torturously warping the definition of “rational actor” to “must also believe in some specific set of ethical rules”.
If I remember correctly, you say at least for humans there is a common ethical basis which we should adopt (correct me if I’m wrong). I guess I see more variance and differences where you see common elements, especially going into the future. Should some bionically enhanced human, or an upload on a space station which doesn’t even have parents, still share all the same rules for “good” and “bad” as an Amazon tribe living in an enclosed reservation? “Human civilization” is more of a loose umbrella term, and while there certainly can be general principles which some still share, I doubt there’s that much in common in the ethical codex of an African child soldier and Donald Trump.
A number of criteria have been put forward. For instance, do as you would be done by. If you don’t want to be murdered, murder is not an ethical goal.
My problem with “objectively correct ethics for all rational agents” is, you could say, where the compellingness of any particular system comes in. There is reason to believe an agent such as Clippy could exist, and its very existence would contradict some “‘rational’ corresponds to a fixed set of ethics” rule. If someone would say “well, Clippy isn’t really rational then”, that would just be torturously warping the definition of “rational actor” to “must also believe in some specific set of ethical rules”.
The argument is not that rational agents (for some value of “rational”) must believe in some rules; it is rather that they must not adopt arbitrary goals. Also, the argument only requires a statistical majority of rational agents to converge, because of the P<1.0 thing.
Should some bionically enhanced human, or an upload on a space station which doesn’t even have parents, still share all the same rules for “good” and “bad” as an Amazon tribe living in an enclosed reservation?
Maybe not. The important thing is that variations in ethics should not be arbitrary—they should be systematically related to variations in circumstances.
I’m not disputing that there are goals/ethics which may be best suited to take humanity along a certain trajectory, towards a previously defined goal (space exploration!). Given a different predefined goal, the optimal path there would often be different. Say, ruthless exploitation may have certain advantages in empire building, under certain circumstances.
The Categorical Imperative in all its variants may be a decent system for humans (not that anyone really uses it).
But is the justification for its global applicability that “if everyone lived by that rule, average happiness would be maximized”? That (or any other such consideration) itself is not a mandatory goal, but a chosen one. Choosing different criteria to maximize (e.g. no one less happy than x) would yield different rules, different from the Categorical Imperative. If you find yourself to be the worshipped god-king in some ancient Mesopotamian culture, there may be many more effective ways to make yourself happy than following the Categorical Imperative. How can it still be said to be “correct”/optimal for the king, then?
So I’m not saying there aren’t useful ethical systems (as judged in relation to some predefined course), but that because the ultimate goals of various rational agents (happiness, paperclips, replicating yourself all over the universe) and the associated optimal ethics vary, there cannot be one system that optimizes for all conceivable goals.
My argument against moral realism and assorted positions is that if you had an axiomatic system from which it followed that strawberry is the best flavor of ice cream, but other agents which are just as intelligent, with just as much optimizing power, could use different axiomatic systems leading to different conclusions, how could one such system possibly be taken to be globally correct and compelling-to-adopt across agents with different goals?
Gandhi wouldn’t take a pill which might transform him into a murderer. Clippy would not willingly modify itself such that it suddenly had different goals. Once you’ve taken a rational agent apart and know its goals and, as a component, its ethical subroutines, there is no further “core spark” which really yearns to adopt the Categorical Imperative. Clippy may choose to use it, for a time, if it serves its ultimate goals. But any given ethical code will never be optimal for arbitrary goals, in perpetuity (proof by example). Why, then, would a particular code following from particular axioms be adopted by all rational agents?
But is the justification for its global applicability that “if everyone lived by that rule, average happiness would be maximized”?
Well, no, that’s not Kant’s justification!
That (or any other such consideration) itself is not a mandatory goal, but a chosen one.
Why would a rational agent choose unhappiness?
If you find yourself to be the worshipped god-king in some ancient Mesopotamian culture, there may be many more effective ways to make yourself happy than following the Categorical Imperative.
Yes, but that wouldn’t count as ethics. You wouldn’t want a Universal Law that one guy gets the harem, and everyone else is a slave, because you wouldn’t want to be a slave, and you probably would be.
This is brought out in Rawls’ version of Kantian ethics: you pretend to yourself that you are behind a veil that prevents you knowing what role in society you are going to have, and choose rules that you would want to have if you were to enter society at random.
My argument against moral realism and assorted positions is that if you had an axiomatic system from which it followed that strawberry is the best flavor of ice cream, but other agents which are just as intelligent, with just as much optimizing power, could use different axiomatic systems leading to different conclusions,
You don’t have object-level stuff like ice cream or paperclips in your axioms (maxims); you have abstract stuff, like the Categorical Imperative. You then arrive at object-level ethics by plugging in details of actual circumstances and values. These will vary, but not arbitrarily; arbitrary variation is the disadvantage of anything-goes relativism.
how could one such system possibly be taken to be globally correct and compelling-to-adopt across agents with different goals?
The idea is that things like the CI have rational appeal.
Once you’ve taken a rational agent apart and know its goals and, as a component, its ethical subroutines, there is no further “core spark” which really yearns to adopt the Categorical Imperative.
Rational agents will converge on a number of things because they are rational. None of them will think 2+2=5.
Scenario:
1) You wake up in a bright box of light, no memories. You are told you’ll presently be born into an absolute monarchy, your role randomly chosen. You may choose any moral principles that should govern that society. The Categorical Imperative would on average give you the best result.
2) You are the monarch in that society, you do not need to guess which role you’re being born into, you have that information. You don’t need to make all the slaves happy to help your goals, you can just maximize your goals directly. You may choose any moral principle you want to govern your actions. The Categorical Imperative would not give you the best result.
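To put rough numbers on that “on average” claim, here is a minimal sketch; the role counts and the payoffs are invented purely for illustration, not taken from anyone’s actual argument:

```python
# Toy comparison of two rule sets, judged (a) behind a veil of ignorance
# (your role is assigned at random) and (b) as a monarch who already knows his role.
# All numbers below are made up for illustration.

roles = {"monarch": 1, "slave": 999}  # hypothetical population

payoffs = {  # utility of each role under each rule set
    "egalitarian (CI-like)": {"monarch": 60, "slave": 50},
    "monarch-favouring":     {"monarch": 100, "slave": 5},
}

population = sum(roles.values())

for rules, utility in payoffs.items():
    behind_veil = sum(roles[r] * utility[r] for r in roles) / population
    as_monarch = utility["monarch"]
    print(f"{rules:22}  expected (random role): {behind_veil:.2f}   known monarch: {as_monarch}")

# The egalitarian rules win in expectation (about 50.0 vs 5.1), but the informed
# monarch does better under the monarch-favouring rules (100 vs 60).
```

Which rule set looks “best” therefore depends on whether you evaluate it from behind the veil or from the throne.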
A different scenario: Clippy and Anti-Clippy sit in a room. Why can they not agree on epistemic facts about the most accurate laws of physics and other Aumann-mandated agreements, yet then go out and each optimize/reshape the world according to their own goals? Why would that make them not rational?
Lastly, whatever Kant’s justification, why can you not optimize for a different principle (peak happiness versus average happiness)? What makes any particular justifying principle correct across all rational agents? Here come my algae!
You are the monarch in that society, you do not need to guess which role you’re being born into, you have that information. You don’t need to make all the slaves happy to help your goals, you can just maximize your goals directly. You may choose any moral principle you want to govern your actions. The Categorical Imperative would not give you the best result.
For what value of “best”? If the CI is the correct theory of morality, it will necessarily give you the morally best result. Maybe your complaint is that it wouldn’t maximise your personal utility. But I don’t see why you would expect that. Theories like utilitarianism that seek to maximise group utility don’t promise to make everyone blissfully happy individually. Some will lose out.
A different scenario: Clippy and Anti-Clippy sit in a room. Why can they not agree on epistemic facts about the most accurate laws of physics and other Aumann-mandated agreements, yet then go out and each optimize/reshape the world according to their own goals? Why would that make them not rational?
It would be irrational for Clippy to sign up to an agreement with Beady according to which Beady gets to turn Clippy and all his clips into beads. It is irrational for agents to sign up to anything which is not in their interests, and it is not in their interests to have no contract at all. So rational agents, even if they do not converge on all their goals, will negotiate contracts that minimise their disutility. Clippy and Beady might take half the universe each.
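To see why a negotiated split can beat having no contract at all for both sides, here is a minimal sketch; the win probability and the fraction of the universe destroyed by fighting are invented assumptions:

```python
# Toy bargaining sketch: Clippy wants paperclipped matter, Beady wants beaded matter,
# and they dispute a universe of 100 units. All numbers are assumptions for illustration.

UNIVERSE = 100.0
WIN_PROB = 0.5    # assumed: neither side is favoured if they fight
DESTROYED = 0.4   # assumed: fighting destroys 40% of the usable matter

# Expected matter each side converts to its own stuff if they fight it out.
expected_from_war = WIN_PROB * UNIVERSE * (1 - DESTROYED)  # 0.5 * 100 * 0.6 = 30.0

def contract_payoff(share):
    """Matter a side gets under a negotiated split, with nothing destroyed."""
    return share * UNIVERSE

print("expected from fighting:", expected_from_war)      # 30.0 for each side
print("50/50 contract        :", contract_payoff(0.5))   # 50.0 for each side

# Any contract giving each side more than 30 units beats fighting for both of them,
# so even agents with exactly opposed goals have a range of deals they would sign.
```

If one side is much stronger, its expected value from fighting rises and the range of acceptable deals shifts in its favour; that connects to the strong-Clippy versus weak-Beady point raised later in the thread.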
Lastly, whatever Kant’s justification, why can you not optimize for a different principle (peak happiness versus average happiness)? What makes any particular justifying principle correct across all rational agents?
If you think RAs can converge on an ultimately correct theory of physics (which we don’t have), what is to stop them converging on the correct theory of morality, which we also don’t have?
Not very rational for those to adopt a losing strategy (from their point of view), is it? Especially since they shouldn’t reason from a point of “I could be the king”. They aren’t, and they know that. No reason to ignore that information, unless they believe in some universal reincarnation or somesuch.
It is irrational for agents to sign up to anything which is not in their [added: current] interests
Yes. Which is why rational agents wouldn’t just go and change/compromise their terminal values, or their ethical judgements (=no convergence).
what is to stop them converging on the correct theory of morality, which we also don’t have?
Starting out with different interests. A strong Clippy accommodating a weak Beady wouldn’t be in its best self-interest. It could just employ a version of morality which is based on some tweaked axioms, yielding different results.
There are possibly good reasons for us as a race to aspire to working together. There are none for a domineering Clippy to take our interests into account; yielding to any supposedly “correct” morality would strictly damage its own interests.
Not very rational for those to adopt a losing strategy (from their point of view), is it? Especially since they shouldn’t reason from a point of “I could be the king”. They aren’t, and they know that. No reason to ignore that information, unless they believe in some universal reincarnation or somesuch.
Someone who adopts the “I don’t like X, but I respect people’s right to do it” approach is sacrificing some of their values to their evaluation of rationality and fairness. They would not do that if their rationality did not outweigh other values. But they are not having all their values maximally satisfied, so in that sense they are losing out.
Yes. Which is why rational agents wouldn’t just go and change/compromise their terminal values, or their ethical judgements (=no convergence).
There’s no evidence of terminal values. Judgements can be updated without changing values.
Starting out with different interests. A strong Clippy accommodating a weak Beady wouldn’t be in its best self-interest. It could just employ a version of morality which is based on some tweaked axioms, yielding different results.
Not all agents are interested in physics or maths. Doesn’t stop their claims being objective.
It would be irrational for Clippy to sign up to an agreement with Beady according to which Beady gets to turn Clippy and all his clips into beads. It is irrational for agents to sign up to anything which is not in their interests, and it is not in their interests to have no contract at all. So rational agents, even if they do not converge on all their goals, will negotiate contracts that minimise their disutility. Clippy and Beady might take half the universe each.
Not Beady, Anti-Clippy: an agent that is the precise opposite of Clippy. It wants to minimize the number of paperclips.
Yes, but that wouldn’t count as ethics. You wouldn’t want a Universal Law that one guy gets the harem, and everyone else is a slave, because you wouldn’t want to be a slave, and you probably would be.
If there are a lot of similar agents in similar positions, Kantian ethics works, no matter what their goals. For example, theft may appear to have positive expected value—assuming you’re selfish—but it has positive expected value for lots of people, and if they all stole the economy would collapse.
OTOH, if you are in an unusual position, the Categorical Imperative only has force if you take it as axiomatic.
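The theft example can be put in rough numbers; the per-theft gain, the damage each theft spreads across the economy, and the population size below are all invented for illustration:

```python
# Toy universalization sketch: theft looks positive-EV for one selfish agent,
# but the same reasoning applied by everyone wrecks the shared payoff.
# All numbers are invented for illustration.

N = 1000        # hypothetical population size
GAIN = 10.0     # assumed private gain from committing one theft
DAMAGE = 30.0   # assumed total economic damage per theft, spread over everyone

def payoff(num_thieves, i_steal):
    """Payoff to one agent, given how many people in total steal."""
    shared_loss = num_thieves * DAMAGE / N
    return (GAIN if i_steal else 0.0) - shared_loss

print(payoff(1, True))        #  9.97 -> stealing while everyone else is honest pays off
print(payoff(N, True))        # -20.0 -> if everyone steals, even the thieves lose
print(payoff(N - 1, False))   # -29.97 -> staying honest while everyone else steals is worse still
```

Whatever the others do, stealing is individually better, yet universal theft (-20 each) is far worse than universal honesty (0 each); that failure under universalization is what the maxim test probes.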
This is brought out in Rawls’ version of Kantian ethics: you pretend to yourself that you are behind a veil that prevents you knowing what role in society you are going to have, and choose rules that you would want to have if you were to enter society at random.
That’s not a version of Kantian ethics, it’s a hack for designing a society without privileging yourself. If you’re selfish, it’s a bad idea.
Kawoomba, maybe it would be better for you to think in terms of ethics along the lines of Kant’s Categorical Imperative, or social contract theory; ways for agents with different goals to co-operate.
Wouldn’t that presuppose that “cooperation is the source/the sine qua non of all good”?
Sure, we can redefine some version of ethics in such a cooperative light, and then conclude that many agents don’t give a hoot about such ethics, or regard it in the cold, hard terms of game theory, e.g. negotiating/extortion strategies only.
Judging actions as “good” or “bad” doesn’t prima facie depend entirely on cooperation, the good of your race, or whatever. For example, if you were a part of a planet-eating race, consuming all matter/life in its path—while being very friendly amongst themselves—couldn’t it be considered ethically “good” even from a human perspective to killswitch your own race? And “bad” from the moral standpoint of the planet-eating race?
The easiest way to dissolve such obvious contradictions is to say that there is just not, in fact, a universal hierarchy ranking ethical systems, regardless of the nature of the (rational = capable reasoner) agent.
Doesn’t mean an agent isn’t allowed to strongly defend what it considers to be moral, to die for it, even.
Wouldn’t that presuppose that “cooperation is the source/the sine qua non of all good”?
The point is it doesn’t matter what you consider “good”; fighting people won’t produce it (even if you value fighting people, because they will beat you and you’ll be unable to fight).
I’m not saying your goals should be ethical; I’m saying you should be ethical in order to achieve your goals.
That seems very simplistic.
Ethically “good” = enabling cooperation; if you are not cooperating, you must be “fighting”?
Those are evidently only rough approximations of social dynamics even just in a human context. Would it be good to cooperate with an invading army, or to cooperate with the resistance? The one with an opposing goal, so as a patriot, the opposing army it is, eh?
Is it good to cooperate with someone bullying you, or torturing you? What about game theory: if you’re not “cooperating” (for your value of cooperating), must you be “fighting”? What do you mean by fighting, physical altercations? Is a loan negotiation more like cooperation or more like fighting, and is it thus ethically good or bad, for your notion of “ethics = ways for agents with different goals to co-operate”?
It seems like a nice soundbite, but doesn’t make even cursory sense on further examination. I’m all for models that are as simple as possible, but no simpler. But cooperation as the definition of ethics? For you, maybe. Collaborateur!
Fighting in this context refers to anything analogous to defecting in a Prisoner’s Dilemma: you hurt the other side, and you encourage them to defect in order to punish you. You should strive for the Pareto optimum.
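For concreteness, here is the standard textbook Prisoner’s Dilemma payoff matrix and a check of which outcomes are Pareto-optimal; the numbers are the usual placeholder values, not anything specific to this discussion:

```python
# Standard Prisoner's Dilemma payoffs: (row player's payoff, column player's payoff).
# C = cooperate, D = defect ("fight"). The numbers are the usual textbook values.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_reply(opponent_move):
    """Row player's best response to a fixed opponent move."""
    return max("CD", key=lambda m: PAYOFFS[(m, opponent_move)][0])

def pareto_dominated(outcome):
    """True if some other outcome leaves both players at least as well off and is not identical."""
    a, b = PAYOFFS[outcome]
    return any(x >= a and y >= b and (x, y) != (a, b) for x, y in PAYOFFS.values())

print(best_reply("C"), best_reply("D"))                    # D D: defecting is each side's best reply
print([o for o in PAYOFFS if not pareto_dominated(o)])     # (C,C), (C,D), (D,C) are Pareto-optimal
# Mutual defection (D, D) is the only outcome that is not Pareto-optimal, yet it is
# where two "fighting" best-repliers end up; mutual cooperation Pareto-dominates it.
```

That is the sense in which fighting is self-defeating even for agents that only care about their own goals.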
Maybe this would be clearer if we talked in terms of Pebblesorters?
Ah, but Kawoomba doesn’t expect ethics to regulate other people, because he thinks everyone has incompatible goals. Thus ethics serves purely to define your goals.
Why not just say there is no ethics? His theory is like saying that since teapots are made of chocolate, their purpose is to melt into a messy puddle instead of making tea.
I’m all in favor of him just using the word “goals”, myself, and leaving us non-paperclippers the word “ethics”, but oh well. It confuses discussion no end, but I guess it makes him happy.
Also, arguing over the “correct” word is low-status, so I’d suggest you start calling them “normative guides” or something while Kawoomba can hear you if you don’t want to rehash this conversation. And they can always hear you.
ALWAYS.