“the proper formulation of objective ballistics takes projectile mass as a parameter”
I think the best analogy here is to say something like: the proper formulation of decision theory takes terminal values as a parameter. Decision theory defines a “universal” optimum (that is, universal “for all minds”… presumably anyway), but each person is individually running a decision-theory process as a function of their own terminal values. There is no “universal” terminal value; for example, if I could build an AI, then I could in principle put in any utility function I wanted. Ethics is “universal” in the sense of optimal decision theory, but “person-dependent” in the sense of plugging in one’s own particular terminal values. Terminal values and ethics are not necessarily “mind-dependent”, however, as explained here.
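As a loose illustration of the “parameter” point (my own sketch, not anything from the linked post): the same value-neutral decision procedure can be handed different utility functions and will optimize for whichever one it is given. The function names and the toy gamble below are invented for the example.

```python
# A minimal sketch of "terminal values as a parameter": the decision procedure
# is value-neutral, and the utility function is plugged in by whoever runs it.
# The environment and all names here are invented for illustration.

def choose_action(actions, outcome_distribution, utility):
    """Pick the action with the highest expected utility.

    actions: iterable of available actions
    outcome_distribution: maps an action to a list of (probability, outcome) pairs
    utility: the caller's terminal values, as a function from outcome to a number
    """
    def expected_utility(action):
        return sum(p * utility(outcome) for p, outcome in outcome_distribution(action))
    return max(actions, key=expected_utility)


# Two agents running the same decision procedure with different terminal values.
def coin_gamble(action):
    # "bet": 50% chance of +10, 50% chance of -5; "pass": certain 0
    return [(0.5, 10), (0.5, -5)] if action == "bet" else [(1.0, 0)]

risk_neutral = lambda x: x          # values the payoff linearly
risk_averse = lambda x: min(x, 0)   # only cares about avoiding losses

print(choose_action(["bet", "pass"], coin_gamble, risk_neutral))  # -> "bet"
print(choose_action(["bet", "pass"], coin_gamble, risk_averse))   # -> "pass"
```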
I would certainly agree that there is no terminal value shared by all minds (come to that, I’m not convinced there are any terminal values shared across the whole of any given mind).
Also, I would agree that when figuring out how I should best apply a value-neutral decision theory to my environment, I have to “plug in” some subset of information about my own values and about my environment.
I would also say that a sufficiently powerful value-neutral decision theory instructs me on how to optimize any environment towards any value, given sufficiently comprehensive data about the environment and the value. Which seems like another way of saying that decision theory is objective and universal, in the same sense that ballistics is.
How that relates to statements about ethics being universal, objective, person-dependent, and/or mind-dependent is not clear to me, though, even after following your link.