[ 1. ] In public policy making, you have a set of preferences, which you get from votes or surveys, and you formulate policy based on your best objective understanding of cause and effect. The preferences don’t have to be objective, because they are taken as given.
The point I’m making in the post
is that whether or not you have to treat the preferences as objective, there is an objective fact of the matter about what someone’s preferences are, in the real world [ real, even if not physical ].
[ 2. ] [ Agreeing on such basic elements of our ontology/epistemology ] isn’t all that relevant to AI safety, because an AI only needs some potentially dangerous capabilities.
Whether or not an AI “only needs some potentially dangerous capabilities” for your local PR purposes, the global truth of the matter is that “randomly-rolled” superintelligences will have convergent instrumental desires to make use of the resources we are currently using [like the negentropy that would make Earth’s oceans a great sink for ~3 × 10^27 joules]. What they will not have are desires that tightly converge with our terminal desires, the ones that make boiling the oceans without first evacuating all the humans a Bad Idea.
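For what it’s worth, the ~3 × 10^27 joule figure can be sanity-checked with round numbers. The ocean mass, specific heat, and latent heat below are my assumed textbook values, not figures from the original post:

```python
# Back-of-the-envelope check: energy to heat and vaporize Earth's oceans.
# All constants are assumed round numbers, not from the original post.
OCEAN_MASS_KG = 1.4e21      # approximate total mass of Earth's oceans
SPECIFIC_HEAT = 4186.0      # J/(kg*K), liquid water
LATENT_HEAT_VAP = 2.26e6    # J/kg, vaporization near 100 C
DELTA_T = 85.0              # K, from ~15 C average up to boiling

heating = OCEAN_MASS_KG * SPECIFIC_HEAT * DELTA_T      # warm the water
vaporizing = OCEAN_MASS_KG * LATENT_HEAT_VAP           # then boil it off
total = heating + vaporizing
print(f"{total:.1e} J")  # same order of magnitude as 3e27 J
```

The latent-heat term dominates, and the total lands within the order of magnitude quoted above.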
[ 3. ] You haven’t defined consciousness and you haven’t explained how [ we can know something that lives in a physical substrate that is unlike ours is conscious ].
My intent is not to say “I/we understand consciousness, therefore we can derive objectively sound-valid-and-therefore-true statements from theories with mentalistic atoms”. The arguments I actually give for why we can derive objective abstract facts about the mental world begin at “So why am I saying this premise is false?” and end at “. . . and agree that the results came out favoring one theory or another.” If we can derive objectively true abstract statements about the mental world, the same way we can derive such statements about the physical world [e.g. “the force experienced by a moving charge in a magnetic field is orthogonal both to the direction of the field and to the direction of its motion”], this implies that we can understand consciousness well, whether or not we already do.
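The physics example in brackets is the magnetic part of the Lorentz force, F = q v × B; its orthogonality to both v and B falls straight out of the cross product. A minimal sketch, with arbitrary illustrative values for the charge, velocity, and field:

```python
# F = q (v x B): the magnetic force on a moving charge is orthogonal
# to both the velocity v and the field B. Plain Python, no libraries.

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

q = 1.6e-19            # C, e.g. a proton's charge
v = (1e5, 2e4, 0.0)    # m/s, arbitrary velocity
B = (0.0, 0.0, 1.5)    # T, field along z

F = tuple(q * c for c in cross(v, B))

# Orthogonality: both dot products vanish (up to float rounding).
assert abs(dot(F, v)) < 1e-18
assert abs(dot(F, B)) < 1e-18
```

The assertions hold for any choice of q, v, and B, since a · (a × b) = 0 identically.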
[ 4. ] there doesn’t need to be [ some degree of objective truth as to what is valuable ]. You don’t have to solve ethics to set policy.
My point, again, isn’t that there needs to be, for whatever local practical purpose. My point is that there is.