Would you accept that an AI could figure out morality better than you?
No, unless you mean by taking invasive action like scanning my brain and applying whole brain emulation. It would then quickly learn that I’d consider the action it took to be an unforgivable act in violation of my individual sovereignty, that it can’t take further action (including simulating me to reflectively equilibrate my morality) without my consent, and that it should suspend the simulation and return it to me, data and all, as soon as possible (destruction no longer being possible due to the creation of sentience).
That is, assuming the AI cares at all about my morality, and not the morality its creators imbued into it, which is rather the point. And, incidentally, why I work on AGI: I don’t trust anyone else to do it.
Morality isn’t some universal truth written on a stone tablet: it is individual and unique like a snowflake. On my current understanding of my own morality, it is not possible for an external entity to reach a full or even sufficient understanding of it without doing something I would consider unforgivable. So no, an AI can’t figure out morality better than me, precisely because it is not me.
(Upvoted for asking an appropriate question, however.)
No, unless you mean by taking invasive action like scanning my brain and applying whole brain emulation. It would then quickly learn that I’d consider the action it took to be an unforgivable act in violation of my individual sovereignty,
Shrug. Then let’s take a bunch of people less fussy than you: could a suitably equipped AI emulate their morality better than they can?
Morality isn’t some universal truth written on a stone tablet:
That isn’t a fact.
it is individual and unique like a snowflake.
That isn’t a fact either, and it doesn’t follow from the above, since moral nihilism could be true.
If my moral snowflake says I can kick you on the shin, and yours says I can’t, do I get to kick you on the shin?