I’m a super-dummy when it comes to thinking about AI. I rightly leave it to people better equipped and more motivated than me.
But, can someone explain to me why a solution would not involve some form of “don’t do things to people or their property without their permission”? Certainly, that would lead to a sub-optimal use of AI in some people’s opinions. But it would completely respect the opinions of those who disagree.
Recognizing that I am probably the least AI-knowledgeable person to have posted a comment here, I ask, what am I missing?
Even leaving aside the matters of ‘permission’ (which lead into awkward matters of informed consent) as well as the difficulties of defining concepts like ‘people’ and ‘property’, define ‘do things to X’. Every action affects others. If you so much as speak a word, you’re causing others to undergo the experience of hearing that word spoken. For an AGI, even thinking draws a minuscule amount of electricity from the power grid, which has near-negligible but quantifiable effects on the power industry, which will in turn affect humans in any number of different ways. If you take chaos theory seriously, you could take this even further. It may seem obvious to a human that there’s a vast difference between innocuous actions like those in the above examples and those that are potentially harmful, but lots of things are intuitively obvious to humans and yet turn out to be extremely difficult to precisely quantify, and this seems like just such a case.
What people permit is more inclusive and vague than what they want, and it doesn’t even aim, in the same sense, to further a person’s goals. There is also the problem that people could accept a fate they don’t want. Whether that is the human being self-unfriendly or the AI being unfriendly is a matter of debate, but either way it’s a form of unfriendliness.
It’s not strictly an AI problem: any sufficiently rapid optimization process bears the risk of irretrievably converging on an optimum nobody likes before anybody can intervene with an updated optimization target.
Individual and property rights are not rigorously specified enough to be a sufficient safeguard against bad outcomes, even in an economy moving at human speeds.
In other words, the science of getting what we ask for advances faster than the science of figuring out what to ask for.
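The "outrunning oversight" point can be made concrete with a toy simulation. This is my own illustrative sketch, not anything from the thread: an optimizer chases a misspecified proxy target, an overseer corrects the target only every N optimizer steps, and past a certain point the state is irreversible. All the numbers and the lock-in threshold are arbitrary assumptions.

```python
def run(steps_per_review: int) -> float:
    """Optimizer pursues a proxy target; an overseer corrects the
    target only once every `steps_per_review` steps. If the state
    passes a point of no return first, the correction comes too late."""
    target, true_target = 10.0, 4.0  # proxy vs. what we actually wanted
    x = 0.0                          # state of the world
    locked = False
    for step in range(100):
        if not locked:
            # optimizer moves aggressively toward the current target
            x += 0.5 * (target - x)
            if x > 8.0:
                locked = True        # irreversible: past the point of no return
        # overseer reviews (and fixes the target) only periodically
        if step % steps_per_review == steps_per_review - 1:
            target = true_target
    return x

slow = run(steps_per_review=2)   # frequent oversight: ends near 4.0
fast = run(steps_per_review=50)  # optimization outruns oversight: locks in near the proxy
print(slow, fast)
```

With frequent reviews the correction arrives while the state is still movable, and the process settles at the intended value; with sparse reviews the optimizer crosses the irreversible threshold before the first review ever happens. The asymmetry is entirely in the relative speeds, which is the point of the comment above.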
If you don’t know that you are missing something, or have reason to believe this to be the case, you are unsure about whether you are a dummy when it comes to AI or not. Not knowing whether you should discuss AI is different from knowing not to discuss AI.