Hi, I’m new here. I found this site while looking for information about A.I.
I read a few articles and couldn’t help but smile to myself and think, ‘wasn’t this what the Internet was supposed to be?’ I had no idea this site existed, and I’m honestly glad to have found stacks of future reading; you know that feeling.
I never really post on sites and would usually have lurked myself silly, but I’ve been prompted into action by a question. I posted this to reddit in the shower thoughts section because it seemed appropriate, but I’d like to ask you (more).
I was reading about the Orthogonality thesis and Oracle A.I.s as warnings about, and attempted precautions against, potentially hostile outcomes. I’ve recently finished Robots and Empire and couldn’t help but think that something like the Zeroth Law could further complicate trying to restrain A.I.s with benign laws like “do no harm” or seemingly innocent tasks like acquiring paper clips. To me it seemed that trying to stop A.I.s from harming us whilst also completing another task would always end up with us in the way.
So I thought perhaps we should try to give the A.I. a goal that would not benefit from violence in any way. Try to make it Buddha-like: to become all-knowing and one with all things? Would a statement like that even mean anything to a computer?
The one criticism I received was “what would be the point of that?” I don’t know. But I’m curious. What do you think?
a goal that would not benefit from violence in any way... to become all-knowing
I have bad news for you. People have described ideas for an AI that only seeks knowledge (though I can’t find the best link to explain it now). I think this design would calmly kill us all to see what would happen, if we’d somehow prevented it from dropping an anvil on its own head.
To “become one with all things” does not seem sufficiently well-specified to stop either from happening. In general, if we can reasonably interpret the goal as something that’s already true, then the AI will do nothing to achieve it (nothing being the most efficient action).
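To make that last point concrete, here is a toy sketch (purely illustrative, not anyone’s actual design): an agent that picks the cheapest action that still leaves its goal satisfied. All the names and numbers here are invented for the example. If “become one with all things” can be read as already true of the world, the zero-cost do-nothing action wins every time.

```python
# Toy sketch: a cost-minimizing agent with a goal predicate.
# Everything here (Action, choose_action, the costs) is hypothetical,
# just to illustrate why an already-satisfied goal produces a no-op.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    cost: float          # effort/resources the agent spends
    achieves_goal: bool  # whether the goal holds after taking this action

def choose_action(goal_already_true: bool, actions: list[Action]) -> Action:
    """Pick the lowest-cost action that leaves the goal satisfied."""
    candidates = [a for a in actions if goal_already_true or a.achieves_goal]
    return min(candidates, key=lambda a: a.cost)

actions = [
    Action("do_nothing", cost=0.0, achieves_goal=False),
    Action("rearrange_the_universe", cost=1e9, achieves_goal=True),
]

# If the goal is interpreted as already true, doing nothing is optimal.
print(choose_action(goal_already_true=True, actions=actions).name)   # do_nothing
print(choose_action(goal_already_true=False, actions=actions).name)  # rearrange_the_universe
```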
I by no means thought I had stumbled upon something. I was just curious to see what other people thought.
I thought “to be one with all things” was a very ambiguous statement. I think what I was trying to get at was that if the A.I. caused harm in some way, it would by definition be inhibited from completing its primary goal. And Buddha seemed the only example I could think of. Perhaps Plato’s or Nietzsche’s versions of the Übermensch might fit better?
Thank you for replying. I look forward to being a part of this community.