It too closely approximates the way the unfriendly AI proposed here would reason: pick up a goal (say, predicting theft on the job) and then proceed to solve it, oblivious to any notion of fairness. I have seen several other posts over time that rub me exactly the same wrong way, but I do not remember the exact titles. By the way, what if I were to apply the same sort of reasoning as in this post to people who have a rather odd mental model of artificial minds (or of mind-uploaded fellow humans)?
Can you clarify? I am not sure which odd mental models, and more generally which situations, you have in mind.