Because we are people! A Python script that indefinitely calculates pi would think it immoral to terminate other Python scripts that calculate pi (possibly a sympathetic Python script would extend that morality to cover Python scripts that calculate other numbers indefinitely; a really open-minded one might even be morally opposed to turning off C or Lisp or Java scripts that calculate pi), if it had the capacity to develop morality. But it wouldn’t think terminating programs that scrape websites for price lists is immoral.
True but irrelevant. I was illustrating the hidden human-centric assumptions matt1 was making about morality. If you go back and read the post I responded to, it’s quite clear that he thinks “morally wrong to terminate human, morally neutral to terminate program” says something about a quality humans have (as if morality were woven into the physical universe), whereas really it says something about a quality morality has (that it is produced by humans). By making obviously Python-centric assumptions myself, I hoped to make the hidden assumptions matt1 is making clearer to him.
> A Python script that indefinitely calculates pi would think it immoral to terminate other Python scripts that calculate pi
Nuh-uh: I can’t see how the script would have the capacity to assign positive or negative utility to anything but its own task of indefinitely calculating pi, and that includes the possibility of other programs doing the same thing.
I can’t see how a small Python script that calculates pi could assign utility to anything. It doesn’t replan in a complex way that implies a utility function. It calculates bleedin’ pi.
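For concreteness, here is roughly the sort of program under discussion: a minimal sketch of a Python script that calculates pi indefinitely, using Gibbons’ unbounded spigot algorithm (the choice of algorithm is my own assumption; nothing upthread specifies one). Its entire state is six integers, which is rather the point: there is nowhere in it for a model of other programs to live, let alone a utility assignment over them.

    # Minimal sketch: stream the decimal digits of pi forever, using
    # Gibbons' unbounded spigot algorithm. The whole program state is
    # six integers; there is no representation of other processes and
    # no utility function, just arithmetic.
    def pi_digits():
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n  # the next decimal digit of pi
                # Tuple assignment evaluates the right side with the old
                # values of q, r, n before rebinding; t, k, l are unchanged.
                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            else:
                q, r, t, k, n, l = (
                    q * k,
                    (2 * q + r) * l,
                    t * l,
                    k + 1,
                    (q * (7 * k + 2) + r * l) // (t * l),
                    l + 2,
                )

    # Runs until terminated, printing 314159265358979...
    for digit in pi_digits():
        print(digit, end="")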
> Because we are people! A Python script that indefinitely calculates pi would think it immoral to terminate other Python scripts that calculate pi (possibly a sympathetic Python script would extend that morality to cover Python scripts that calculate other numbers indefinitely; a really open-minded one might even be morally opposed to turning off C or Lisp or Java scripts that calculate pi), if it had the capacity to develop morality. But it wouldn’t think terminating programs that scrape websites for price lists is immoral.
I’m sorry, what? Why would it think about morality at all? That would take valuable cycles away from the task of calculating pi.
> True but irrelevant. I was illustrating the hidden human-centric assumptions matt1 was making about morality. If you go back and read the post I responded to, it’s quite clear that he thinks “morally wrong to terminate human, morally neutral to terminate program” says something about a quality humans have (as if morality were woven into the physical universe), whereas really it says something about a quality morality has (that it is produced by humans). By making obviously Python-centric assumptions myself, I hoped to make the hidden assumptions matt1 is making clearer to him.
Doubtful.
Well, you have to give it the capacity to develop morality if you want it to serve as a true counterexample.
> Nuh-uh: I can’t see how the script would have the capacity to assign positive or negative utility to anything but its own task of indefinitely calculating pi, and that includes the possibility of other programs doing the same thing.
> I can’t see how a small Python script that calculates pi could assign utility to anything. It doesn’t replan in a complex way that implies a utility function. It calculates bleedin’ pi.
Hence the “if it had the capacity to develop morality” up there at the end of the parenthetical.
It was a tongue-in-cheek personification.