Are you serious? Do you really think that morality can be programmed on computers? Good luck then. Pursuing even unrealistic goals can yield useful results. At the least, your effort will mark more clearly the boundaries and limitations of computer programming as a method of solving the AI problem.
Required reading: The Generalized Antizombie Principle
I think you misunderstood my point here. But first, yes, I skimmed through the recommended article, but I don't see how it fits in here. It's an old, familiar dispute about philosophical zombies. My take on this: the idea of such zombies is rather artificial. I think it is advocated by people who have problems understanding the mind/body connection. These people are dualists, even if they don't admit it.
Now about morality. There is a good expression in the article you referenced: high-level cognitive architectures. We don't know yet what this architecture is, but this is the level that provides the categories and the language one has to understand and adopt in order to understand high-level mind functionality, including morality. Programming languages are way below that level and not suitable for the purpose. As an illustration, imagine that we have a complex expert system that performs extensive database searches and sophisticated logical inferences, and then we try to understand how it works in terms of the gates, transistors, and capacitors that operate on a microchip. It will not work! The same goes for trying to program morality. How is one going to do this? Write a function like bool isMoral(...)? You pass parameters that represent a certain life situation and it returns true or false for moral/immoral? That seems absurd to me. The best use I can think of for programming in AI is to write software that models the behavior of neurons. There would still remain a long way up to high-level cognitive architectures, and only then, morality.
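For concreteness, here is a minimal sketch of the kind of predicate the comment above dismisses as absurd; the Situation struct and its fields are invented purely for illustration:

```cpp
#include <string>

// A hypothetical attempt to reduce morality to a single predicate.
// (The Situation struct and its fields are invented for illustration.)
struct Situation {
    std::string actor;    // who is acting
    std::string action;   // what they are doing
    std::string context;  // the circumstances of the act
};

bool isMoral(const Situation& s) {
    // Any finite rule table like this is exactly the kind of low-level
    // encoding the comment argues cannot capture a high-level notion
    // such as morality.
    if (s.action == "lie") {
        return false;
    }
    return true;
}
```

Nothing in the function's signature or body says where such rules could come from, which is the hierarchy problem the comment describes.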
I was responding directly to this claim:
Are you serious? Do you really think that morality can be programmed on computers? Good luck then.
… which I would not make due to the violation of GAP.
Regarding the somewhat weaker claim "programming morality into computers would be very hard" we may have less disagreement. My expectation is that even with the best human minds dedicated to 'programming morality into computers', after first spending decades of research into those 'high-level architectures', they are still quite likely to make a mistake and thereby kill us all.
I thought those questions were innocent. But if it looks like a violation of some policy, then I apologize for that. I never meant any personal attack.
I think you understand my point now (at least partially) and can see how weird such ideas as programming morality look to me. I now realize there may be many people here who take these ideas seriously.
I thought those questions were innocent. But if it looks like a violation of some policy, then I apologize for that. I never meant any personal attack.
GAP: the Generalized Antizombie Principle, as mentioned in the preceding comments. (Perhaps I should have included the ‘Z’.) You have made no social violation and there is nothing personal here, just a factual claim dismissed due to a commonly understood principle.
I think I understand now why you keep mentioning GAP. You thought that I objected to the idea of morality programming due to the zombie argument. Sort of: we will create only a morality-imitating zombie, rather than a real moral mind, etc. No, my objection is not about this. I don't take zombies seriously and don't care about them. My objection is about hierarchy violation. Programming languages are not the right means to describe/implement high-level cognitive architectures, which will be the basis for morality and other high-level phenomena of mind.
Hm. If I implement a neural-networking algorithm on my computer and present it with a set of prototypical images until it reliably recognizes pictures of rabbits, would you say I have not programmed my computer to recognize rabbits? If so, what verb would you use to describe what I’ve done?
You’ve implemented a neural network (a rather simple one) and made it to self-organize to recognize rabbits. It was self-organized by outside sensory input (this is only one direction of information flow; the other direction would be sending controlling impulses to the network's output, so that those impulses would affect what kind of input the network receives).
OK.
Now, suppose I want a term that isn’t quite so specific to a particular technology, a particular technique, a particular style of problem solving. That is, suppose I want a term that refers to a large class of techniques for causing my computer to perform a variety of cognitive tasks, including but not limited to recognizing rabbits.
If I’m understanding you correctly, you reject the phrase “I program the computer to perform various cognitive tasks” but might endorse “I made the computer self-organize to perform various cognitive tasks.”
Have I understood you correctly?
Well, it’s not that I made it to self-organize; it is information coming from the real world that did the trick. I only used a conventional programming language to implement a mechanism for such self-organization (a neural network). But I'm not programming the way this network is going to function. It is rather “programmed” by reality itself. Reality can be considered a giant supercomputer constantly generating consistent streams of information. Some of that information is fed to the network and makes it self-organize.
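For what it's worth, here is a minimal C++ sketch of that distinction, with toy feature vectors invented purely for illustration: the code fixes only the learning mechanism, while the weights that end up doing the recognizing are shaped entirely by the examples fed in.

```cpp
#include <cstdio>
#include <vector>

int main() {
    // Each toy example: two input features and a label (1 = rabbit, 0 = not).
    // The data stands in for the "stream of information from reality".
    struct Example { double x1, x2; int label; };
    std::vector<Example> data = {
        {0.9, 0.8, 1}, {0.8, 0.9, 1},  // "rabbit-like" inputs
        {0.1, 0.2, 0}, {0.2, 0.1, 0},  // "non-rabbit" inputs
    };

    double w1 = 0.0, w2 = 0.0, bias = 0.0;  // weights start uninformative
    const double rate = 0.1;                // learning rate

    // The perceptron update rule below is the only part the programmer
    // writes; the final values of w1, w2, and bias are determined by the
    // examples, not by any hand-written recognition logic.
    for (int epoch = 0; epoch < 100; ++epoch) {
        for (const auto& e : data) {
            int out = (w1 * e.x1 + w2 * e.x2 + bias > 0.0) ? 1 : 0;
            int err = e.label - out;
            w1 += rate * err * e.x1;
            w2 += rate * err * e.x2;
            bias += rate * err;
        }
    }
    std::printf("learned: w1=%.2f w2=%.2f bias=%.2f\n", w1, w2, bias);
}
```

Swap in a different data set and the same unchanged program "learns" a different concept, which is the sense in which the network's behavior comes from the input rather than from the source code.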
Fair enough… so, OK, “I made it to self-organize” isn’t right either.
That said, I’ll point out that that was your own choice of words (“You’ve implemented a neural network [..] and made it to self-organize”).
I mention this, not to criticize your choice of words, but to point out that you have experience with the dynamic that causes people to choose a brief not-quite-right phrase that more-or-less means what they want to express, rather than a paragraph of text that is more precise.
Which is exactly what’s going on when people talk about programming a computer to perform cognitive tasks.
I could have challenged your word choice when you made it (just like you did now, when I echoed it back), but I more or less understood what you meant, and I chose to engage with your approximate meaning instead. Sometimes that’s a helpful move in conversations.
Yes, there is some ambiguity in the use of words; I myself noticed it yesterday. I can only say that you understood it correctly and made the right move! OK, I'll try to be more accurate in using words (sometimes that is not simple and requires time and effort).
I agree completely that it’s not simple and requires time and effort. I am, as I said explicitly, not criticizing your choice of words. I’m criticizing your listening skills.
This whole thread got started because you chose to interpret “programming morality” in a fairly narrow way to mean something unreasonable, and then chose to criticize that unreasonable thing.
I am suggesting that next time around, you can profitably make more of an effort as a listener to meet the speaker halfway and think about what reasonable thing they might have been trying to express, rather than interpret their words narrowly to suggest something unreasonable. Just as you value others doing the same for you.
I think I understand now why you keep mentioning GAP.
Did you correctly infer that it is primarily because that post and the surrounding posts in the associated sequence appeared in my playlist while I was at the gym today? That would have been impressive.
(If I hadn’t been primed I might have ignored your comment rather than replying with the relevant link.)
You thought that I objected to the idea of morality programming due to the zombie argument.
The other direction. Your objection (as it was then made) was a violation of the aforementioned GAZP so I rejected it.