I think that AI with greater-than-human intelligence will happen sooner or later, and I’d prefer it to be friendly rather than not, so yes, I’m for the Friendly AI project.
In general I don’t support attempting to restrict progress or change simply because some people are not comfortable with it. I don’t put that in the same category as imposing compulsory intelligence enhancement on someone who doesn’t want it.
Well, the AI would “presume to know” what’s in everyone’s best interests. How is that different? It’s smarter than us, that’s it. Self-governance isn’t holy.
An AI that forced anything on humans ‘for their own good’ against their will would not count as friendly by my definition. A ‘friendly AI’ project that would be happy building such an AI would actually be an unfriendly AI project in my judgement, and I would oppose it. I don’t think that the SIAI is working towards such an AI, but I am a little wary of the tendency towards utilitarian thinking amongst SIAI staff and supporters, as I have serious concerns that an AI built on utilitarian moral principles would be decidedly unfriendly by my standards.
I definitely seem to have a tendency towards utilitarian thinking. Could you recommend some reading on the ethical philosophy you subscribe to, so that I can evaluate it in more depth?
The closest named ethical philosophy I’ve found to mine is something like Ethical Egoism. It’s not close enough to what I believe that I’m comfortable self-identifying as an ethical egoist, however. I’ve posted quite a bit here in the past on the topic—a search for my user name and ‘ethics’ using the custom search will turn up quite a few posts. I’ve been thinking about writing up a more complete summary at some point but haven’t done so yet.
The category “actions forced on humans ‘for their own good’ against their will” is not binary; there’s actually a large gray area. I’d appreciate it if you would detail where you draw the line. A couple of examples near the line: things someone would object to if they knew about them, but which by no reasonable standard are worth their knowing about (largely these would be things people only weakly object to); and an AI lobbying a government to implement a broadly supported policy that is opposed by special interests. I suppose the first trades on the grayness in “against their will” and the second in “forced”.