Re: “Assumption A: Human (meta)morals are not universal/rational.
Assumption B: Human (meta)morals are universal/rational.
Under assumption A one would have no chance of implementing any moral framework into an AI since it would be undecidable which ones they were.” (source: http://rationalmorality.info/?p=112)
I think we’ve been over that already. For example, Joe Bloggs might choose to program Joe’s preferences into an intelligent machine—to help him reach his goals.
I had a look at some of the other material. IMO, Stefan acts in an authoritative manner but comes across as a not-terribly-articulate newbie on this topic, and he has adopted what seems to me to be a bizarre and indefensible position.
For example, consider this:
“A rational agent will always continue to co-exist with other agents by respecting all agents utility functions irrespective of their rationality by striking the most rational compromise and thus minimizing opposition from all agents.” http://rationalmorality.info/?p=8
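For concreteness, the quoted claim admits at least one formal reading. The sketch below (Python) assumes "opposition" means an agent's utility shortfall relative to its own preferred option, and "the most rational compromise" means the action minimizing the summed shortfall across all agents; neither definition appears in the linked post, so treat this as one illustrative reading rather than Stefan's actual proposal. All names here (opposition, rational_compromise, u1, u2) are hypothetical.

    def opposition(utility, action, actions):
        # An agent's opposition to `action`: how far it falls short of the
        # agent's own best available option.
        best = max(utility(a) for a in actions)
        return best - utility(action)

    def rational_compromise(utilities, actions):
        # One reading of "the most rational compromise": the action that
        # minimizes summed opposition across every agent's utility function.
        return min(actions,
                   key=lambda a: sum(opposition(u, a, actions) for u in utilities))

    # Hypothetical toy example: two agents with opposed preferences.
    actions = ["A", "B", "C"]
    u1 = {"A": 10, "B": 6, "C": 0}.get   # agent 1 most prefers A
    u2 = {"A": 0, "B": 6, "C": 10}.get   # agent 2 most prefers C
    print(rational_compromise([u1, u2], actions))  # prints "B"

On these assumptions the middle option B wins, even though neither agent ranks it first; whether a rational agent is compelled to aggregate utilities this way at all is exactly what the exchange below disputes.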
“I think we’ve been over that already. For example, Joe Bloggs might choose to program Joe’s preferences into an intelligent machine—to help him reach his goals.”
Sure, but it would be moral only by virtue of circular logic, not objectively. That is my critique.
I realize that one has to drill deep into my arguments to understand them and put them into the proper context. Quoting certain statements out of context is definitely not helpful, Tim. As you can see from my posts, everything is linked back to a source where a particular point is made and certain assumptions are defended.
If you have a particular problem with any of the core assumptions or conclusions, I would prefer you voice it not as a blanket rejection of an out-of-context comment here or there but as an argument against the fundamentals. Reading my blog posts in sequence will certainly help, although I understand that some may consider that an unreasonable investment of time for what looks like superficial nonsense on the surface.
Where is your argument against my points, Tim? I would really love to hear one, since I am genuinely interested in refining my arguments. Simply quoting something and saying "Look at this nonsense" is not an argument. So far I have gotten only an ad hominem and an argument from personal incredulity.
This isn't my favourite topic, whereas you have a whole blog about it, so you are probably quite prepared to discuss things for far longer than I am likely to remain interested.
Anyway, it seems that I do have some things to say, and we are rather off topic here. So, for my response, see:
http://lesswrong.com/lw/1dt/open_thread_november_2009/19hl
"I had a look at some of the other material. IMO, Stefan acts in an authoritative manner but comes across as a not-terribly-articulate newbie on this topic, and he has adopted what seems to me to be a bizarre and indefensible position."
I had a look over some of the other material too. It left me with the urge to hunt down these weakling Moral Rational Agents and tear them apart. Perhaps that is because I can create more paperclips out of their raw materials than out of their compassionate compromises, or perhaps because spite is a universal value (as we have every reason to believe).
From a slightly different topic on the same blog, I must assert that “Don’t start to cuddle if she likes it rough.” is not a tautological statement.