So you think that true moral behavior excludes choice? (More generally, once someone chooses their morality, no more choices remain to be made?)
I think so. What choice is there in the field of mathematics? I don’t see that mathematicians ever had any choice but to eventually converge on the same answer given the same conjecture. Why would that be different given an objective morality?
I thought that is what Aumann’s agreement theorem states, and the core insight of TDT: that rational agents will eventually arrive at the same conclusions and act accordingly.
The question would be what happens if a superintelligence were equipped with goals that contradict reality. If there exists an objective morality, and the goal of a certain AI were to maximize paperclips while maximizing paperclips was morally wrong, then that goal would be similar to the goal of proving that 1+1=3 or of attaining faster-than-light propagation.
ETA: I suppose that if there does exist some sort of objective morality but it is inconsistent with the AI’s goals, then you would end up with an unfriendly AI anyway, since such an AI would still attempt to pursue its goals given even a small probability that there is no objective morality for one reason or another.
Mathematics is not an agent; it cannot be controlled anyway. But mathematicians do have a choice over which branch of math to pursue.
An expected utility maximizer has no choice but to pursue the world state it assigns the highest expected utility. The computation that determines which world state has the highest expected utility is completely deterministic, and the evidence it uses to calculate what to do is also not a matter of choice.
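To make the determinism claim concrete, here is a minimal sketch in Python (the action names and numbers are made up for illustration): once the outcome probabilities supplied by the evidence and the utility function are fixed, picking the action is a pure argmax computation with no remaining degree of freedom.

```python
# Minimal sketch, not anyone's actual proposal: given fixed evidence
# (probabilities) and a fixed utility function, the "choice" of an
# expected utility maximizer is a deterministic argmax.

def expected_utility(action, outcomes):
    """Sum of probability * utility over the outcomes this action can lead to."""
    return sum(p * u for p, u in outcomes[action])

def maximize(outcomes):
    """Return the action with the highest expected utility (ties broken by order)."""
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

# Toy model: probabilities come from the evidence, utilities from the fixed goal.
outcomes = {
    "make_paperclips": [(0.9, 10.0), (0.1, -1.0)],  # (probability, utility)
    "do_nothing":      [(1.0, 0.0)],
}

print(maximize(outcomes))  # same inputs always yield the same action
```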
I don’t think that every consequentialist view of ethics reduces to equating morality with maximizing an arbitrary but fixed utility function that leaves no action morally neutral.
Under bounded resources, I think there is plenty of leeway in the “Pareto front” of actions judged, at a given time, not to be “likely worse in the long term” than any other action considered, and I think that leeway remains as the horizon expands with the capability of the system.
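A hedged sketch of that leeway, with hypothetical value intervals and a made-up `admissible` helper: under bounded resources the system only has interval estimates of long-term value, and any action whose upper bound is not exceeded by some other action’s lower bound stays on the front, so more than one action can remain admissible at once.

```python
# Sketch under assumed interval estimates, not a definitive implementation:
# each action's long-term value is only known up to (lower, upper) bounds.
# An action is kept unless some other action is "likely better", i.e. unless
# another action's lower bound exceeds this action's upper bound.

def admissible(estimates):
    """Return actions not judged 'likely worse in the long term' than any other."""
    keep = []
    for a, (lo_a, hi_a) in estimates.items():
        dominated = any(lo_b > hi_a for b, (lo_b, _) in estimates.items() if b != a)
        if not dominated:
            keep.append(a)
    return keep

# Hypothetical value intervals for three candidate actions.
estimates = {
    "option_1": (0.2, 0.9),
    "option_2": (0.4, 0.7),
    "option_3": (0.1, 0.3),
}

print(admissible(estimates))  # ['option_1', 'option_2'] -- more than one action survives
```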
The trajectory of a system depends on its boundary conditions even if the dynamics are in some sense “convergent”, so “convergence” does not exclude control over the particular trajectory.
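A toy illustration of that last point (my own example, not from the discussion): the dynamic dx/dt = -x converges toward 0 from every starting point, yet which trajectory is actually traversed is fixed by the boundary condition x(0).

```python
# Toy example: Euler-integrate dx/dt = -x from two different initial conditions.
# Both trajectories converge toward 0, but they are different trajectories.

def trajectory(x0, dt=0.1, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * (-xs[-1]))  # dx/dt = -x
    return xs

a = trajectory(2.0)
b = trajectory(-1.0)
print(a[-1], b[-1])  # both close to 0: the dynamic is convergent
print(a[:3], b[:3])  # but the early states differ: the boundary condition matters
```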