Mathematics is not an agent, so it cannot be "controlled" in any case. But mathematicians do have a choice over which branch of mathematics to pursue.
An expected utility maximizer has no choice but to pursue the world state to which it assigns the highest expected utility. The computation that determines which world state has the highest expected utility is completely deterministic, and the evidence used in that calculation was not a matter of choice either.
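A minimal sketch of that point, with made-up states and probabilities (nothing here is drawn from any real agent design): once the utility function and the evidence-derived outcome table are fixed, the argmax is fully determined, so there is nothing left for the maximizer to "choose".

```python
# Toy illustration: with a fixed utility function and fixed evidence,
# the argmax is deterministic -- rerunning it cannot yield a different "choice".

def expected_utility(action, outcomes):
    """Probability-weighted sum of utilities for one action."""
    return sum(p * u for p, u in outcomes[action])

def choose(outcomes):
    """Same outcomes table in, same action out, every time."""
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

# outcomes[action] = list of (probability, utility) pairs derived from the evidence
outcomes = {
    "act_A": [(0.5, 10.0), (0.5, 0.0)],   # EU = 5.0
    "act_B": [(0.9, 4.0), (0.1, 20.0)],   # EU = 5.6
}
assert choose(outcomes) == "act_B"
assert choose(outcomes) == choose(outcomes)  # no leeway left to the agent
```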
I don’t think that every consequentialist view of ethics reduces to equating morality with maximizing an arbitrary but fixed utility function, one that leaves no action morally neutral.
Under bounded resources, I think there is (and will remain, even as the planning horizon expands with the capability of the system) plenty of leeway in the “Pareto front” of actions judged at a given time not to be “likely worse in the long term” than any other action considered.
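To make the "leeway" claim concrete, here is a sketch with entirely made-up scores: if each candidate action only gets bounded-resource estimates on a few long-horizon criteria, then the set of actions not dominated by any other (none judged "likely worse" on every criterion) can easily contain several members, any of which is an admissible pick.

```python
# Hypothetical example: each action has rough (welfare, robustness) estimates,
# higher is better.  An action stays on the "Pareto front" if no other action
# is at least as good everywhere and strictly better somewhere.

def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scores):
    return {name for name, s in scores.items()
            if not any(dominates(t, s) for other, t in scores.items() if other != name)}

scores = {                      # invented numbers, for illustration only
    "act_A": (0.9, 0.2),
    "act_B": (0.5, 0.6),
    "act_C": (0.1, 0.9),
    "act_D": (0.4, 0.3),        # dominated by act_B
}
print(pareto_front(scores))     # {'act_A', 'act_B', 'act_C'} -- plenty of leeway
```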
The trajectory of a system depends on its boundary conditions even if the dynamic is in some sense “convergent”, so “convergence” does not exclude control over the particular trajectory.
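A toy example of that last point, using an assumed dynamic that is not from the original discussion: gradient descent on f(x) = (x² − 1)² converges from every starting point, yet the boundary condition, the initial x, determines which trajectory is followed and even which minimum (+1 or −1) is reached.

```python
# "Convergent" dynamic, trajectory still controlled by the initial condition.

def step(x, lr=0.05):
    grad = 4 * x * (x * x - 1)      # f'(x) for f(x) = (x^2 - 1)^2
    return x - lr * grad

def trajectory(x0, steps=200):
    xs = [x0]
    for _ in range(steps):
        xs.append(step(xs[-1]))
    return xs

print(round(trajectory(+0.3)[-1], 3))   #  1.0 -- settles in the right-hand basin
print(round(trajectory(-0.3)[-1], 3))   # -1.0 -- same dynamic, different trajectory
```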