Once a deterministic method of making a determination (along with all of the data that method will take into account) is set, it cannot reasonably be said that a decision is being made.
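To make that concrete, here is a minimal sketch; the rule, names, and numbers are invented for illustration only. The point is just that once the method and its data are fixed, the outcome is fixed with them.

```python
# A minimal sketch: the rule and the inputs are made up for illustration.
def choose_route(traffic_delay_minutes: int, toll_cost: float) -> str:
    """A deterministic 'decision' rule: the same inputs always yield the same output."""
    if traffic_delay_minutes > 20 and toll_cost < 5.0:
        return "toll road"
    return "highway"

# With the method and the data both set, the result is settled in advance.
print(choose_route(traffic_delay_minutes=30, toll_cost=2.5))  # always "toll road"
```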
Really?
Oddly enough, those are about programming. There’s nothing in there that is advice to robots about what decisions to make.
It is all about robots—deterministic machines—performing activities that everyone unproblematically calls “making decisions”. According to what you mean by “decision”, they are inherently incapable of doing any such thing. Robots, in your view, cannot be “agents”; a similar Google search shows that no-one who works with robots has any problem describing them as agents.
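For instance, the sort of routine that robotics texts unproblematically describe as the robot “deciding what to do” looks roughly like the sketch below; the sensor fields and action names here are hypothetical, not taken from any real robot API.

```python
# Hypothetical robot controller, sketched for illustration only; the sensor
# fields and action names are invented, not drawn from a real robot API.
from dataclasses import dataclass

@dataclass
class SensorReading:
    obstacle_distance_m: float   # distance to the nearest obstacle, in metres
    battery_level: float         # state of charge, 0.0 .. 1.0

def decide_action(reading: SensorReading) -> str:
    """Fully deterministic, yet routinely described as the robot 'making a decision'."""
    if reading.battery_level < 0.1:
        return "return_to_dock"
    if reading.obstacle_distance_m < 0.5:
        return "turn_left"
    return "move_forward"

print(decide_action(SensorReading(obstacle_distance_m=0.3, battery_level=0.8)))  # "turn_left"
```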
So, what do you mean by “decision” and “agenthood”? You seem to mean something ontologically primitive that no purely material entity can have; and so you conclude that if materialism is true, nothing at all has these things. Is that your view?
It would be better to say that if materialism is true, then determinism must also be true, in which case “decisions” do not have the properties we’re disputing.