If AI risk arguments mainly apply to consequentialist (which I assume is the same as EU-maximizing in the OP) AI, and the first half of the OP is right that such AI is unlikely to arise naturally, does that make you update against AI risk?
> which I assume is the same as EU-maximizing in the OP
Not quite the same, but probably close enough.
You can have non-consequentialist EU maximizers if, e.g., the action space and state space are small and someone manually computed a table of the expected utilities. In that case, the consequentialism lives in the entity that computed the table of expected utilities, not in the entity that selects an action based on the table.
(Though I suppose such an agent is kind of pointless, since you might as well just store a table of the actions to choose.)
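A minimal sketch of such a table-driven agent (the states, actions, and utility numbers are invented for illustration, not taken from the thread):

```python
# Hypothetical sketch: an "EU maximizer" that is not itself consequentialist.
# The consequentialist work (computing expected utilities) was done by whoever
# built the table; the agent just looks up the best action for its state.

expected_utility = {
    # (state, action) -> precomputed expected utility (made-up numbers)
    ("sunny", "picnic"): 0.9,
    ("sunny", "stay_in"): 0.4,
    ("rainy", "picnic"): 0.1,
    ("rainy", "stay_in"): 0.7,
}

def table_agent(state):
    """Select the action with the highest precomputed expected utility."""
    actions = [a for (s, a) in expected_utility if s == state]
    return max(actions, key=lambda a: expected_utility[(state, a)])

# As the parenthetical notes, this collapses into a plain action table:
policy = {s: table_agent(s) for s in {"sunny", "rainy"}}
```

The last line makes the parenthetical concrete: once the EU table is fixed, the agent is behaviorally identical to a stored state-to-action lookup.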
You can also have consequentialists that are not EU maximizers, e.g. a collection of consequentialist EU maximizers working together.
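One way to see how a collection of EU maximizers can fail to be one: if the group aggregates by majority vote (my illustrative stand-in for "working together", not something the comment specifies), its pairwise preferences can be cyclic, and no utility function represents a cyclic preference. A sketch with invented utilities:

```python
# Hypothetical sketch: three EU maximizers whose majority-vote aggregate is
# not an EU maximizer. Each member has a consistent utility function over
# outcomes a, b, c, but the group's pairwise majority preference is cyclic
# (a Condorcet cycle), which no single utility function can produce.

members = [
    {"a": 3, "b": 2, "c": 1},  # prefers a > b > c
    {"a": 1, "b": 3, "c": 2},  # prefers b > c > a
    {"a": 2, "b": 1, "c": 3},  # prefers c > a > b
]

def group_prefers(x, y):
    """True if a strict majority of members assign x higher utility than y."""
    return sum(u[x] > u[y] for u in members) > len(members) / 2

# a beats b, b beats c, yet c beats a: a cycle, so the group's choices are
# inconsistent with maximizing any expected utility function.
cycle = (group_prefers("a", "b")
         and group_prefers("b", "c")
         and group_prefers("c", "a"))
```

Each member individually maximizes a well-defined utility, yet the group as a whole violates the transitivity that EU maximization requires.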
> If AI risk arguments mainly apply to consequentialist (which I assume is the same as EU-maximizing in the OP) AI, and the first half of the OP is right that such AI is unlikely to arise naturally, does that make you update against AI risk?

Yes.