I just meant I accept the consequentialist idea in decision theory that we should maximize, e.g. pick the best out of alternatives. But said in this way, it’s a trivial point.
I understood and agree with that statement of consequentialism in decision theory—what I disagree with is that it’s trivial that maximization is the right approach to take! In many situations, a reflexive agent that does not actively simulate the future or consider alternatives performs better than a contemplative agent that does simulate the future and consider alternatives, because the best alternative is “obvious” and the acts of simulation and consideration consume time and resources that do not pay for themselves.
That’s obviously what’s going on with thermostats, but I would argue it’s also what goes on all the way up to the consequentialism-deontology divide in ethics.
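To make that tradeoff concrete, here is a toy sketch of a reflexive agent versus a contemplative agent; the payoffs and the per-option deliberation cost are made up for illustration, not taken from anything above:

```python
# Illustrative payoffs: the "obvious" action is also the best one, and
# simulating each alternative costs a fixed amount of resources.
PAYOFFS = {"act_now": 10.0, "alternative_a": 7.0, "alternative_b": 3.0}
DELIBERATION_COST_PER_OPTION = 4.0

def reflexive_agent():
    # Acts on a cached policy without simulating anything.
    return PAYOFFS["act_now"]

def contemplative_agent():
    # Simulates every alternative, pays for each simulation, then maximizes.
    cost = DELIBERATION_COST_PER_OPTION * len(PAYOFFS)
    return max(PAYOFFS.values()) - cost

print(reflexive_agent())      # 10.0
print(contemplative_agent())  # -2.0: the deliberation did not pay for itself
```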
Can you taboo “weird”?
I would probably replace it with Pearl’s phrase here, of “surprising or unbelievable.”
To use the specific example of Newcomb’s problem: if people find a perfect predictor “surprising or unbelievable,” then they probably also think that the right thing to do around a perfect predictor is “surprising or unbelievable,” because using logic on an unbelievable premise can lead to an unbelievable conclusion! Consider a Mundane Newcomb’s problem that is missing the perfect prediction but has the same evidential and counterfactual features: Omega offers you the choice of one or two boxes, you choose which boxes to take, and then it fills them, putting a million dollars in the red box and a thousand dollars in the blue box if you chose only the red box, and putting just a thousand dollars in the blue box if you chose the blue box or no boxes. Anyone who understands the scenario and prefers more money to less money will choose just the red box, and there’s nothing surprising or unbelievable about it.
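A minimal sketch of the Mundane Newcomb payoffs described above (the box-set encoding and function name are mine, purely for illustration):

```python
def mundane_newcomb_payoff(choice):
    """Dollars you end up with, given the set of boxes you choose to take.

    Filling rule from the scenario: red gets $1,000,000 (and blue $1,000)
    only if you choose just the red box; otherwise only blue gets $1,000.
    """
    if choice == {"red"}:
        contents = {"red": 1_000_000, "blue": 1_000}
    else:
        contents = {"red": 0, "blue": 1_000}
    return sum(contents[box] for box in choice)

for choice in [set(), {"blue"}, {"red", "blue"}, {"red"}]:
    print(sorted(choice), mundane_newcomb_payoff(choice))
# [] 0, ['blue'] 1000, ['blue', 'red'] 1000, ['red'] 1000000 -- red-only wins
```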
What is surprising is the claim that there’s an entity who can replicate the counterfactual structure of the Mundane Newcomb scenario without also replicating the temporal structure of that scenario. But that’s a claim about physics, not decision theory!
because the best alternative is “obvious” and the acts of simulation and consideration consume time and resources that do not pay for themselves.
Absolutely. This is the “bounded rationality” setting lots of people think about. For instance, Big Data is fashionable these days, and lots of people think about how we might do the usual statistics business under severe computational constraints due to huge dataset sizes, e.g. stuff like this:
http://www.cs.berkeley.edu/~jordan/papers/blb_icml2012.pdf
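For concreteness, here is a rough sketch of the bag-of-little-bootstraps idea from the linked paper, using a plain mean as the estimator and simplifying the subsampling details; treat it as an illustration of the idea rather than the paper’s exact procedure:

```python
import numpy as np

def blb_stderr(data, n_subsets=5, subset_exponent=0.6, n_resamples=50, seed=0):
    """Rough bag-of-little-bootstraps estimate of the standard error of the mean.

    Each small subset of size b ~ n**subset_exponent stands in for the full
    dataset: full-size (size-n) resamples of it are represented compactly by
    multinomial weights, so no resample ever needs n units of memory.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    b = max(2, int(n ** subset_exponent))
    per_subset_se = []
    for _ in range(n_subsets):
        subset = rng.choice(data, size=b, replace=False)
        estimates = []
        for _ in range(n_resamples):
            weights = rng.multinomial(n, np.full(b, 1.0 / b))
            estimates.append(np.average(subset, weights=weights))
        per_subset_se.append(np.std(estimates))
    # Average the per-subset assessments of variability.
    return float(np.mean(per_subset_se))

data = np.random.default_rng(1).normal(size=100_000)
print(blb_stderr(data))  # should be close to 1/sqrt(100_000) ~ 0.003
```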
But in bounded rationality settings we still want to pick the best out of our alternatives; we just have a constraint that we can’t take more than a certain amount of resources to return an answer. The (trivial) idea of doing your best is still there. That is the part I accept. But that part is boring; figuring out the right thing to maximize is what is very subtle (and may involve non-consequentialist ideas; for example, a decision theory that handles blackmail may involve virtue-ethical ideas, because the returned answer depends on “the sort of agent” someone is).