because the best alternative is “obvious” and the acts of simulation and consideration consume time and
resources that do not pay for themselves.
Absolutely. This is the “bounded rationality” setting lots of people think about. For instance, Big Data is fashionable these days, and people are thinking about how to do the usual statistics business under severe computational constraints imposed by huge dataset sizes, e.g. stuff like this:
http://www.cs.berkeley.edu/~jordan/papers/blb_icml2012.pdf
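(To make the flavor concrete, here is a rough Python sketch of the “Bag of Little Bootstraps” idea from that paper, not a faithful implementation; the function names, parameter defaults, and the standard-error-of-the-mean example are my own illustration:)

```python
import numpy as np

def blb_stderr(data, estimator, s=10, r=50, gamma=0.7, seed=None):
    """Rough sketch of the Bag of Little Bootstraps (Kleiner et al. 2012).

    Estimates the standard error of `estimator` while only ever touching
    subsets of size n**gamma, so each step stays cheap even for huge n.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    b = int(n ** gamma)                    # "little bootstrap" subset size
    quality = []
    for _ in range(s):                     # s small subsets of the data
        subset = rng.choice(data, size=b, replace=False)
        stats = []
        for _ in range(r):                 # r cheap resamples per subset
            # A size-n resample is represented by multinomial counts over
            # the b subset points, passed to the estimator as weights.
            counts = rng.multinomial(n, np.full(b, 1.0 / b))
            stats.append(estimator(subset, counts))
        quality.append(np.std(stats))      # per-subset standard error
    return float(np.mean(quality))         # average across subsets

def weighted_mean(x, w):
    return float(np.average(x, weights=w))

# e.g. standard error of the mean of a large sample:
data = np.random.default_rng(0).normal(size=100_000)
print(blb_stderr(data, weighted_mean, seed=1))
```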
But in bounded rationality settings we still want to pick the best of our alternatives; we just have the constraint that we can’t spend more than a certain amount of resources returning an answer. The (trivial) idea of doing your best is still there, as in the sketch below. That is the part I accept. But that part is boring; figuring out the right thing to maximize is what is very subtle (and may involve non-consequentialist ideas: a decision theory that handles blackmail, for example, may involve virtue-ethical ideas, because the returned answer depends on “the sort of agent” someone is).
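(As a toy illustration of “do your best under a resource constraint”, here is a hedged Python sketch of an anytime chooser; everything in it, names included, is hypothetical:)

```python
import time

def best_within_budget(alternatives, utility, budget_seconds):
    """Evaluate alternatives until the time budget runs out, then
    return the best option seen so far: same maximization target,
    just truncated deliberation."""
    deadline = time.monotonic() + budget_seconds
    best, best_u = None, float("-inf")
    for a in alternatives:
        if time.monotonic() >= deadline:
            break                          # resources exhausted, stop thinking
        u = utility(a)
        if u > best_u:
            best, best_u = a, u
    return best

# e.g. pick the integer in [0, 10**7) maximizing a toy utility,
# but with only 10 milliseconds to think:
print(best_within_budget(range(10**7), lambda a: -(a - 4242) ** 2, 0.01))
```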