We can and we can’t. Here’s an 11-year-old article where rats successfully regained function: http://www.jneurosci.org/content/21/23/9334.abstract
That’s just an example. I think that if society were far more tolerant of risks, if there were more funding, and if the teams working on the problem were organized and led properly, then we would see successes in human patients in the near future.
Isn’t that the funny thing? We’ll take a certain loss over a risk of the same exact loss. Sigh.
Isn’t it closer to “take a certain loss over a risk of the same exact loss, plus a whole lot of money”?
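To put rough numbers on that trade-off, here is a minimal sketch; the loss, cost, and success probability below are invented purely for illustration and are not from either comment.

```python
# Toy expected-value comparison (all numbers are made up for illustration).
# "Do nothing": the loss happens for certain.
# "Try the unproven treatment": pay a cost up front; with probability p the
# loss is avoided, otherwise you suffer the same loss plus the cost.

loss = 100.0       # utility lost if nothing is done (hypothetical)
cost = 20.0        # price of attempting the unproven method (hypothetical)
p_success = 0.3    # assumed chance the attempt works (hypothetical)

ev_do_nothing = -loss
ev_try = p_success * (-cost) + (1 - p_success) * (-(loss + cost))

print(f"do nothing: {ev_do_nothing}")   # -100.0
print(f"try:        {ev_try}")          # -90.0
```

With these made-up numbers the gamble still comes out ahead despite the extra money at stake; shrink the success probability or raise the cost enough and the ordering flips. The toy model only makes the trade-off explicit, it doesn’t settle it.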
Yes, that is part of it. I don’t think that the flat financial loss is the killer issue in many cases where an unproven method might or might not work. When doing nothing is acceptable, trying something becomes fraught with the risk of being blamed for the failure.
That’s a Pascal’s wager argument.
What? No. Pascal’s wager is when you apply the rules of instrumental rationality to epistemic rationality.
Simply being willing to take risks to possibly get a better outcome, without warping your beliefs, is not the same thing at all.
“Pascal’s wager” denotes several different fallacies, which are present in Pascal’s original argument.
Instrumentally, it refers to estimating expected utility based only on a possible outcome with an extremely large (positive or negative) payoff, without taking into account the fact that said outcome has an extremely small probability.
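As a minimal sketch of that distinction (the options, probabilities, and payoffs below are invented): ranking gambles by their most extreme payoff is not the same as ranking them by expected utility.

```python
# Toy illustration (all probabilities and payoffs are made up).
# The fallacy described above: rank options by their most extreme payoff,
# ignoring how improbable that payoff is. Expected utility weighs both.

options = {
    "modest bet":    [(0.9, 10), (0.1, -5)],         # (probability, payoff) pairs
    "pascalian bet": [(1e-9, 1e6), (1 - 1e-9, -1)],  # huge payoff, tiny probability
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

def max_payoff(outcomes):
    return max(u for _, u in outcomes)

for name, outcomes in options.items():
    print(f"{name:13s}  EU = {expected_utility(outcomes):8.3f}  "
          f"max payoff = {max_payoff(outcomes):g}")

# The "pascalian bet" wins on max payoff (1e6 vs 10) but loses on expected
# utility (about -0.999 vs 8.5) once the tiny probability is accounted for.
```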
This is not quite right. The justification is that an action leading to certain negative consequences is not equivalent to inaction leading to the same consequences. Inaction is almost always acceptable, morally and legally. There are many obvious and non-obvious pitfalls in changing this attitude.
True when comparing one action with a non-conjugate declining-to-act (e.g. throwing someone off a building vs. not saving someone from falling off a building).
In this case, we’re looking at a fear of ineffectiveness: the case where acting could produce the same outcome as not acting at all.
And yet, from a consequentialist standpoint, there shouldn’t be a difference. Regardless of potential pitfalls, this is unlikely to change: I suspect it’s “hardwired” into our psychology. But there is also a reverse tendency, especially in public attitudes towards leaders, where it is better to be seen to be doing something rather than nothing, even if it is not clear what action should be taken.
Only if your reasoning is extremely reliable in estimating the consequences of your action or inaction. Otherwise you may end up doing more harm by acting than you would by not acting (it happens all the time). I am guessing that this is part of what keeps people from acting.
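A toy model of that failure mode (the distributions and decision rule below are invented for illustration): if you act whenever a noisy estimate says acting is beneficial, the harm done by misjudged actions grows with how unreliable the estimate is, while doing nothing is defined as zero effect here.

```python
# Toy model (invented numbers): act only when your noisy estimate of the
# consequence looks positive, then tally the harm from the actions whose
# true effect was actually negative. Inaction has zero effect by construction.

import random

random.seed(0)

def average_harm_from_acting(noise_sd, trials=100_000):
    total_harm = 0.0
    for _ in range(trials):
        true_effect = random.gauss(0.0, 1.0)                  # real consequence of acting
        estimate = true_effect + random.gauss(0.0, noise_sd)  # your unreliable estimate
        if estimate > 0:                                      # you act when it looks good
            total_harm += min(true_effect, 0.0)
    return total_harm / trials

for noise_sd in (0.1, 1.0, 3.0):
    print(f"estimate noise sd {noise_sd}: "
          f"average harm per decision {average_harm_from_acting(noise_sd):.3f}")
```

With a near-perfect estimate almost no harmful actions slip through; as the noise grows, more of the actions taken turn out to be negative, which is the asymmetry the comment is pointing at.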