That seems like a misuse of the word “rationality”. The “rational” course of action depends directly on your utility function (and therefore, mostly, your values) and your decision algorithm, so asking what response to the thought experiment is rational is somewhat question-begging.
A better term would be “your decision theory”, but that framing is trivially dismissible as non-rational: if you disagree with the results of the decision theory you use, then it is not optimal, which means you should pick a better one.
If a utility function and a decision theory that are fully reflectively coherent with me agree that, for X reasons, killing my child is necessarily and strictly better than every other course of action, even taking into account my preference for my child’s survival over that of other people, then yes, I definitely would: clearly there is more utility to be gained elsewhere, and the world will predictably be better for it. That calculation would (must) include the negative utility from my sadness, from prison, from the life lost, from the opportunity costs, and from any other negative impact of killing my own child.
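As a minimal sketch of what “include the negative utility” means here, every number below is an invented placeholder, not a claim about real values:

```python
# Toy sketch only: all figures are made-up placeholders for illustration.

utility_gained_elsewhere = 100.0  # the hypothetical upside from the "X reasons"

# Negative impacts the calculation must account for
negative_impacts = {
    "my_sadness": -40.0,
    "prison": -20.0,
    "life_lost": -60.0,
    "opportunity_costs": -10.0,
}

net_utility_of_acting = utility_gained_elsewhere + sum(negative_impacts.values())
net_utility_of_not_acting = 0.0  # baseline alternative

# Acting is warranted only if the fully-accounted net utility strictly exceeds
# the alternative; with these placeholder numbers it does not.
print(net_utility_of_acting > net_utility_of_not_acting)  # False
```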
And, as other theorems imply, since the value of information and of accuracy here would obviously be very high, I’d make really damn sure about these calculations, to a degree of accuracy and formalism much higher than I believe my own mind is currently capable of when lives are involved. So, with all that said, in a real situation I would doubt my own calculations: I would assign much greater probability to an error in my calculations, or to a gap or bias in my information, than to my calculations being right and killing my own child actually being optimal.
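A rough sketch of why that doubt dominates, assuming a high probability that the calculation itself is wrong (again, the numbers are invented placeholders):

```python
# Toy sketch: weight the calculation's verdict by the probability that the
# calculation itself is mistaken. All numbers are invented placeholders.

p_calculation_wrong = 0.95             # credence in my own error or biased information
utility_if_calculation_right = 30.0    # what the calculation claims acting is worth
utility_if_calculation_wrong = -200.0  # acting on a wrong calculation is catastrophic

expected_utility_of_acting = (
    (1 - p_calculation_wrong) * utility_if_calculation_right
    + p_calculation_wrong * utility_if_calculation_wrong
)

# With an error probability this high, the expected utility of acting is
# strongly negative, which is the point of the last sentence above.
print(expected_utility_of_acting)  # -188.5
```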
Any other specifics I forgot to mention?