A purely rational person would be nigh omniscient. If a combustion engine does more good than bad (which it does), a purely rational person would realize this.
If you want to know how we’d act if we just weren’t biased about risks, but were just as imprecise, consider: would it have been worthwhile to be substantially more cautious? Barring nuclear weapons, I doubt it. The lives lost due to technological advancements have been dwarfed by the lives saved. A well-calibrated agent would realize this and proceed with less caution.
There are areas where we’re far too cautious, such as medicine. Drugs aren’t released until the probability of killing someone is vastly below the probability of saving someone. Human testing is avoided until it’s reasonably safe, rather than risking a few lives to get a potentially life-saving drug out years earlier.
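To put rough numbers on that trade-off, here is a minimal sketch; every figure in it is hypothetical and only illustrates the shape of the calculation, not any actual drug data.

```python
# Minimal sketch (made-up numbers) of the caution trade-off described above:
# releasing a drug earlier risks some extra deaths from less-tested side effects,
# but each year of delay forgoes the lives the drug would have saved.

def expected_net_lives_saved(years_earlier, lives_saved_per_year, expected_extra_deaths):
    """Expected net benefit of releasing a drug `years_earlier` sooner."""
    return years_earlier * lives_saved_per_year - expected_extra_deaths

# Hypothetical example: releasing 3 years earlier, saving 1,000 lives/year,
# at the cost of an expected 50 additional deaths from early release.
print(expected_net_lives_saved(3, 1_000, 50))  # 2950 expected net lives saved
```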
A purely rational person would be nigh omniscient. If a combustion engine does more good than bad (which it does), a purely rational person would realize this.
That’s not the definition of rationality that’s usually used around here. The one used here is much more conservative about how much counterfactual ability it implies.
I agree with the rest of your point, but I think I’m misinterpreting this statement, because it seems like an overstatement to me. That is, I’d restate your second sentence as “If there were dispute over whether use of a combustion engine did more good than bad, a purely rational person would be able to effectively investigate and correctly determine the answer.” As you say, I’m fairly certain that the combustion engine created more benefit than harm to humanity.
“A purely rational person would be nigh omniscient”
Even at current human intelligence levels? I don’t see how pure rationality without the ability to crunch massive amounts of data extremely fast would make someone omniscient, but I may be missing something.
“If a combustion engine does more good than bad (which it does)”
Of course, I’m playing devil’s advocate with this post a bit, but I do have some uncertainty about… well, your certainty about this :)
What if a purely rational mind decides that while there is a high probability that the combustion engine would bring about more “good” than “bad”, the probable risks compel it to reject its production in favor of first improving the technology into something with a better reward/risk ratio? A purely rational mind would certainly recognize that, over time, reliance on gasoline derived from oil would lead to shortages and potential global warfare. That risk has a rather high probability. Perhaps a purely rational mind would opt to continue development until a more sustainable technology could be mass-produced, greatly reducing the potential for war, pollution, etc. Keep in mind, we have yet to see the final aftermath of our reliance on the combustion engine.
“The lives lost due to technological advancements have been dwarfed by the lives saved.”
How does a purely rational mind feel about the inevitable over-population issue that will occur if more and more lives are saved and/or extended by technology? How many people lead very low-quality lives today due to overpopulation? Would a purely rational mind make decisions to limit population rather than help it explode?
Does a purely rational mind value life less or more? Are humans MORE expendable to a purely rational mind so long as it is 51% beneficial, or is there a rational reason to value each individual life more passionately?
I feel that we tend to associate pure rationality with a rather sci-fi notion of robotic intelligence. In other words, pure rationality is cold and mathematical and would consider compassion a weakness. While this may be true, a purely rational mind may have other reasons than compassion to value individual life MORE rather than less, even when measured against a potential benefit.
The questions seem straightforward at first, but is it possible that we lean toward the easy answers, ones that may be heavily influenced by very irrational cultural assumptions?
How does a purely rational mind feel about the inevitable over-population issue that will occur if more and more lives are saved and/or extended by technology?
Overpopulation isn’t caused by technology. It’s caused by having too many kids and not using resources well enough. Technology has drastically increased our efficiency with resources, allowing us to easily grow enough to feed everyone.
Does a purely rational mind value life less or more?
The utility function is not up for grabs. Specifying that a mind is rational does not specify how much it values life.
I was answering based on the idea that these are altruistic people. I really don’t know what would happen in a society full of rational egoists.
It isn’t.
That is correct, but it is also probably the case that a rational mind would propagate better from its other values to the value of its own life. For instance, if your arm is trapped under a boulder, a human as-is would either be unable to cut off their own arm, or would do it at a suboptimal time (too late), compared to an agent that can propagate everything it values in the world into the value of its life, and have that huge value win against the pain. Furthermore, it would correctly propagate the later pain (assuming it knows it’ll eventually have to cut off its own arm) into the decision now. So it would act as if it values life more and pain less.
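To make the propagation concrete, here is a minimal sketch with made-up utilities and survival probabilities; nothing beyond the boulder scenario itself comes from the example above.

```python
# Minimal sketch (hypothetical numbers) of the boulder example: an agent that
# propagates everything it values into the value of staying alive will accept
# the pain of cutting off its arm early, while waiting too long mostly forfeits
# that propagated value.

utility_of_surviving = 1_000   # hypothetical: all future goals reachable only if alive
disutility_of_pain   = -200    # hypothetical: the pain of cutting the arm off
survival_chance_now  = 0.95    # cut early, before infection/exhaustion set in
survival_chance_late = 0.40    # cut only once death is clearly imminent

cut_now  = survival_chance_now  * utility_of_surviving + disutility_of_pain
cut_late = survival_chance_late * utility_of_surviving + disutility_of_pain

print(cut_now, cut_late)  # 750.0 vs 200.0: the propagated value of life dominates the pain
```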
You were talking about instrumental values? I thought you were talking about terminal values.
Well, the value of life, lacking specifiers, should be able to refer to the total value of life (as derived from other goals, plus intrinsic value if any); my post is rather explicit in that it speaks of the total. Of course you can take ‘value life’ to mean only the intrinsic value of life, but it is pretty clear that is not what the OP meant if we assume the OP is not entirely stupid. He is correct in the sense that the full value of life is affected by rationality. A rational person should only commit suicide in the very few circumstances where it truly results in maximum utility given the other values left unaccomplished if you are dead (e.g. so that your children can cook and eat your body, or, as in “28 Days Later”, killing yourself in the ten seconds after infection to avoid becoming a hazard, that kind of thing). It can be said that an irrational person can’t value life correctly (due to incorrect propagation).
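As a minimal sketch of what “total value of life” means here (all numbers hypothetical), the total is just whatever intrinsic value is placed on being alive plus the value that propagates from every other goal that can only be achieved while alive:

```python
# Minimal sketch (hypothetical numbers): total value of life = intrinsic value
# plus the value propagated from goals that require the agent to stay alive.

intrinsic_value_of_life = 100           # hypothetical terminal value, possibly zero
other_goals = {"raise children": 400,   # hypothetical values of goals that
               "finish research": 250,  # require the agent to stay alive
               "see friends": 150}

total_value_of_life = intrinsic_value_of_life + sum(other_goals.values())
print(total_value_of_life)  # 900: suicide is only rational if dying somehow scores higher
```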