Another example is risk compensation: You make an activity safer (yay) and participants compensate by taking more risks (oh no).
Yeah, this is close, but a perfect match would end with you being forced to take more risks, because if you don't, you are no longer competitive. Otherwise, you can opt out of taking more risks, in a way that you cannot opt out of, say, inflation.
A situation like: there is a machine that is dangerous to operate; it has a certain probability of killing you on any given day you use it. The government decides that a non-zero probability of an employee dying on the job is okay, as long as it doesn't exceed, let's say, 1:1000 per person per year. Your company crunches some numbers and concludes that you should operate this machine exactly 10 working days each year, and spend the rest of the year operating some much safer but less productive machinery.
Then a technological improvement cuts the chance of getting killed by this machine in half (yay), and your company updates the rules so that now you are required to use this machine 20 working days each year (oh no); you get fired and replaced by someone else if you refuse; and your wage remains the same, while all the extra profit goes to the company. (A naive expectation would be that a safer machine would reduce your chances of dying per year.)
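To make the arithmetic explicit, here is a minimal sketch. The 1:1000 cap and the 10-day schedule come from the example above; the per-day risk is a made-up number chosen so that 10 days per year lands exactly at the cap.

```python
# Minimal sketch of the arithmetic in the example above (per-day risk is hypothetical).
cap = 1 / 1000        # regulatory cap on annual death risk per worker
p_day = 1 / 10_000    # assumed per-day risk of operating the dangerous machine

# Small-probability approximation: annual risk ~= days * per-day risk.
risk_before = 10 * p_day         # 10 days/year -> 1/1000, exactly at the cap
risk_after = 20 * (p_day / 2)    # machine twice as safe, schedule doubled -> still 1/1000

assert risk_before <= cap and risk_after <= cap
print(risk_before, risk_after)   # both 0.001: the worker's annual risk is unchanged,
                                 # the company just gets twice as many productive days
```

The safety gain is entirely converted into extra machine time, so the worker's annual risk stays pinned at the regulatory cap.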
People who voluntarily take more risk, e.g. by driving their cars faster when wearing seat belts, at least actually get from point A to point B faster, so it's a trade-off; they are now on a different indifference curve.
There is no reason to believe safety benefits are typically offset 1:1. Standard preference structures would suggest the original effect may often be only partly offset, or in other cases even backfire by being more than offset. And in either case, net utility for the users of a safety-improved tool might still increase in the end.
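As a purely illustrative sketch (the functional forms and numbers below are assumptions, not claims from the comments above), here is a toy model of a driver who re-optimizes speed after a safety improvement halves the per-speed risk. With these particular curves the risk is only partly offset and the driver's utility still goes up; other assumed curves can produce more-than-full offsetting instead.

```python
# Toy risk-compensation model (all functional forms and numbers are assumptions):
# a driver picks speed v to maximize utility(v) = benefit(v) - risk(v),
# where risk(v) = s * v**2 and a seat belt halves the per-speed risk factor s.
import math

def best_speed(s):
    """Grid-search the speed that maximizes utility for a given risk factor s."""
    speeds = [i / 100 for i in range(1, 1001)]   # candidate speeds 0.01 .. 10.00
    def utility(v):
        benefit = 1 - math.exp(-v)   # bounded, diminishing benefit of speed (assumption)
        risk = s * v ** 2            # accident risk grows steeply with speed (assumption)
        return benefit - risk
    v_star = max(speeds, key=utility)
    return v_star, s * v_star ** 2, utility(v_star)

v0, risk0, u0 = best_speed(s=0.010)   # no seat belt
v1, risk1, u1 = best_speed(s=0.005)   # seat belt halves the per-speed risk

print(f"no belt:   speed {v0:.2f}, residual risk {risk0:.3f}, utility {u0:.3f}")
print(f"with belt: speed {v1:.2f}, residual risk {risk1:.3f}, utility {u1:.3f}")
# With these curves the driver speeds up, residual risk falls (but by less than half,
# i.e. the safety gain is partly offset), and the driver's utility still increases.
```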