If you had a chance to perform an action that led to a slight risk to your life
but increased the chance of sapience continuing to exist (in such a way
as to lower your overall chance of living forever), would you do so?
A good question.
I have at least one datum suggesting that the answer, for me in particular, is ‘yes’. I currently believe that what’s generally called ‘free speech’ is a strong supporting factor in, if not a necessary prerequisite for, developing the science we need to ensure sapience’s survival. Last year there was an event, ‘Draw Muhammad Day’, held to promote free speech; before it actually happened, there was a non-zero probability that anyone participating would receive threats, and potentially even violence, from certain extremists. While that was still the calculation, I joined in. (I did receive my very first death threats in response, but nothing came of them.)
You have evidence that you do, in fact, take such risks; but unless you have considered the issue very carefully, you don’t know whether you really want to do so. Section 1 of Yvain’s consequentialism FAQ covers the idea of not knowing, and then determining, what you really want. (The rest of the FAQ is also good, but it isn’t directly relevant to this discussion, and I think you might disagree with much of it.)