But, to answer specifically within your clarified framing, with my choice being the governing choice in all resulting timelines: I would currently choose to withhold the information/technology, and would very likely make use of my ability to “lock away” memories to keep the information properly controlled.
Ok. So remember, your choices are:
1. Lock away the technology for some time
2. Release it now
Option 1 doesn’t mean forever; say it lasts the length of the maximum human lifespan. You are choosing to kill every living person because you hope that the next generation of humans is more moral/ethical/deserving of immortality than the present, but you get no ability to affect the outcome. The next generation, shortly after everyone now alive is dead, will be immortal, and will be exactly as unethical (or not) as you believe future people will be.
I am saying that I don’t see how option 1 is very justifiable; it’s also genocide, even though in this hypothetical you will face no legal consequences for committing the atrocity.
I believe this made-up hypothetical is a fairly good model for actual reality. I think people working together, even by accident* (for example, simply pretending that immortality is impossible and never allowing studies on cryonics to be published), could in fact delay indefinite human life extension for some time, maybe as long as the maximum human lifespan. But regardless of the length of the delay, there are ‘assholes’ today and ‘future assholes’, and it isn’t a valid argument to say you should delay immortality in the hope that future people are less, well, bad.
*The reason this won’t last forever is that the technology has immense instrumental utility. Even a small amount of reliable, proven-to-work life extension would have almost everyone who can afford it purchasing it, and advances in other areas make achieving this more and more likely.
You are choosing to kill every living person because you hope that the next generation of humans is more moral/ethical/deserving of immortality than the present, but you get no ability to affect the outcome.
Even with this context, my calculations come out the same. It appears that our estimates of the value (and possibly sacredness) of lives differ, as do our allocations of relative weights for such things. I don’t know that I have anything further worth mentioning, and I am satisfied with my presentation of the paths my process follows.
Do you think your process could be explained to others in an “external reasoning” way, or is this just kinda an internal gut feel, like you just value everyone on the planet being dead and you roll the dice on whoever is next?
The decision was generated by my intuition, since I’ve done the math on this question before, but it did not draw from a specific “gut feeling” beyond my querying that heavily programmed intuition for a response with the appropriate inputs.
Your question has brought to mind some specific ways my perspective deviates that I have not explicitly mentioned yet:
- I spent a large amount of time tracing what virtues I value and what sorts of “value” I care about, and I have since spent roughly five years using that knowledge to “automate” calculations that take such information as input, by training my intuition to do as much of the process as is reasonable
- I know what my value categories are (even if I don’t usually share the full list) and why they’re on the list (and why some things aren’t on the list)
- My “decision engine” is trained to be capable of adding “research X to improve confidence” options when making decisions
- If time or resources demand an immediate decision, then I will make a call based on the estimates I can make, with minimal hesitation
- This system is actively maintained
- I do not consider lives “priceless”; I will perform some sort of valuation if they are relevant to a calculation
- An individual is valued via my estimate of their replacement cost, which can sometimes be alarmingly high in the case of unique individuals
- Groups I can’t easily gather data on are estimated using intuition-driven distributions of my expectations for the density of people capable of gathering/using influence and of awful people
- My estimates and their underlying metrics are generally kept internal and subject to change, because I find it socially detrimental to discuss such things without a pressing need being present
- Two “value categories” I track are “allows timelines where Super Good Things happen” and “allows timelines where Super Bad Things happen”
- These categories have some of the strongest weights in the list of categories
- They specifically cover things I think would be Super Good/Bad to happen, either to myself or to others
- I estimate that skilled awful people having an unlimited lifespan would be a Super Bad Thing, therefore timelines that allow it are heavily weighted against
- Awful people can convert “normal” people to expand the numbers of awful people, and given a lack of pressure even “average” people can trend towards being awful
- The influence-accumulation curves over time that I have personally observed and estimated look exponential barring major external intervention or resource limitations, and currently the finite lifespan of humans forces each awful person to deal with the slow-growth part of their curve before hitting their stride (a toy numerical sketch of this follows the list)
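Below is a minimal toy sketch, in Python, of the weighting mechanics described in the list above. Every weight, growth rate, and exposure figure in it is a hypothetical placeholder rather than a value anyone in this exchange actually uses; the only thing it is meant to show is that the exponential influence curve is truncated by the lifespan cap, and that the preferred option flips depending on how lives lost now are weighted against exposure to “Super Bad” timelines.

```python
import math

# Toy sketch only: every weight, rate, and figure below is a hypothetical
# placeholder chosen for illustration, not a value used in the discussion above.

def influence(t_years: float, lifespan_years: float, growth_rate: float = 0.1) -> float:
    """Exponential influence accumulation, truncated by a finite lifespan."""
    return math.exp(growth_rate * min(t_years, lifespan_years))

def score(option: dict, weights: dict) -> float:
    """Weighted sum over whichever value categories an option touches."""
    return sum(weights[category] * amount for category, amount in option.items())

MAX_LIFESPAN = 120.0  # years; the cap that currently truncates the curves

options = {
    "release_now": {
        "lives_lost_now": 0.0,
        # Unlimited lifespans let influence keep compounding past the old cap.
        "super_bad_exposure": influence(300, lifespan_years=float("inf")),
    },
    "lock_away": {
        "lives_lost_now": 8e9,  # everyone currently alive dies of old age
        "super_bad_exposure": influence(300, lifespan_years=MAX_LIFESPAN),
    },
}

# Two hypothetical weightings; the preferred option depends entirely on them.
weightings = {
    "lives weighted heavily": {"lives_lost_now": -1.0, "super_bad_exposure": -1e-4},
    "Super Bad exposure weighted heavily": {"lives_lost_now": -1.0, "super_bad_exposure": -1.0},
}

for label, weights in weightings.items():
    best = max(options, key=lambda name: score(options[name], weights))
    print(f"{label}: prefer {best}")
```

Run as written, the first weighting prefers releasing now and the second prefers locking the technology away, which is exactly the disagreement over relative weights noted earlier in this exchange.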