You are choosing to kill every living person because you hope that the next generation of humans is more moral/ethical/deserving of immortality than the present, but you have no ability to affect the outcome.
Even with this context, my calculations come out the same. It appears that our estimates of the value (and possibly sacredness) of lives differ, as do our allocations of relative weights for such things. I don’t know that I have anything further worth mentioning, and I am satisfied with my presentation of the paths my process follows.
Do you think your process could be explained to others in an “external reasoning” way, or is this just kind of an internal gut feel, like you just value everyone on the planet being dead and roll the dice on whoever comes next?
The decision was generated by my intuition, since I’ve done the math on this question before, but it did not draw on a specific “gut feeling” beyond my querying that heavily-programmed intuition with the appropriate inputs.
Your question has brought to mind some specific ways my perspective deviates that I have not explicitly mentioned yet:
I spent a large amount of time tracing which virtues I value and what sorts of “value” I care about, and have since spent roughly five years using that knowledge to “automate” calculations that take such information as input, training my intuition to handle as much of the process as is reasonable
I know what my value categories are (even if I don’t usually share the full list) and why they’re on the list (and why some things aren’t on the list)
My “decision engine” is trained to be capable of adding “research X to improve confidence” options when making decisions
If time or resources demand an immediate decision, then I will make a call based on the estimates I can make with minimal hesitation
This system is actively maintained
I do not consider lives “priceless”; I will perform some sort of valuation whenever they are relevant to a calculation
An individual is valued via my estimates of their replacement cost, which can sometimes be alarmingly high in the case of unique individuals
Groups I can’t easily gather data on are estimated using intuition-driven distributions of my expectations for the density of people capable of gathering/using influence and the density of awful people
My estimations and their underlying metrics are generally kept internal and subject to change, because I find it socially detrimental to discuss such things without a pressing need
Two “value categories” I track are “allows timelines where Super Good Things happen” and “allows timelines where Super Bad Things happen”
These categories have some of the strongest weights in the list of categories
They specifically cover things I think would be Super Good/Bad to happen, either to myself or others
I estimate that skilled awful people having an unlimited lifespan would be a Super Bad Thing; therefore timelines that allow it are heavily weighted against (the first sketch after this list shows the shape of this kind of weighted comparison)
Awful people can convert “normal” people to swell their numbers, and absent pressure even “average” people can trend toward being awful
The influence-accumulation curves over time that I have personally observed and estimated look exponential, barring major external intervention or resource limitations; currently the finite lifespan of humans forces each awful person to deal with the slow-growth part of their curve before hitting their stride (the second sketch after this list gives a toy model of this dynamic)
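For anyone who wants the “external reasoning” version of the weighted-comparison step spelled out, here is a minimal, purely illustrative sketch in Python; every category name, weight, and score is a hypothetical placeholder chosen for the example, not a value taken from the discussion above.

```python
# Purely illustrative sketch of a weighted value-category comparison.
# All category names, weights, and scores are hypothetical placeholders.

WEIGHTS = {
    "allows_super_good_things": +10.0,  # among the strongest weights
    "allows_super_bad_things": -10.0,   # heavily weighted against
    "replacement_cost_of_lives": -1.0,  # lives valued, not treated as priceless
}

def score(option: dict[str, float]) -> float:
    """Weighted sum over whichever categories an option touches."""
    return sum(WEIGHTS[cat] * magnitude for cat, magnitude in option.items())

# Two hypothetical options with made-up per-category magnitudes.
options = {
    "option_a": {"allows_super_good_things": 0.6,
                 "allows_super_bad_things": 0.2,
                 "replacement_cost_of_lives": 3.0},
    "option_b": {"allows_super_good_things": 0.1,
                 "allows_super_bad_things": 0.9,
                 "replacement_cost_of_lives": 0.0},
}

for name, categories in options.items():
    print(f"{name}: {score(categories):+.1f}")
```

The only point of the sketch is the shape: categories like “allows Super Bad Things” carry large weights, so any option that scores meaningfully on them loses even when the other categories favor it.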
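And a toy model of the influence-accumulation point: with compounding growth, most of the gain arrives late in the curve, so a finite lifespan keeps truncating the process in its slow phase, while an unlimited lifespan would not. The growth rate and year counts below are assumptions chosen only to show the shape, not estimates from the discussion.

```python
# Toy model: exponential influence accumulation truncated by a finite lifespan.
# Starting influence, growth rate, and the year counts are assumed values.

def influence(years: float, start: float = 1.0, rate: float = 0.10) -> float:
    """Influence after `years` of compounding at `rate` per year."""
    return start * (1 + rate) ** years

print(f"after 60 years (a mortal span): {influence(60):,.0f}x")
print(f"after 200 years:                {influence(200):,.0f}x")
print(f"after 500 years:                {influence(500):,.0f}x")
```

The jump between the 60-year line and the later lines is the whole argument: mortality repeatedly resets the accumulator before the steep part of the curve.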