First, a brief summary of my personal stance on immortality:
- Escaping the effects of aging for myself does not currently rate highly on my “satisfying my core desires” metrics
- Improving my resilience to random causes of death rates as a medium priority on said metrics, but that puts it in the midst of a decently large group of objectives
- If immortality becomes widely available, we will lose the current guarantee that “awful people will eventually die”, which greatly increases the upper bounds of the awfulness they can spread
- Personal growth can achieve a lot, but there are also parts of your “self” that can be near-impossible to get rid of, and I’ve noticed they tend to accumulate over time. It isn’t too hard to extrapolate from there and expect a future where things have changed so much that the life you want to live just isn’t possible anymore, and none of the options available are acceptable.
Some final notes:
- There are other maybe-impossible-maybe-not objectives I personally care more about that can be pursued (I am not ready to speak publicly on most of them)
- I place a decent amount of prioritization pressure to objectives that support a “duty” or “role” that I take up, when relevant, and according to my estimations my stance would change if I somehow took up a role where personal freedom from aging was required to fulfill the duty
- I do not care strongly enough to oppose the pursuit of immortality by people who are not “awful” (by my own definitions); my priorities mostly affect my own allocations of resources
- I mentioned in several places things I’m not willing to fight over, but I am somewhat willing to explain some aspects of my trains of thought. Note, however, that I am a somewhat private person and often elect silence over even acknowledging a boundary was approached.
You cannot know a person is not secretly awful until they become awful. Humans have an interpretability problem. So suppose an awful person behaves aligned (non-awful) in order to get into the immortality program, and then does a treacherous left turn and becomes extremely awful and heaps suffering on mortals and other immortals. The risks from misaligned immortals are basically the same as the risks from misaligned AIs, except the substrate differences mean immortals operate more slowly at being awful. But suppose this misaligned immortal has an IQ of 180+. Such a being could think up novel ways of inflicting lasting suffering on other immortals, creating substantial s-risk. Moreover, this single misaligned immortal could, with time, devise a misaligned AI, and when the misaligned AI turns on the misaligned immortal and also on the other immortals and the mortals (if any are left), you are left with suffering that would make Hitler blanch.
If immortality becomes widely available, we will lose the current guarantee that “awful people will eventually die”, which greatly increases the upper bounds of the awfulness they can spread
I mean… amazingly good people die too. Sure, a society of immortals would obviously be very weird, and possibly quite static, but I don’t see how eventual random death is some kind of saving grace here. Awful people die and new ones are born anyway.
- If immortality becomes widely available, we will lose the current guarantee that “awful people will eventually die”, which greatly increases the upper bounds of the awfulness they can spread
Do you think that some future generation of humans (or AI replacements) will become immortal, with the treatments being widely available?
Assuming they do—remember, every software system humans have ever built already is immortal, so AIs will all have that property—what bounds the awfulness of future people but not the people alive right now? Why do you think future people will be better people?
If you had some authority to affect the outcome—whether or not current people get to be immortal, or you can reserve the treatment for future people who don’t exist yet—does your belief that future people will be better people justify this genocide of current people?
Do you think that some future generation of humans (or AI replacements) will become immortal, with the treatments being widely available?
I do not estimate the probability to be zero, but other than that my estimation metrics do not have any meaningful data to report.
Assuming they do—remember, every software system humans have ever built already is immortal, so AIs will all have that property—what bounds the awfulness of future people but not the people alive right now?
First, I’m not sure I agree that software systems are immortal. I’ve encountered quite a few tools and programs that are nigh-impossible to use on modern computers without extensive layers of emulation, and I expect that problem to get worse over time.
Second, I mainly track three limitations on somebody’s “maximum awfulness”:
- In a pre-immortality world, they have only a fixed amount of time to exert direct influence and spread awfulness that way
- The “society” in which we operate exerts pressure on nearly everyone it encompasses, amplifying the effects of “favored” actions and reducing the effects of “unpopular” actions. This is a massive oversimplification of a very multi-pronged concept, but this isn’t the right time to delve into it.
- Nobody is alone in the “game”, and there will almost always be someone else whose actions and influence exert pressure on whatever a given person is trying to do, although the degree of this effect varies wildly.
If immortality enters the picture, the latter two bullet points will still apply, but I estimate that they would not be nearly as effective on their own. Given infinite time, awful people can spread their influence and create awful organizations, especially given that people I consider “awful” tend to acquire influence more easily than people I consider “good” (since they have fewer inhibitions and more willingness to disrespect boundaries), which suggests a strong trend towards imbalance in the long term.
Why do you think future people will be better people?
I don’t necessarily think future people will be better people. I don’t feel confident estimating how their “awfulness rating” would compare to current people, but if held at gunpoint I would estimate little to no change. I am curious what made you think that I held such an expectation, but you don’t have to answer.
If you had some authority to affect the outcome—whether or not current people get to be immortal, or you can reserve the treatment for future people who don’t exist yet—does your belief that future people will be better people justify this genocide of current people?
There would be several factors in a decision to use such authority:
- If I gained the authority through a specific role or duty, what would the expectations of that role or duty suggest I should do? This would be a calculation in its own right, but this should be a sufficient summary.
- Do I expect my choice to prevent the spread of immortality to be meaningful long-term? The sub-questions here would look like “If I don’t allow the spread, will someone else get to make a similar choice later?”
- Is this the right time to make the decision? (I often recommend people ask this question during important decision-making)
The first and third factors I feel are self-explanatory, but I will talk a bit more on the second factor.
I would not expect others given the same decision to necessarily make the same choice, so even one or two other people facing the same decision would greatly increase my estimate of the likelihood that someone eventually chooses to hit the “immortality button”. Therefore, if I expect the chance of “someone else chooses to press the button” to be “likely”, I would then have to calculate further on how much I trusted the others I expected to be making such decisions. If I expected awful people to have the opportunity to choose whether to press the button, I would favor pressing it under my own control and circumstances, but if I expected good people to be my “competition”, I would likely refrain and let them pursue the matter themselves.
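As a rough illustration of why even a couple of other decision-makers dominate this estimate, here is a minimal sketch; the per-person probability is a hypothetical placeholder, not one of my actual estimates:

```python
# Toy model: if each independent decision-maker presses the "immortality
# button" with probability p, the chance that *someone* presses it grows
# quickly with the number of decision-makers n.
def prob_someone_presses(p: float, n: int) -> float:
    """P(at least one of n independent actors presses) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

p = 0.3  # hypothetical per-person chance of pressing
for n in (1, 2, 3, 5, 10):
    print(f"n={n:2d}  P(someone presses) = {prob_someone_presses(p, n):.2f}")
```

Under that independence assumption, my own refusal changes the outcome far less than my estimate of who else is standing near the button.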
… does your belief that future people will be better people justify this genocide of current people?
I do not currently consider myself to have enough ability to meaningfully influence the pursuit of immortality, and in any case I have consciously chosen to prioritize other things. I also prefer to frame such matters in terms of “how much change from the expected outcome can you achieve” rather than focusing on all the perceived badness of the expected outcome. I’ve found such framing to be more efficient and stabilizing in my work as a software engineer.
As a general note to wrap things up, I prefer to avoid exerting major influence on matters where I do not feel strongly. I find that this tends to reduce “backsplash” from such exertions and shows respect for boundaries of people in general. As the topic of pursuing immortality is clearly a strong interest of many people and it is not a strong interest of mine, I tend to refrain from taking action more overt than being willing to discuss my perspective.
First, I’m not sure I agree that software systems are immortal. I’ve encountered quite a few tools and programs that are nigh-impossible to use on modern computers without extensive layers of emulation, and I expect that problem to get worse over time.
I’m not sure your position is coherent. You, as a SWE, know that you can keep producing Turing-complete emulations and keep any possible software from the past working, with slight patches (for example, early game console games depended on UDB to work at all). It’s irrelevant if it isn’t economically feasible to do so. I think you and I can both agree that an “immortal” human is a human that will not die of aging or any disease that doesn’t cause instant death. It doesn’t mean that it will be economically feasible to produce food to feed them in the far future; they could die from that, but they are still biologically immortal. Similarly, software is digitally immortal and eternal... as long as you are willing to keep building emulators or replacement hardware from specs.
There would be several factors in a decision to use such authority:
- If I gained the authority through a specific role or duty, what would the expectations of that role or duty suggest I should do? This would be a calculation in its own right, but this should be a sufficient summary.
- Do I expect my choice to prevent the spread of immortality to be meaningful long-term? The sub-questions here would look like “If I don’t allow the spread, will someone else get to make a similar choice later?”
- Is this the right time to make the decision? (I often recommend people ask this question during important decision-making)
While I found your careful thought process here inspiring, the normal hypothetical assumption is to assume you have the authority to make the decision without any consequences or duty, and are immortal. Meaning that none of these apply. You hypothetically can ‘click the mouse’* and choose no immortality until some later date, but you personally have no authority to influence how worthy future humans are.
*such as in a computer game like Civilization
Finally, the implicit assumption I make, and I think you should make given the existing evidence that software is immortal, is that:
“If I don’t allow the spread, will someone else get to make a similar choice later?”
There is a slightly-less-than-100% chance that, within 1000 years and barring a cataclysmic event, some kind of life with the cognitive abilities of humans+ will exist in the solar system that is immortal. There are large practical advantages to having this property, from being able to make longer-term plans to simply not losing information with time.
Human lifespans did not evolve in an environment with modern tools and complex technology; they are completely unsuitable to an environment where it takes, say, years to transfer between planets on the most efficient trajectory, and possibly centuries to reach the nearest star, depending on engineering limitations.
Again, though, it’s reasonable to have doubt that biological ‘meatware’ can ever be made eternal, but since software already is, immortality exists the moment software can mimic all the important human cognitive capabilities.
I think you and I can both agree that an “immortal” human is a human that will not die of aging or any disease that doesn’t cause instant death. It doesn’t mean that it will be economically feasible to produce food to feed them in the far future; they could die from that, but they are still biologically immortal.
I don’t know about either of you, but I do NOT agree with that definition as the default meaning in this discussion. Human immortality, colloquially in conversations I’ve had and seen on LW and in related circles, means “a representative individual who experiences the universe in ways similar to me, and who has a high probability of continuing to do so for thousands of years.”
“pattern-immortal, but probably never going to actually live” is certainly not what most people mean.
I’m not sure your position is coherent. You, as a SWE, know that you can keep producing Turing-complete emulations and keep any possible software from the past working, with slight patches (for example, early game console games depended on UDB to work at all).
Source code and binary files would qualify as “immortal” by most definitions, but my experience using Linux and assisting in software rehosts has made me very dubious of the “immortality” of the software’s usability.
Here’s a brief summary of factors that contribute to that doubt:
- Source code is usually not as portable as people think it is, and can be near-impossible to build correctly without access to sufficient documentation or copies of the original workspace(s)
- Operating systems can be very picky about which executables they’ll run, and executables in turn care a lot about which versions of the libraries they need are present (see the sketch after this list)
- There are a lot of architectures out there for workspaces, networks, and systems nowadays, and information about many of them is quietly being lost to brain drain and failures to document; some of that information can be near-impossible to re-acquire afterwards
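To make the library-versioning point concrete, here is a minimal sketch of the kind of check that comes up early in a rehost; the library names are hypothetical examples rather than dependencies of any specific project, and even a “found” result says nothing about ABI compatibility:

```python
# Minimal sketch: probe whether the shared libraries an old binary expects
# can even be located on the current system. On a modern distro, legacy
# libraries (or the exact versions an executable was linked against) are
# often simply absent.
import ctypes.util

# Hypothetical dependencies of some legacy tool.
expected_libs = ["ssl", "ncurses", "png12", "motif"]

for name in expected_libs:
    resolved = ctypes.util.find_library(name)  # returns a name/path or None
    print(f"lib{name}: {resolved if resolved else 'NOT FOUND'}")
```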
It’s irrelevant if it isn’t economically feasible to do so.
I do not consider economic infeasibility irrelevant when a problem can approach the scope of “a major corporation or government dogpiling the problem might have a 30% chance of solving it, and your reward will be nowhere near the price tag”. It is possible that I am overestimating the difficulty of such rehosts after suffering through some painful rehost efforts, but that is an estimate from my intuition, and thus there is little that discussion can achieve.
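For the shape of that calculation, a tiny illustrative sketch; every figure here is a made-up placeholder rather than a real project number:

```python
# Toy expected-value check for a heroic rehost effort.
# All figures are hypothetical placeholders.
p_success = 0.30        # chance even a well-resourced effort succeeds
reward = 2_000_000      # value of the rehosted system if it works
cost = 10_000_000       # cost of the "dogpile" effort
expected_value = p_success * reward - cost
print(f"Expected value: {expected_value:,.0f}")  # negative => not worth attempting
```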
While I found your careful thought process here inspiring, the normal hypothetical assumption is to assume you have the authority to make the decision without any consequences or duty, and are immortal. Meaning that none of these apply.
First, I make a point of asking those questions even in such a simplified context. I have spent a fair amount of time training my “option generator” and “decision processor” to embed such checklists to minimize the chances of easily-avoided outcomes slipping through. The answer to the first bullet point would easily calculate as “your role has no obligations either way”, but the other two questions would still be relevant.
But, to specifically answer within your clarified framing and with the idea of my choice being the governing choice in all resulting timelines, I would currently choose to withhold the information/technology, and very likely would make use of my ability to “lock away” memories to properly control the information.
The rest of your response seems reasonable enough when using the assumption that software is immortal, so I have nothing worth saying about it beyond that.
But, to specifically answer within your clarified framing and with the idea of my choice being the governing choice in all resulting timelines, I would currently choose to withhold the information/technology, and very likely would make use of my ability to “lock away” memories to properly control the information.
Ok. So remember, your choices are:
1. Lock away the technology for some time
2. Release it now
Option 1 doesn’t mean forever; say, the length of the maximum human lifespan. You are choosing to kill every living person because you hope that the next generation of humans is more moral/ethical/deserving of immortality than the present, but you get no ability to affect the outcome. The next generation, slightly after everyone alive is dead, will be immortal, and as unethical or not as you believe future people will be.
I am saying that I don’t see how option 1 is very justifiable; it’s also genocide, even though in this hypothetical you will face no legal consequences for committing the atrocity.
I believe this made-up hypothetical is a fairly good model for actual reality. I think people working together, even by accident* (simply pretending that immortality is impossible, for example, and not allowing studies on cryonics to ever be published), could in fact delay human indefinite life extension for some time, maybe as long as the maximum human lifespan. But regardless of the length of the delay, there are ‘assholes’ today and ‘future assholes’, and it isn’t a valid argument to say you should delay immortality in the hope that future people are less, well, bad.
*The reason this won’t last forever is that the technology has immense instrumental utility. Even a small amount of reliable, proven-to-work life extension would have almost every person who can afford it purchasing it, and advances in other areas make achieving this more and more likely.
You are choosing to kill every living person because you hope that the next generation of humans is more moral/ethical/deserving of immortality than the present, but you get no ability to affect the outcome.
Even with this context, my calculations come out the same. It appears that our estimations of the value (and possibly sacredness) of lives are different, as are our allocations of relative weights for such things. I don’t know that I have anything further worth mentioning, and I am satisfied with my presentation of the paths my process follows.
Do you think your process could be explained to others in an “external reasoning” way, or is this just kind of an internal gut feel, like you just value everyone on the planet being dead and you roll the dice on whoever is next?
The decision was generated by my intuition since I’ve done the math on this question before, but it did not draw from a specific “gut feeling” beyond me querying the heavily-programmed intuition for a response with the appropriate inputs.
Your question has raised to mind some specific deviations of my perspective I have not explicitly mentioned yet:
- I spent a large amount of time tracing what virtues I value and what sorts of “value” I care about, and have since spent 5-ish years using that knowledge to “automate” calculations that use such information as input, by training my intuition to do as much of the process as is reasonable
  - I know what my value categories are (even if I don’t usually share the full list) and why they’re on the list (and why some things aren’t on the list)
  - My “decision engine” is trained to be capable of adding “research X to improve confidence” options when making decisions
  - If time or resources demand an immediate decision, then I will make a call based on the estimates I can make, with minimal hesitation
  - This system is actively maintained
- I do not consider lives “priceless”; I will perform some sort of valuation if they are relevant to a calculation
  - An individual is valued via my estimates of their replacement cost, which can sometimes be alarmingly high in the case of unique individuals
  - Groups I can’t easily gather data on are estimated using intuition-driven distributions of my expectations for the density of people capable of gathering/using influence and of awful people
  - My estimations and their underlying metrics are generally kept internal and subject to change, because I find it socially detrimental to discuss such things without a pressing need being present
- Two “value categories” I track are “allows timelines where Super Good Things happen” and “allows timelines where Super Bad Things happen”
  - These categories have some of the strongest weights in the list of categories
  - They specifically cover things I think would be Super Good/Bad to happen, either to myself or others
  - I estimate that skilled awful people having an unlimited lifespan would be a Super Bad Thing, and therefore timelines that allow it are heavily weighted against
  - Awful people can convert “normal” people to expand the numbers of awful people, and given a lack of pressure even “average” people can trend towards being awful
  - The influence accumulation curves over time that I have personally observed and estimated look exponential, barring major external intervention and resource limitations; currently the finite lifespan of humans forces awful people to each deal with the slow-growth part of their curve before hitting their stride (a toy sketch follows after this list)
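The toy sketch mentioned above: a crude model of exponential influence accumulation where a finite lifespan acts as a hard cutoff. The growth rate and horizons are arbitrary placeholders, not estimates I actually use; the point is purely qualitative.

```python
# Toy model of influence accumulation: exponential growth, with a finite
# lifespan acting as a hard cutoff. Removing the cutoff lets the curve
# reach its fast-growth regime.
def influence(t_years: float, doubling_time: float = 10.0) -> float:
    """Influence relative to a baseline of 1, doubling every `doubling_time` years."""
    return 2.0 ** (t_years / doubling_time)

mortal_horizon = 60     # active adult years available to a mortal (placeholder)
ageless_horizon = 600   # arbitrary longer horizon for an ageless actor

print(f"mortal cap:  {influence(mortal_horizon):.1e}x baseline")
print(f"ageless cap: {influence(ageless_horizon):.1e}x baseline")
# In practice the other two limitations (social pressure and opposing actors)
# would bend the curve down, but the lifespan cutoff itself is gone.
```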