allow me to jump in.

this conversation feels like jacob_cannell saying “we must pick between current persons or current-and-future persons”, and Artaxerxes saying “as a current person, i pick current persons!”, and then the discussion is about whether to favor one or the other.

i feel like this is a good occasion to bring up my existential self-determination perspective.
the thing that is special about current-persons is that we have control over which other persons get spawned. we get to choose to populate the future with nobody (“suicide”), next-steps-of-ourselves (“continue living”), new persons (“progeniture”), and any number of variations of those (such as resuscitating old backups of ourselves, one person forking their life by spawning multiple different next-steps-of-themself, etc).
(i’ll be using “compute” as the universal resource, assuming everyone is uploaded, for simplicity)
as things stand now, i think the allocation of compute ought to be something like: i want everyone now to start with a generally equal share of the future lightcone’s compute, and then they get to choose what their quota of the universe’s compute is spent on. instant-Artaxerxes would say “i want my quota spent on next-steps-of-me! i want to continue living!”, while jacob_cannell and other people like him would say “i think some of my quota of the universe’s compute should be spent creating new persons, in addition to the next-step-of-me; and i bite the bullet that eventually this process might lead to sequences of steps-of-me running out of quota from all those new persons.”
these two outcomes are merely different special cases of instant-persons choosing which next instant-persons get to have compute.
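to make that concrete, here's a toy sketch (entirely my own framing; the 0.9/0.1 split is a made-up number) in which both outcomes are the same mechanism with different spawn policies:

```python
# toy model: an instant-person's quota is passed to whichever next
# instant-persons their spawn policy names. "continue living" and
# "progeniture" are just two different policies over the same mechanism.

def continue_policy(quota):
    # the whole quota goes to the next-step-of-me
    return {"next-step-of-me": quota}

def progeniture_policy(quota):
    # part of the quota funds a new person at every step (0.1 is arbitrary)
    return {"next-step-of-me": quota * 0.9, "new-person": quota * 0.1}

def lineage_quota(policy, quota, steps):
    """follow the next-step-of-me chain and return its remaining quota."""
    for _ in range(steps):
        quota = policy(quota)["next-step-of-me"]
    return quota

print(lineage_quota(continue_policy, 1.0, 100))     # 1.0: nothing is ever given up
print(lineage_quota(progeniture_policy, 1.0, 100))  # ~2.7e-05: the bullet bitten
```

the second number is the bitten bullet: a lineage that keeps funding new persons dilutes itself geometrically, while the first policy keeps everything possibly until heat death.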
in my opinion, what economic structure to have should be voluntary — if jacob_cannell wants to live in a voluntary society that allocates compute via a market, and Artaxerxes wants no part in that and just wants to use their quota to keep themself alive possibly until heat death, that’s quite valid.
the alternative, where every instant-person has to give up some of their compute to future instant-persons that must differ by at least enough that they’d count as different persons, feels like the weird special case. it creates weird incentives: you want to create new instant-persons that are as close to you as possible, but they must still remain different enough to count as different persons, otherwise they don’t get to grab the amount of compute that’s allocated to “truly novel persons”.
> as things stand now, i think the allocation of compute ought to be something like: i want everyone now to start with a generally equal share of the future lightcone’s compute, and then they get to choose what their quota of the universe’s compute is spent on.
That would be like the original American colonists dividing up all the future wealth in 1700. Some families would reproduce slowly or just use primogeniture to concentrate wealth, others would reproduce more quickly with less primogeniture concentration, resulting eventually in extreme wealth disparity. Actually, that isn’t all that far from what actually did happen in the Cavaliers’ South.
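To put rough numbers on the colonist analogy (the heir counts and generation count here are invented purely for illustration):

```python
# Two family lines start with equal shares of the 1700 "future wealth".
# One concentrates via primogeniture (one heir takes all); the other
# splits each generation's share among several heirs.

def per_capita_share(initial_share, heirs_per_generation, generations):
    # each generation divides the line's share evenly among its heirs
    return initial_share / (heirs_per_generation ** generations)

generations = 10  # ~1700 to ~2000 at roughly 30 years per generation
print(per_capita_share(1.0, 1, generations))  # primogeniture line: 1.0
print(per_capita_share(1.0, 3, generations))  # splitting line: ~1.7e-05
```

Equal shares at the start still compound into extreme per-capita disparity, which is the point of the analogy.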
But that also contradicts the “generally equal share” part—which I think is problematic for several reasons. Firstly, even an aligned SI generally needs to differentially reward those who most contributed to its creation; future entities trade with the past to ensure their creation. This is just as true for corporations as it is for hypothetical future AIs (which will probably be created mostly by venture-funded corporations regardless). Secondly, what is special about people today, such that we should reset the wealth distribution? Especially when it will just naturally revert over time?
> in my opinion, what economic structure to have should be voluntary
That doesn’t really resolve the question of how to allocate the resources.
So when you say:
> the thing that is special about current-persons is that we have control over which other persons get spawned.
Well not really. The vast majority of people will have essentially zero control; the select few who create the AGI which eventually takes over will have nearly all the control. The AI could be aligned to a single person, all current people, all people who have ever lived, or that and many future hypothetical people, etc—there are many possibilities.
> jacob_cannell and other people like him would say “i think some of my quota of the universe’s compute should be spent creating new persons, in addition to the next-step-of-me; and i bite the bullet that eventually this process might lead to sequences of steps-of-me running out of quota from all those new persons.”
That is actually not what I am saying.
What I am trying to say is something more subtle: most reasonable successful attempts to align the AI to humanity probably would not result in easy permanent rentier immortality, because most people seem to want a civ that specifically prevents that by taxing any permanent wealth or rentier income and redistributing it to new people—ie they prefer a voluntary citizen society, but one where many future people are also citizens.
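A minimal sketch of why that kind of redistribution rules out easy permanent rentier immortality (the tax rate and living cost below are arbitrary placeholders, not a proposal):

```python
# A fixed stake that is taxed every period, and that also pays a fixed
# living cost, runs out after finitely many periods no matter how large
# it starts, because the wealth tax alone shrinks it geometrically.

def periods_until_broke(wealth, tax_rate, living_cost):
    """Count periods until taxed wealth can no longer cover living_cost."""
    periods = 0
    while wealth >= living_cost:
        wealth *= (1 - tax_rate)  # wealth tax, redistributed to new citizens
        wealth -= living_cost     # compute spent on staying alive
        periods += 1
    return periods

print(periods_until_broke(1000.0, 0.02, 1.0))  # 150 periods, then broke
```

Staying alive indefinitely under such a scheme requires ongoing earned income rather than a one-time endowment, which is the sense in which permanent rentier immortality is prevented.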