seems like this has now been automated ^^ https://pdftobrainrot.org/generate
I don’t feel strongly about this one way or another, but I think it’s reasonable to expand the term cryonics to mean any brain preservation method done with the hope of future revival, as that seems like the core concept people are referring to when using the term. When the term was first coined, room-temperature options weren’t a thing. https://www.lesswrong.com/posts/PG4D4CSBHhijYDvSz/refactoring-cryonics-as-structural-brain-preservation
summary of https://gwern.net/socks
--> https://www.facebook.com/reel/1207983483859787/?mibextid=9drbnH&s=yWDuG2&fs=e
😂
Steven Universe s1e5 is about a being that follows commands literally, and is a metaphor for some AI risks
I don’t know. The brain preservation prize for preserving the connectome of a large mammal was won with aldehyde stabilization, though.
Oregon Brain Preservation uses a technique allowing fridge-temperature storage, and seems well funded, so idk if the argument works out
Idk the finances for Cryonics Germany, but I would indeed guess that Tomorrow Bio has more funding + provides better SST (standby, stabilization, and transport). I would recommend using Tomorrow Bio over Cryonics Germany if you can afford it
To be clear, it’s subsidized. So it’s not like there’s no money to maintain you in preservation. As far as I know, Oregon Brain Preservation has a trust similar to Alcor’s in terms of money per volume preserved for its cryonics patients, which seems more than enough to maintain storage just with the interest. Of course, there could be major economic disruptions that change that. I’m not sure how much Cryonics Germany is putting aside, though.
Plus, Oregon Brain Preservation’s approach seems to work at fridge temperature rather than requiring LN2 temperature.
What would a guarantee mean here? Like they give money to your heirs if they accidentally thaw you? I’m not sure what you’re asking.
Alternatives to that are paid versions of cryonics or otherwise burial and cremation.
fair enough! maybe i should edit my post to say “brain preservation, some of it through cryonics, for indefinite storage with the purpose of future reanimation, is sufficiently subsidized to be free or marginally free in some regions of the world” 😅
i don’t think killing yourself before entering the cryotank vs after is qualitatively different, but the latter maintains option value (in that specific regard re MUH) 🤷♂️
if you’re alive, you can kill yourself when s-risks increase beyond your comfort point. if you’re preserved, then you rely on other people to execute on those wishes
I mean, it’s not a big secret, there’s a wealthy person behind it. And there are 2 potential motivations for it:
1) altruistic/mission-driven
2) having more cases helps improve the service, which can benefit them as well.
But also, Oregon Brain Preservation is less expensive as a result of:
1) doing brain-only (Alcor doesn’t extract the brain for its neuro cases)
2) using chemical preservation which doesn’t require LN2 (this represents a significant portion of the cost)
3) not including the cost of stand-by, which is also a significant portion (ie. staying at your bedside in advance until you die)
4) collaborating with local funeral homes (instead of having a fully in-house team that can be deployed anywhere)
5) only offering the service locally (no flights)
I visited Oregon Brain Preservation, talked with Jordan Sparks and exchanged emails, and I’ve been following them for many years; Jordan seems really solid IMO.
The Cryonics Germany people seem very caring and seem to understand well how to work with a thanatologist. I also had email exchanges with them, but not as many.
🤷♂️
Concerns about personal s-risks make sense.
I mean, you can trust it to preserve your brain more than you can trust a crematorium to preserve your brain.
And if you do chemical preservation, maintaining a brain in storage is operationally fairly simple. LN2 isn’t that complex either, but it does carry higher risks.
That said, I would generally suggest using Tomorrow Biostasis for Europe residents if you can afford it.
here’s my new fake-religion, taking just-world bias to its full extreme
the belief that we’re simulations and we’ll be transcended to Utopia in 1 second, because a future civilisation is creating many simulations of all possible people in all possible contexts and then uploading them to Utopia, so that from anyone’s perspective you have a very high probability of transcending to Utopia in 1 second
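(a minimal sketch of the arithmetic behind this, under assumptions I’m supplying: the future civilisation runs N simulated copies of each person, all of them get uploaded, and you weight all copies equally)

```python
# Toy anthropic arithmetic for the fake-religion above (assumptions: N simulated
# copies per person, all uploaded to Utopia, uniform weighting over copies).
def p_transcend(n_simulations: int) -> float:
    """Chance that a randomly sampled copy of 'you' is one that gets uploaded."""
    return n_simulations / (n_simulations + 1)

for n in (1, 10, 1_000_000):
    print(n, p_transcend(n))  # approaches 1 as the number of simulations grows
```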
^^
Is the opt-in button for Petrov Day a trap? Kinda scary to press on large red buttons 😆
The lifelogging-as-life-extension version of this post would be like “You Only Live 1.5 Times” ^^
epistemic status: speculative, probably simplistic and ill defined
Someone asked me “What will I do once we have AGI?”
I generally define the AGI era as starting at the point where all economically valuable tasks can be performed by AIs at a lower cost than by a human (at subsistence level, including buying any available augmentations for the human). This notably excludes the following (see the toy sketch after the list):
1) any tasks that humans can do that still provide value at the margin (ie. where the relevant comparison is the caloric cost of feeding that human while they’re working vs while they’re not working, rather than vs while they’re not existing)
2) things that are not “tasks”, such as:
a) caring about the internal experience of the service provider (ex.: wanting a DJ that feels human emotions regardless of its actions) --> although, maybe you could include that in the AGI definition too. but what if you value having a DJ be exactly a human? then the best an AGI could do is 3D print a human or something like that. or maybe you’re even more specific, and you want a “pre-singularitarian natural human”, in which case AGI seems impossible by (very contrived) definition.
b) the value of the memories encoded in human brains
c) the value of doing scientific experiments on humans
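here’s a toy version of that threshold and of exclusion (1), just to make the cost comparison concrete; all the names and numbers are illustrative assumptions of mine, not part of the definition:

```python
# Toy version of the AGI-era threshold and of exclusion (1); everything here is
# an illustrative assumption, not part of the original definition.

def agi_era(tasks, ai_cost, human_cost):
    """AGI era: every economically valuable task is cheaper done by an AI than by
    a human paid at subsistence (including any augmentations the human could buy)."""
    return all(ai_cost(t) < human_cost(t) for t in tasks)

def still_worth_employing_human(task_value, marginal_caloric_cost):
    """Exclusion (1): a human still adds value at the margin if what they produce
    exceeds the extra calories of working vs. idling (not vs. not existing)."""
    return task_value > marginal_caloric_cost

# dummy numbers
tasks = ["write code", "drive", "do research"]
print(agi_era(tasks, ai_cost=lambda t: 1.0, human_cost=lambda t: 5.0))          # True
print(still_worth_employing_human(task_value=0.3, marginal_caloric_cost=0.5))  # False
```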
For my answer to the question, I wanted to say something like: think about what I should do with my time for a long time, and keep my options open (ex.: avoid altering my mind in ways whose consequences I don’t understand well). But then, that seems like something that might be economically useful to sell, so using the above definition, it seems like I should have AI systems that are able to do that better/cheaper than me (unless I intrinsically didn’t want that, or something like that). So maybe I have AI systems computing that for me and keeping me posted with advice while I do whatever I want.
But maybe I can still do work that is useful at the margin, as per (1), and so would probably do that. But what if even that wasn’t worth the marginal caloric cost, and it was better to feed those calories into AI systems?
(2) is a bit complex, but probably(?) wouldn’t impact the answer to the initial question much.
So, what would I do? I don’t know. The main thing that comes to mind is to observe how the world unfolds (and listen to what the AGIs are telling me).
But maybe “AGI” shouldn’t be defined as “aligned AGI”. Maybe a better definition of AGI is like “outperforming humans at all games/tasks that are well defined” (ie. where humans don’t have a comparative advantage just by knowing what humans value). In which case, my answer would be “alignment research” (assuming it’s not “die”).
related: https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing
imagine (maybe all of a sudden) we’re able to create barely superhuman-level AIs aligned to whatever values we want at a barely subhuman-level operation cost
we might decide to let anyone buy AI agents aligned with their values
or we might (generally) think that giving access to that tech this way would be bad, but many companies are already individually incentivized to do it and can’t all cooperate to refrain (and they actually reached this point gradually, having previously sold near-human-level AIs)
then it seems like everyone/most people would start to run such an AI and give it access to all their resources, at which point that AI can decide what to do: whether that’s investing in some companies and then paying themselves periodically, or investing in running more copies of itself, etc., deciding when to use those resources for the human to consume vs reinvesting them
maybe people would wish for everyone to run AI systems with “aggregated human values” instead of their personal values, but given others aren’t doing that, they won’t either
now, intelligence isn’t static anymore: presumably, the more money you have, the more intelligence you have, and the more intelligence you have, the more money.
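a toy sketch of that feedback loop, with made-up coefficients; it only illustrates the direction of the claim (money buys intelligence, intelligence earns money), not any realistic rates:

```python
# Toy money <-> intelligence feedback loop (coefficients are made up).
money, intelligence = 1.0, 1.0
buy_rate, earn_rate = 0.1, 0.1

for step in range(10):
    intelligence += buy_rate * money   # spend money on more/better agents
    money += earn_rate * intelligence  # those agents earn returns
    print(step, round(money, 2), round(intelligence, 2))
```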
so let’s say we suddenly have this tech and everyone is instantiating one such agent (which will make decisions about number and type of agents) that has access to all their resources
what happens?
maximally optimist scenario: solving coordination is not too late and gets done easily and at a low cost. utopia
optimist scenario: we don’t substantially improve coordination, but our current coordination level is good enough for an Okay Outcome
pessimist scenario: agents are incentivized to create subagents with other goals for instrumentally convergent purposes. defecting is individually better than cooperating, but defect-defect still leads to extremely bad outcomes (just not as bad as if you had cooperated in a population of defectors). those subagents quickly take over and kill all humans (those who cooperated are killed slightly sooner). or, not requiring misaligned AIs, maybe the aestivation hypothesis is true but we won’t coordinate to delay energy consumption, or wars will use all surplus, leaving nothing for humans to consume
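a minimal payoff sketch of the game structure in that pessimist scenario; the payoffs are invented and only encode the ordering described above (defecting dominates individually, yet mutual defection is still terrible):

```python
# Toy 2-player payoffs for "create risky subagents" (defect) vs. "don't" (cooperate).
# Numbers are invented; they only encode the ordering described above.
PAYOFF = {  # (my_move, their_move) -> my_payoff
    ("cooperate", "cooperate"): 10,
    ("defect",    "cooperate"): 12,
    ("cooperate", "defect"):  -100,  # cooperators get killed slightly sooner
    ("defect",    "defect"):   -90,  # still an extremely bad outcome
}

for theirs in ("cooperate", "defect"):
    best = max(("cooperate", "defect"), key=lambda mine: PAYOFF[(mine, theirs)])
    print(f"if they {theirs}, my best response is to {best}")
# both players reasoning this way land on defect/defect: -90 each instead of +10 each
```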
I’m not confident we’re in an optimist scenario. being able to download one’s values and then load them in an AI system (and having initial conditions where that’s all that happens) might not be sufficient for good outcomes
this is evidence for the importance of coordinating on how AGI systems get used, and that distributing that wealth/intelligence directly might not be the way to go. rather, it might be better to keep that intelligence concentrated and have some value/decision aggregation mechanism to decide what to do with it (rather than distributing it and later not being able to pool it back together if that’s needed, which seems plausible to me)
a similar reasoning can apply to poverty alleviation: if you want to donate money to a group of people (say, residents of a poor country) and you think they haven’t solved their coordination problem, then maybe instead of distributing that money and letting them try to coordinate to put (part of) it back into a shared pool for collective goods, you can just directly put that money in such a pool. the problem of figuring out the shared goal remains, but it at least arguably solves the problem of pooling the money (ex.: to fund research for a remedy to a disease affecting that population)
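here’s a small sketch of the pooling point, using a standard public-goods-game setup that I’m supplying as an assumption (the original argument doesn’t specify a model): individually rational contributions are zero, which is the coordination step a donor skips by funding the pool directly:

```python
# Toy public goods game (standard textbook setup; my assumption, not from the text).
# N residents can contribute to a shared pool; the pool is multiplied by m (1 < m < N)
# and split equally, so each contributed unit returns only m/N < 1 to the contributor.
N, m, endowment = 10, 3.0, 100.0

def payoff(my_contribution, others_total):
    pool = (my_contribution + others_total) * m
    return (endowment - my_contribution) + pool / N

print(payoff(0, 0), payoff(endowment, 0))                          # 100.0 vs 30.0
print(payoff(0, 9 * endowment), payoff(endowment, 9 * endowment))  # 370.0 vs 300.0
# contributing never pays individually, so a donor who funds the pool directly
# sidesteps this coordination failure (choosing the pool's goal is still unsolved)
```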
AI is improving exponentially while researchers have constant intelligence. Once the AI research workforce is itself composed of AIs, that constant becomes a growing quantity, which would make AI improve even faster (superexponentially?)
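a toy numerical version of that distinction, with made-up rates: constant research effort gives plain exponential growth in capability, while effort proportional to capability grows faster than exponential:

```python
# Toy growth comparison (rates are made up, purely illustrative).
r = 0.05  # capability gain per unit of research effort, per step

def constant_researchers(steps, effort=1.0, cap=1.0):
    for _ in range(steps):
        cap *= 1 + r * effort  # fixed growth factor -> exponential
    return cap

def ai_researchers(steps, cap=1.0):
    for _ in range(steps):
        cap *= 1 + r * cap     # effort scales with capability -> superexponential
    return cap

for steps in (5, 10, 15, 20):
    print(steps, round(constant_researchers(steps), 2), round(ai_researchers(steps), 2))
```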
it doesn’t need to be the scenario of a singular AI agent improving itself; it can be a large AI population participating in the economy and collectively improving AI as a whole, with various AI clans* focusing on different subdomains (EtA: for the main purpose of making money, and then using that money to buy tech/data/resources that will improve them)
*I want to differentiate between a “template NN” and its multiple instantiations, and maybe adopting the terminology from The Age of Em (“clans”) for that works well