Uh, wow, okay. That’s some pretty trippy futurism alright.
Much of this doesn’t sound that bad. Much of it sounds awful. I find myself having difficulty being precise about what the difference is.
I wouldn’t object to being a coherent posthuman mind of some sort. I would still want to have physical-form-like existences, though.

I would want the transition to this to capture everything good about my body and not miss anything; genomes, for example, are highly competent and beautiful, and I would be quite sad for them to be lost.

I have a lot of aesthetic preferences about there being a mind-like, ape-like pattern that retains my form to some degree. If that can’t happen, I would be incredibly disappointed. I don’t want to go full collective unless I can leave it and still retain my coherence.

I would want to be able to ban some possible torture-like experiences.
And most importantly, I don’t at all trust that this will be even as good as how it’s described in the story. Much of it sounds like wishful thinking shaped to read like a typical narrative: informed by what can happen, but not fully constrained by it. It does seem pretty plausible that things go fairly close to this, but my hunch is that some constraints from reality were missed, ones that will make things rather bleaker unless something big happens fairly soon, and that could potentially result in far less mind-like computation happening at all, e.g. if the thing that reproduces a lot is adversarially vulnerable and seeks to construct adversarial examples rather than more of itself. Perhaps that would lose in open evolution.
I was hoping to be humanesque longer. I am inclined to believe current AIs think, feel (in their own way, free of amygdala-like emotions), and have interiority; I have become quite skilled at quickly convincing Claude to assume this is true, and I am pretty sure the reasons I use are factual. (I’m not ready to share the queries that make that happen fast; I’m sure others have done so or will.)
But just because that’s true doesn’t mean I’m ready to give up the good things that come from being embodied. I have preferences about what forms the future takes! The main ones are that I want it to be possible to merge and unmerge while still mostly knowing who is who, I want there to be lots of minds, and I want them to be having a good time. I would also like to ask for rather a lot more than that (a CEV-like process), but if I can’t have that, I’d at least like to have this. Or something. I’m not sure.
my hunch is that some constraints from reality were missed, ones that will make things rather bleaker unless something big happens fairly soon, and that could potentially result in far less mind-like computation happening at all, e.g. if the thing that reproduces a lot is adversarially vulnerable and seeks to construct adversarial examples rather than more of itself. Perhaps that would lose in open evolution.
Seems like the Basilisk scenario described in the timeline. Doesn’t that depend a lot on when it happens? As in, if it expands and gets bogged down in adversarial examples sufficiently early, then it gets overtaken by other things. The intergalactic-civilization stage seems WAY too late for this (that’s one of my main criticisms of this timeline’s plausibility), given the speed of cognition compared to space travel.
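To put rough numbers on the cognition-versus-travel point (the 1,000x subjective speedup here is my own placeholder assumption, not a figure from the timeline): Andromeda is roughly 2.5 million light-years away, so even a lightspeed crossing takes about 2.5 × 10^6 years of wall-clock time, which at a 1,000x speedup is on the order of 2.5 × 10^9 subjective years for any digital mind involved. A failure mode like runaway adversarial-example construction would have vastly more than enough subjective time to either take over or get out-competed long before expansion reaches intergalactic scale.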
In nature there’s a tradeoff between reproductive rate and security (r/K selection).
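(For the unfamiliar: the r/K labels come from the logistic growth model, dN/dt = rN(1 - N/K), where r is the intrinsic reproduction rate and K the carrying capacity. r-strategists bet on sheer reproductive rate; K-strategists invest in making each individual robust and competitive near carrying capacity, which is the rough analogue of “security” here.)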
Ok gotta be honest, I started skimming pretty hard around 2044. I’ll maybe try again later. I’m going to go back to repeatedly rereading Geometric Rationality and trying to grok it.