My hunch is that constraints from reality were missed, ones that will make things rather more bleak unless something big happens fairly soon, and that could potentially result in far less mind-like computation happening at all, e.g. if the thing that reproduces a lot is adversarially vulnerable and ends up constructing adversarial examples rather than more of itself. Perhaps that would lose out in open evolution.
Seems like the Basilisk scenario described in the timeline. Doesn’t that depend a lot on when it happens? As in, if it expands and gets bogged down in adversarial examples sufficiently early, then it gets overtaken by other things. Having it happen at the stage of intergalactic civilization seems WAY too late for this (that’s one of my main criticisms of this timeline’s plausibility), given the speed of cognition compared to space travel.
In nature there’s a tradeoff between reproductive rate and security (r/K selection).
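(Aside, since the jargon may be unfamiliar: the r and K names come from the logistic growth model,

dN/dt = rN(1 − N/K),

where N is population size, r the intrinsic reproductive rate, and K the carrying capacity. r-strategists win by maximizing raw reproduction; K-strategists win by competing and surviving near the K limit, i.e. by investing in robustness. Loosely mapping onto the discussion above: the fast-replicating, adversarially vulnerable thing is playing an r strategy, and a more secure design is playing a K strategy.)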
OK, gotta be honest: I started skimming pretty hard around 2044. I’ll maybe try again later. I’m going to go back to repeatedly rereading Geometric Rationality and trying to grok it.