(2) Creating an AGI is not sufficient to prevent being destroyed by an alien AGI. Depending on which AGI starts engaging in recursive self-improvement first, an alien AGI may be far more powerful than a human-produced AGI.
This is true. How significant it is seems to depend on how quickly AGIs in general can reach a level of technology with ridiculously diminishing returns. From there, for the most part, a “war” between AGIs would (unless they cooperate with each other to some degree) consist of each burning its way to more of the cosmic commons than the other.
This is what I often thought about. I perceive the usual attitude here to be that once we manage to create FAI, i.e. a positive singularity, we’ll be able to enjoy our lives ever after. But who says there’ll ever be a period without existential risks? Sure, the FAI will take care of all further issues; that’s an argument. But generally, as long as you don’t want to stay human yourself, is there a real option besides enjoying the present and not caring much about the future, or forever focusing on mere survival?
I mean, what’s the point? The argument here is that working now is worth it because in return we’ll earn utopia. But that argument counts equally well in favor of fighting alien u/FAI and entropy itself.
Not equally well. The tiny period of time that is the coming century is what determines the availability of huge amounts of resources and of the time in which to use them. Once existential risks are far lower (by many orders of magnitude), the ideal way to use resources will be quite different.
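To make that asymmetry concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an assumption chosen purely for illustration (none come from the thread): it just compares the expected value of a unit of effort spent during a high-risk century with the same effort spent once risks have already fallen by orders of magnitude.

```python
# Toy expected-value sketch: all numbers are illustrative assumptions,
# not claims from the discussion above.

FUTURE_VALUE = 1e20  # assumed value of the accessible long-term future (arbitrary units)

# Effort spent now: assumed to shave 1 percentage point off a large near-term risk.
risk_reduction_now = 0.01
value_of_effort_now = risk_reduction_now * FUTURE_VALUE

# Effort spent later: the residual risk is assumed to be ~1e-6 per era, so even
# eliminating it entirely buys far less expected future value.
residual_risk_later = 1e-6
value_of_effort_later = residual_risk_later * FUTURE_VALUE

print(f"effort now:   ~{value_of_effort_now:.2g} expected units")
print(f"effort later: ~{value_of_effort_later:.2g} expected units")
# With these made-up numbers, near-term effort is ~10,000x more valuable,
# which is the sense in which the argument does not apply "equally well" forever.
```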
Absolutely; I was just looking for excuses, I guess. Thanks.