but it still says “it’s easy for others to get their own superintelligences with different values”, with ‘superintelligence’ referring to the ‘superhuman’ AI of 2035?
my response is the same: the story ends before what i meant by superintelligence has occurred.
(it’s okay if this discussion was secretly a definition difference till now!)
Yeah, the crux is that I don’t think the story ends before superintelligence, for a combination of reasons
what i meant by “the story ends before what i meant by superintelligence has occurred” is that the written story ends there in 2035, but at that point there’s still time to affect what the first long-term-decisive thing will be.
but it still says “it’s easy for others to get their own superintelligences with different values”, with ‘superintelligence’ referring to the ‘superhuman’ AI of 2035?
still confused about this btw. in my second reply to you i wrote:
(i wonder if you’re using the term ‘superintelligence’ in a different way though, e.g. to mean “merely super-human”?)
and you did not say you were, but it looks like you are here?
I was assuming very strongly superhumanly intelligent AI, but yeah no promises of optimality were made here.
That said, I suspect a crux is that optimality ends up with multipolarity, assuming a one world government hasn’t happened by then, because I think the offense-defense balance moderately favors defense even at optimality, assuming optimal defense and offense.
I was assuming very strongly superhumanly intelligent AI
oh okay, i’ll have to reinterpret then. edit: i just tried, but i still don’t get it; if it’s “very strongly superhuman”, why is it merely “when the economy starts getting seriously disrupted”? (<- this feels like it’s back at where this thread started)
I think the offense-defense balance moderately favors defense even at optimality
why?
oh okay, i’ll have to reinterpret then. edit: i just tried, but i still don’t get it; if it’s “very strongly superhuman”, why is it merely “when the economy starts getting seriously disrupted”? (<- this feels like it’s back at where this thread started)
I should probably edit that at some point, but I’m on my phone, so I’ll do it tomorrow.
why?
A big reason for this is logistics: how you get to the fight can hamper you a lot, and this bites especially hard on offense, because it’s easier to get supplies to your own area than to an offensive unit.
This especially matters if physical goods need to be transported from one place to another.
A big reason for this is logistics: how you get to the fight can hamper you a lot, and this bites especially hard on offense, because it’s easier to get supplies to your own area than to an offensive unit.
ah. for ‘at optimality’ which you wrote, i don’t imagine it taking place on that high of a macroscopic level (the one on which ‘supplies’ could be transported). i think the limit is more like things in the category of ‘angling rays of light just right to cause distant matter to interact in such a way as to create an atomic explosion, or some even more destructive reaction we don’t yet know about, or to suddenly carve out a copy of itself there to start doing things locally’. also, i’m not imagining the competitors being ‘solid’ macroscopic entities anymore, but rather patterns imbued (and dispersed) in a relatively ‘lower’ level of physics (which also do not need ‘supplies’). (edit: maybe this picture is wrong; at optimality you can maybe absorb the energy of such explosions / not be damaged by them, if you’re not a macroscopic thing. which does actually defeat the main way macroscopic physics has an offense advantage?)
(to be clear, i’m just exploring what it would be like; i don’t think such conflicts will happen, because i still expect just one optimal-level agent to come from earth)
(to be clear, i’m just exploring what it would be like; i don’t think such conflicts will happen, because i still expect just one optimal-level agent to come from earth)
I am willing to concede that here the assumption of non-optimal agents was more necessary for my argument than I thought, and I think you are right that the assumption is necessary in order to guarantee anything like a normal future (though it still might be multipolar), so I changed a comment.
My new point is that I don’t think optimal agents will exist when we lose all control, but yes, I didn’t realize how load-bearing that assumption was.
My new point is that I don’t think optimal agents will exist when we lose all control
(btw I also realized I didn’t strictly mean ‘optimal’ by ‘superintelligent’, but at least close enough to it / ‘strongly superhuman enough’ for us to not be able to tell the difference. I originally used the ‘optimal’ wording trying to find some other definition apart from ‘super-human’)
it is also plausible to me that life-caring beings first lose control to much narrower programs[1] or moderately superhuman unaligned agents totally outcompeting them economically (if it turns out that making better agents is hard enough that they can’t just directly do that instead), or something.
also, a ‘multipolar AI-driven but still normal-ish’ scenario seems to continue at most until a strong enough agent is created. (e.g. that could be what a race is towards).
(maybe after ‘loss of control to weaker AI’ scenarios, those weaker AIs also keep making better agents afterwards, but i’m not sure about that, because they could be myopic and in some stable pattern/equilibrium)
[1] (e.g. the ‘going out with a whimper’ part of this post)