I guess we need to maximize the probability of each of the different good possible outcomes.
For example, to raise the probability of the outcome where many competing AGIs form an equilibrium in which no faction is allowed to get too powerful, humans could prohibit all autonomous AGI use.
Especially AGIs that run on uncontrolled clusters of GPUs in autocracies, without international AI-safety supervisors like Eliezer Yudkowsky, Nick Bostrom, or their colleagues.
This, together with restrictions on weak API systems and the need to rely on human operators,
would create natural borders on AI scalability, so an AGI would find it more favorable to mimic and seek consensus with people and other AGIs, or at least to use humans as operators working under its advice, or to create humanlike personas that are easier to integrate with human culture and other people.
Detection systems often rely on categorization principles,
so even if an AGI breaks some rules, without scalability it could keep functioning without danger for longer, because security systems (which are themselves essentially tech officers assisted by AI) couldn't find and destroy it.
This could create conditions that encourage the diversity and uniqueness of different AGIs,
so all neural beings, AGIs and people with AI, could win some time to find new balances in how the atoms of the multiverse are used.
More borders mean more time to win and a longer life for every human; even a gain of two seconds for each of 8 billion people is worth it.
And more chances that the different factions, AGIs, people with AGI, people under AGI, and others, will find some kind of balance.
I remember how autonomous poker AIs destroyed weak ecosystems one by one, yet now the industry is growing sustainably with separate actors, each using AI in very different ways.
The more separate systems there are, the more chances that, over the time it takes to destroy them one by one, an AGI will find a way to function without destroying its environment.
PS A separate approach: send spaceships carrying a prohibition on AGI (perhaps carrying only life, no apes) as far away as possible, so that when AGI does emerge on Earth, it can't reach all of them.