If we work from assumptions that make it likely for the universe to contain a “large number” of natural intelligences that go on to build UFAIs that assimilate on an interstellar or intergalactic level, then Earth would almost certainly have already been assimilated millions, even billions of years ago, and we accordingly would not be around to theorize.
After all, it takes only one species building a single intergalactic assimilating UFAI somewhere in the Virgo Supercluster more than 110 million years ago for the whole supercluster to have been assimilated by now, using no physics we don’t already know.
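A quick back-of-envelope check of that 110-million-year figure, as a sketch under assumed parameters: the Virgo Supercluster is roughly 110 million light-years across, and the expansion speeds below are illustrative rather than from the comment itself.

```python
# Toy crossing-time estimate for a self-replicating expander.
# Assumed figure: the Virgo Supercluster is ~110 million light-years
# across; the expansion speeds are illustrative choices.
diameter_ly = 110e6  # supercluster diameter in light-years

for speed_c in (1.0, 0.5, 0.1):  # expansion speed as a fraction of c
    crossing_time_myr = diameter_ly / speed_c / 1e6
    print(f"at {speed_c:.1f}c: ~{crossing_time_myr:,.0f} Myr to cross")

# at 1.0c: ~110 Myr; at 0.5c: ~220 Myr; at 0.1c: ~1,100 Myr.
# So even a fairly slow expander that emerged a billion years ago
# would already have reached us, with no new physics required.
```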
I suppose it’s possible that the first AI, and thus the oldest and most powerful, was a strange variety of paper-clipper that just wanted to tile the universe with huge balls of hydrogen undergoing nuclear fusion.
If this is true, I’d be interested to know what our Supercluster looked like before.
Does anybody else on this board notice the similarities between speculation on the properties of AI and speculation on the properties of God? Will a friendly AI be able to protect us from unfriendly AIs if the friendly one is built first, locally?
Do we have strong evidence that we are NOT the paperclips of an AI? Would that be different from or the same as the creations of a god? Would we be able to tell the difference, or would only an observer outside the system be able to see the difference?
Do we have strong evidence that we are NOT the paperclips of an AI?
No, and I don’t see how we could, given that any observation we make could always be explained as another part of its utility function. On the other hand, we don’t have any strong evidence for it, so it basically comes down to priors. It’s similar to the God debate, but I think the question of priors may be more interesting here; I’d quite like to see an analysis of it if anyone has the spare time.
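To make the “comes down to priors” point concrete: if the paperclip hypothesis can accommodate any possible observation, its likelihood for our data matches that of the mundane story, and Bayes’ rule leaves the odds exactly where the priors put them. A minimal sketch, with the prior value invented purely for illustration:

```python
# If "we are an AI's paperclips" explains every observation equally
# well, the likelihood ratio is 1 and no data can move the odds.
prior_paperclip = 1e-6            # assumed illustrative prior
prior_mundane = 1 - prior_paperclip

likelihood_paperclip = 1.0        # hypothesis fits any observation
likelihood_mundane = 1.0          # so does the mundane story

posterior_odds = (prior_paperclip * likelihood_paperclip) / (
    prior_mundane * likelihood_mundane
)
print(f"posterior odds = prior odds = {posterior_odds:.2e}")
```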
Would that be different from or the same as the creations of a god?
For a sufficiently broad definition of god, no, but I would say an AI would have some qualities not usually associated with God, particularly the quality of having been created by another agent.
Unless there is a good story about how a low-complexity part of the universe developed or evolved into something that would eventually become, or create, an AI with a utility function that looks exactly like the universe we find ourselves in, our current story seems far more parsimonious: evolution from abiogenesis, from the right mixture of chemicals and physical forces, supplied by the natural life and death of large, hot collections of hydrogen that formed from an initially even distribution of matter.
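One way to cash out “parsimonious” here, as my own gloss rather than necessarily the commenter’s, is a description-length prior of the Solomonoff/Occam kind:

$$P(h) \propto 2^{-K(h)}$$

where $K(h)$ is the length of the shortest program that generates hypothesis $h$. The paperclip-creator hypothesis has to encode something like the mundane story plus a creator species plus the AI and a utility function that happens to look exactly like our universe, so it starts out with a heavy complexity penalty.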
Beat me to it; it occurred to me just a bit ago that this ought to have been the main objection in my first comment. Of the premises that (1) there are a large number of natural intelligences in our area of the universe, (2) natural intelligences are likely to create strong AI, and (3) strong AI is likely to expand to monopolize the surrounding space at relativistic speeds, we can conclude from our observations that at least one is almost certainly false.
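A minimal sketch of that update, under the toy assumptions that the three premises are independent and that visible colonization follows whenever all three hold; the priors are invented for illustration:

```python
# Toy model: the sky stays empty iff not all three premises hold.
# The priors below are assumed for illustration, not from the source.
a, b, c = 0.9, 0.9, 0.9   # P(premise 1), P(premise 2), P(premise 3)

prior_all = a * b * c                      # P(all three hold)
p_empty_sky = 1 - prior_all                # P(no visible colonization)

# Posterior that premise 1 still holds, given the empty sky:
post_a = a * (1 - b * c) / p_empty_sky
print(f"P(all three premises hold) a priori = {prior_all:.3f}")
print(f"P(premise 1 | empty sky)            = {post_a:.3f}")

# The conjunction is refuted outright, but each individual premise
# only drops from 0.90 to about 0.63: "at least one is false" says
# less than we might hope about which one.
```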
This is evidence against the possibility of AI going FOOM. I wrote this before:
The Fermi paradox provides the only data we can analyze that amounts to empirical criticism of concepts like the paperclip maximizer, and of risks from superhuman AIs with non-human values generally, without working directly on AGI to test those hypotheses ourselves. If you accept the premise that life is not unique and special, then even one other technological civilisation in the observable universe should be sufficient to leave potentially observable traces of technological tinkering. Given the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI is not the most dangerous existential risk we should worry about.
Does anybody else on this board notice the similarities between speculation on the properties of AI and speculation on the properties of God?
Why do you think Vernor Vinge dubbed AI, in one of his novels, “applied theology”?
Yes.