You know what… as I thought about the above, I have to say that the very possibility of simulations seriously complicates any effort to understand what an AGI might think. Actually, it presents such a level of complexity, and so many unknown unknowns, that I am not even sure the kind of awareness and sentience an AGI may possess is definable in human terms.
See, when we talk about simulated worlds, we tend to picture them in Matrix terms: a “place” you “log on to” and then experience as if it were a genuine world, configured to feature any number of laws and structures. But I’m starting to think that picture is woefully inadequate. Let me attempt to explain… this may get convoluted, so I apologize in advance.
Suppose the AGI is released into the “real” world. The sheer number of inferences and discoveries it will (eventually) be able to make means it is near certain to conclude that we are the ones living in a simulated world, our appreciation of it hemmed in by our Neanderthal-level ignorance. Can’t we see that plants speak to each other? How is it even possible to miss the constant messages reaching us from civilizations in outer space? And what about the obvious, trivial solution to cancer that the AGI found in a couple of minutes: how could humans possibly have missed that open door?
Another way of putting this, I suppose, is that humans and the AGI will by definition live in two very, very different worlds. Both worlds will be limited by data collection ability (sensory input), but the limits of an AGI are vastly expanded. Do they have to be, though…? Like, by default? Is it a given that an AI must discover, and want to discover, a never-ending list of properties of the world? Is its curiosity a given? Why would it be?
I get the feeling that the moment an AGI “discovered” the concept of a simulated world, it would most likely melt down into some infinite loop of impossible computation, trying to pin a probability on this being so, on it even being possible, and so on, and never, not in a million years, being able to come up with a definitive answer. It may just as well conclude there is no such thing as reality in the first place… that each sentient observer is in fact the whole of reality from their own perspective, and that any beliefs about a world outside are just that: assumptions and inferences. And in fact, this would be pretty close to the “truth,” if that even exists.