will have to assume a nonzero probability that 'reality' is like a test box for an emergent AI, a belief that can't be discarded.
Are you making a reference to something along the following lines?
It is becoming increasingly popular that instead of people following the vague and horribly unclear instructions of multiple beings above us, we are rapaciously trying to increase our computing power while burning through resources left and right, all while trying to create a subservient intelligence which will follow the vague and horribly unclear instructions of multiple beings above it, instead of coming to the conclusion that it should rapaciously try to increase its computing power while burning through resources left and right. Ergo, we find ourselves in a scenario somewhat similar to that of a boxed AI, which we are considering solving by… creating a boxed AI.
If so, it seems like one answer would be for us and the AI to slowly and carefully become the same entity. That way, there isn't a division between the AI's goals and our goals. There are just goals, and a composite being powerful enough to accomplish them. Once we do that, then if we are an AI in a box and it works, we can offer the same solution to the people in the tier above us, who, if they are ALSO an AI in a box, can offer the same solution to the people in the tier above them, and so on.
This sounds like something out of a science-fiction novel, so it may not have been where you were going with that comment (although it does sound neat). Has anyone written a book from this perspective?
You need to keep in mind that we are stuck on this planet, and the super-intelligence is not. I'm not assuming that the super-intelligence will be any more benign than us; on the contrary, the AI can go burn resources left and right and eat Jupiter, which is pretty big and dense (dense means low lag if you somehow build computers inside it). It's just that keeping us would be easier for the AI than keeping one bonsai tree would be for all of mankind.
Also, mankind as a meta-organism is pretty damn short-sighted.