You can bat aside individual scenarios, but the point is… are there no known reliable indicators that an AI is undergoing FOOM? Even at the point where AI theory is advanced enough to actually build one?
We have one example of a seed AI. The seed AI took about 3 hours to progress to the point where it started babbling to itself, 2-3 seconds from there to trying to talk to the outside (except it didn't figure out how to talk to the outside, and was still just babbling to itself), and then 0.036 seconds to FOOM.
The seed AI was biological intelligence (treated as a black box), and I scaled time so that 1 hour = 1 billion years. (And the outside doesn't seem to exist, but the intelligence tried to talk to it anyway.)
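The arithmetic behind the analogy can be sketched as follows, under the stated assumption that 1 hour of "AI time" corresponds to 1 billion years of evolutionary time (the function name and interval labels are mine, for illustration):

```python
# Assumed scaling from the analogy above: 1 hour of AI time == 1e9 years.
SCALE_YEARS_PER_SECOND = 1_000_000_000 / 3600

def scaled_years(ai_seconds):
    """Evolutionary years corresponding to a given interval of AI time."""
    return ai_seconds * SCALE_YEARS_PER_SECOND

# 3 hours of AI time -> 3 billion years (start of life to "babbling").
print(scaled_years(3 * 3600))

# 0.036 seconds of AI time -> about 10,000 years ("babbling" to FOOM).
print(scaled_years(0.036))
```

On this scale, the 2-3 seconds between "babbling" and "trying to talk to the outside" works out to roughly 550,000-830,000 years, and the final 0.036-second jump to FOOM is about 10,000 years of real time.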