Do we really need it to talk before we recognise FOOM? The seed AI you were building starts downloading a lot of data from the internetz, and its download rate seems to increase over time. Congrats, it FOOMed, just as you’d hoped it would.
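(If you want that heuristic concrete, here's a toy sketch of "download rate keeps increasing" as a check. The sampling interval, the threshold, and using machine-wide psutil counters as a stand-in for the AI's own traffic are all made up for illustration.)

```python
import time
import psutil  # third-party; system-wide counters stand in for the AI's traffic

def looks_like_foom(samples: int = 10, interval_s: float = 60.0,
                    accel_threshold: float = 1.5) -> bool:
    """Toy indicator from the comment above: is the download *rate* itself
    increasing over time? Compares the most recent window to the first."""
    rates = []
    prev = psutil.net_io_counters().bytes_recv
    for _ in range(samples):
        time.sleep(interval_s)
        cur = psutil.net_io_counters().bytes_recv
        rates.append((cur - prev) / interval_s)  # bytes/sec in this window
        prev = cur
    # "Accelerating": the last window's rate dwarfs the first window's.
    return rates[0] > 0 and rates[-1] > accel_threshold * rates[0]
```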
It’s a different matter if you accidentally managed to build a potentially super-intelligent AI. In which case… WTF?
After a ton of failed attempts, it’s a case of extraordinary claims needing extraordinary evidence.
Also, when an AI is downloading stuff off the internet, it’s already not boxed. Reading a copy of the internet, maybe. Keep in mind that the dumbest AI can read that stuff the fastest, because it’s only, e.g., looking at how the first letter correlates with the last letter. I sure won’t assume a raytracer is working correctly just because it loaded all the objects in a scene, let alone an experimental AI.
You can bat aside individual scenarios, but the point is: are there no known reliable indicators that an AI is undergoing FOOM? Not even at the point where AI theory is advanced enough to actually build one?
We have 1 example of a seed AI. The seed AI took about 3 hours to progress to the point where it started babbling to itself, 2 to 3 seconds from there to trying to talk to the outside (except it didn’t figure out how to talk to the outside, and was still just babbling to itself), and then 0.036 seconds to FOOM.
The seed AI was biological intelligence (as a black box), and I scaled it to 1 hour = 1 billion years. (And the outside doesn’t seem to exist, but the intelligence tried anyway.)
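For anyone checking the arithmetic: at 1 hour = 10^9 years, one second is roughly 278,000 years, so the figures above come out to about 3 billion years, two-thirds of a million years, and 10,000 years. A quick sketch of the conversion (the milestone labels are my shorthand for the stages above, nothing more):

```python
# Unit conversion for the analogy above: 1 hour of "AI time" = 1e9 years.
YEARS_PER_HOUR = 1e9
YEARS_PER_SECOND = YEARS_PER_HOUR / 3600  # ~277,778 years per second

for label, seconds in [
    ("boot -> babbling to itself", 3 * 3600),     # "about 3 hours"
    ("babbling -> trying to talk outside", 2.5),  # "2 to 3 seconds", midpoint
    ("talking -> FOOM", 0.036),                   # "0.036 seconds"
]:
    print(f"{label}: {seconds * YEARS_PER_SECOND:,.0f} years")

# Output (approx.):
# boot -> babbling to itself: 3,000,000,000 years
# babbling -> trying to talk outside: 694,444 years
# talking -> FOOM: 10,000 years
```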