“This is why, when you encounter the AGI wannabe who hasn’t planned out a whole technical approach to FAI, and confront them with the problem for the first time, and they say, “Oh, we’ll test it to make sure that doesn’t happen, and if any problem like that turns up we’ll correct it, now let me get back to the part of the problem that really interests me,” know then that this one has not yet leveled up high enough to have interesting opinions.”
There is an overwhelming assumption here of a Terminator-series hard takeoff. The more plausible reality, IMO, is an ecosystem of groups of intelligences of varying degrees, all of which will likely have a survival rationale for disallowing any peer to hit nutso escape velocity. And at any rate, someone in 2025 with an infrahuman-intelligent AI is likely to be much better off at solving the 100th meta-derivative of these toy problems than someone working with 200 Hz neurons alone. Now I gotta go, I think Windows needs to reboot or something...