Depends how cool. I don’t know the space of self-modifying programs very well. Anything cooler than anything that’s been tried before, even marginally cooler, has a noticeable subjective probability of going to shit. I mean, if you kept on making it marginally cooler and cooler, it’d go to “oh, shit” one day after a sequence of “ooh, cools” and I don’t know how long that sequence is.
This means we should feel pretty safe, since AI does not appear to be making even incremental progress.
Really, it’s hard for anyone who is well-versed in the “state of the art” of AI to feel any kind of alarm about the possibility of an imminent FOOM. Take a look at this paper. Skim through the intro, note the long and complicated reinforcement learning algorithm, and check out the empirical results section. The test domain involves a monkey in a 5x5 playroom. There are some fun little complications, like a light switch and a bell. Note that these guys are top-class (Andrew Barto basically invented RL), and the paper was published at one of the top-tier machine learning conferences (NIPS), in 2005.
Call me a denier, but I just don’t think the monkey is going to bust out of his playroom and take over the world. At least, not anytime soon.
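For a sense of scale, here is a minimal sketch of what a "5x5 playroom"-style test domain looks like in code. This is not the paper's algorithm (which uses a far more elaborate intrinsically motivated RL setup); it is plain tabular Q-learning on a hypothetical 5x5 grid with a single goal cell, and every name and parameter below is invented for illustration.

```python
import random

# Toy 5x5 gridworld: the agent starts at (0, 0) and gets reward 1.0 for
# reaching the goal cell. This stands in, very loosely, for the playroom
# domain; it is a baseline sketch, not the paper's method.
SIZE = 5
GOAL = (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1       # illustrative hyperparameters

def step(state, action):
    """Apply an action, clamping to the grid; episode ends at the goal."""
    x, y = state
    dx, dy = action
    nxt = (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = {((x, y), a): 0.0
         for x in range(SIZE) for y in range(SIZE)
         for a in range(len(ACTIONS))}
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            if rng.random() < EPSILON:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[(state, i)])
            nxt, reward, done = step(state, ACTIONS[a])
            best_next = max(Q[(nxt, i)] for i in range(len(ACTIONS)))
            Q[(state, a)] += ALPHA * (reward + GAMMA * best_next - Q[(state, a)])
            state = nxt
    return Q

Q = train()
```

The whole learned "competence" fits in a 100-entry lookup table, which is part of the point: state-of-the-art test domains of that era were tiny, fully observable, and hand-specified.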
Taking progress in AI to mean greater real-world effectiveness:
Intelligence seems to have jumps in real-world effectiveness: e.g., the brains of great apes and humans are very similar, yet the difference in effectiveness is obvious.
So concluding that we are fine because the state of the art is not becoming any more effective (not making progress) would be very dangerous. Perhaps tomorrow some team of AI researchers will combine the current state-of-the-art solutions in just the right way, resulting in a massive jump in real-world effectiveness. Maybe enough for an “oh, shit” moment?
Regardless of the time frame, if the AI community is working towards AGI rather than FAI, we will likely (eventually) have an AI go FOOM, or at the very least an “oh, shit” moment (I’m not sure if they are equivalent).
Zing!
Also, good point, but this post is designed to produce such progress, is it not?