Even if we accepted that the tool vs. agent distinction was enough to make things “safe”, objection 2 still boils down to “Well, just don’t build that type of AI!”, which is exactly the same keep-it-in-a-box/don’t-do-it argument that most normal people make when they consider this issue. I assume I don’t need to explain to most people here why “We should just make a law against it” is not a solution to this problem, and I hope I don’t need to argue that “Just don’t do it” is even worse...
More specifically, fast forward to 2080, when any college kid with $200 to spend (in equivalent 2012 dollars) can purchase enough computing power that even the dumbest AIXI approximation schemes are extremely effective, good enough that creating an AGI agent would be a week’s work for any grad student who knew their stuff. Are you really comfortable living in that world with the idea that we rely on a mere gentleman’s agreement not to make self-improving AI agents? There’s a reason this is often viewed as an arms race: to a very real extent, the attempt to achieve Friendly AI is about building up a suitably powerful defense against unfriendly AI before someone (perhaps accidentally) unleashes one on us, and about making sure that it’s powerful enough to put down any unfriendly systems before they can match it.
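For concreteness, a back-of-envelope version of that premise (the roughly two-year price-performance doubling time is an assumption I’m adding here, not something the scenario hinges on exactly):

$$
\frac{2080 - 2012}{2~\text{yr/doubling}} = 34~\text{doublings} \;\Rightarrow\; 2^{34} \approx 1.7 \times 10^{10},
$$

i.e. under that assumption the college kid’s $200 buys on the order of ten billion times the computation it buys in 2012.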
From what I can tell, stripping away the politeness and cutting to the bone, the three arguments against working on friendly AI theory are essentially:
Even if you try to deploy friendly AGI, you’ll probably fail, so why waste time thinking about it?
Also, you’ve missed the obvious solution, which I came up with after a short survey of your misguided literature: just don’t build AGI! The “standard approach” won’t ever try to create agents, so just leave them be, and focus on Norvig-style dumb-AI instead!
Also, AGI is just a pipe dream. Why waste time thinking about it? [1]
FWIW, I mostly agree with the rest of the article’s criticisms, especially re: the organization’s achievements and focus. There’s a lot of room for improvement there, and I would take these criticisms very seriously.
But that’s almost irrelevant, because this article argues against the core mission of SIAI, using arguments that have been thoroughly debunked and rejected time and time again here, though they’re rarely dressed up this nicely. To some extent I think this proves the institute’s failure in PR: here is someone who claims to have read most of the sequences, and yet this criticism basically amounts to a sexing up of the gut-reaction arguments that even completely uninformed people make: AGI is probably a fantasy; even if it’s not, you won’t be able to control it; so let’s just agree not to build it.
Or am I missing something new here?
[1] Alright, to be fair, this is not a great summary of point 3, which really says that specialized AIs might help us solve the AGI problem in a safer way, and that a hard takeoff is “just a theory”, so realistically we’ll probably have more time to react and adapt.
purchase enough computing power so that even the dumbest AIXI approximation schemes are extremely effective
There isn’t that much computing power in the physical universe. I’m not sure even smarter AIXI approximations are effective on a moon-sized nanocomputer. I wouldn’t fall over in shock if a sufficiently smart one did something effective, but mostly I’d expect nothing to happen. There’s an awful lot that happens in the transition from infinite to finite computing power, and AIXI doesn’t solve any of it.
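A rough way to cash out “not that much computing power in the physical universe”, taking Seth Lloyd’s estimate of roughly $10^{120}$ elementary operations over the history of the observable universe as the assumed bound:

$$
2^{L} \gtrsim 10^{120} \quad\text{once}\quad L \gtrsim \frac{120}{\log_{10} 2} \approx 400,
$$

so brute-force enumeration of candidate programs runs out of universe somewhere around program lengths of a few hundred bits, long before reaching anything that looks like a useful world model.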
There isn’t that much computing power in the physical universe. I’m not sure even smarter AIXI approximations are effective on a moon-sized nanocomputer.
Is there some computation or estimate that these results are coming from? They don’t seem unreasonable, but I’m not aware of any estimates of how efficient large-scale AIXI approximations are in practice. (Although attempted implementations suggest that empirically things are quite inefficient.)
Naive AIXI is doing brute-force search through an exponentially large space. Unless the right Turing machine is 100 bits or less (which seems unlikely), Eliezer’s claim seems pretty safe to me.
Most of mainstream machine learning is trying to solve search problems through spaces far tamer than the search space for AIXI, and achieving limited success. So it also seems safe to say that even pretty smart implementations of AIXI probably won’t make much progress.
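A minimal sketch of where that exponential comes from, assuming only the Solomonoff-style 2^-length weighting over programs; the `simulate` argument below is a hypothetical stand-in for running a candidate program, not part of any real AIXI implementation:

```python
# Sketch only: shows why brute-force enumeration over programs blows up.
from itertools import product

def candidate_programs(max_bits):
    """Enumerate every bitstring up to max_bits as a candidate model.
    There are 2^(max_bits + 1) - 2 of them, i.e. exponential in max_bits."""
    for length in range(1, max_bits + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def posterior_weight(program, history, simulate):
    """Solomonoff-style weight: 2^-len(program) if the candidate program
    reproduces the observed history, zero otherwise."""
    return 2.0 ** -len(program) if simulate(program, history) == history else 0.0

# Enumeration alone, before any simulation cost, is already hopeless:
print(sum(1 for _ in candidate_programs(20)))   # 2,097,150 candidates at 20 bits
# At 100 bits it would be ~2^101, about 2.5e30 candidates -- hence the remark above.
```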
More specifically, fast forward to 2080, when any college kid with $200 to spend (in equivalent 2012 dollars) can purchase enough computing power
If computing power is that much cheaper, it will be because tremendous resources, including but certainly not limited to computing power, have been continuously devoted over the intervening decades to making it cheaper. There will be correspondingly fewer yet-undiscovered insights for a seed AI to exploit in the course of its attempted takeoff.
My point is that either Objection 2 holds, or tools are equivalent to agents. If one thinks the latter is true (EY doesn’t), then one should work on proving it. I have no opinion on whether it’s true or not (I am not a domain expert).