I wish. Google is the single most likely source of unfriendly AIs anywhere, and as far as I know they haven’t done any research into friendliness.
Agreed. I think they’ve explicitly denied that they’re working on AGI, but I’m not too reassured. They could be doing it in secret, probably without much consideration of Friendliness, and even if they aren’t, they’re probably among the entities most likely (along with DARPA and MIT, I’d say) to stumble upon seed AI mostly by accident (which is pretty unlikely, but not completely negligible, I think).
If Google were to work on AGI in secret, I’m pretty sure that somebody in power there would want to make sure it was friendly. Peter Norvig, for example, talks about AI friendliness in the third edition of AI: A Modern Approach, and he has a link to the SIAI on his home page.
Personally, I doubt that they’re working on AGI yet. They’re getting a lot of mileage out of statistical approaches and clever tricks; AGI research would be a lot of work for very uncertain benefit.
Google has one employee working (sometimes) on AGI.
http://research.google.com/pubs/author37920.html
It’s comforting, friendliness-wise, that one of his papers cites “personal communication with Steve Rayhawk.”
If they’ve explicitly denied doing research into AGI, they would have no reason to talk about friendliness research, so the absence of such talk isn’t additional evidence. I do think the OP is extremely overconfident, though.
I confess that I probably exaggerated the certainty. It’s more like 55-60%.
I actually used to have a (mostly joking) theory about how Google would accidentally create a sentient internet that would have control over everything and send a robot army to destroy us. Someone gave me a book called “How to Survive a Robot Uprising,” which described the series of events that would lead to a Terminator-style robot apocalypse, and Google was basically following it like a checklist.
Then I came here and learned more about nanotechnology and the singularity, and the joke became a lot less funny. (The techniques described in How to Survive a Robot Uprising are remarkably useless when you have about a day between noticing something is wrong and the whole world turning into paperclips.) It seems to me that, with the number of extremely smart people at Google, there’s gotta be at least some who are pondering this issue and thinking about it seriously. The evidence for Google being a genuinely idealistic company that just wants information to be free and to provide a good internet experience, versus Google having SOME kind of secret agenda, seems about 50/50 to me; there’s no way I can think of to tell the difference until they actually DO something with their massively accumulated power.
Given that I have no control over it, I basically just feel more comfortable believing they are doing something that a) uses their power in a way I can perceive as good, or at least well-intentioned, which might actually help, and b) lines up with their particular set of capabilities and interests.
I’d also note that the type of Singularity I’m imagining isn’t necessarily AI per se. It’s more the internet and humanity (or parts of it) merging into a superintelligent consciousness, gradually outsourcing certain brain functions to the increasingly massive processing power of computers.
I do think it’s possible, and not unlikely, that Google is purposefully trying to steer the future in a positive direction, although I think people there are likely to be more skeptical of “singularity” rhetoric than LWers. (I know at least three people who have worked at Google, and I have skirted the subject enough to feel pretty strongly that they don’t have a hidden agenda. This isn’t very strong evidence, but it’s the only evidence I have.)
I would assign up to a 30% probability or so to “Google is planning something which might be described as preparing to implement a positive singularity,” but less than a 5% chance that I would describe it that way myself, given more detailed definitions of “singularity” and “positive.”
I don’t entirely trust Google: they want everyone else’s information to be available, but they’re somewhat secretive about their own. There are good commercial reasons for them to do that, but it does show a lack of consistency.