Is the following (conspiracy?) theory implausible? Could it be that Google and the other big players are well aware of the dangers of AI and are actively working to thwart those dangers internally? They could present an image of unconcern so as not to spark government intervention into their affairs or public alarm getting in the way of their business plans. But shouldn’t we believe that they have considered very thoroughly whether there is imminent danger from AI, and are running their business in accordance with the resulting utility function?
Isn’t viewing the present situation among the big AI players as a tragedy-of-the-commons scenario too simplistic? Even from, say, Google’s selfish perspective, shouldn’t it be working to stave off an AI apocalypse?
I believe Google has an AI ethics board due to MIRI/FHI/&c. influence. If you are correct about their long game, then MIRI &c. almost certainly know of it, and perhaps you could test the hypothesis by looking for its shadows in those organizations’ strategy reports or something, if you’re truly interested. It may or may not be a great idea to shout the results from the rooftops, however.
When Google acquired DeepMind, part of the deal was that Google gets an ethics board. The impression I got was that the purpose of that board is to make sure DeepMind doesn’t evolve into a UFAI.
Jaan Tallinn, who made his fortune with Skype, is one of MIRI’s big donors and was also a major investor in DeepMind. I would imagine that he and a few other, less public people are responsible for demanding the board.
I think every MIRI workshop has included someone from Google.
Luke wrote somewhere that MIRI has received six figures over its lifetime from Google’s matching funds for employee donations.
On the other hand, Google is a big corporation; “well aware” is not a precise term when thinking about giant companies.
The most straightforward explanation of Google’s behavior is:
1. As you say, they have considered very thoroughly whether there is imminent danger from AI, and are running their business accordingly.
2. The conclusion of their consideration is that there is no imminent danger from AI.
You can’t assume that the danger from AI is so self-evident that anyone competent who considers it will come away agreeing that the danger is real.
Who has given the issue serious consideration? The only person I can think of who gave it serious consideration and concluded we don’t need to worry is Robin Hanson, but I really have no idea how to identify, or even estimate the number of, people who seriously considered the issue, decided there was nothing to worry about, and then went about their lives without mentioning it. Any thoughts on how to approach the question?
It would be special pleading to bring up “Google has seriously considered it” when it is part of “Google has seriously considered it and is hiding it”, yet not bring it up when it is part of “Google has seriously considered it and has decided it’s nothing to worry about”.
It is of course possible that Google has not considered it at all, but that would apply to Rasputin496’s original suggestion as much as it applies to mine, so mine would still be a more straightforward explanation than his, even if it’s not the most straightforward explanation in absolute terms.
(If you think I shouldn’t have used the word “most”, but should have said something like “most out of all other explanations that make the same assumptions”, then sure, I’ll accept that.)