I am trying to outline the main trends in AI safety this year. May I ask for advice on what I should add to or remove from the following list?
1. Elon Musk became a main player in the AI field with his OpenAI program. But the idea of AI openness is now opposed by his mentor Nick Bostrom, who is writing an article questioning the safety of openness in AI: http://www.nickbostrom.com/papers/openness.pdf
Personally, I think that here we see an example of a billionaire's arrogance. He intuitively arrived at an idea which looks nice and appealing and may work in some contexts. But to show that it will actually work, we need rigorous proof.
2. Google seems to be one of the main AI companies, and its AlphaGo beat the human champion at Go. After the score reached 3:0, Yudkowsky predicted that AlphaGo had achieved superhuman ability in Go and left humans behind forever, but AlphaGo lost the next game. This led Yudkowsky to say that it points to one more risk of AI: the risk of uneven AI development, where a system is sometimes superhuman and sometimes fails.
3. The number of technical articles in the field of AI control has grown exponentially, and it is not easy to read them all.
4. There are many impressive achievements in the field of neural nets and deep learning. Deep learning was the Cinderella of AI for many years, but now (starting from 2012) it is the princess. This was unexpected from the point of view of the AI safety community; MIRI only recently updated its research agenda and added the study of the safety of neural-net-based AI.
5. The doubling time on some benchmarks in deep learning seems to be about one year (see the sketch after this list).
6. The media overhype AI achievements.
7. Many new projects in AI safety have started, but some concentrate on the safety of self-driving cars (even the Russian lorry manufacturer KAMAZ is investigating AI ethics).
8. A lot of new investment is going into AI research, and salaries in the field are rising.
9. The military is increasingly interested in applying AI to warfare.
10. Google has an AI ethics board, but what it does is unclear.
11. AI safety research and its implementation seem to be lagging behind actual AI development.
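To make the doubling-time claim in point 5 concrete, here is a minimal sketch (the starting score and horizon are purely illustrative assumptions, not data from any real benchmark) of how a fixed one-year doubling time compounds:

```python
# Minimal sketch with purely illustrative numbers (not real benchmark data):
# assuming a metric doubles every year, show how it compounds over a few years.

def projected_value(start_value: float, years: float, doubling_time_years: float = 1.0) -> float:
    """Value after `years`, given exponential growth with a fixed doubling time."""
    return start_value * 2 ** (years / doubling_time_years)

if __name__ == "__main__":
    start = 100.0  # hypothetical benchmark score at year 0
    for year in range(6):
        print(f"year {year}: {projected_value(start, year):.0f}")
```

With a one-year doubling time, the hypothetical score grows from 100 to 3200 in five years; a two-year doubling time would give only 566 over the same period.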
OpenAI is significantly more nuanced than you might expect. For example, look at interviews with Ilya Sutskever where he discusses AI safety, or consider that Paul Christiano is (briefly) working for them. Also, where did you get the description of Bostrom as “Elon Musk’s mentor”?
Musk seems to be using many ideas from Bostrom: he has tweeted about Bostrom’s book on AI, and he mentions Bostrom’s simulation argument.
I think there is a difference between the idea of open AI as Musk originally proposed it and the actual work of the organisation named “OpenAI”. The latter seems to be more balanced.
Reading a few blog posts might not give a good overview of the reasons for which OpenAI was started. I think looking at its actual actions is a better way to understand what the project is supposed to do.
I read that you joined OpenAI, and I think it is a good project now, but the idea of “openness of AI” was fairly criticised by Bostrom in his new article. It seems that the organisation named “OpenAI” will do much more than promote openness. There is some confusion between the name of the organisation and the idea of letting everybody run their own AI code.
I joked that in the same way we could create an “Open Nuke” project that delivers reactors to every household, which would probably result in a very balanced world where every household could annihilate any other household, so everybody would be very polite and crime would be almost extinct.
I have no affiliation with OpenAI. In this case I’m driven by “don’t judge a book by its cover” motivations, especially in high-stakes situations.
I think taking the name of an organisation as the ultimate authority on what the organisation is about is a bit short-sighted.
Making good strategic decisions is complicated. It requires looking at where a move is likely to lead in the future.