Okay, I'll bite. Do you think any part of what MIRI does is at all useful?
It now seems like a somewhat valuable research organisation / think tank, because it now produces technical research that is receiving attention outside of this community. I also expect it to push certain people to rethink their work in a positive way and to raise awareness of existential risks. But there are enough caveats that I am not confident in this assessment (see below).
I never disagreed with the basic idea that research related to existential risk is underfunded. The issue is that MIRI’s position is extreme.
Consider the following hypothetical and actual positions people take with respect to AI risk, in ascending order of perceived importance:
1. Someone should actively think about the issue in their spare time.
2. It wouldn't be a waste of money if someone was paid to think about the issue.
3. It would be good to have a periodic conference that evaluates the issue and reassesses the risk every year.
4. There should be a study group whose sole purpose is to think about the issue.
5. All relevant researchers should be made aware of the issue.
6. Relevant researchers should be actively cautious and think about the issue.
7. There should be an academic task force that actively tries to tackle the issue.
8. Money should be actively raised to finance an academic task force to solve the issue.
9. The general public should be made aware of the issue to gain public support.
10. The issue is of utmost importance. Everyone should consider contributing money to a group trying to solve the issue.
11. Relevant researchers who continue to work in their field, irrespective of any warnings, are actively endangering humanity.
12. This is crunch time. This is crunch time for the entire human species. And it's crunch time not just for us, it's crunch time for the intergalactic civilization whose existence depends on us. Everyone should contribute all but their minimal living expenses in support of the issue.
Personally, most of the time, I alternate between positions 3 and 4.
Some people associated with MIRI take positions even more extreme than position 11 and go as far as banning the discussion of outlandish thought experiments related to AI. I believe that to be crazy.
Extensive and baseless fear-mongering might very well make MIRI's overall value negative.
Upvoted solely for the handy scale.