Are there major points that MIRI considered to be true 5 years ago but doesn’t consider to be true today?

Eliezer is the only staff member we still have around from 2010, and I’m not sure what he’d say his biggest updates have been. I believe he’s shifted significantly in the direction of thinking that the best option is to develop AI that’s high-capability and safe but has limited power and autonomy (e.g., Bostrom’s ‘genie AI’ as opposed to ‘sovereign AI’), which is interesting.
I came on at the end of 2013, so I’ve observed that MIRI staff were very surprised by how quickly people started taking AI more seriously and discussing it more publicly over the last year—how positive the reception to Superintelligence was, how successful the FLI conference was, etc. Also, I know that Nate now assigns moderate probability to the development of smarter-than-human AI systems being an event that plays out on the international stage, rather than taking most of the world by surprise.
Nate also mentioned on the EA Forum that Luke learned (and passed on to him) a number of lessons from SIAI’s old mistakes:
“The concrete list includes things like (a) constantly drive to systematize, automate, and outsource the busywork; (b) always attack the biggest constraint (by contrast, most people seem to have a default mode of ‘try and do everything that meets a certain importance level’); (c) put less emphasis on explicit models that you’ve built yourself and more emphasis on advice from others who have succeeded in doing something similar to what you’re trying to do.”
Aside from the impact of FLI etc., I’d guess MIRI’s median beliefs have changed at least as much because our staff has changed as because individual staff members have updated. Some new staff have longer AI timelines than Eliezer, assign higher probability to multipolar outcomes, etc. (I think Eliezer’s timelines have lengthened too, but I could be wrong there.)