> You seem confused about my exact past position. I was arguing against EAs who were like, “We’ll solve AGI with policy, therefore no doom.” I am not presently a great optimist about the likelihood of policy being an easy solution. There is just nothing else left.
You’re reading too much into this review. It’s not about your exact position in April 2021, it’s about the evolution of MIRI’s strategy over 2020-2024, and placing this Time letter in that context. I quoted you to give a flavor of MIRI attitudes in 2021 and deliberately didn’t comment on it to allow readers to draw their own conclusions.
I could have linked MIRI’s 2020 Updates and Strategy, which doesn’t mention AI policy at all. A bit dull.
In September 2021, there was a Discussion with Eliezer Yudkowsky which seems relevant. Again, I’ll let readers draw their own conclusions, but here’s a fun quote:
> I wasn’t really considering the counterfactual where humanity had a collective telepathic hivemind? I mean, I’ve written fiction about a world coordinated enough that they managed to shut down all progress in their computing industry and only manufacture powerful computers in a single worldwide hidden base, but Earth was never going to go down that route. Relative to remotely plausible levels of future coordination, we have a technical problem.
I welcome deconfusion about your past positions, but I don’t think they’re especially mysterious.
> I was arguing against EAs who were like, “We’ll solve AGI with policy, therefore no doom.”
The thread was started by Grant Demaree, and you were replying to a comment by him. You seem confused about Demaree’s exact past position. He wrote, for example: “Eliezer gives alignment a 0% chance of succeeding. I think policy, if tried seriously, has >50%”. Perhaps this is foolish, dangerous optimism. But it’s not “no doom”.