I don’t speak for Nate or Eliezer in this reply; where I speak about Eliezer I am of course describing my model of him, which may be flawed.
Three somewhat disjoint answers:
From my perspective, your point about algorithmic improvement only underlines the importance of having powerful people actually understand what the problem is and hold accurate working models. If that happens, then the specific policy measures have some chance of adapting to current conditions, or of being written in an adaptive manner in the first place.
Eliezer said a few years ago, “I consider the present gameboard to look incredibly grim,” and while he has more hope now than he had then about potential political solutions, it is not the case (as I understand it) that he now feels hopeful that these solutions will work. Our policy proposals are an extreme long shot.
One thing we can hope for, if we get a little more time rather than a lot more time, is that we might get various forms of human cognitive enhancement working, and these smarter humans can make more rapid progress on AI alignment.
It seems like including this in the strategy statement is crucial to communicating that strategy clearly (at least to people who understand enough of the background). A long-shot strategy looks very different from one where you expect to achieve at least useful parts of your goals.
A reasonable point, thank you. We said it pretty clearly in the MIRI strategy post in January, and I linked to that post here, but perhaps I should have reiterated it.
For clarity: we mostly just expect to die. But while we can see viable paths forward at all, we’ll keep trying not to.
Has MIRI considered supporting work on human cognitive enhancement? e.g. Foresight’s work on WBE.