I don’t think the title of this post captures the actual question studied in it. I don’t think any safety researcher is, in some sense, against halting AGI research, so the question in the title is trivial (at least within AI Safety). The actual question debated between you and Daniel (as far as I can see from your post) is whether it’s possible to implement such a halt completely, without too many adverse consequences.
On that question, I agree completely with Daniel’s position that it’s just not a viable option. I see no even vaguely plausible scenario in which the means to enforce such a restriction exist.
Thanks for your insights, Adam. If every AGI researcher is in some sense for halting AGI research, I’d like to get more confirmation of that. What are their arguments? Would they also apply to non-AGI researchers?
I can imagine the combination of Daniel’s points 1 and 2 stopping AGI researchers from speaking out on this. But for non-AGI researchers, why not explore something that looks difficult but may have existential benefits?