FWIW, I did not interpret Thane as necessarily having "high confidence" about the "architecture / internal composition" of AGI. It seemed to me that they were merely (and ~accurately) describing what the canonical views were most worried about. (And I think a discussion about whether or not being able to "model the world" counts as a statement about "internal composition" is sort of beside the point, or at least beyond the scope of what's really being said.)
It's fair enough if you would say things differently(!), but in some sense isn't that just saying: 'I would emphasize different aspects of the same underlying basic point'? And I'm not sure that really progresses the discussion. I.e. it's not as if Thane Ruthenis actually claims that "scarily powerful artificial agents" currently exist. It is indeed true that they don't exist and may never exist, but that's just not the point they are making, so it seems reasonable to me that they don't emphasize it.
----
I’d like to see justification of “under what conditions does speculation about ‘superintelligent consequentialism’ merit research attention at all?” and “why do we think ‘future architectures’ will have property X, or whatever?!”.
I think I would also like to see more thought about this. In some ways, after first getting into the general area of AI risk, I was disappointed that the alignment/safety community was not more focussed on questions like these. Like a lot of people, I'd originally been inspired by Superintelligence—significant parts of which relate to these questions, imo—only to be told that the community had 'kinda moved away from that book now'. And so I sort of sympathize with the vibe of Thane's post (and worry that there has been a sort of mission creep).