Part of my complaint was that the models didn’t seem to include enough gears for me to figure out what I could do to make things better.
I do think it’s fine to discuss models that represent reality accurately while not yet knowing what action-relevant implications they might have. A lot of AI-alignment-related thinking doesn’t really suggest many concrete actions to take, beyond “this seems like a problem, no idea what to do about it”.
I do not think we have no idea what to do about it. Creating common knowledge of a mistake, and ceasing to make that mistake yourself, are both doing something about it. If the problem is a coordination game, then coordinating to create common knowledge of the mistake seems like the obvious first move.
“this seems like a problem, no idea what to do about it”
I think this is fine if made clear, but the post seemed to imply (and the author later confirmed) that it did offer action-relevant implications.
FWIW, to restate my last comment in slightly different words: I agree with this criticism of the post.