As I currently understand your points, they don't seem like much evidence at all toward the wrapper-mind conclusion.
There are two wrapper-mind conclusions, and the purpose of my comment was to frame the distinction between them. The post seems to conflate them in the context of AI risk, mostly talking about one while alluding to AI-risk relevance that mostly concerns the other. I cited the standard reasons for taking either of them seriously, in the forms that make conflating them easy. That doesn't mean I accept the relevance of those reasons.
You can take a look at this comment for something about my own position on human values, which doesn't seem relevant to this post or my comments here. Specifically, I agree that human values, as they are expressed in people and as they are likely to be expressed in sufficiently human-like AGIs, don't have a wrapper-mind character; but I expect it's a good idea for humans or those AGIs to eventually build wrapper-minds to manage the universe (and that point seems much more relevant to AI risk). I've maintained this distinction for a while.