Thanks so much for writing this, quite useful to see your perspective!
> First, I don’t think you’ve added anything new to the conversation. Second, I don’t think what you have mentioned even provides a useful summary of the current state of the conversation: it is neither comprehensive nor the strongest version of the various arguments already made.
Fair enough!
> I don’t think that’s a popular opinion here. And while some people might just have a cluster of “brain/thinky” words in their head when they aren’t thinking closely about what the terms mean, I don’t think it’s a popular opinion among people in general either, unless they’re really not thinking about it.
I’ve seen this among the general public a surprising amount; for example, see the New York Times article linked. Agreed that it’s not remotely popular on LessWrong.
> Citation needed.
Fair enough. I’m not very sympathetic to panpsychism, but it probably would have been worth mentioning, though I’m not sure how much it would add for most readers.
> Assuming we make an AI conscious, and that its consciousness is actually something like what we mean by the word colloquially (human-like, not merely in the panpsychist sense), it isn’t clear that this makes it a moral concern.
That’s true, and it might be a moral concern even without consciousness. But on many moral accounts consciousness is highly relevant, and most people would probably say it is.
> Meanwhile, I feel there is a lot of lower-hanging fruit in neuroscience that would be useful now and would also make this problem easier to solve later.
Curious what research you think would help here?
> Same as above. Also, the prevailing view here is that the possibility of AI killing us is much more important, and if we’re theoretically spending (social) capital to make these people care about things, the not-killing-us part is astronomically more important.
I agree with this. But at the same time, the public conversation keeps coming back to consciousness, so I wanted to address it, and really address it, rather than just brush it aside. I don’t really think discussion of this detracts from x-risk; both point in the direction of being substantially more careful, for example.
> They cannot choose not to, because they don’t know what it is, so this is unactionable and useless advice.
Good point. I think I had meant to say that researchers should not try to do this. I will edit the post to say that.
I think my recommendations are probably not well targeted enough; I didn’t really specify to whom I was recommending them. I’ll try to avoid that in the future.