(This critique contains not only my own critiques, but also critiques I would expect others on this site to have)
First, I don’t think that you’ve added anything new to the conversation. Second, I don’t think what you have mentioned even provides a useful summary of the current state of the conversation: it is neither comprehensive, nor the strongest version of various arguments already made. Also, I would prefer to see less of this sort of content on LessWrong. Part of that might be because it is written for a general audience, and LessWrong is not much like the general audience.
This is an example of something that seems to push the conversation forward slightly, by collecting all the evidence for a particular argument and by reframing the problem as different, specific, answerable questions. While I don’t think this actually “solves” the hard problem of consciousness, as Halberstadt notes in the comments, I think it could help clear up some confusions for you. Namely, I think it is most meaningful to start from a vaguely panpsychist model in which everything is conscious and what we mean by consciousness is “the feeling of what it is like to be,” and then move on to talk about what sorts of consciousness we care about: namely, consciousness that looks remotely similar to ours. In this framework, AI is already conscious, but I don’t think there’s any reason to care about that.
More specifics:
Consciousness is not, contrary to the popular imagination, the same thing as intelligence.
I don’t think that’s a popular opinion here. And while I think some people might just have a cluster of “brain/thinky” words in their head when they don’t think about the meaning of things closely, I don’t think this is a popular opinion of people in general unless they’re really not thinking about it.
But there’s nothing that it’s like to be a rock
Citation needed.
But that could be very bad, because it would mean we wouldn’t be able to tell whether or not the system deserves any kind of moral concern.
Assuming we make an AI conscious, and that consciousness is actually something like what we mean by it more colloquially (human-like, not just panpsychistly), it isn’t clear that this makes it a moral concern.
There should be significantly more research on the nature of consciousness.
I think there shouldn’t be. At least not yet. The average intelligent person thrown at this problem produces effectively nothing useful, in my opinion. Meanwhile, I feel like there is a lot of lower-hanging fruit in neuroscience that would be useful now and would also make this problem easier to solve later.
In my opinion, you choose to push for more research when you have questions you want answered. I do not consider humanity to have actually phrased the hard problem of consciousness as a question, nor do I think we currently have the tools to notice an answer if we were given one. I think there is potentially useful philosophy to be done around, though not on, the hard problem of consciousness: work aimed at actually asking the question and at learning how we could recognize an answer.
Researchers should not create conscious AI systems until we fully understand what giving those systems rights would mean for us.
They cannot choose not to, because they don’t know what consciousness is, so this is unactionable and useless advice.
AI companies should wait to proliferate AI systems that have a substantial chance of being conscious until they have more information about whether they are or not.
Same thing as above. Also, the prevailing view here is that the risk that AI will kill us is much more important, and if we’re theoretically spending (social) capital to make these people care about things, the not-killing-us part is astronomically more important.
AI researchers should continue to build connections with philosophers and cognitive scientists to better understand the nature of consciousness
I don’t think you’ve made strong enough arguments to support this claim given the opportunity costs. I don’t have an opinion on whether or not you are right here.
Philosophers and cognitive scientists who study consciousness should make more of their work accessible to the public
Same thing as above.
Nitpick: there’s something weird going on with your formatting because some of your recommendations show up on the table of contents and I don’t think that’s intended.
Thanks so much for writing this, quite useful to see your perspective!
First, I don’t think that you’ve added anything new to the conversation. Second, I don’t think what you have mentioned even provides a useful summary of the current state of the conversation: it is neither comprehensive, nor the strongest version of various arguments already made.
Fair enough!
I don’t think that’s a popular opinion here. And while I think some people might just have a cluster of “brain/thinky” words in their head when they don’t think about the meaning of things closely, I don’t think this is a popular opinion of people in general unless they’re really not thinking about it.
I’ve seen this among the general public a surprisingly large amount. For example, see the linked New York Times article. Agreed, it’s not remotely popular on LessWrong.
Citation needed.
Fair enough. I’m not very sympathetic to panpsychism, but it probably could have been worth mentioning. Though I am not really sure how much it would add for most readers.
Assuming we make an AI conscious, and that consciousness is actually something like what we mean by it more colloquially (human-like, not just panpsychistly), it isn’t clear that this makes it a moral concern.
That’s true, and it might be a moral concern even without consciousness. But on many moral accounts, consciousness is highly relevant. I think most people would probably say it is relevant.
Meanwhile, I feel like there is a lot of lower-hanging fruit in neuroscience that would be useful now and would also make this problem easier to solve later.
Curious what research you think would help here?
Same thing as above. Also, the prevailing view here is that the risk that AI will kill us is much more important, and if we’re theoretically spending (social) capital to make these people care about things, the not-killing-us part is astronomically more important.
I agree with this. But at the same time the public conversation keeps talking about consciousness. I wanted to address it for that reason, and really address it, rather than just brush it aside. I don’t really think it’s true that discussion of this detracts from x-risk; both point in the direction of being substantially more careful, for example.
They cannot choose not to, because they don’t know what consciousness is, so this is unactionable and useless advice.
Good point. I think I had meant to say that researchers should not try to do this. I will edit the post to say that.
I think my recommendations are probably not well targeted enough; I didn’t really specify whom I was recommending them to. I’ll try to avoid doing that in the future.