But a realistic pathway towards eventually solving the “hard problem of consciousness” is likely to include tight coupling between biological and electronic entities, resulting in some kind of “hybrid consciousness” which would be more amenable to empirical study.
Usually one assumes that this kind of research would be initiated by humans trying to solve the “hard problem” (or looking for other applications for which this kind of setup might be helpful). But research into tight coupling between biological and electronic entities could also be initiated by AIs curious about this mysterious “human consciousness” so many texts talk about, and wishing to experience it first-hand. In this sense, we don’t need all AIs to be curious in this way; it’s enough if some of them are sufficiently curious.
To a lesser extent, we already have this problem among humans: https://www.lesswrong.com/posts/NyiFLzSrkfkDW4S7o/why-it-s-so-hard-to-talk-about-consciousness. The stratification into “two camps” described there is rather spectacular.