I strongly agree that we should upgrade in this sense.
I also think that a lot of this work might initially be doable with high-end non-invasive BCIs (which is somewhat less risky and can also be done much faster). High-end EEG already seems to be used successfully to decode the images a person is looking at: https://www.biorxiv.org/content/10.1101/787101v3 And the computer can adjust its audio-visual output to aim for particular EEG changes in real time (so fairly tight coupling is possible, which carries with it both opportunities and risks).
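To make the "aim for particular EEG changes" part concrete, here is a minimal sketch of one closed-loop feedback cycle. It assumes a single EEG channel and uses relative alpha-band power as the target feature; the signal source, the stimulus hook, the target value, and the gain are all placeholders I made up for illustration, not a tested protocol.

```python
# Minimal sketch of a closed-loop "adjust output toward a target EEG change" cycle.
# All signal sources and stimulus hooks are hypothetical placeholders; a real rig
# would read from an actual EEG stream and drive an actual audio-visual renderer.
import numpy as np

FS = 256            # assumed EEG sampling rate, Hz
WINDOW = FS * 2     # 2-second analysis window
TARGET_ALPHA = 0.3  # arbitrary target for relative alpha-band power

def read_eeg_window():
    """Placeholder: return the latest WINDOW samples from one EEG channel."""
    return np.random.randn(WINDOW)

def relative_band_power(x, lo=8.0, hi=12.0, fs=FS):
    """Fraction of total spectral power in the [lo, hi] Hz band."""
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum() / spectrum.sum()

def update_stimulus(brightness):
    """Placeholder: push a new brightness value to the audio-visual output."""
    pass

brightness, gain = 0.5, 0.5
for _ in range(100):                     # 100 feedback cycles
    alpha = relative_band_power(read_eeg_window())
    error = TARGET_ALPHA - alpha
    brightness = float(np.clip(brightness + gain * error, 0.0, 1.0))
    update_stimulus(brightness)          # close the loop on the next window
```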
I have a possible post sitting in my Drafts, and it says the following, among other things:
Speaking from the experimental viewpoint, we should ponder feasible experiments in creating hybrid consciousness between tightly coupled biological entities and electronic circuits. Such experiments might start shedding some empirical light on the capacity of electronic circuits to support subjective experience and might constitute initial steps towards eventually being able “to look inside the other entity’s subjective realm”.
[ ]
Having Neuralink-like BCIs is not a hard requirement in this sense. A sufficiently tight coupling can probably be achieved by taking EEG and polygraph-like signals from the biological entity and giving appropriately sculpted audio-visual signals from the electronic entity. I think it’s highly likely that such non-invasive coupling will be sufficient for initial experiments. Tight closed loops of this kind pose formidable safety issues even with non-invasive connectivity, and since this line of research assumes that human volunteers will at some point try this, observing the resulting subjective experiences and reporting on them, ethical and safety considerations will have to be addressed.
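For instance, any actual rig would need hard, automatic interlocks on the biological side. Here is a purely illustrative sketch; the thresholds, signal names, and abort logic are invented, and in practice would be set together with medical and ethics oversight:

```python
# Hypothetical session wrapper with hard safety cut-offs around the coupling loop.
import time

MAX_HEART_RATE = 120        # bpm; invented bound for illustration only
MAX_SESSION_SECONDS = 600   # hard stop after 10 minutes

def read_physiology():
    """Placeholder: return polygraph-like readings, e.g. {'heart_rate': 72.0}."""
    return {"heart_rate": 72.0}

def stop_all_output():
    """Placeholder: immediately blank all audio-visual output from the electronic side."""
    pass

def run_coupled_session(step_fn):
    """Run one read-signals / emit-audio-visual cycle (step_fn) until a bound trips."""
    start = time.monotonic()
    while time.monotonic() - start < MAX_SESSION_SECONDS:
        vitals = read_physiology()
        if vitals["heart_rate"] > MAX_HEART_RATE:
            stop_all_output()
            return "aborted: physiological bound exceeded"
        step_fn()
    stop_all_output()
    return "completed: session time limit reached"
```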
Nevertheless, assuming that one finds a way for such experiments to go ahead, one can try various things. E.g. one can train a variety of differently architected electronic circuits to approximate the same input-output function, and see if the observed subjective experiences differ substantially depending on the architecture of the electronic circuit in question. A positive answer would be the first step to figuring out how activity of an electronic circuit can be directly associated with subjective experiences.
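To make that experiment a bit more concrete, here is a toy sketch of the "same function, different architectures" step. PyTorch is used purely for convenience (the draft does not assume any particular framework), and the target function and architectures are arbitrary examples:

```python
# Fit several differently architected networks to the same input-output function,
# so that internal structure can later be varied while behaviour is held fixed.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(2 * x) + 0.3 * x            # the shared target function

architectures = {
    "wide_shallow": nn.Sequential(nn.Linear(1, 128), nn.Tanh(), nn.Linear(128, 1)),
    "deep_narrow":  nn.Sequential(nn.Linear(1, 16), nn.Tanh(),
                                  nn.Linear(16, 16), nn.Tanh(),
                                  nn.Linear(16, 1)),
    "relu_variant": nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1)),
}

for name, model in architectures.items():
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(2000):
        opt.zero_grad()
        loss = F.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    print(f"{name}: final MSE {loss.item():.5f}")

# After training, each circuit realises (nearly) the same input-output map, so any
# differences observed during coupling could be attributed to the circuit's
# architecture and internal activity rather than to its external behaviour.
```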
If people start organizing for this kind of work, I’d love to collaborate.