Well, I am not a “leading AI researcher” (at least, not in the sense of having a track record of beating SOTA results on consensus benchmarks, which is how that notion is usually defined), but I am one of those trying to change the situation in which non-invasive BCI is not more popular. Whether I can have any effect on this does, of course, depend on whether I have enough coding and organizational skills.
But one of the points of the dialogue for me was to see if that might actually be counter-productive from the viewpoint of AI existential safety (and if so, whether I should reconsider).
And in this sense, some particular hidden pitfalls to watch for were identified during this dialogue. (Previously, I was mostly worried about direct dangers to participants stemming from tight coupling with actively and not always predictably behaving electronic devices, even when the coupling is via non-invasive devices, so I was spending some time minding those personal safety aspects and trying to establish a reasonable personal safety protocol.)
Non-invasive BCI, as in, getting ChatGPT suggestions and ads in your thoughts? I think even if we forget about AI safety for a minute, this idea feels so dystopian (especially if you imagine your kids doing it) that it’s better not to go there.
And if you’re thinking about offering this tech to AI researchers only, that doesn’t seem feasible. As soon as it exists, people will know they can make bank by commercializing it and someone will.
But yeah, still, the biggest hurdle for this idea is simply that all eyes are on AI, which is moving very fast. So we’re not getting BCI before AI becomes a big part of the economy (which is starting to happen now, well before AGI and self-improvement). And after that we might well get a bunch of sci-fi stuff for a while, but the world will be careening off course in a way unfixable by humans, which to me counts as losing the game.
> Non-invasive BCI, as in, getting ChatGPT suggestions and ads in your thoughts?
I was mostly thinking in terms of the computer-to-brain direction, represented by psychoactive audio-visual modalities. Yes, this might be roughly on par with taking strong psychedelics or strong stimulants, but with different safety-risk trade-offs (better ability to control the experience and fewer physical side effects if things go well, but with the potential for a completely different set of dangers if things go badly).
Yes, this might not necessarily be something one wants to dump on the world at large, at least not until select groups have more experience with it and the safety-risk trade-offs are better understood...
> And if you’re thinking about offering this tech to AI researchers only, that doesn’t seem feasible. As soon as it exists, people will know they can make bank by commercializing it and someone will.
Well, the spec exists today (and I am sure this is not the only spec of this kind). All that separates this from reality is the willingness of a small group of people to get together and experiment with inexpensive ways of building it.
Given how sluggish people are at converting theoretically obvious things to reality as long as those things are not in the mainstream (cf. ReLU: it has been clear that it must be great since at least the 2000 paper in Nature, yet the field ignored it until 2011), I don’t know if “internal use tools” would cause all that much excitement.
If you need a more contemporary example related to Cyborgism, Janus’ Loom is a super-powerful interface to ChatGPT-like systems; it exists, it’s open source, etc. And so what? How many people even know about it, never mind use it?
Of course, if people start advertising it along the lines of, “hey, take this drug-free full-strength psychedelic trip”, yeah, then it’ll become popular ;-)
> If we want to continue thinking in terms of “us-vs-them”, the game has been lost already.
I think this is mostly determined by economics, to what extent human thinking and AI are complementary goods to each other, and to what extent they’re substitutes for each other. Right now AIs are still used by humans, but it seems to me that the market is heading toward putting humans out of jobs entirely, because an AI query costs much less than an AI-with-human-in-the-loop query.
> the market is heading toward putting humans out of jobs entirely
I think so.
There will be some exceptions, e.g. humans who choose to tightly merge with AIs or otherwise strongly upgrade themselves, or some local economic activity in communities that deliberately pursue a non-automation path, but the economic status of most humans will probably be no different from that of children or retirees (that is, if things go well).
So, yes, the problem of making sure that life is interesting and meaningful will definitely exist (if things go well). AIs might help find various non-trivial solutions to this (since not everyone is happy simply pursuing arts, sciences, meditation, hikes, travel, and social life for their own sake).
So the question is, why might things go well, and what can we do to increase the chances of that...
I do think a collaborative human-AI mindset, rather than an adversarial one, is the only feasible way; cf. my comments on Ilya Sutskever’s thinking in https://www.lesswrong.com/posts/TpKktHS8GszgmMw4B/ilya-sutskever-s-thoughts-on-ai-safety-july-2023-a.
If we want to continue thinking in terms of “us-vs-them”, the game has been lost already.