The type of AI humanity has chosen to create so far is unsafe, for soft social reasons rather than technical ones.
I saw this image on X:
and I not only endorse this lady’s comment, but would like to briefly expand on it in terms of AI safety.
How many people here were attracted to the concept of AI safety by Eliezer Yudkowsky’s engaging writing style across a few different topics and genres? Or by any number of other human authors?
Now what if the next generation of human authors, artists, and scientists (broadly conceived) find themselves marginalized by the output of ChatGPT-5, and by any number of comparatively mediocre thinkers who leverage LLMs to share boilerplate arguments and thoughts optimized by the LLM and/or its creators for memetic virulence?
What if the next Eliezer has already been ignored in favour of what these AIs have to say?
Isn’t that far more immediately dangerous than the possibility of an AI that isn’t, in mathematical terms, provably safe? Like, this is self-evidently dangerous and it’s happening right now!
Didn’t we use to have discussions about how a sufficiently smart black-boxed AI could do all sorts of incredibly creative things to manipulate humanity via the Internet, including manipulating or blackmailing its human guard through a mere text terminal?
Well, it seems to me that we’re sleepwalking into letting GPT and its friends manipulate us all without it even having to be all that creative.
(how sure are you that this post wasn’t written by ChatGPT???)
For this reason, I would much prefer to see GPT and similar AIs put to use in fields such as toilet cleaning rather than art and writing. (By the way, I have done that job and never saw myself as a Luddite...) GPT in robot guise could make a really good toilet cleaner, and this type of intellectual effort on the part of LLM developers would not only serve human needs but also counteract the ongoing replacement of human thought by LLM-aided or LLM-guided thought.
tl;dr: I feel that GPT is accumulating power the way Hitler or Lenin would. The solution is what Joanna Maciejewska said.