Take, for example, AISafety.info, which tries to explain the case:
“In recent years, AI has exceeded people’s expectations in a wide variety of domains — including playing Go, composing human-like text, writing code, and modeling protein folding”
Not a single current “AI” can do all of these simultaneously. Each of them is a narrow neural net that can’t even learn and perform more than one task, to say nothing of escaping the power of Alt+F4.
“Advanced AI could provide great benefits, but it could also cause unprecedented disasters, and even human extinction, unless difficult technical safety problems are solved and humanity cooperates to deploy advanced AI wisely.”
As if other pieces of technology weren’t just as “autonomous” and dangerous.
“Rapid progress in the capabilities of AI systems has recently pushed the topic of existential risk from AI into the mainstream. The abilities of systems like GPT-4 used to seem out of reach in the foreseeable future.”
Ah, GPT-4. This network processes text. The text may come from an image-recognition network, and may be handed off to an image-generation network.
It still sees the world as text, it doesn’t learn from users’ inputs, and it does nothing until an input is submitted. How is it dangerous? Especially since...
“The leading AI labs today are aiming to create “artificial general intelligence” in the not-too-distant future”
Now my words start to sound brash and I look like an overconfident noob, but… this phrase is most likely an outright lie. GPT-4 and Gemini aren’t even two-task networks. Neither can take a picture and edit it. Instead, an image-recognition network passes text to a blind text network that works only with text and lacks even a basic understanding of space; that network then writes a prompt for an image-generator network that never sees the original image.
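To make that concrete, here is a minimal sketch of the kind of pipeline I mean. The function names are hypothetical placeholders, not any vendor’s actual API; each one stands for a separate model that only ever sees its own input format.

```python
# Minimal sketch of the multi-network pipeline described above.
# All names here are hypothetical placeholders, not a real API:
# each function stands for a separate model that only sees its own
# input format and keeps no memory between calls.

def recognize_image(image_bytes: bytes) -> str:
    """Image-recognition network: reduces pixels to a text caption."""
    ...  # placeholder body

def generate_text(prompt: str) -> str:
    """Text-only network: reads text in, writes text out.
    It never sees the original pixels and does nothing until called."""
    ...  # placeholder body

def generate_image(prompt: str) -> bytes:
    """Image-generation network: paints a new image from a text prompt,
    with no access to the original image."""
    ...  # placeholder body

def edit_picture(image_bytes: bytes, user_request: str) -> bytes:
    # 1. The vision model turns the image into a caption: only text survives.
    caption = recognize_image(image_bytes)
    # 2. The text model works purely on text: caption + request in,
    #    a prompt for the image generator out.
    new_prompt = generate_text(
        f"Image description: {caption}\n"
        f"User request: {user_request}\n"
        "Write a prompt for an image generator."
    )
    # 3. The generator draws from scratch; the original pixels are gone.
    return generate_image(new_prompt)
```

The point of the sketch: the text model in the middle is stateless and text-only, so nothing in this chain ever “sees” the picture it is supposedly editing.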
Talking about general intelligence (the ability to learn any task) when the largest companies lie even about a two-task one is what initially made me ragequit the topic.
To say nothing of that “AI cashier” fraud (see the linked Guardian and Washington Post articles and the Reddit post), and the fact that OpenAI doesn’t seem to consider AI safety (the safety of a hypothetical project that isn’t even being developed?) all that important.
“AI alignment researchers haven’t figured out how to take an objective and ensure that a powerful AI will reliably pursue that exact objective. The way the most capable systems are trained today makes it hard to understand how they even work. The research community has been working on these problems, trying to invent techniques and concepts for building safe systems.”
That is probably the question with the greatest potential to change my mind, so I’ll ask it as politely as I can: has any progress been made since the Terminator franchise, which I was rightly told on LessWrong not to treat as a good example?
“It’s unclear whether these problems can be solved before a misaligned system causes an irreversible catastrophe.”
You can’t solve these, by definition. Suppose that an AI does emerge, accidentally (since not that many man-hours are going into building it on purpose), and that “intelligence safety” is achievable through theory and philosophy. Then the AI will be better and faster at developing “human intelligence safety” than you would be at developing safety for that AI. The moment you decide to play this game, no move can bring you closer to victory. Don’t waste resources on it.
“Agency.” Yet no one has shown that it is anything more than a (quite possibly wrong) hypothesis. Humans don’t work like that: no one has a primary, unchangeable goal that they didn’t acquire through learning, or that they wouldn’t override for a reason. Nothing seems to explain why a general AI would, even if (a highly likely “if”) it doesn’t work like a human mind.
I just can’t agree with AI safety. Why am I wrong?