I just can’t agree with AI safety. Why am I wrong?
Take, for example, AISafety.info, which tries to explain the case.
“In recent years, AI has exceeded people’s expectations in a wide variety of domains — including playing Go, composing human-like text, writing code, and modeling protein folding”
Not a single current “AI” can do all of those things at once. Each of them is a neural network that can’t even learn and perform more than one task, to say nothing of escaping the power of alt+f4.
“Advanced AI could provide great benefits, but it could also cause unprecedented disasters, and even human extinction, unless difficult technical safety problems are solved and humanity cooperates to deploy advanced AI wisely.”
As if other pieces of technology weren’t just as “autonomous” and dangerous.
“Rapid progress in the capabilities of AI systems has recently pushed the topic of existential risk from AI into the mainstream. The abilities of systems like GPT-4 used to seem out of reach in the foreseeable future.”
Ah, GPT-4. This network processes text. That text may come from an image-recognition model, and its output may be sent on to an image-generation model.
It still sees the world as text, it doesn’t learn from users’ inputs, and it doesn’t act unless an input button is pressed. How is it dangerous? Especially since...
“The leading AI labs today are aiming to create “artificial general intelligence” in the not-too-distant future”
Now my words start to sound brash and I look like an overconfident noob, but… this phrase… is most likely an outright lie. GPT-4 and Gemini aren’t even two-task networks. Neither can take a picture and edit it. Instead, an image-recognition model hands text to the blind text model, which works only with text and lacks any basic understanding of space. That model then writes a prompt for an image-generator model, which can’t see the original image.
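To make my claim concrete, here is a rough sketch of that glued-together pipeline as I understand it. Every function name is made up for illustration, not any real API.

```python
# A rough sketch of the text-mediated pipeline described above.
# Every function here is a hypothetical stand-in, not a real API.

def describe_image(image: bytes) -> str:
    """Image-recognition model: pixels in, text caption out."""
    return "a cat sitting on a red chair"  # placeholder output

def text_model(caption: str) -> str:
    """Text-only model: it never sees the pixels, only the caption."""
    return f"Draw: {caption}, but wearing a hat"  # placeholder output

def generate_image(prompt: str) -> bytes:
    """Image-generation model: it never sees the original image either."""
    return b"<new image bytes>"  # placeholder output

def edit_image(image: bytes) -> bytes:
    # The "edit" is really caption -> new prompt -> new image;
    # no single model ever holds both the picture and the instruction.
    caption = describe_image(image)
    new_prompt = text_model(caption)
    return generate_image(new_prompt)

print(edit_image(b"<original image bytes>"))
```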
Talking about general intelligence (the ability to learn any task), when the largest companies are only lying about a TWO-task one, is what initially made me ragequit the topic.
To say nothing of that “AI cashier” fraud, or the fact that OpenAI doesn’t seem to consider AI safety (the safety of a hypothetical project that isn’t even being developed?) all that important.
“AI alignment researchers haven’t figured out how to take an objective and ensure that a powerful AI will reliably pursue that exact objective. The way the most capable systems are trained today makes it hard to understand how they even work. The research community has been working on these problems, trying to invent techniques and concepts for building safe systems.”
That is probably the question with the greatest potential to change my mind, and I should ask it as politely as I can. Have they made any progress since the Terminator franchise, which I was rightfully told on LessWrong not to treat as a good example?
“It’s unclear whether these problems can be solved before a misaligned system causes an irreversible catastrophe.”
You can’t solve these, by definition. Suppose that AI develops accidentally (since not that many man-hours are going into building it on purpose), and that intelligence safety is achievable through theory and philosophy. Then the AI will be better and faster at developing “human intelligence safety” than you would be at developing safety for that AI. The moment you decide to play this game, no move can bring you closer to victory. Don’t waste resources on it.
“Agency.” Yet no one has shown that it isn’t just a (quite possibly wrong) hypothesis. Humans don’t work like that: nobody has some primary, unchangeable goal that they didn’t get from learning, or wouldn’t override for a reason. Nothing seems to explain why a general AI would, even if (a highly likely “if”) it doesn’t work like a human mind.
If you’re actually interested, see my Capabilities and alignment of LLM cognitive architectures. That’s one way we can get from where we are, which is very limited, to “Real AGI” that will be both useful and dangerous.
This community mostly isn’t worried about current AI. We’re worried about future AIs.
The rate at which they get there is difficult to predict. But it’s not “anyone’s guess”. People with more time-on-task in thinking about both current AI and what it would take to constitute a useful, competent, and dangerous mind (e.g., human cognition or a hypothetical general AI) tend to have short timelines.
We could be wrong, but assuming it’s a long way off is even more speculative.
That’s why we’re trying to solve alignment ASAP (or, in some cases, arguing that it’s so difficult that we must stop building AGI altogether). It’s not clear which is the better strategy, because we haven’t gotten far enough on alignment theory. That’s why you see a lot of conflicting claims from well-informed people.
But dismissing the whole thing is just wishful thinking. Even when experts do it, it just doesn’t make sense, because there are other experts with equally good arguments that it’s deadly dangerous in the short term. Nobody knows. So seeing non-experts dismiss it because they “trust their intuitions” is somewhere between tragedy and comedy.
Thanks.
Unlike humans, machines can be extended / combined. If you have two humans, one of them is a chess grandmaster and the other is a famous poet… you have two human specialists. But if you have two machines, one great at chess and another great at poetry, you could in principle combine them to get one machine that is good at both. (You would need one central module that gives commands to the specialized modules, but that seems like something an LLM could already manage.)
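As a toy illustration of that “central module” idea, here is a minimal sketch with made-up specialist functions; nothing here is a real system.

```python
# A minimal sketch of the "one central module + specialist modules" idea.
# Both specialists and the router are toy stand-ins, not real systems.

def chess_specialist(position: str) -> str:
    return "e2e4"  # placeholder: a real engine would search the position

def poetry_specialist(theme: str) -> str:
    return f"An ode to {theme}..."  # placeholder: a real model would write a poem

def central_module(task: str, payload: str) -> str:
    # The router only has to pick which specialist to call; classifying
    # a request like this is the part an LLM could plausibly already do.
    if task == "chess":
        return chess_specialist(payload)
    if task == "poetry":
        return poetry_specialist(payload)
    raise ValueError(f"no specialist available for task: {task}")

print(central_module("chess", "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"))
print(central_module("poetry", "autumn"))
```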
LLMs can learn new things. At least in the sense that they have a long-term memory which was trained and probably cannot be updated (I don’t understand in detail how these things work), but also a smaller short-term memory where they can choose to store some information (it’s basically as if the information stored there were added to every prompt made afterwards). This feature was recently added to ChatGPT.
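Roughly, that short-term memory behaves like notes that get prepended to every later prompt. A toy sketch of the idea, not how ChatGPT’s feature is actually implemented internally:

```python
# A toy model of "short-term memory": stored notes simply get prepended
# to every later prompt. Purely illustrative; not how ChatGPT's memory
# feature actually works under the hood.

def call_model(prompt: str) -> str:
    return f"(model reply to: {prompt!r})"  # stand-in for the frozen LLM

class MemoryChat:
    def __init__(self) -> None:
        # The trained weights would be the fixed "long-term" memory;
        # this list is the small, updatable "short-term" store.
        self.memory: list[str] = []

    def remember(self, note: str) -> None:
        self.memory.append(note)

    def ask(self, user_prompt: str) -> str:
        notes = "\n".join(f"Note to self: {m}" for m in self.memory)
        return call_model(f"{notes}\n\nUser: {user_prompt}")

chat = MemoryChat()
chat.remember("The user prefers short answers.")
print(chat.ask("Explain transformers."))  # the stored note rides along with the prompt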
When an AI becomes smart enough to make or steal some money, obtain fake human credentials, rent some space in the cloud, and copy itself there, you can keep pressing alt+f4 as much as you want.
Are we there yet? No. But remember that five years ago, if someone had described ChatGPT, most people would have laughed at them and said we wouldn’t get there in a hundred years.
Ignoring for the moment the “text-to-image and image-to-text models use a shared latent space to translate between the two domains, and so they are, to a significant extent, operating on the same conceptual space” quibble...
GPT-4 and Gemini can both use tools, and can also build tools. Humans without access to tools aren’t particularly scary on a global scale. Humans with tools can be terrifying.
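For concreteness, “using tools” means something like the loop below: the model emits a structured request, outside code executes it, and the result is fed back in. The fake model and toy calculator are purely illustrative.

```python
# A bare-bones sketch of what "an LLM using a tool" means: the model emits
# a structured request, outside code runs it, and the result is fed back.
# The fake model and the tool are purely illustrative.

import json

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy tool
}

def model_step(conversation: str) -> str:
    # Stand-in for the LLM; a real model decides on its own when to call a tool.
    if "[tool result:" in conversation:
        return "The answer is " + conversation.split("[tool result:")[-1].strip(" ]")
    return json.dumps({"tool": "calculator", "input": "2 + 2"})

def run_with_tools(user_request: str) -> str:
    reply = model_step(user_request)
    call = json.loads(reply)
    if call.get("tool") in TOOLS:
        result = TOOLS[call["tool"]](call["input"])
        return model_step(user_request + f"\n[tool result: {result}]")
    return reply

print(run_with_tools("What is 2 + 2?"))  # -> "The answer is 4"
```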
There are indeed a number of posts and comments and debates on this very site making approximately that point, yeah.
You seem to miss the point. It’s not about concepts. GPT-4 is advertised as a system that can work with both text and images. It doesn’t. And it isn’t being developed any further, beyond increasing the number of symbols it can handle and other quantitative stuff.
No matter how many gigabytes it writes per second, I’m not afraid of something that sees the world as text. +1 point to capitalism for not making something that is overly expensive to develop and might doom the world.
Thanks, I’ll read it.