Now my words may sound blunt, and I may look like an overly confident noob, but… this phrase is most likely an outright lie. GPT-4 and Gemini aren’t even two-task models. Neither can take a picture and edit it. Instead, an image-recognition model hands a text description to a blind, text-only model that lacks any basic understanding of space. That model then writes a prompt for an image-generator model, which can’t see the original image.
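For what it’s worth, the chained pipeline being described can be sketched as plain data flow, where no single stage ever sees both the original pixels and the edit request. All function names here are hypothetical stand-ins for the claimed architecture, not any real API:

```python
# Hedged sketch of the chained pipeline described above.
# Each "model" is a toy stand-in: the point is that spatial information
# is lost at the caption stage, and the generator never sees the image.

def caption_model(image_bytes: bytes) -> str:
    # Stand-in for an image-recognition model: image in, text out.
    return f"a photo ({len(image_bytes)} bytes) of a cat on a sofa"

def text_model(caption: str, instruction: str) -> str:
    # Stand-in for the text-only LLM: it works from the caption alone,
    # never from the pixels.
    return f"{caption}, but {instruction}"

def image_generator(prompt: str) -> bytes:
    # Stand-in for a text-to-image model: it sees only the prompt,
    # not the original image.
    return prompt.encode("utf-8")

def edit_image(image_bytes: bytes, instruction: str) -> bytes:
    # The full "edit" is three hand-offs; nothing round-trips the pixels.
    prompt = text_model(caption_model(image_bytes), instruction)
    return image_generator(prompt)
```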
Ignoring for the moment the “text-to-image and image-to-text models use a shared latent space to translate between the two domains, and so they are, to a significant extent, operating on the same conceptual space” quibble...
GPT-4 and Gemini can both use tools, and can also build tools. Humans without access to tools aren’t particularly scary on a global scale. Humans with tools can be terrifying.
“Agency.” Yet no one has shown that it isn’t just a (quite possibly wrong) hypothesis. Humans don’t work like that: no one has a primary, unchangeable goal that they didn’t acquire through learning, or that they wouldn’t override for a reason. Nothing seems to explain why a general AI would, even if (a highly likely “if”) it doesn’t work like a human mind.
You seem to miss the point. It’s not about concepts. GPT-4 is advertised as a system that can work with both text and images. It can’t. And it isn’t being developed any further in that direction, beyond increasing the number of tokens and other quantitative improvements.
No matter how many gigabytes it writes per second, I’m not afraid of something that sees the world as text. +1 point to capitalism for not building something that would be overly expensive to develop and might doom the world.
There are indeed a number of posts and comments and debates on this very site making approximately that point, yeah.
Thanks, I’ll read.