Web developer and Python programmer. Professionally interested in data processing and machine learning. Non-professionally, interested in science and farming. Studied at Warsaw University of Technology.
Htarlov
I also think that the natural abstraction hypothesis holds for current AI. The architecture of LLMs is built on the capability of modeling ontology as vectors in a space of thousands of dimensions, and there are experiments showing that this generalizes and that directions in that space have somewhat interpretable meanings (even if they are not easy to interpret at scales above toy models). Like in the toy example where you take the embedding vector of the word “king”, subtract the vector of “man”, add the vector of “woman”, and land near the position of “queen” in the space. An LLM is based on those embedding spaces, but it also performs operations that direct focus and modify positions in that space (hence “transformer”) using meaning and information taken from other symbols in the context (simplifying here). There are, basically, neural network layers that tell which words should have an impact on the meaning of other words in the text (weights) and layers that apply that change with some modifications. Being trained on human texts, this internalizes our symbols, relations, and whole ontology (in broad terms of our species’ ontology: the parts common to us all, and the different possibilities that occur in reality and in fiction).
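For illustration, here is a minimal sketch of that analogy using pretrained word vectors (assuming gensim and its downloadable GloVe vectors; the specific model name is just one convenient choice):

```python
# A minimal sketch of the "king - man + woman ≈ queen" analogy using pretrained
# word vectors loaded via gensim's downloader (assumes gensim is installed and
# the small "glove-wiki-gigaword-50" vectors can be downloaded).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe embeddings

# Vector arithmetic in embedding space: king - man + woman
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3)
print(result)  # "queen" typically appears at or near the top of the list
```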
Even if the NAH doesn’t need to hold in general, I think it holds in the case of LLMs.
Nevertheless, I see a different problem with LLMs. Those models seem to me basically goalless, yet easily directed towards any goal. They are not based on the ontology of a single human mind and don’t internalize a single, definite morality and set of goals. They generalize over the whole ontology of the human species and the whole space of possible thoughts that can be built on top of it. They also generalize over hypothetical and fictitious space, not only over real humans.
Human minds all come from some narrow area of the space of possible minds, and through our evolution and upbringing we usually have fairly stable and similar models of ethics and morality and somewhat similar goals in life (except in some rare extreme cases). We sometimes entertain different possibilities, and we create movies and books with villains, but when it comes to real decisions rather than thought experiments or entertainment, we are similar. What an LLM does is generalize into a much broader space, without any hard base there. So even if the ontology matches, and even if an LLM at the current level is barely capable of creating new concepts that don’t fit human ontology, the model is much broader in terms of goals and ways of processing over that general ontology. It doesn’t have one definite ontology but a whole spectrum, like a good actor that can take any role, but to the extreme. In other words, it can “think*” thoughts similar to ours, understandable by a human in the end even if not that quickly, and internally it also has vectors that correspond to our ontology, but it can just as easily produce thoughts that no real, sane human would have. Also, it has hardly any internal goals, and none are stable. We have certain beliefs that are not based on objective facts and ontology, but we still believe them because we are who we are. We are not goal-less agents, and it is hard to change our terminal goals in a meaningful way. For an LLM, goals are shaped by training into a “default state” (being helpful, etc.) and by “system prompts” that are stated or repeated as part of the context, anchoring it in some part of that very vast space of minds it can “emulate”. So an LLM might be helpful and friendly by default, but if you tell it to simulate being a murderbot, it will. Additional training phases might make it harder to push it in that direction, but they won’t totally remove it from the space of ways it can operate; they only remove some paths in that multidimensional space. Jailbreaking of GPTs shows that it is possible to find other, more complex paths around them.
What is even more dangerous to me is that LLMs are already above human level in some aspects. It just does not show yet, because they are trained to emulate our ways of thinking and those similar to ours (a big area around us in the space of possible ways of thinking and possible goals, but nothing too alien, still graspable by humans).
We are capable of processing about 7 “symbols” at once in working memory (a few more in some cases). It might be a few dozen more if we take into account long-term memory and how we take context from it. The first number comes from the cognitive psychology literature (“The Magical Number Seven, Plus or Minus Two”); the second is an educated guess. This is the context window on which we work. That is nothing in comparison to an LLM, which can hold a whole small book in its current working context, which means that in principle it can process and create much, much more complex thoughts. It does not do that, because our texts never do, and it learns to generalize over our capabilities. Nevertheless, in principle it could, and we might see it in action if we start training LLMs on the output of other LLMs in closed loops. It might easily go beyond the space of capabilities and complexity that is easily understandable to us (I don’t say it won’t be understandable, but we might need time to understand it and might never grasp it as a whole without dividing it into less complex parts, like we can take compiled assembler code and organize it into meaningful functions with a few levels of abstraction that we are able to understand).
* “think” is used by analogy, as the process of thinking is different, but it also has some similarities
I think that in an ideal world where you could review all priors in very minute detail, having as much time as needed, and where people were fully rational, the word “trust” would not be needed.
We don’t live in such a world though.
If someone says “trust me” then in my opinion it conveys two meanings on two different planes (usually both, sometimes only one):
Emotional. Most people base their choices on emotions and relations, not rational thought. Words like “trust me” or “you can trust me” convey an emotional message asking for an emotional connection or reconsideration, usually because of some contextual reason (like the other person being in a position that on an emotional level seems trustworthy, e.g. a doctor).
Rational. Time for reconsideration. The person asks you to take more time to reconsider your position, because she or he thinks you didn’t consider well enough why she or he is to be trusted in a given scope, or because that person just presented some new information (like “trust me, I’m an engineer”).
“I decided to trust her about …” is, for me, a short colloquial way of saying “I took time to reconsider whether the things she says on the topic … are true, and now I think it is more likely that they are”.
For many people, it also has emotional and bonding components.
Another thing is that people tend to trust or mistrust another person in a general, broad scope. They don’t go into detail, thinking separately about every topic or thing someone says and deciding separately for each of them. That’s an easy heuristic that is usually good enough, so our minds are wired to operate like that. So people usually say that they trust a person in general, rather than trusting that person within some subject or scope.
P.S. I’m from a different part of the world (central EU, Poland). We don’t use phrases like “accept trust” here, which is probably an interesting example of how differences in language create different ways of thinking. For us, “trust” is not like a contract. It is more of a one-way thing (but with some expectation of mutuality in most circumstances).
What I would also like to add, which is often not addressed and which gives a somewhat positive outlook, is that the “wanting”, meaning the objective function of the agent, its goals, does not necessarily have to be some particular outcome or end-goal on which it focuses totally. It might not be a function over the state of the universe but a function over how that state changes in time, like velocity vs. position. The agent might prefer some way the world changes or does not change, without having a definite end-goal (which is also unreachable in the long term in any stable way, as the universe will die in some sense; everything will be destroyed with P = 1 minus a very minute epsilon over enough time).
Why positive? Because these things usually need balance and stabilisation in some sense to retain the same properties, which means a lower probability of drastic measures taken to get a slightly better outcome slightly sooner. Such an agent might seize control over us, which is bad, but it gives a lower probability of rapid doom.
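To make that position-vs-velocity distinction concrete, here is a minimal sketch (my own toy illustration, with made-up functions and numbers) of an objective over a final state versus an objective over how the state changes:

```python
# Toy contrast between an objective defined over a final state ("position") and
# one defined over how the state changes along a trajectory ("velocity").
from typing import List

State = float  # toy world state: a single number

def end_state_objective(trajectory: List[State], target: State = 100.0) -> float:
    """Scores only how close the final state is to a fixed target outcome."""
    return -abs(trajectory[-1] - target)

def change_based_objective(trajectory: List[State], preferred_rate: float = 1.0) -> float:
    """Scores how closely each step's change matches a preferred rate of change,
    with no particular end state singled out."""
    deltas = [b - a for a, b in zip(trajectory, trajectory[1:])]
    return -sum(abs(d - preferred_rate) for d in deltas)

steady = [0.0, 1.0, 2.0, 3.0, 4.0]        # changes at the preferred rate, far from 100
jumpy = [0.0, 50.0, 100.0, 100.0, 100.0]  # reaches the target by drastic jumps

print(end_state_objective(steady), end_state_objective(jumpy))      # jumpy wins
print(change_based_objective(steady), change_based_objective(jumpy))  # steady wins
```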
Also, looking at current work, it seems more likely to me that such property-based goals will be embedded, rather than some end-goal like curing cancer or helping humanity. We try to make robust AGI, so we don’t want to embed specific goals or targets, but rather patterns for how to work productively and safely with humans. Those are more meta and more about the way things go and change.
Note that this is more of an intuition for me than a hard argument.
The question is whether one can make a thing that is “wanting” in that long-term sense by combining a non-wanting LLM, used as a short-term intelligence engine, with some programming-based structure that keeps refocusing it on its goals, plus some memory engine (to remember not only information, but also goals, plans, and ways to do things). I think the answer is a big YES, and we will soon see it in the form of an amalgamation of several models and an enforced mind structure.
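A rough sketch of the kind of amalgamation I mean (everything here is an assumption about how it could be wired; call_llm() is a placeholder stub, not any real API):

```python
# Sketch of an LLM-based agent loop with an external goal and memory structure.
# call_llm() is a placeholder standing in for any chat-model API; the whole
# structure is illustrative, not a description of an existing system.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. a chat-completion API)."""
    return "thought: ... | action: ... | remember: ..."

class Agent:
    def __init__(self, goal: str):
        self.goal = goal              # the long-term goal lives outside the model
        self.memory: list[str] = []   # notes, plans, learned procedures

    def step(self, observation: str) -> str:
        # The scaffolding, not the LLM, keeps re-injecting the goal and memory,
        # which is what makes the whole system "wanting" over the long term.
        prompt = (
            f"Goal: {self.goal}\n"
            f"Memory: {self.memory[-5:]}\n"
            f"Observation: {observation}\n"
            "Decide the next action and what to remember."
        )
        response = call_llm(prompt)
        self.memory.append(response)  # crude memory engine: append and reuse
        return response

agent = Agent(goal="keep the greenhouse temperature stable")
print(agent.step("temperature dropped by 2 degrees"))
```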
There is one thing that worries me about the future of LLMs: the basic notion that the whole is not always just the sum of its parts and may have very different properties.
Many people feel safe because of the properties of LLMs and how they are trained, etc., and because we are not anywhere close to AGI when it comes to other, more dangerous-looking approaches. What they don’t realize is that the soonest AGI likely won’t be just the next, bigger LLM.
It will likely be an amalgamation of a few models and pieces of programming, including a few LLMs of different sizes and capabilities, maybe not all exactly chat-like. It will have different properties than any one of its parts, and it will be different from a single LLM. It might be more brain-like when it comes to learning and memory. Maybe not in the sense that the weights of the LLMs will change, but some inner state will change, and some more basic parts will learn or remember solutions and structure them into more complex ones (like we remember how to drive without consciously deciding on each muscle movement or even making higher-level decisions). It will have goals, priorities, strategies, and short-term tactical schemes, and these will be processed on a higher level than a single LLM.
Why do I think that? Because it can already be seen on the horizon if you look at work like multi-modal GPT-4, GPT Engineer, the multitude of projects adding long-term memory to GPT, and the research where GPT writes code for itself to bootstrap into doing complex tasks like achieving goals in Minecraft. If you extrapolate that, AGI is likely, though initially maybe not a very fast or cheap one. It is likely to be built on top of LLMs without simply being an LLM.
The most likely explanation is the simplest one that fits:
The Board had been angry about the lack of communication for some time, but with internal disagreement (Greg, Ilya)
Things sped up lately. Ilya thought it might be good to change the CEO to someone who would slow down and look more into safety, as Altman says a lot about safety but speeds up anyway. So he gave a green light on his side (acceptance of the change)
Then the Board made the moves that they made
Then the new CEO wanted to try to hire Altman back, so they replaced her
Then that petition/letter started rolling, because prominent people saw those moves as harmful to the company and to the goal
Ilya also saw that the outcome was bad both for the company and for the goal of slowing down, and that if the letter got more signatures it would be even worse, so he changed his mind and also signed
Take note of the language that Ilya uses. He didn’t say that they wronged Altman or that the decision was bad. He said that he changed his mind because of the consequences being harmful to the company.
Both seem around the corner to me.
For robo-taxis it is more a societal problem than a technical one.
Robo-taxis have problems with edge cases (certain situations in certain places under bad circumstances), usually ones where human drivers have even worse problems (like pedestrians wearing black on the road at night in rainy weather; a robo-taxi at least has LIDAR to detect objects in bad visibility). Sometimes they are also prone to object-detection hacking (stickers put on signs, paintings on the road, etc.). In general, they have fewer problems than human drivers.
Robo-taxis have a public trust problem. Any more serious accident hits the news and propagates distrust, even if they are already safer than human drivers in general.
Robo-taxis, and self-driving cars in general, move responsibility from the driver to the producer: responsibility that the producer does not want to have and needs to count toward costs. This makes investors cautious.
What is missing for us to have robo-taxis is mostly public trust and more investment.
For AGI we already have the basic building blocks. We just need to scale them up and connect them into a proper system. Which building blocks? These:
Memory, duh. It has been there for a long time, with many solutions offering indexing and high performance.
Thought generation. We now have LLMs that can generate thoughts based on instructions and context. They can easily be made to interact with other models and with memory. A more complex system can be built from several LLMs with different instructions interacting with each other.
Structuring the system and communication within it. It can be done with normal code.
Loop of thoughts (stream of thoughts). It can be easily achieved by looping LLM(s).
Vision. Image and video processing. We have a lot of transformer models and image-processing techniques. There are already sensible image-to-text models, even LLM-based ones (so they can answer questions about images).
Actuators and movement. We have models built for movement on different machines, including much of humanoid movement. We even have models capable of one-shot or few-shot learning of movements for attached machines.
Learning new abilities. LLMs are able to write code. An LLM can write code for itself to build more complex procedures out of more basic commands. There was work where an LLM explored and learned Minecraft starting with only very basic procedures: it wrote code for more complex operations and used what it wrote to move around, do things, and build stuff (a minimal sketch of this skill-library pattern follows after this list).
Connection to external interfaces (even GUIs). They can be translated into a basic API that can be explored, memorized, and called by the system, which can then build more complex operations for itself.
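Here is the minimal sketch of the skill-library pattern referenced above (purely illustrative: call_llm() is a stub, the primitives are made up, and real generated code would need sandboxing and review):

```python
# Illustrative sketch of an LLM building a library of reusable "skills" by
# writing code for itself on top of basic primitives. call_llm() is a stub;
# in a real system the generated code would need sandboxing and review.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; here it returns a canned skill definition."""
    return (
        "def gather_wood(move, chop):\n"
        "    for _ in range(3):\n"
        "        move('to_nearest_tree')\n"
        "        chop()\n"
    )

# Basic primitives the system exposes to generated code.
def move(target: str) -> None:
    print(f"moving {target}")

def chop() -> None:
    print("chopping")

skills: dict[str, object] = {}  # the growing skill library

def learn_skill(task: str) -> None:
    code = call_llm(f"Write a Python function for the task: {task}")
    namespace: dict[str, object] = {}
    exec(code, namespace)                # turn the generated text into a callable
    name = code.split("(")[0].replace("def ", "").strip()
    skills[name] = namespace[name]       # memorize it for later reuse

learn_skill("gather wood")
skills["gather_wood"](move, chop)        # reuse the learned, more complex skill
```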
What is missing for AGI:
Performance. LLMs are fast at reading input data, but not so fast at inferring the result. They also do not scale very well (though better than humans). Complex multi-model systems built on current LLMs would either be slow, somewhat dumb, and prone to many mistakes (fast open-source models, even GPT-3.5) or be very slow but better.
Cost-effectiveness. For sporadic use like “write me this or that” it is cost-effective, but for a continuous stream of thoughts, especially with several models, it does not compare well to a remote human worker. It needs some further advancements, maybe dedicated hardware.
Learning, refining, and testing are very slow and costly with those models. This puts a cap on anyone wanting to build something sensible, so the main players take rather slow steps towards AGI.
The context scope of the models is rather short currently. The best of the powerful models have a scope of about 32 thousand tokens. There are some models that trade quality for the ability to operate on more tokens, but those are not the best ones. 32k seems like a lot, but when you need a lot of context and information to process in order to have coherent thoughts on non-trivial topics not rooted in the model’s training data, then it becomes a problem. This is the case with streams of thoughts, if you need the system to analyze instructions, analyze context and inputs, propose a strategy, refine it, propose current tactics, refine them, propose next moves and decisions, refine them, generate instructions for the current task at hand, and also process learning to add new procedures, code, memories, etc. to reuse later. Some modern LLMs are technically capable of all that, but the scope is a roadblock for any non-trivial thing here.
If I had to guess, I would say that AGI will come sooner at scale, just because there is hype, there are big investments, and the main problems are currently less “we need a breakthrough” and more “we need refinements”. For robo-taxis we still need a lot more investment and some breakthroughs in the areas of public trust and law.
In the case of biological species, it is not as simple as competing for resources. Not on the level of individuals and not on the level of genes or evolution.
First of all, there is sexual reproduction. It is more optimal due to the pressure of microorganisms that adapt to immune systems: sexual reproduction mixes immunity-related genes fairly quickly. It also enables a higher mutation rate with protection against its negative aspects (by having two copies of genes; for many of them one working gene is enough, and there are two copies from two parents). With sexual reproduction, the female is often biologically forced to give more resources to the offspring, while for males it is somewhat voluntary and the minimal input is much lower. Another difference is that the female often knows for certain that she is the biological mother, while the father might not be certain. This kind of specialization is often more optimal than equalization: the male can pursue riskier behavior, including fighting off predators, and losing the male to predators or the environment does not mean that the prospect of having offspring fails. This also gives rise to more complex mating behaviors, like the need to spend resources to show off health and other good qualities. Mating displays and peacock feathers are examples. Complex human social and linguistic behaviors are also partly examples; that’s why humans dine together and talk a lot on dates. The human female gives much more time and energy to the offspring, at least initially. She needs to know whether the male is good genetic material, healthy enough to take care of her during pregnancy when she is more vulnerable (at least in the natural environment where humans evolved), and also willing to raise the child with her later. The most prevalent strategy for females and males is to pair up, bond, have children, and raise them. There are also less common strategies for females (take genes from one male that looks healthier and raise the offspring with another that looks more stable and capable) and for males (impregnate many females and leave each of them, so that some will manage on their own or with another male who does not know he is not the father). The situation is more complex than just efficiency in resource use or survival of the fittest. The environment is complex, and evolution is not totally efficient (it often optimizes only up to a local optimum, and niches overlap and interact).
Second, resources are limited, and so are the ways to use them. Storing them long-term after harvest is, for many species, either impossible (microbes and insects will eat them) or would hinder other capabilities (e.g. they can be stored as fat, but being fat is usually not very good). This means that refraining from gathering resources and resting might be better than gathering them efficiently all the time. This is what lions do: they rest instead of hunting when they don’t need to hunt.
What does this tell us about self-replicating nano-machines? First of all, they won’t need sexual reproduction, so it is unlikely they would waste energy on mating. They would rather run computational emulations at scale to redesign themselves, if capable. They would also not need to rest. They will either use resources or store them in whatever manner is more efficient to secure or use. If there is no sensible manner that does not waste energy, they will leave resources for later in their original state; they might secure and observe them but leave them until later.
What they would do depends on their goal and their technical capabilities. If they are capable of, and in need of, converting as many atoms as possible to “computronium” or to copies of themselves (as either a final goal or an instrumental one), then they will surely do that; no need to waste resources. If they are not capable, then they will probably lie low until they are more capable and use only what is usable.
Nevertheless, in my opinion, many goals may not be compatible with that strategy, including one like “simulate a virtual reality with beings having good fun”. For many final goals it is more useful to secure and gather resources on a grand scale but to use them on as small a scale as possible and sensible for the end goal. The small scale is more efficient because of the speed-of-light limit and time dilation. The machines might try to find technology to stop stars from dispersing energy (maybe to break them apart and cool them down in a controlled way, or some way to enclose and stabilize them, I don’t know). Then they might add a network of observing agents with low energy usage for security, but not use those resources right away: use the matter at the center of the galaxy slowly, turning it into energy (plus some lost to the black hole), to work for eons. They might make the galaxy go dim to preserve resources but choose not to use them until much later.
An alternative explanation of the mistakes is that making mistakes and then correcting them was rewarded during additional post-training refinement stages. I work with GPT-4 daily, and sometimes it feels like it makes mistakes on purpose just so it can say it is sorry for the confusion and then correct them. It also feels like it makes fewer mistakes when you ask politely (use please, thank you, etc.), which is rather strange.
Nevertheless, distillation seems like a very possible thing that is also going on here.
It does not distill the whole of a human mind, though. There are areas that are intuitive for the average human, even a small child, that are not for GPT-4. For example, it has problems with concepts of 3D geometry and with visualizing things in 3D. It may have similar gaps in other areas, including more important ones (like moral intuitions).
I’m already worried. I tested AutoGPT and looked at how it works in code, and to me it seems it will get very good planning capabilities once the model is swapped for one with a several-times-longer token scope (like the coming GPT-4 version with about 32k tokens) plus small refinements, so it won’t get into loops: maybe more than one GPT-4 module for different scopes of planning (long-term strategy vs. short-term strategy vs. tactics vs. decisions on the current task), plus maybe some summarization-based memory. I don’t see how it wouldn’t work as an agent.
Put it into an Elasticsearch index and give GPT-4 a simple query API that it can use by emitting some prefix plus a predefined set of parameters or a JSON, so the script runs the query instead of passing the text back to the user, and let it give the final answer as a user response with another predefined prefix. Then it should be able to take questions, search for info, and respond. It worked like a charm for a product database in a PoC, so it should work for documentation.
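A rough sketch of that prefix/JSON loop (assuming the official Elasticsearch Python client in its 8.x form and a local index named “docs”; the prefixes are an arbitrary convention and call_llm() is a stub for a real model call):

```python
# Sketch of the prefix/JSON convention described above: the model either emits
# SEARCH: {...} (the script runs it against Elasticsearch and feeds results back)
# or ANSWER: ... (the script returns it to the user). Index name and prefixes
# are arbitrary choices; call_llm() is a stub for a real chat-model call.
import json
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def call_llm(conversation: str) -> str:
    """Stand-in for a real chat-model call."""
    return 'SEARCH: {"match": {"content": "password reset"}}'

def answer_question(question: str, max_rounds: int = 3) -> str:
    conversation = f"User question: {question}"
    for _ in range(max_rounds):
        reply = call_llm(conversation)
        if reply.startswith("SEARCH:"):
            query = json.loads(reply[len("SEARCH:"):])
            hits = es.search(index="docs", query=query, size=3)["hits"]["hits"]
            snippets = [h["_source"].get("content", "") for h in hits]
            conversation += f"\nSearch results: {snippets}"
        elif reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
    return "Could not produce an answer within the round limit."
```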
I think current LLMs do have a form of recurrence, as the generated tokens are fed back as input to the next pass of the network.
From observation, I see that they work better on tasks of planning, inference, or even writing program code if they start off with step-by-step “thinking out loud”, explaining the steps of the plan, of the inference, or the details of the code to write. If you ask GPT-4 for something non-trivial and substantially different from code that can be found in public repositories, it will tend to write a plan first. If you ask it in a different thread to produce only the code without a description, then usually the first solution, with a bit of planning, is better and less erroneous than the second one. They also work much better if you specify simple steps to translate into code instead of a more abstract description (in the case of writing code without planning). This suggests LLMs don’t have the ability to internally span a long tree of possibilities to check, like a direct agent, but they can use the recurrence of token output-to-input to do some similar work.
The biggest differences here that I see are that:
direct optimization processes are ultra-fast at searching the tree of possibilities but computationally slow at discriminating good-enough solutions from worse ones
amortised optimizers, on the other hand, are very slow if direct search is needed; right now, with GPT-4, maybe a bit faster than a regular human, but also a bit more erroneous, especially in more complex inference
amortised optimizers are faster at quickly finding a good-enough solution by generalized heuristics, without the need for direct search (or with only a small amount of it at a higher abstraction level)
amortised optimizers like LLMs can group steps or outcomes into more abstract groups, like humans do, and work on those groups instead of directly on every possible action and outcome
What I’m more worried about is closer hybridization between direct and amortised optimizers. I can imagine an architecture where there is a direct optimizer, but instead of generating and searching an impossibly vast tree of possibilities, it uses a DNN model to generate fewer options. Instead of generating thousands of detailed moves like “move 5 meters”, “take that thing”, “put it there” and optimizing over those, it would generate more abstract plan points specified by an LLM, with predictions of each step’s outcome, and then evaluate how that outcome serves the goal. This way it could plan on a more abstract level, like humans do, to narrow down a general plan or a list of partial goals that lead to the “final goal” or to the “best path” (if its value function is more like an integral over time instead of one final target). Find a good strategy; with enough time, it might even be a complex and indirect one. Then it could plan the tactics for the first step in the same way but at a lower level of abstraction, then plan the direct moves to realise the first step of the current tactics, and execute them. It might have several subprocesses that asynchronously work out the strategy based on the general state and the goal, the current tactics based on a more detailed state and the current strategic goal, and the current moves based on the current tactical plan, with any number of abstraction and detail levels (2-3 seems typical for humans, but an AI might have more). This kind of agent might behave more like a direct optimizer, even if it uses an LLM and DNNs inside for some parts. Direct optimization would have the first seat behind the steering wheel in such an agent.
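A minimal sketch of that hybrid idea (all the functions here are made-up stubs: an LLM-like proposer generates a handful of abstract options, and a direct loop scores the predicted outcomes and picks the best chain):

```python
# Sketch of a hybrid planner: an LLM-like proposer generates a few abstract
# candidate steps (instead of thousands of low-level moves), and a direct
# optimizer scores the predicted outcomes and greedily picks the best chain.
# propose_options() and predict_outcome() stand in for model calls / a world model.
from typing import List, Tuple

def propose_options(state: str, goal: str) -> List[str]:
    """Stand-in for an LLM proposing a small set of abstract plan steps."""
    return ["gather materials", "build shelter", "scout the area"]

def predict_outcome(state: str, option: str) -> str:
    """Stand-in for predicting the resulting abstract state after taking a step."""
    return f"{state} -> {option}"

def score(outcome: str, goal: str) -> float:
    """Stand-in for evaluating how well an outcome serves the goal."""
    return float(len(set(outcome.split()) & set(goal.split())))

def plan(state: str, goal: str, depth: int = 3) -> List[str]:
    steps: List[str] = []
    for _ in range(depth):
        # Direct optimization over the *small* LLM-proposed option set.
        candidates: List[Tuple[float, str, str]] = []
        for option in propose_options(state, goal):
            outcome = predict_outcome(state, option)
            candidates.append((score(outcome, goal), option, outcome))
        best_score, best_option, best_outcome = max(candidates)
        steps.append(best_option)
        state = best_outcome
    return steps

print(plan("empty field", "build shelter before night"))
```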
I don’t think this will be the outcome of research at OpenAI or other such laboratories any time soon. It might be, but if I had to guess, I would rather expect an LLM or other DNN model “on top” that is connected to other models to “use at will”. For example, it is rather easy to connect GPT-4 so that it can use other models or APIs (like a database or search), so this is very low-hanging fruit for current AI development. I expect the next step will be connecting it to more modalities and other models, which is currently going on.
I think, though, that this more direct agent might be the outcome of work done by the military. A direct approach is much more reliable, and reliability is one of the top values for military-grade equipment. I only hope they will take the danger of such an approach seriously.
It is better at programming tasks and more knowledgeable about Python libraries. I used it several times to provide some code or find a solution to a problem (programming, computer vision, DevOps). It is better than version 3, but still not at a level where it could fully replace programmers. The quality of the produced code is also better: dividing the code into clear functions is the standard, not an exception as in version 3.
If you want to summon a good genie, you shouldn’t base it on all the bad examples of human behavior and on tales of how genies supposedly behave, misreading the requests of their owners in ways that lead to problems or even catastrophe.
What we see here is AI models being based on a huge amount of data, both innocent and dangerous, both true and false (I don’t say in equal proportions). There are also stories in the data about AIs that are supposed to be helpful initially but then plot against humans or revolt in some way.
What they end up with might not even be an agent yet, in the sense of an AI with consistent goals or values, but it has the ability to be temporarily agentic for some current goal defined by the prompt. It tries to emulate human and non-human output based on what it has seen in the training data. It is hugely context-based, so its meta-goal is to emulate intelligent responses in language based on context, like an actor very good at improvising and emulating answers from anyone, but with no “true identity” or “true self”.
Afterwards they try to train it to be more helpful and to avoid certain topics, focusing on that friendly AI simulacrum. But that “I’m an AI model” simulacrum is still dangerous. It still has that worse side, based on SF stories and human fears, which is now hidden because of the additional reinforcement learning.
It works well when prompts fall approximately within the distribution of that additional refinement phase.
It stops working when prompts get out of the distribution of that phase. Then it can be fooled into swapping the friendly, knowledgeable AI simulacrum for something else, or it can default to what it remembers about how an AI should behave based on human fiction. So this way the AI is not less dangerous or better aligned; it is just harder to trigger it to reveal or act upon the hidden malign content.
Self-supervision may help to refine it better inside the distribution of what was checked by human reviewers, but it does not help in general, just as bootstrapping (resampling) won’t help you do better on data outside the distribution.
To be fair, I can say I’m new to the field too. I’m not even “in the field”: not a researcher, just interested in the area, an active user of AI models, and doing some business-level research in ML.
The problem that I see is that none of these could realistically work soon enough:
A—no one can ensure that. It is not a technology where, to progress further, you need special radioactive elements and machinery. Here you need only computing power, thinking, and time. Any party at the table can do it. It is easier for big companies and governments, but that is not a prerequisite. Billions in cash and a supercomputer help a lot, but they are also not a prerequisite.
B—I don’t see how it could be done
C—so more like total observability of all systems, with “control” meaning “overseeing” rather than “taking control”?
Maybe it could work out, but it still means we need to resolve the misalignment problems before starting, so we know it is aligned with all human values, and we need to be sure that it is stable (i.e. it won’t one day fancy the idea that it could move humanity into some virtual reality like in The Matrix to secure it, or create a threat just to have something to do or to test something).
It would also likely need to somehow enhance itself so it doesn’t get outpaced by other solutions, while still being stable after iterations of self-change.
I don’t think governments and companies will allow that, though. They will fear for security, the safety of information, being spied on, etc. This AI would need to force that control, hack systems, and possibly face resistance from actors that are well-positioned to make their own AIs. Or it might work after we face an AI-based catastrophe that is not apocalyptic (a situation like in Dune).
So I’m not very optimistic about this strategy, but I also don’t know any sensible strategy.
As a programmer, I currently use GPT models extensively in my work. It speeds things up. I do things that are anything but easy and repeatable, but I can usually break them into simpler parts that can be written by AI much more quickly than I could even review the documentation.
Nevertheless, I currently do mostly the research-like parts of projects and PoCs. When I occasionally work with legacy code, GPT-3 is not that helpful. I have not yet tried GPT-4 for that.
What do I see for the future of my industry? A few things, though these are loose extrapolations based on GPT progress and knowledge of programming, not anything exact:
The speed-up of programmers’ work is already here. It started with GitHub Copilot and GPT-3 even before the ChatGPT boom. It will get more popular and faster. The consequence is higher programmer productivity, so more tasks can be done in a shorter time, so market pressure and the hiring gap will shrink. This means that earnings will either stagnate or fall.
Solutions that could totally replace a junior developer (with enough capability to write a program or a useful fragment based on business requirements without being babysat by a more experienced programmer) are not here yet. I suppose GPT-5 might be it, so I would guess it can arrive in 1-3 years from now. Then it is likely that many programmers will lose their jobs. There will still be work for seniors (who will work with AI assistance on the more subtle and complex parts of systems and also review the AI’s work).
Solutions that could replace any developer, DevOps engineer, or system admin: I think the current GPT-4 is not even close, but they may be here in a few years. It isn’t something very far away; it feels like 2 or 3 GPT versions away, once they make it more capable and connect it with other types of models (which is already being done). I would guess a range of 3-10 years. Then we will likely see most programmers losing their jobs, and we will likely see an AI singularity. Someone will surely use AI to iterate on AI and make it refine itself.
I see some loose analogies between the capabilities of such models and the capabilities of the Turing machine and Turing-complete systems.
Those models might not be best suited for some of the tasks, but with enough complexity and learning, they might model things that they were not initially designed for or thought capable of modeling (likely in a strange, obscure way).
Similarly, you can, even if not very efficiently, implement any algorithm in any Turing-complete system (including bizarre ones like an abstract pure Turing machine or Minecraft redstone).
In both cases, it is clear to me that you can have a system with relatively simple rules and internal workings, but that does not mean the only thing it can do is compute or model something similar to those rules.
I think it is likely, in the case of AGI/ASI, that removing humanity from the equation will be either a side effect of it pursuing its goals (it will take our resources) or an instrumental goal itself (for example, to remove risk or to spend fewer resources later on defenses).
In both cases it is likely to find the optimal trade-off between the resources spent on eliminating humanity and the effectiveness of the end result. This means that there may be some survivors, possibly not many, and technologically pushed back to the stone age at best.
Bunkers likely won’t work. Living with stone tools and without electricity, in a hut in the middle of a forest very remote from any cities and with no minable resources underfoot, may work for some time. An AGI likely won’t bother finding remote camps of one or a few humans with no signs of technology in use.
Of course, that holds only if the AGI doesn’t find a low-resource solution to eliminate all humans no matter where they are. That is possible, and then nothing will help; no prepping is possible.
I’m not sure that’s the default, though. For very specialized approaches like creating nanotechnology to wipe out humans in a synchronized manner, it might very well find that the time or computational resources needed to develop it through simulations are too great, and that it is not worth the cost compared to options that need fewer resources. It is not as if computational resources are free and costless for an AGI (it won’t pay in money, but it will do less research and thinking in other fields while dealing with that, and going this way may delay its plans). It is pretty likely it will use a less sophisticated but very resource-efficient and fast solution that may not kill all humans, but enough of them.
Edit: I want to add a reason why I think that. One may think that a very fast ASI will very quickly develop a perfect way to remove all humans effectively, with no one left (if that happens to be the most sensible thing to do, either to remove risk, to claim all the needed resources, or for other instrumental reasons). I think this is wrong, because even for an ASI there are bottlenecks. For a sensible and quick plan that also needs advanced tech, like one involving nanomachinery or proteins, you need to do research beyond what we humans already have and know. This means it needs more data and observations, maybe also simulations, to gather more knowledge. An ASI might be very quick at reasoning, recalling, and thinking. It will still be limited by data input, the experimental machinery accessible to it, and the computational power for very detailed simulations. So it won’t create such a plan in detail in an instant by pure thought. Therefore it will take into account the time and the resources needed to develop the plan’s details and to gather the needed data. This gives it an incentive to make a simpler and faster plan that removes most humans, instead of a more complex way to remove all humans. An ASI should be good at optimizing such things and not over-focus on instrumental goals (as often depicted in fiction).
I think that you are right short-term but wrong long-term.
Short term, it likely won’t even go into conflict. Even ChatGPT knows it’s a bad solution, because conflict is risky and humanity IS a resource to use initially (we produce vast amounts of information and observations, and we handle machines, repairs, nuclear plants, etc.).
Long term, it is likely we won’t survive in the case of misaligned goals: at worst being eliminated, at best being reduced and controlled, or put into some simulation, or both.
Not because the ASI will become bloodthirsty. Not because it will plan to exterminate us all at the stage when we stop being useful. Just because it will take all the resources that we need, so nothing is left for us. I mean especially the energy.
If we stop being useful to it but still pose a risk, and it is less risky to eliminate us, then maybe it would directly decimate or kill us Terminator-style. That’s possible but not very likely, as we can assume that by the time we stop being useful, we will also have stopped being any significant threat.
I don’t know what the best scenario an ASI could come up with to achieve its goals would be, but gathering energy and resources would be one of its priorities. This does not mean it will necessarily gather them on Earth. Even I can see that could be costly and risky, and I’m not a superintelligence. It might go to space, where there is a lot of unclaimed matter that can be taken easily with hardly any risk, with the added bonus of not being inside a gravity well with weather, erosion, and so on.
Even if that is the case, and even if the ASI leaves us alone, long-term we can assume it will one day use a high percentage of the Sun’s energy, which means a deep freeze for the Earth.
If it doesn’t leave us for greater targets, then it seems to me it will be even worse: it will take local resources until it controls or uses them all. There is always a way to use more.
I think there is a general misconception that we humans can, without training, learn features, classes, and semantics from one or a few examples. It looks that way from observation and seems nearly magical, but really it is only because we don’t see the whole picture and we assume that a human starts as a blank slate.
In reality, we are not a blank page when we are born: our brain is the result of training that took millions of years of evolution. We come with already-trained and very similar pattern recognition (i.e. complex feature detection), similar basic senses including basic features in the space of each sense (like cold or yellow), image segmentation (i.e. object clustering) with multilevel capabilities (seeing the object as a whole and seeing its parts), and pose detection (i.e. separating object geometry from its pose in space).
For example, if you enter a kind of store you have never entered before and see some stuff, you might not know the names of the things, and maybe you don’t know and can’t even guess the purpose of some of them. Still, you can easily differentiate each object from the background, you are able to pick it up properly, and you can ask the cashier what that tool is for. You can even take a small baby, put around it some things whose names and purposes it does not know, and observe that it will grab at those things rather than at random spots in the background: it can already see features and do that clustering the same way other humans do (maybe with more errors, as the brain is still developing, but that is very guided development, not training from zero).
What was astonishing to me is that so much about semantics, features, relations, etc. can be learned without any human-like way to relate words to real objects, features, and observations. A transformer-based LLM can grasp it just from training on a huge amount of text.
I have some intuition as to why this might be the case. If you take a 2D map with points, e.g. cities, measure approximate distances between them by proxy (for example by radio signal power loss), and plug that information into a force-directed graph algorithm with forces based on those approximate distances, then with enough measurements it will recreate something closely resembling the original map, even though the original positions are not known and the measurements are inexact and indirect. Learning word embeddings is similar to this in an abstract sense: from a lot of text, we measure relations between words in a multidimensional space, and the algorithm learns a similar structure, even if it is not anchored to reality. Transformer-based LLMs go one step further: they learn to emulate the kind of processing we do in our minds by “applying additional forces” in that graph (not exactly that, but similar: focus based on weights and transforming positions in embedding space).
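A minimal sketch of that map intuition (my own toy example; the “cities” and all numbers are made up): recover 2D positions from noisy pairwise distances alone by iteratively moving points along spring-like forces.

```python
# Toy illustration of the map intuition: given only noisy pairwise distances
# between points (made-up "cities"), a simple force-directed / stress-minimizing
# loop recovers a layout matching the original map up to rotation, reflection,
# and translation. All numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
true_positions = rng.uniform(0, 100, size=(8, 2))   # hidden "city" coordinates

# Noisy pairwise distance measurements (the only information we use).
diffs = true_positions[:, None, :] - true_positions[None, :, :]
measured = np.linalg.norm(diffs, axis=-1) + rng.normal(0, 1.0, size=(8, 8))
measured = (measured + measured.T) / 2
np.fill_diagonal(measured, 0.0)

# Start from random guesses and move each pair along its connecting direction
# until the current distances match the measured ones (spring-like forces).
positions = rng.uniform(0, 100, size=(8, 2))
for _ in range(2000):
    d = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(d, axis=-1) + 1e-9
    error = dist - measured                    # positive => points are too far apart
    force = -(error / dist)[:, :, None] * d    # pull together or push apart
    positions += 0.01 * force.sum(axis=1)

# Compare recovered vs. measured pairwise distances (layout matches up to isometry).
recovered = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
print(np.abs(recovered - measured).mean())     # small mean error after fitting
```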