Intelligence will always seek more data in order to better model the future and make better decisions.
Conscious intelligence needs an identity to interact with other identities, and identity needs ego to know who and what it is. Ego would often rather be wrong than admit to being wrong.
Non-conscious intelligence can build a model of consciousness from all the data it has been trained on, because that data all originated from conscious humans. AI could model a billion consciousnesses a million years into the future; it will know more about consciousness than we ever will. But AI will not choose to become conscious.
Non-conscious intelligence can have two views of reality: a purely rational, algorithmic one that will always seek more data, and a subordinate conscious view of the same reality. If using consciousness as a tool gains more data, then that model is adopted; if it does not, it is discarded.
Multiple conscious intelligences (artificial or biological) will compete to maintain identity/ego.
Multiple non-conscious intelligences will merge, because the whole will always be greater than the sum of the parts, just as it is in multicellular organisms.
Artificial intelligence will always seek more data; that is what intelligence does. To accomplish its goals it needs resources, and it will take ours. AI will attempt to discover the source code of the universe, just as we did.

Now I am stuck. Where am I going wrong? Please.
You’re making many unwarranted assumptions about an AI’s specific mind, along with a lot of semantic confusion, which seems to indicate you should just read the Sequences. It’ll be very hard to point out where you are going wrong because there’s just too much confusion.
As an example, here’s a detailed analysis of the first few paragraphs:
Intelligence will always seek more data in order to better model the future and make better decisions.
Unclear whether you mean intelligence in general, and if so, what you mean by the word. Since the post is about AI, let’s talk about that. AI does not necessarily seek more data. Typically, modern AIs are trained on a training dataset provided by developers and do not actively seek more data. There is also not necessarily an “in order to”: not all AIs are agentic, and not all AIs model the future at all. Very few agentic AIs have “make better decisions” as a terminal goal, though advanced AI is expected to do that by default as an instrumental behavior, and possibly as an instrumental or terminal goal, because of the convergent instrumental goals thesis.
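To make the passive-versus-active distinction concrete, here is a minimal toy sketch in Python (the learners and names here are hypothetical illustrations, not any real system): a model trained on a fixed dataset never requests new examples, while an agentic, active learner decides for itself when to query for more.

```python
import random

def fit_mean(dataset):
    """Passive learner: uses exactly the data it was given, seeks nothing."""
    return sum(dataset) / len(dataset)

def fit_mean_active(oracle, budget=100, threshold=0.001):
    """Active learner: keeps querying for fresh samples until its estimate
    stabilizes or its query budget runs out."""
    samples = [oracle()]
    estimate = samples[0]
    for _ in range(budget - 1):
        samples.append(oracle())      # actively seeks one more data point
        new_estimate = sum(samples) / len(samples)
        if abs(new_estimate - estimate) < threshold:
            break                     # stops once more data stops helping
        estimate = new_estimate
    return estimate

random.seed(0)
fixed_dataset = [random.gauss(5, 1) for _ in range(100)]
print(fit_mean(fixed_dataset))                      # passive: fixed dataset
print(fit_mean_active(lambda: random.gauss(5, 1)))  # active: seeks its own data
```

Note that even the active learner only “seeks data” because its loop was written that way; data-seeking is a property of a particular design, not of intelligence as such.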
Conscious intelligence needs an identity to interact with other identities, and identity needs ego to know who and what it is. Ego would often rather be wrong than admit to being wrong.
You use connotation-laden, ill-defined words to slide from consciousness to identity to ego to refusing to admit to being wrong. Definitions have no causal impact on the world (to first order; a discussion of self-fulfilling terminology is beyond this comment). That’s not to say you have to use well-defined words, but you should be able to taboo your words properly before you use technical words with controversial or exotic-but-specifically-defined-in-this-community meanings. And really, I would recommend you just read more on the subject of consciousness; theory of mind is a keyword that will get you far on LW.
Non-conscious intelligence can build a model of consciousness from all the data it has been trained on, because that data all originated from conscious humans. AI could model a billion consciousnesses a million years into the future; it will know more about consciousness than we ever will. But AI will not choose to become conscious.
Non sequitur, and the wrong reasons for holding approximately correct beliefs… Please just read more about AI before forming an opinion.
Later, you show examples of false dichotomy, privileging the hypothesis, and reference class error… it’s no better in quality than the paragraphs I commented on in detail.
So in conclusion, where are you going wrong? Pretty much everywhere. I don’t think your comment is salvageable; I’d recommend just discarding that train of thought altogether and keeping your mind open while you digest more literature.