I find it quite hard to believe you couldn’t do even better if you were a single mind perceiving what the ants did and controlling them (which is how you are set up in this game). A single mind can, worst case, simulate the rules each ant follows, so it can never be worse than the social behavior is. But the ants individually can’t simulate a single large mind (for one thing, they wouldn’t have all the information it would have).
It’d be like writing a chess engine by writing a different AI for each piece. Splitting up the AI gives you more problems, not less.
(That’s not to say that you couldn’t evolve a good set of local rules to follow in this game, of course!)
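To make the “worst case, simulate the rules each ant follows” argument concrete, here is a rough sketch (purely illustrative; the function and parameter names are mine, not the contest starter kit’s): a central controller that falls back to a per-ant local rule, and then uses map-wide knowledge no single ant has to stop two ants from chasing the same crumb.

```python
import random

def step_toward(src, dst):
    """One grid step from src toward dst (Manhattan geometry, no obstacles)."""
    dr, dc = dst[0] - src[0], dst[1] - src[1]
    if abs(dr) >= abs(dc) and dr != 0:
        return (1 if dr > 0 else -1, 0)
    if dc != 0:
        return (0, 1 if dc > 0 else -1)
    return (0, 0)

def local_rule(ant, visible_food):
    """What one ant could decide on its own: chase the nearest food it can see."""
    if not visible_food:
        return random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])  # wander
    nearest = min(visible_food,
                  key=lambda f: abs(f[0] - ant[0]) + abs(f[1] - ant[1]))
    return step_toward(ant, nearest)

def central_controller(ants, all_food, view_radius=5):
    """Worst case, just replay each ant's local rule; with the whole map in
    hand, also de-duplicate targets, which no individual ant could do."""
    orders, claimed = {}, set()
    for ant in ants:
        visible = [f for f in all_food
                   if abs(f[0] - ant[0]) + abs(f[1] - ant[1]) <= view_radius]
        move = local_rule(ant, visible)          # the decentralized baseline
        unclaimed = [f for f in all_food if f not in claimed]
        if unclaimed:                            # global knowledge: assign unique targets
            target = min(unclaimed,
                         key=lambda f: abs(f[0] - ant[0]) + abs(f[1] - ant[1]))
            claimed.add(target)
            move = step_toward(ant, target)
        orders[ant] = move
    return orders
```

The central version strictly contains the local one as a fallback, so under these assumptions it can never do worse than the purely local behavior.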
I find it quite hard to believe you couldn’t do even better if you were a single mind perceiving what the ants did and controlling them
Sorry for the additional response to the same post, but I feel this bears special notice, as I just now realized that we might be talking “past” one another.
If the purpose of the endeavor isn’t just to “do better”, but rather to learn about how intelligence and cognition operate, then it seems to me that examining a real-world manifestation of intelligence (and even cognition, in the form of observing an environment and reacting to it with intent, and even “instinctive” tool use [such as harvesting leaves to build a bridge to cross a river]) in a radically different substrate from “simple” neurology should be taken as an opportunity to learn more about how, in principle, cognition and intelligence “work”.
In other words: ant colonies as collectives are a form of “unit of cognition” (in the manner that a fly or a rat or a human each represents a “unit of cognition”) -- but “ant colonies” do not have brains. The act of “figuring out how to handle this river between us and our goal” never occurs in any particular ant, or even in any specific set of ants within a colony. I find this fact fascinating and believe it is a deeply under-explored avenue towards understanding cognition and intelligence.
I find it quite hard to believe you couldn’t do even better if you were a single mind perceiving what the ants did and controlling them (which is how you are set up in this game).
You heard about the Berkeley Overmind? A single mind has a limited amount of capacity to focus on events simultaneously. Emergent intelligence does not.
But the ants individually can’t simulate a single large mind (for one thing, they wouldn’t have all the information it would have).
Given the total neural processing power available to the ants, I’d dare say that their capacity to solve problems is far greater than you’re giving them credit for. Also, there’s a non-trivial chance that this is already how individual minds operate—I’m speaking of the Society of Mind hypothesis.
Yes.

A single mind has a limited amount of capacity to focus on events simultaneously.
This is a statement about your mind (ok, human minds), not about minds in general. There’s no law saying that minds can’t have multiple simultaneous trains of thought.
Emergent intelligence does not.
A unified mind can always simulate separate agents. Separate agents cannot simulate a unified mind. If the separate agents all have simultaneous access to the same information that the unified mind would, then they cease being separate agents. In my book, there is no longer a distinction.
There’s a big difference between separate agents all running in one brain (e.g., possibly humans) and separate agents in separate brains (ants).
(I might not respond again, I have a bot to write!)
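As a minimal sketch of the “a unified mind can always simulate separate agents” direction (all names below are invented for illustration): once every per-ant policy is a function reading the very same snapshot inside one program, the split into “agents” is just code organization, and the enclosing controller is free to override them with a global plan.

```python
def forager(ant, state):
    return "toward_food" if state["food"] else "explore"

def soldier(ant, state):
    return "toward_enemy" if state["enemy_ants"] else "guard_hill"

def unified_controller(state, roles):
    # Every per-ant policy reads the same complete snapshot of the world...
    orders = {ant: policy(ant, state) for ant, policy in roles.items()}
    # ...so the "separate agents" are just a decomposition of one decision
    # procedure, which could equally well override them with a global plan.
    return orders

# Two ants, two roles, one shared picture of the map:
state = {"food": [(3, 4)], "enemy_ants": []}
print(unified_controller(state, {(0, 0): forager, (9, 9): soldier}))
```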
A unified mind can always simulate separate agents.
To within its available resources, sure. But that rests on the assumption that there’s a categorical difference between multiple agents instantiated on separate hardware and multiple agents instantiated on a single piece of hardware.
Separate agents cannot simulate a unified mind.
Actually, that’s the entire notion behind the Society of Mind: there’s no such thing as a “unified mind”. Only separate agents that operate as a holistic system.
If the separate agents all have simultaneous access to the same information that the unified mind would, then they cease being separate agents. In my book, there is no longer a distinction.
I believe there is a significantly false assumption here: that the agents present in human minds are operating with “simultaneous” (or otherwise) access to “the same information”.
Furthermore—the entire concept of stigmergy rests upon the notion that each of these independent agents would produce effects that alter the behaviors of the other independent agents, thus creating an operationally unified whole.
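For readers who haven’t run into the term, here is a toy version of stigmergy, with arbitrary numbers and invented names: the agents never address one another directly; each one only deposits on and reads from a shared grid, and the coordination lives entirely in those traces (plus evaporation, so stale information fades).

```python
import random

GRID = 20
pheromone = [[0.0] * GRID for _ in range(GRID)]
agents = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(10)]

def step(agents, deposit=1.0, evaporation=0.95):
    """One tick: every agent marks the shared grid, then moves with a bias
    toward whatever marks the other agents have already left behind."""
    moved = []
    for r, c in agents:
        pheromone[r][c] += deposit          # alter the shared environment
        neighbours = [((r + dr) % GRID, (c + dc) % GRID)
                      for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0))]
        weights = [1.0 + pheromone[nr][nc] for nr, nc in neighbours]
        moved.append(random.choices(neighbours, weights=weights)[0])
    for row in pheromone:                   # trails evaporate, so stale
        for i in range(GRID):               # information decays on its own
            row[i] *= evaporation
    return moved

for _ in range(100):
    agents = step(agents)
```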
There’s a big difference between separate agents all running in one brain (e.g., possibly humans) and separate agents in separate brains (ants).
I submit to you that the differences are more of distance (proximity of computational units) and magnitude (overall level of currently manifested intellect), and far less of category. That is, I see nothing about the behaviors of siphonophores and ants—as eusocial collectives—which prevents, in principle, a eusocial collective from manifesting tool-making-level intellect and/or sentience.
I find it quite hard to believe you couldn’t do even better if you were a single mind perceiving what the ants did and controlling them (which is how you are set up in this game).
...
If the separate agents all have simultaneous access to the same information … then they cease being separate agents … .
There’s a big difference between separate agents all running in one brain (e.g., possibly humans) and separate agents in separate brains (ants).
I believe there is a significantly false assumption here: that the agents present in human minds are operating with “simultaneous” (or otherwise) access to “the same information”.
To me, that reads as if lavalamp doesn’t think humans actually are a “unified mind”, though. It’s the program written in the context of the game that acts as a single agent by processing the same information with pseudo-‘simultaneity’.
It’s the program written in the context of the game that acts as a single agent by processing the same information with pseudo-‘simultaneity’.
I believe I understand what you are saying here. I just don’t think it fairly describes what lavalamp was saying.
My reading of that passage is that his assertion was that the separate agents in humans, by all running “in one brain”, cease being separate agents as a result of “having simultaneous access to the same information”.
EDIT: Okay, now I find myself confused. By the course of the dialogue it’s clear that pedanterrific did not downvote my comment, so someone not replying to it must have. I am left without insight as to why this was done, however.
Well, if you’re correct and that is what lavalamp is asserting, I pretty much agree with you. Humans are definitely not “unified minds”, and the difference between separate agents running on one or multiple brains may be large, but it’s quantitative, not qualitative.
That is, even separate agents running on one brain will never have simultaneous access to the same information (unless you cheat by pausing time).
That is, even separate agents running on one brain will never have simultaneous access to the same information (unless you cheat by pausing time).
Even then it’s important to note that various agents operating on varying principles of how to transform or relate to information might only be “capable” of noting specific subsets of “the same information”, and that this is—I believe—contextually relevant to comparing brains to ant colonies. Just as the parts of your brain that handle emotions are not involved in processing the difference between two sounds, two ants in different locations have access to separate subsets of information, which is then relayed to other parts of the colony.
The emotion-parts react to the signals sent by the sound-parts, and vice versa; so too, ant(1) reacts to the signals sent by ant(2), and vice versa.
I’m not the one downvoting you, either.

I’m not so sure our anticipations necessarily differ. I think separate agents with amazingly fast communication will approach the performance of a unified mind, and a mind with poor internal communication will approach the performance of separate agents. Human minds arguably have poor internal communication, but I’m still betting that it’s more than an order of magnitude better than what ants manage. I think our disagreement is more about the scale of this difference than anything else.
The fundamental barrier to communication inside a single mind is the speed of light; an electronic brain the size of a human one (light crosses ~20 cm in well under a nanosecond) ought to be able to give its sub-agents information that’s pretty damn close to simultaneous.
At any rate, in this game we do have simultaneous knowledge, and there’s no reason to handicap ourselves by e.g., waiting for scouts to return to other ants to share their knowledge.
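Concretely (this is an illustrative sketch, not the contest starter-kit API; `sense` is a stand-in for whatever the engine reports about a tile), a bot can fold every ant’s field of view into one shared map at the start of each turn, so nothing ever has to be carried back to the hill:

```python
def merge_vision(ants, sense, view_radius=5):
    """Fold every ant's field of view into one shared map for this turn.
    Ignores map wrap-around for brevity; `sense(tile)` reports a tile's contents."""
    shared_map = {}
    for r, c in ants:
        for dr in range(-view_radius, view_radius + 1):
            for dc in range(-view_radius, view_radius + 1):
                if abs(dr) + abs(dc) <= view_radius:
                    tile = (r + dr, c + dc)
                    shared_map[tile] = sense(tile)
    return shared_map   # every decision this turn is made from the same picture
```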
I believe there is a significantly false assumption here: that the agents present in human minds are operating with “simultaneous” (or otherwise) access to “the same information”.
I think it is true for games with turns like this one.
I am not aware of any mechanism which might cause this to be a meaningful difference. Enlighten me?

‘Simultaneity’ is easy to achieve when the environment changes in discrete intervals with time to think in between.

Edit: What lessdazed said.
The appearance of “simultaneity”, sure. But that’s a manifestation of the difference between real-time and turn-based ‘games’, and not a characteristic of cognition that is meaningfully significant. (At least, not so far as I can tell.)
I’d say the implication that it’s only actually possible to act as a “unified mind” in certain highly artificial non-realtime circumstances is pretty significant.
But if I am correct that it is only the appearance of acting as a “unified mind”, then… there’s no real significance there, as it is again simply a characteristic of the medium rather than of the function. In other words, this “unification” is only present in a turn-based game, and only manifests because turn-based games have ‘bots’ whose thinking necessarily happens within the turn.
This, in turn, would “compress” the actual processes of cognition into what would appear to be a “unified/simultaneous” process.
And this is why I say that it is not a characteristic of cognition which is meaningfully significant. It’s telling us something about turn-based games—not about cognition.

Allow me to slightly rephrase my point: I’d say the implication that it’s impossible to act as a “unified mind” in realtime is pretty significant.
Even if/as there is no such thing as simultaneity in consciousness, in a game with rules like this thoughts can be neatly divided into “after seeing the results of turn one, and before deciding what to do on turn two,” and that is all that is important.
What I said was badly phrased: the assumption isn’t true, but if it is being made, that is irrelevant.
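The skeleton of a bot for a turn-based game makes this visible; `read_state`, `compute_orders`, and `send_orders` below are illustrative stand-ins rather than the actual contest protocol. The environment only advances between the read and the write, so every decision is made against one frozen snapshot of the previous turn’s results.

```python
def run_bot(read_state, compute_orders, send_orders):
    """Bare turn-loop skeleton: the world is frozen between read_state() and
    send_orders(), so all 'thinking' happens against one consistent snapshot."""
    while True:
        state = read_state()             # everything turn N produced, complete and final
        if state is None:                # engine signalled end of game
            break
        orders = compute_orders(state)   # any amount of deliberation, all pre-turn
        send_orders(orders)              # only now does the environment advance
```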
in a game with rules like this thoughts can be neatly divided into “after seeing the results of turn one, and before deciding what to do on turn two,” and that is all that is important.
I don’t know how that maps to “simultaneous access to the same information”, however, in any computationally significant sense. It’s simply part of the definition of turn-based as opposed to real-time ‘games’ that you do your processing between turns, even though that processing itself still happens in real time.