I know a lot of cellular automata are Turing complete, which makes them about as useful (in principle) as anything, but I fail to see what a game of Life teaches us about decision-making (aside from reminding us of the determinism of the overall system), or about what we’ll run into in the real world. Does a glider gun one-box? Does it run into Omega more often than No-mega?
I’m just unsure how you mean for this to relate to the problem.
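The determinism point, at least, is easy to make concrete. Here’s a minimal sketch of the standard B3/S23 update rule (the function name and glider coordinates are mine, purely for illustration):

```python
# A minimal sketch of the standard B3/S23 Life rule; function name and
# glider coordinates are mine, purely for illustration.
from collections import Counter

def life_step(live_cells):
    """Advance one generation; `live_cells` is a set of (x, y) pairs."""
    # Count the live neighbours of every cell adjacent to a live cell.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly three live neighbours; survival on two or three.
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
assert life_step(glider) == life_step(glider)  # same input, same output
```

Same state in, same state out, every time; nothing about that tells me whether anything in the grid one-boxes.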
It is intelligent agents—not gliders—that make decisions.
Right, I just wouldn’t expect trying to model decision-making agents in cellular automata to be any more illuminating (and certainly not any easier) than more conventional (everyday-level) ways of figuring out what the world’s most frequent problems are. I understand they make good universe-modelers in theory, but I don’t see them being useful here for the same reason I don’t resort to QM for figuring out optimal horseshoe-tossing techniques. Too much work to get all the way back up to the everyday level.
The question was whether we have a formalism for which problems are most likely to come up.
Cellular automata are an elegant theoretical model for many questions about what is likely to happen in spatialised, reversible, local universes—like our own.
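To make “reversible” concrete: Fredkin’s second-order construction turns any local rule into a reversible one by XORing each update with the state from two steps back. A toy sketch, with a made-up parity rule and names of my own:

```python
# Fredkin's second-order trick: next = f(neighbourhood now) XOR (state two
# steps ago). The toy rule f and all names here are mine, for illustration.
import random

def second_order_step(prev, curr):
    """One reversible step of a 1-D binary CA on a ring; returns the new
    (prev, curr) pair."""
    n = len(curr)
    nxt = [
        # Toy local rule f: parity of the three-cell neighbourhood,
        # XORed with the cell's state two steps back.
        (curr[i - 1] ^ curr[i] ^ curr[(i + 1) % n]) ^ prev[i]
        for i in range(n)
    ]
    return curr, nxt

width = 16
a = [random.randint(0, 1) for _ in range(width)]
b = [random.randint(0, 1) for _ in range(width)]
p, c = second_order_step(a, b)
# Reversibility: stepping the swapped pair (c, p) walks back to (a, b).
c_back, p_back = second_order_step(c, p)
assert (p_back, c_back) == (a, b)
```

Running the step on the swapped pair recovers the earlier states exactly, whatever the local rule is—that’s the whole trick.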
The question, as you quoted it, was whether we have a “good” formalism for this.
I would define “good” in this context as something like “useful for solving the problem at hand”. If you would define it simply as “elegant”, then I suppose we weren’t really disagreeing to begin with. But if you define it the same way I do, then perhaps you’ve just seen cellular automata do far more impressive high-level things than I have.
Well, the ones in question are universal—and so can do all the same things that any other parallel universal system can do without very much stress.
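Rule 110 is the standard example: its entire transition table fits in eight entries, and Cook proved it Turing complete. A sketch (names are mine):

```python
# Rule 110's whole transition table: (left, centre, right) -> next centre.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def rule110_step(row):
    """One synchronous update of a 1-D binary row on a ring (periodic
    boundaries); the universality proof proper uses an unbounded row."""
    n = len(row)
    return [RULE_110[(row[i - 1], row[i], row[(i + 1) % n])] for i in range(n)]

row = [0] * 31 + [1]  # a single live cell on a 32-cell ring
for _ in range(8):
    row = rule110_step(row)
```

Anything computable can in principle be compiled down to runs of that loop—however impractical the encoding.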