I’m an applied mathematician, finishing my PhD in natural language processing. I have worked on many industry projects; some relevant keywords are data mining, text mining, network analysis, speech recognition, and optimization on spatial data.
Years ago I learned computational complexity theory. Occasionally I still teach it at my old university. After the PhD, my next plan is to finish my problem book on computational complexity. The computational complexity approach to problems influences everything I do.
I love my work, but frankly, I don’t come to LW to talk about data mining. I believe my hobbies and side projects are more relevant and interesting here. I have two hobby projects that should really be tackled by physicists. I hope I don’t deviate too much from the spirit of this thread if I answer the question “What might you learn from experts in other domains that could be useful in yours?” by introducing my projects as questions to physicists:
Let us assume that you are an all-powerful optimization process, and your goal is to finish an extremely long computation (say, a search for a very large Hamiltonian cycle) in the shortest possible amount of time. You have millions of galaxies to turn into computronium. What is the optimal expansion speed of your computer, given our current understanding of particle physics, general relativity and thermodynamics?
Of all the possible Universes in Tegmark’s level IV Multiverse, most don’t even have a concept of Time. How can we decide whether or not a specific Universe is in fact a Space-Time Continuum?
What is the optimal expansion speed of your computer
I see no reason to go slow on the expansion except for the possibility of hostile opposition. If you do intend to expand, you have nothing to gain computationally by delaying.
Of all the possible Universes in Tegmark’s level IV Multiverse, most don’t even have a concept of Time. How can we decide whether or not a specific Universe is in fact a Space-Time Continuum?
You have to build time into your metaphysics from the beginning. If you restrict your studies to arithmetic, category theory, or any study of static timeless entities, you won’t get time back for free. In general relativity, to have a timelike direction, your metric must have a certain signature. So perhaps we can say that the “mathematical object”, 4-manifold with a metric whose signature is +++- everywhere, describes a universe with time. But the mathematical object itself does not intrinsically contain time.
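For concreteness, the flat instance of such a metric is Minkowski space, whose line element is

```latex
ds^2 = dx^2 + dy^2 + dz^2 - c^2\,dt^2
```

The minus sign singles out the timelike direction, but the expression itself is, as stated, just a static mathematical object.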
This is a significant flaw in Tegmark’s scheme, as he describes it, as well as all belief systems of the form “reality is mathematics”: mathematics is not the full ontology of the world. Time might be the least disputable illustration of this. Time is something you can describe mathematically, but it is not itself mathematics in the way that numbers are.
Let’s consider a classic example of an ontology, Aristotle’s. I mention it not to endorse it but simply to present an example of what an ontology looks like. According to Aristotle’s ontology, everything that is belongs to one of ten classes of entity—substance, quantity, quality, relation, place, time, position, state, action, affection—and these entities connect to each other in specific ways, e.g. substances have qualities.
General ontological theory talks about these ten categories and their relationships. All lesser forms of knowledge are “regional ontologies”: they talk only about subtypes of being, or perhaps about beings which are built up from elementary types in some way.
Now Pythagoreans supposedly believed that all is number. If we transpose that slogan into Aristotle’s categories, what is it saying? It’s saying that the category of quantity is the only real one and the only one we need to study. Obviously an Aristotelian would reject this view. Quantity is not only to be studied in itself, but in its relations to the other categories.
Tegmark, and all other mathematical neoplatonists, are doing the same thing as the Pythagoreans. Modern mathematics is part of the full ontology, but only part of it. Because we know how to reason rigorously and with clarity about mathematical objects, and because we can represent so much of reality mathematically, there is apparently a temptation to view mathematics as reality itself. But this requires you to ignore the representation relation—what exactly is going on there? It’s not as if anyone has a very convincing account of how things and their properties are fused into one. But to adopt mathematical neoplatonism guarantees that you will be unable to think straight about such issues. With respect to time, for example, you will inevitably end up skipping back and forth between rigorous discussion of the properties of semi-Riemannian metrics, and then vague or even bogus assertions about how the metric “is time” and so on. This vagueness is a symptom of a problem overlooked, namely, what is the larger ontology of which mathematics is just a subset, and how do the categories of mathematics relate to the more physical categories of reality, like time and substance?
There’s no reason why you can’t have a systematic multiverse theory based on a richer ontology, one with physical categories as well as mathematical. But you would have to figure out the outlines of that richer ontology first.
I see no reason to go slow on the expansion except for the possibility of hostile opposition. If you do intend to expand, you have nothing to gain computationally by delaying.
I am not sure. The laws of thermodynamics may interfere. Do you suggest that the optimal expansion speed I am looking for is equal to the speed of light? Do you know how to build a computer that expands at exactly the speed of light?
If you do, then I am very interested, because that makes a pet theory of mine work: I seriously believe that the expansion speed of (expanding) civilizations goes through a fast phase transition from 0 to c. I half-seriously believe that this is the proper explanation of the Fermi Paradox: we can’t observe other civilizations because they are approaching us at the speed of light. (And by the time we could finally observe them, they would already have turned us into computronium.)
You have to build time into your metaphysics from the beginning.
It is quite possible that we are using the same terms in some very different sense. But if accidentally we are really talking about the same things, then I think you are wrong.
I believe that time is an emergent phenomenon, emerging from the more basic notion of memory. Of all the many arrows of time physicists and philosophers like to talk about, the thermodynamic arrow of time is the only basic one, and it is, in turn, just an averaging of the many local arrows defined by information-retrieval processes. Luckily for us, in our Universe, these processes are typically in sync. That’s why we can talk about time the way we do.
I half-seriously believe that this is the proper explanation of the Fermi Paradox: we can’t observe other civilizations because they are approaching us at the speed of light.
I agree (NB: I’m also a computer scientist, not a physicist) with the premise that civilizations probably expand at near-c, but there’s a problem with this. Since it seems that intelligent life like us could have arisen billions of years ago, if life is common and this is the explanation for the Fermi Paradox, we should be very surprised to observe ourselves existing so late.
You are right. The argument is not compatible with the possibility that life is very common, and this makes it much less interesting as an argument for life not being very rare. But it is not totally superfluous: we can observe the past of a sphere of 46-billion-light-year radius in the expanding, 14-billion-year-old Universe. Let us now assume that 4 billion years after the Big Bang is somehow really, really necessary for a maximally expanding civilization to evolve. In this case, my whole Fermi Paradox argument is still compatible with hundreds of such civilizations in the future of some of the stars we can currently observe. (You can drop hundreds of 10-Gly spheres into a 46-Gly sphere before starting to be very surprised that the center is uncovered.)
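The parenthetical claim is easy to sanity-check. Assuming the sphere centers fall uniformly within the big sphere (a simplifying assumption), a dropped sphere covers the center exactly when its own center lands within 10 Gly of it:

```python
# Probability that the center of a radius-R sphere stays uncovered
# after dropping n spheres of radius r, centers uniform in the big sphere.
# A dropped sphere covers the center iff its own center lies within r of it.
R, r = 46.0, 10.0            # radii in billions of light years (Gly)
p_cover = (r / R) ** 3       # per-sphere probability of covering the center

for n in (100, 300, 1000):
    p_uncovered = (1 - p_cover) ** n
    print(n, round(p_uncovered, 3))
```

With these numbers the center is still uncovered with probability roughly 0.36 after 100 spheres, so “hundreds” is indeed about where surprise should begin.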
But you are right. I think my Fermi Paradox argument is an exciting thought experiment, but it does not add too much actual value. (I believe it deserves a sensationalist report in New Scientist, at least. :) ) On the other hand, I am much more convinced about the expansion speed phase transition conjecture. And I am very convinced that my original question regarding optimally efficient computational processes is a valuable research subject.
In this case, my whole Fermi Paradox argument is still compatible with hundreds of such civilizations in the future of some of the stars we can currently observe. (You can drop hundreds of 10-Gly spheres into a 46-Gly sphere before starting to be very surprised that the center is uncovered.)
You’re right, and I hadn’t really thought that through — I had thought that this argument ruled out alien intelligence much more strongly than it does. Thanks.
Glad I could help. :) You know, I am quite proud of this set of arguments, and when I registered on LW, it was because I had three concrete ideas for a top-level post, and one of those was this one. But since then, I have become somewhat discouraged about it, because I observed that mentioning this idea in the comments didn’t really earn me karma. (So far, all in all, it has earned about as much as my two extremely unimportant remarks here today.) I am now quite sure that if I actually wrote that top-level post, it would just sit there, unread. Do you think it is worth bothering with? Do you have any advice on how to reach my audience with it here on LW? Thanks for any advice!
No, because if there is something like a Gaussian distribution of the emergence times of intelligent civilizations, we could just be one of the civilizations on the tail.
Exactly. The argument is that, since being on the tail of a Gaussian distribution is a priori unlikely, our age + no observation of past civilizations is anthropic evidence that life isn’t too common.
We have no idea what the Gaussian distribution looks like. We don’t necessarily have to be on the tail, just somewhere, say, one sigma away. No observation of civilizations simply corresponds to us being younger than average and the other civilizations being far away. Or we could be older, and the other civilizations just haven’t formed yet. But none of this implies whether life is uncommon or common.
Do you know how to build a computer that expands at exactly the speed of light?
No. I have nothing more exotic to suggest than a spherical expansion of ultrarelativistic constructor fleets, building Matrioshka-brains that communicate electromagnetically. All I’m saying is, if you think you have an unbounded demand for computation, I see no computational reason to expand at anything less than the maximum speed.
we can’t observe other civilizations because they are approaching us at the speed of light.
How do two such civilizations react when they collide?
I believe that time is an emergent phenomenon, and it is emerging from the more basic notion of memory.
Is “memory” a mathematical concept? We are talking about Tegmark’s theory, right? Anyway, you go on to say
in our Universe, these processes typically are in sync
and the moment you talk about “processes”, you have implicitly reintroduced the concept of time.
So you’re doing several things wrong at once.
1) You talk about process as if that was a concept distinct from and more fundamental than the concept of time, when in fact it’s the other way around.
2) You hope to derive time from memory. I see two ways that can work out, neither satisfactory. Either you talk about memory processes and we are back to the previous problem of presupposing time; or you adopt an explicitly timeless physical ontology, like Julian Barbour, and say you’re accounting for the appearance of time or the illusion of time. Are you prepared to do that—to say simply that time is not real? I’ll still disagree with you, but your position will be a little more consistent.
3) Finally, this started out in Tegmark’s multiverse. But if we are sticking to purely mathematical concepts, there is neither a notion of memory nor of process in such an ontology. Tell me where time or memory is in the ZFC universe of sets, for example! The root of the problem, again, is the neglect of representation. We use these mathematical objects to represent processes, mental states, physical states and so forth, and then careless or unwary thinkers simply equivocate between the mathematics and the thing represented.
All I’m saying is, if you think you have an unbounded demand for computation, I see no computational reason to expand at anything less than the maximum speed.
I agree. That’s why I was careful to ask the advice of physicists and not computer scientists. I am a computer scientist myself.
How do two such civilizations react when they collide?
and the moment you talk about “processes”, you have implicitly reintroduced the concept of time.
Your critique is misdirected. If I, a time-based creature, write a long paragraph about a timeless theory, it is not surprising that I will accidentally use some time-based notion somewhere in the text. But this is not a problem with the theory; it is a problem with my text. You jumped on the word ‘process’, but if I write ‘pattern’ instead, then you will have much less to nitpick about.
2) You hope to derive time from memory. I see two ways that can work out, neither satisfactory. Either you talk about memory processes and we are back to the previous problem of presupposing time; or you adopt an explicitly timeless physical ontology, like Julian Barbour, and say you’re accounting for the appearance of time or the illusion of time. Are you prepared to do that—to say simply that time is not real? I’ll still disagree with you, but your position will be a little more consistent.
A little more consistent than the position you put in my mouth after reading one paragraph? That is unfair and a bit rude. (Especially considering the thread we are still on. I came here for some feel-good karma and expert advice from physicists, and I was used as a straw man instead. :) Should we switch to the Open Thread, BTW?)
To answer the question: yes, I am all the way down route number 2. Barbour has it exactly right in my opinion, except for one rhetorical point: it is just marketing talk to interpret these ideas as “time is not real”. Time is very real, and an emergent notion. Living organisms are real, even if we can reduce biology to chemistry.
then careless or unwary thinkers simply equivocate between the mathematics and the thing represented.
Please read my answer to ata. I’m not a platonist, and I don’t make such an equivocation. I am a staunch formalist. I don’t BELIEVE in Tegmark’s Multiverse in the way you think I do. It is a tool for me to think more clearly about why OUR Universe is the way it is.
I seriously believe that the expansion speed of (expanding) civilizations goes through a fast phase transition from 0 to c. I half-seriously believe that this is the proper explanation of the Fermi Paradox: we can’t observe other civilizations because they are approaching us at the speed of light. (And by the time we could finally observe them, they would already have turned us into computronium.)
I think this argument has the same logic as the Doomsday Argument, and therefore is subject to the same counterarguments (see SIA and UDT). I’ll explain the analogy below:
In the DA, the fact that I have a low birth rank is explained by a future doom, which makes it more likely for me to observe a low birth rank by preventing people with high birth ranks from coming into existence.
In your argument, the fact that we are outside the lightcones of every alien civilization is explained by the idea that they expand at light speed and destroy those who would otherwise observe being in the lightcone of an alien civilization.
I am afraid the analogy is not clear enough for me to apply it, and explicitly reproduce the relevant version of the counterarguments you are implying. I would be thankful if you elaborated.
In the meantime, let me note that the Doomsday Argument floats in an intellectual vacuum, while my proposed 0-1 law for expansion speed could in principle be a proven theorem of economics, sociology, computer science or some other field of science, instead of the wild speculation that it is. My goal of understanding the physics of optimally efficient computational processes is motivated by exactly this: I wish to prove the 0-1 law from assumptions that are still very speculative and shaky, but at least more basic.
I see, your proposed argument isn’t directly analogous to the standard Doomsday Argument, but more like a (hypothetical) variant that gives a number of non-anthropic reasons for expecting doom in the near future, and also says “BTW, a near future doom would explain why we have low birth rank.”
I’m not sure that such anthropic explanations make sense, but if you’re not mainly depending on anthropic reasoning to make your case, then the counterarguments aren’t so important.
BTW, I agree it is likely that alien civilizations would expand at near the speed of light, but not necessarily to finish some computation as quickly as possible. (Once you’re immortal, it’s not clear why speed matters.) Another reason is that because the universe itself is expanding, the slower those civilizations expand, the less mass/energy they will eventually have access to.
I’m not remotely a physicist, but I have a few comments, which I will do my best to confine to the limits imposed by my knowledge of my lack of knowledge.
Let us assume that you are an all-powerful optimization process, and your goal is to finish an extremely long computation (say, a search for a very large Hamiltonian cycle) in the shortest possible amount of time. You have millions of galaxies to turn into computronium. What is the optimal expansion speed of your computer, given our current understanding of particle physics, general relativity and thermodynamics?
By “optimal expansion speed”, do you mean “maximum possible expansion speed given particle physics, general relativity and thermodynamics (according to our current understanding thereof)”, or do you see some reason that a slower expansion would be beneficial to the ultimate goal (or is that the question)?
(Meanwhile, I’ll just say that I’d first want to either prove P≠NP or find a polynomial-time algorithm for the Hamiltonian cycle problem. P=NP may be unlikely, but if I were an all-powerful optimization process, I’d probably want to get that out of the way before brute-forcing an NP-complete problem. Might save a few million galaxies that way. Then again, an all-powerful optimization process would very likely have a better idea than this puny human.)
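For concreteness, the brute-force search being contemplated looks something like this (a minimal sketch; the function name and graph encoding are illustrative):

```python
from itertools import permutations

def has_hamiltonian_cycle(adj):
    """Brute-force check for a Hamiltonian cycle in a graph given as a
    dict mapping each node to its set of neighbors. Factorial time --
    exactly the kind of search that creates an appetite for galaxies
    of computronium."""
    nodes = list(adj)
    start = nodes[0]
    # Fix the start node; try every ordering of the remaining nodes.
    for perm in permutations(nodes[1:]):
        cycle = (start,) + perm + (start,)
        if all(b in adj[a] for a, b in zip(cycle, cycle[1:])):
            return True
    return False

square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}  # 4-cycle
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}          # no cycle at all
print(has_hamiltonian_cycle(square))  # True
print(has_hamiltonian_cycle(star))    # False
```

The (n−1)! orderings tried by this loop are precisely why a proof of P=NP, however unlikely, would be worth checking for first.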
Of all the possible Universes in Tegmark’s level IV Multiverse, most don’t even have a concept of Time.
I’m not sure if something without a timelike dimension would really qualify as a universe. It’s really just a matter of definition, but since every mathematical structure has the same ontological status in Tegmark IV, including, say, the set {1, 2, 3}, a useful definition of “universe” will have to be more narrow than “every element of the Level IV Multiverse” and more broad than “every structure that can result from the laws of this universe”.
I’m not sure if we’d be able to rigorously define what structures count as “universes” (and it’s not terribly important, being that our definition of the word doesn’t impinge on reality anyway), but intuitively, what properties are necessary for a structure to seem like a universe to you in the first place? I’d be pretty flexible with it, but I think I’d require it to have something timelike, some way for conditions to dynamically evolve over at least one dimension.
By “optimal expansion speed”, do you mean “maximum possible expansion speed given particle physics, general relativity and thermodynamics (according to our current understanding thereof)”, or do you see some reason that a slower expansion would be beneficial to the ultimate goal (or is that the question)?
Avoiding heat death may be beneficial, for example. As I wrote to Mitchell Porter, to me the most interesting special case of the question is: if you want to build the fastest computer in the Universe, should it expand at the speed of light? I’m really not a physicist, so I don’t even know the answer to a very simple version of this question, one that any particle physicist should be able to answer: is it possible for some nontrivial information processing system to spread at exactly the speed of light? If not, what about an expansion speed converging to c?
what properties are necessary for a structure to seem like a universe to you in the first place?
You (and Mitchell Porter) are completely right. At this point, I don’t have a convincing answer to your obvious question. In the meantime, Tegmark level IV is a good enough answer for me. (Note to Mitchell: it would be very hard to find someone less platonist than me. And I find Tegmark’s focus on computability totally misdirected, so in this sense I am not an intuitionist either.)
I’d be pretty flexible with it, but I think I’d require it to have something timelike, some way for conditions to dynamically evolve over at least one dimension.
I think we disagree here. Please see my answer to Mitchell about the emergence of time from the more basic concept of memory.
I’m not a physicist, but my understanding is that it is not possible for an information processing system to expand at or arbitrarily close to the speed of light; if we neglect the time taken for other activities such as mining and manufacturing, the most obvious limit is the speed to which you can accelerate colony ships (which have to be massive enough to not be fried by collision with interstellar hydrogen atoms). The studies I’ve seen suggest that a few percent of lightspeed is doable given moderate assumptions, a few tens of percent doable if we can get closer to ultimate physical limits, 90%+ doable under extreme assumptions, 99%+ not plausible and 99.9%+ flat-out impossible without some kind of unobtainium.
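The steep end of that scale is easy to illustrate: the relativistic kinetic energy per kilogram of payload grows without bound as the speed approaches c. A quick sketch (the sample speeds roughly match the regimes above):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def kinetic_energy_per_kg(beta):
    """Relativistic kinetic energy (J) of 1 kg moving at speed beta*c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * C ** 2

for beta in (0.03, 0.3, 0.9, 0.99, 0.999):
    print(f"{beta:5.3f} c  ->  {kinetic_energy_per_kg(beta):.2e} J/kg")
```

At 0.999 c the kinetic energy alone exceeds 10^18 J per kilogram, around twenty times the payload’s own rest-mass energy, which is one way to see why 99.9%+ demands unobtainium.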
On the question of ontology, I’m a card-carrying neoplatonist, so you’ve probably heard my position from other people before :-)
Having read further down, and in the context of the Fermi problem, I think the general limitations (on the first question) are due more to engineering than to particle physics, relativity, and so on. Allow me to explain.
Relativity sets a limit on information propagation at the speed of light. More specifically, in physics one talks about waves having a phase velocity (which can be arbitrarily large) and a group velocity. The group velocity refers to the information-carrying content of the wave, and this speed never exceeds c. Since light waves are the fastest means of communication, this sets your upper bound according to the current state of the art.
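In standard notation, for a wave with dispersion relation \(\omega(k)\), the two velocities are

```latex
v_{\mathrm{phase}} = \frac{\omega}{k}, \qquad
v_{\mathrm{group}} = \frac{d\omega}{dk}
```

and it is the second, the envelope speed of a wave packet, that carries signals.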
But in reality, if you were expanding outwards from a point in space, you would need more than just a light wave. If you were a person, you would need a ship. In a more imaginative case, even if they could decode your body into pure information content and then broadcast that signal at light speed across the universe, it would be good for nothing unless you had something that could reconstruct you on the other side. But I’m guessing you’d already thought about this.
As for ships: accelerating a macroscopic object like a spaceship to high speeds in a straight line isn’t that hard, theoretically. The hard parts would be things like maneuvering and radiation shielding. It takes a lot of energy to turn your trajectory if you’re a massive object moving at a significant fraction of c. I haven’t calculated it, but that’s my intuition.
If you did something like accelerating on straight-line trajectories (or geodesics, or what have you) and decelerating in straight lines, going from point to point around obstacles, then you would obviously accumulate delay time in a different fashion. In any case, there are these engineering difficulties that pose practical obstacles to expansion.
Maybe you’d rather imagine cellular automata or some kind of machine construction proceeding outwards at light speed, rather than more conventional ideas. In this case, it would be something radically different from what currently exists, so one can only speculate. The hard part might be the following. If you are building yourself outwards at the speed c, then you are also colliding with whatever is in front of you at speed c, and this would most likely cause your total destruction.
This is of course assuming that the machine would require a conventional structure in the form of nanorobots, gelatin, or basically anything with nontrivial mass distributions.
You might instead try to compromise in the following way: you expand at some high speed, say 0.5 c, at which you can still shoot (e.g.) space nukes out in front of you at another 0.5 c, to attempt to vaporize or sufficiently disperse the obstacles before you hit them. And so on and so forth…
I’m an applied mathematician. I am finishing my PhD on natural language processing. I worked on many industry projects, some relevant keywords are data mining, text mining, network analysis, speech recognition, and optimization on spatial data.
Years ago I learned computational complexity theory. Occasionally I still teach it at my old university. After the PhD, my next plan is to finish my problem book on computational complexity. The computational complexity approach to problems influences everything I do.
I love my work, but frankly, I don’t come to LW to talk about data mining. I believe my hobbies and side projects are more relevant and interesting here. I have two hobby projects that should really be tackled by physicists. I hope I don’t deviate too much from the spirit of this thread if I answer the “What might you learn from experts in other domains that could be useful in yours?” by introducing my projects as questions to physicists:
Let us assume that you an all-powerful optimization process, and your goal is to finish an extremely long computation (say, a search for a very large Hamiltonian cycle) in the shortest possible amount of time. You have millions of galaxies to turn into computronium. What is the optimal expansion speed of your computer, considering our current understanding of particle physics, general relativity and thermodynamics?
Of all the possible Universes in Tegmark’s level IV Multiverse, most don’t even have a concept of Time. How can we decide whether or not a specific Universe is in fact a Space-Time Continuum?
I see no reason to go slow on the expansion except for the possibility of hostile opposition. If you do intend to expand, you have nothing to gain computationally by delaying.
You have to build time into your metaphysics from the beginning. If you restrict your studies to arithmetic, category theory, or any study of static timeless entities, you won’t get time back for free. In general relativity, to have a timelike direction, your metric must have a certain signature. So perhaps we can say that the “mathematical object”, 4-manifold with a metric whose signature is +++- everywhere, describes a universe with time. But the mathematical object itself does not intrinsically contain time.
This is a significant flaw in Tegmark’s scheme, as he describes it, as well as all belief systems of the form “reality is mathematics”: mathematics is not the full ontology of the world. Time might be the least disputable illustration of this. Time is something you can describe mathematically, but it is not itself mathematics in the way that numbers are.
Let’s consider a classic example of an ontology, Aristotle’s. I mention it not to endorse it but simply to present an example of what an ontology looks like. According to Aristotle’s ontology, everything that is belongs to one of ten classes of entity—substance, quantity, quality, relation, place, time, position, state, action, affection—and these entities connect to each other in specific ways, e.g. substances have qualities.
General ontological theory talks about these ten categories and their relationships. All lesser forms of knowledge are “regional ontologies”, they only talk about subtypes of being, or perhaps beings which are built up from elementary types in some way.
Now Pythagoreans supposedly believed that all is number. If we transpose that slogan into Aristotle’s categories, what is it saying? It’s saying that the category of quantity is the only real one and the only one we need to study. Obviously an Aristotelian would reject this view. Quantity is not only to be studied in itself, but in its relations to the other categories.
Tegmark, and all other mathematical neoplatonists, are doing the same thing as the Pythagoreans. Modern mathematics is part of the full ontology, but only part of it. Because we know how to reason rigorously and with clarity about mathematical objects, and because we can represent so much of reality mathematically, there is apparently a temptation to view mathematics as reality itself. But this requires you to ignore the representation relation—what exactly is going on there? It’s not as if anyone has a very convincing account of how things and their properties are fused into one. But to adopt mathematical neoplatonism guarantees that you will be unable to think straight about such issues. With respect to time, for example, you will inevitably end up skipping back and forth between rigorous discussion of the properties of semi-Riemannian metrics, and then vague or even bogus assertions about how the metric “is time” and so on. This vagueness is a symptom of a problem overlooked, namely, what is the larger ontology of which mathematics is just a subset, and how do the categories of mathematics relate to the more physical categories of reality, like time and substance?
There’s no reason why you can’t have a systematic multiverse theory based on a richer ontology, one with physical categories as well as mathematical. But you would have to figure out the outlines of that richer ontology first.
I am not sure. The laws of thermodynamics may interfere. Do you suggest that the optimal expansion speed I am looking for is equal to the speed of light? Do you know how to build a computer that expands with exactly the speed of light?
If you do, then I am very interested, because that makes a pet theory of mine work: I seriously believe that the expansion speed of (expanding) civilizations goes through a fast phase transition from 0 to c. I half-seriously believe that this is the proper explanation of the Fermi Paradox: we can’t observe other civilizations because they are approaching us with the speed of light. (And when finally we could observe them, they already turned us into computronium.)
It is quite possible that we are using the same terms in some very different sense. But if accidentally we are really talking about the same things, then I think you are wrong.
I believe that time is an emergent phenomenon, and it is emerging from the more basic notion of memory. Of all the many arrows of time physicists and philosophers like to talk about, the thermodynamic arrow of time is the only basic one, And it is, in turn, just some averaging of the many local arrows defined by information-retrieval processes. Luckily for us, in our Universe, these processes typically are in sync. That’s why we can talk about time the way we can.
I agree (NB: also computer scientist, not physicist) with the premise that civilizations probably expand at near-c, but there’s a problem with this. Since it seems that intelligent life like us could have arisen billions of years ago, if life is common and this is the explanation for the Fermi Paradox, we should be very surprised to observe ourselves existing so late.
You are right. The argument is not compatible with the possibility that life is very common, and this makes it much less interesting as an argument for life not being very rare. But it is not totally superfluous: we can observe the past of a 46 billion light year radius sphere of the expanding, 14 billion year old Universe. Let us now assume that 4 billion years since the Big Bang is somehow really, really necessary for a maximally expanding civilization to evolve. In this case, my whole Fermi Paradox argument is still compatible with hundreds of such civilizations in the future of some of the stars we can currently observe. (You can drop hundreds of 10 ly radius spheres into a 46 ly radius sphere before starting to be very surprised that the center is uncovered.)
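The sphere-covering intuition can be checked with a few lines of Python. Under the simplifying assumption that the small spheres’ centers land uniformly at random in the big sphere (the radii are the 10 ly / 46 ly figures from the comment above; the uniform-placement model is my own simplification), the chance that one drop covers the center is just the volume ratio, so the “hundreds of spheres” figure follows from a one-line formula:

```python
def p_center_uncovered(n_spheres, r_small=10.0, r_big=46.0):
    """Probability that the origin stays uncovered after dropping
    n_spheres spheres of radius r_small whose centers are uniform in a
    ball of radius r_big.  A single drop covers the origin iff its center
    lands within r_small of the origin: probability (r_small / r_big)**3."""
    p_one = (r_small / r_big) ** 3
    return (1.0 - p_one) ** n_spheres

for n in (100, 300, 1000):
    print(n, round(p_center_uncovered(n), 4))
```

With these radii the center is uncovered with probability about 0.36 after 100 drops and about 0.05 after 300, which matches the claim that only after “hundreds” of spheres should an uncovered center start to surprise us.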
But you are right. I think my Fermi Paradox argument is an exciting thought experiment, but it does not add much actual value. (I believe it deserves a sensationalist report in New Scientist, at least. :) ) On the other hand, I am much more convinced about the expansion speed phase transition conjecture. And I am very convinced that my original question regarding optimally efficient computational processes is a valuable research subject.
You’re right, and I hadn’t really thought that through — I had thought that this argument ruled out alien intelligence much more strongly than it does. Thanks.
Glad I could help. :) You know, I am quite proud of this set of arguments, and when I registered on LW, it was because I had three concrete ideas for a top-level post, and one of those was this one. But since then I have become somewhat discouraged about it, because I noticed that mentioning this idea in the comments didn’t really earn me karma. (So far it has earned, all told, about as much as my two extremely unimportant remarks here today.) I am now quite sure that if I actually wrote that top-level post, it would just sit there, unread. Do you think it is worth bothering with? Do you have any advice on how to reach my audience with it, here on LW? Thanks for any advice!
No, because if there is something like a Gaussian distribution of the emergence times of intelligent civilizations, we could just be one of the civilizations on the tail.
Exactly. The argument is that, since being on the tail of a Gaussian distribution is a priori unlikely, our age + no observation of past civilizations is anthropic evidence that life isn’t too common.
We have no idea what the Gaussian distribution looks like. We don’t necessarily have to be on the tail, just somewhere, say, one sigma away. No observation of civilizations simply corresponds to us being younger than average and the other civilizations being far away. Or we could be older, and the other civilizations just haven’t formed yet. But none of this implies whether life is uncommon or common.
No. I have nothing more exotic to suggest than a spherical expansion of ultrarelativistic constructor fleets, building Matrioshka-brains that communicate electromagnetically. All I’m saying is, if you think you have an unbounded demand for computation, I see no computational reason to expand at anything less than the maximum speed.
How do two such civilizations react when they collide?
Is “memory” a mathematical concept? We are talking about Tegmark’s theory, right? Anyway, you go on to say
and the moment you talk about “processes”, you have implicitly reintroduced the concept of time.
So you’re doing several things wrong at once.
1) You talk about process as if that was a concept distinct from and more fundamental than the concept of time, when in fact it’s the other way around.
2) You hope to derive time from memory. I see two ways that can work out, neither satisfactory. Either you talk about memory processes and we are back to the previous problem of presupposing time; or you adopt an explicitly timeless physical ontology, like Julian Barbour, and say you’re accounting for the appearance of time or the illusion of time. Are you prepared to do that—to say simply that time is not real? I’ll still disagree with you, but your position will be a little more consistent.
3) Finally, this started out in Tegmark’s multiverse. But if we are sticking to purely mathematical concepts, there is neither a notion of memory nor of process in such an ontology. Tell me where time or memory is in the ZFC universe of sets, for example! The root of the problem again is the neglect of representation. We use these mathematical objects to represent process, mental states, physical states and so forth, and then careless or unwary thinkers simply equivocate between the mathematics and the thing represented.
I agree. That’s why I was careful to ask the advice of physicists and not computer scientists. I am a computer scientist myself.
I don’t know. But these cyclic cellular automata were an influence when I was thinking about these ideas. http://www.permadi.com/java/cautom/index.html (Java applet)
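For readers without Java, a minimal cyclic cellular automaton of the kind that applet shows can be sketched in a few lines of Python (torus grid, von Neumann neighborhood; the grid size and state count below are arbitrary choices of mine, not taken from the applet):

```python
import random

def step(grid, n_states):
    """One synchronous update of a cyclic cellular automaton on a torus:
    a cell in state k advances to (k + 1) % n_states iff some von Neumann
    neighbor is already in that successor state."""
    h, w = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for i in range(h):
        for j in range(w):
            succ = (grid[i][j] + 1) % n_states
            if succ in (grid[(i - 1) % h][j], grid[(i + 1) % h][j],
                        grid[i][(j - 1) % w], grid[i][(j + 1) % w]):
                nxt[i][j] = succ
    return nxt

random.seed(1)
n_states = 4
grid = [[random.randrange(n_states) for _ in range(16)] for _ in range(16)]
for _ in range(10):
    grid = step(grid, n_states)  # spiral waves typically emerge after many steps
```

The relevance here is that the update rule is purely local and “timeless” as a mathematical object, yet iterating it yields the kind of directed, wave-like dynamics that invite talk of time.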
Your critique is misdirected. If I, a time-based creature, write a long paragraph about a timeless theory, it is not surprising that I will accidentally use some time-based notion somewhere in the text. But this is not a problem with the theory; it is a problem with my text. You jumped on the word ‘process’, but if I write ‘pattern’ instead, you will have much less to nitpick about.
A little more consistent than the position you put into my mouth after reading one paragraph? That is unfair and a bit rude. (Especially considering the thread we are still on. I came here for some feel-good karma and expert advice from physicists, and I was used as a straw man instead. :) Should we switch to Open Thread, BTW?)
To answer the question: yes, I am all the way down route number 2. Barbour has it exactly right in my opinion, except for one rhetorical point: it is just marketing talk to interpret these ideas as “time is not real”. Time is very real, and an emergent notion. Living organisms are real, even if we can reduce biology to chemistry.
Please read my answer to ata. I’m not a platonist. I don’t make any such equivocation. I am a staunch formalist. I don’t BELIEVE in Tegmark’s Multiverse in the way you think I do. It is a tool for me to think more clearly about why OUR Universe is the way it is.
Continued here.
I think this argument has the same logic as the Doomsday Argument, and therefore is subject to the same counterarguments (see SIA and UDT). I’ll explain the analogy below:
In the DA, the fact that I have a low birth rank is explained by a future doom, which makes it more likely for me to observe a low birth rank by preventing people with high birth ranks from coming into existence.
In your argument, the fact that we are outside the lightcones of every alien civilization is explained by the idea that they expand at light speed and destroy those who would otherwise observe being in the lightcone of an alien civilization.
I am afraid the analogy is not clear enough for me to apply it and explicitly reproduce the relevant version of the counterarguments you are implying. I would be thankful if you elaborated.
In the meantime, let me note that the Doomsday argument floats in an intellectual vacuum, while my proposed 0-1 law for expansion speed could in principle be a proven theorem of economics, sociology, computer science or some other field of science, instead of the wild speculation that it is now. My goal of understanding the physics of optimally efficient computational processes is motivated by exactly this: I wish to prove the 0-1 law from more basic, though still very speculative and shaky, assumptions.
I see, your proposed argument isn’t directly analogous to the standard Doomsday Argument, but more like a (hypothetical) variant that gives a number of non-anthropic reasons for expecting doom in the near future, and also says “BTW, a near future doom would explain why we have low birth rank.”
I’m not sure that such anthropic explanations make sense, but if you’re not mainly depending on anthropic reasoning to make your case, then the counterarguments aren’t so important.
BTW, I agree it is likely that alien civilizations would expand at near the speed of light, but not necessarily to finish some computation as quickly as possible. (Once you’re immortal, it’s not clear why speed matters.) Another reason is that because the universe itself is expanding, the slower those civilizations expand, the less mass/energy they will eventually have access to.
I’m not remotely a physicist, but I have a few comments, which I will do my best to confine to the limits imposed by my knowledge of my lack of knowledge.
By “optimal expansion speed”, do you mean “maximum possible expansion speed given particle physics, general relativity and thermodynamics (according to our current understanding thereof)”, or do you see some reason that a slower expansion would be beneficial to the ultimate goal (or is that the question)?
(Meanwhile, I’ll just say that I’d first want to either prove P≠NP or find a polynomial-time algorithm for the Hamiltonian cycle problem. P=NP may be unlikely, but if I were an all-powerful optimization process, I’d probably want to get that out of the way before brute-forcing an NP-complete problem. Might save a few million galaxies that way. Then again, an all-powerful optimization process would very likely have a better idea than this puny human.)
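For concreteness, this is what the brute force looks like: try every ordering of the vertices and check whether consecutive vertices (and the wrap-around pair) are adjacent. A hedged Python sketch (the graph encoding and function name are my own; real solvers do far better than trying all (n−1)! orderings, but nothing known escapes exponential time in the worst case unless P = NP):

```python
from itertools import permutations

def hamiltonian_cycle(adj):
    """Brute-force search for a Hamiltonian cycle in a graph given as an
    adjacency-set dict {vertex: set(neighbors)}.  Exponential time: the
    kind of computation you would want P = NP to spare you from."""
    vertices = list(adj)
    if not vertices:
        return None
    start = vertices[0]
    for perm in permutations(vertices[1:]):  # fix start to kill rotations
        cycle = (start,) + perm
        if all(cycle[(k + 1) % len(cycle)] in adj[cycle[k]]
               for k in range(len(cycle))):
            return cycle
    return None

# A 4-cycle has a Hamiltonian cycle; a star graph does not.
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(hamiltonian_cycle(square))  # (0, 1, 2, 3)
```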
I’m not sure if something without a timelike dimension would really qualify as a universe. It’s really just a matter of definition, but since every mathematical structure has the same ontological status in Tegmark IV, including, say, the set {1, 2, 3}, a useful definition of “universe” will have to be more narrow than “every element of the Level IV Multiverse” and more broad than “every structure that can result from the laws of this universe”.
I’m not sure if we’d be able to rigorously define what structures count as “universes” (and it’s not terribly important, being that our definition of the word doesn’t impinge on reality anyway), but intuitively, what properties are necessary for a structure to seem like a universe to you in the first place? I’d be pretty flexible with it, but I think I’d require it to have something timelike, some way for conditions to dynamically evolve over at least one dimension.
Avoiding heat death may be beneficial, for example. As I wrote to Mitchell Porter, to me the most interesting special case of the question is: if you want to build the fastest computer in the Universe, should it expand at the speed of light? I’m really not a physicist, so I don’t even know the answer to a very simple version of this question, one that any particle physicist should be able to answer: is it possible for some nontrivial information processing system to spread at exactly the speed of light? If not, what about an expansion speed converging to c?
You (and Mitchell Porter) are completely right. At this point, I don’t have a convincing answer to your obvious question. In the meantime, Tegmark level IV is a good enough answer for me. (Note to Mitchell: it would be very hard to find someone less platonist than me. And I find Tegmark’s focus on computability totally misdirected, so in this sense I am not an intuitionist either.)
I think we disagree here. Please see my answer to Mitchell about the emergence of time from the more basic concept of memory.
I’m not a physicist, but my understanding is that it is not possible for an information processing system to expand at or arbitrarily close to the speed of light; if we neglect the time taken for other activities such as mining and manufacturing, the most obvious limit is the speed to which you can accelerate colony ships (which have to be massive enough to not be fried by collision with interstellar hydrogen atoms). The studies I’ve seen suggest that a few percent of lightspeed is doable given moderate assumptions, a few tens of percent doable if we can get closer to ultimate physical limits, 90%+ doable under extreme assumptions, 99%+ not plausible and 99.9%+ flat-out impossible without some kind of unobtainium.
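A rough sense of why the last few percent are so expensive comes from the standard relativistic kinetic energy formula, E = (γ − 1)mc². A small Python sketch (textbook formula only, not taken from any of the studies mentioned above, and ignoring propulsion efficiency, shielding mass, and everything else that makes the real problem harder):

```python
C = 299_792_458.0  # speed of light, m/s

def gamma(beta):
    """Lorentz factor for speed v = beta * c."""
    return 1.0 / (1.0 - beta ** 2) ** 0.5

def kinetic_energy_per_kg(beta):
    """Relativistic kinetic energy (J) to bring 1 kg to beta * c:
    E = (gamma - 1) * m * c**2."""
    return (gamma(beta) - 1.0) * C ** 2

for beta in (0.1, 0.5, 0.9, 0.99, 0.999):
    print(f"{beta:.3f} c  gamma = {gamma(beta):7.3f}  "
          f"E = {kinetic_energy_per_kg(beta):.2e} J/kg")
```

The energy per kilogram diverges as β → 1, which is one reason “a few percent of lightspeed” and “99.9%+” sit on opposite sides of a plausibility gulf even before shielding is considered.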
On the question of ontology, I’m a card-carrying neoplatonist, so you’ve probably heard my position from other people before :-)
Having read further down and under the context of the Fermi problem, I think the idea is that the general limitations (on the first question) are more due to engineering than due to particle physics, relativity, and so on. Allow me to explain.
Relativity sets a limit on information propagation at the speed of light. More specifically, in physics waves have a phase velocity (which can be arbitrarily large) and a group velocity. The group velocity refers to the information-carrying content of the wave, and the speed at which information actually travels is limited to at most c. Since light waves are the fastest means of communication, this sets your upper bound according to the current state of the art.
But in reality, if you were expanding outwards from a point in space, you would need more than just a light wave. If you were a person, you would need a ship. In a more imaginative case, even if they could decode your body into pure information content and then broadcast that signal at light speed across the universe, it would be good for nothing unless you had something that could reconstruct you on the other side. But I’m guessing you’d already thought about this.
In a ship, then: accelerating a macroscopic object like a spaceship to high speeds in a straight line isn’t that hard, theoretically. The hard part would be things like maneuvering and radiation shielding. It takes a lot of energy to turn your trajectory if you’re a massive object moving at a significant fraction of c. I haven’t calculated it, but that’s my intuition.
If you did something like accelerating on straight line trajectories (or geodesics, or what have you) and decelerating in straight lines, going from point to point around obstacles, then you obviously accumulate more delay time in a different fashion. In any case, there’s these engineering difficulties that are practical issues to expansion.
Maybe you’d rather imagine cellular automata or some kind of machine construction proceeding outwards at light speed, rather than more conventional ideas. In this case, it would be something radically different from what currently exists, so one can only speculate. The hard part might be the following. If you are building yourself outwards at the speed c, then you are also colliding with whatever is in front of you at speed c, and this would most likely cause your total destruction.
This is of course assuming that the machine would require a conventional structure in the form of nanorobots, gelatin, or basically anything with nontrivial mass distributions.
You might instead try to compromise in the following way. Namely, you expand at some high speed, say 0.5 c, at which you can still shoot (e.g.) space nukes out in front of you at another 0.5 c, to attempt to vaporize or sufficiently disperse the obstacles before you hit them. And so on and so forth…
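One wrinkle worth noting about that compromise: launching projectiles at 0.5 c relative to a fleet already moving at 0.5 c does not give 1.0 c in the rest frame, because collinear velocities add relativistically. A quick sketch, with speeds in units of c:

```python
def add_velocities(u, v):
    """Relativistic addition of collinear speeds u, v (in units of c):
    w = (u + v) / (1 + u * v).  Always stays below 1 for u, v < 1."""
    return (u + v) / (1.0 + u * v)

print(add_velocities(0.5, 0.5))  # 0.8, not 1.0
```

So the nukes do lead the fleet, but only by 0.3 c in the rest frame, not 0.5 c; the closing speed with a stationary obstacle is 0.8 c.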