Hello everyone. My name is Vadim Kosoy, and you can find some LW-relevant stuff about me in my Google+ stream: http://plus.google.com/107405523347298524518/about

I am an all-time geek, with knowledge of and interest in math, physics, chemistry, molecular biology, computer science, software engineering, algorithm engineering and history. Some areas in which I’m comparatively more knowledgeable: quantum field theory, differential geometry, algebraic geometry, and algorithm engineering (especially computer vision).
In my day job I’m a technical and product manager of a small software group at Mantis Vision (http://www.mantis-vision.com/), a company developing 3D video cameras. My previous job was at VisionMap (http://www.visionmap.com/), which develops airborne photography / mapping systems, where I led a team of software and algorithm engineers.
I had known about Eliezer Yudkowsky and his friendly AI thesis (which I don’t fully accept) for some time, but discovered this community only relatively recently. This community is interesting to me for several reasons. One is that many discussions are related to transhumanism / the technological singularity / artificial intelligence, topics I find very interesting and important. Another is that consequentialism is a popular moral philosophy here, and I (relatively recently) started to identify as strongly consequentialist. Yet another is that it seems to be a community where rational people discuss things rationally (or at least try to), something societies all over the world lack as direly as the idea seems trivial. This is in stark contrast to the usual mode of discourse about social / political issues, which is extremely shallow and plagued by excessive emotionality and dogmatism. I truly believe such a community can become a driver of social change in good directions, something with incredible impact.
Recently I became very interested in the subject of understanding general intelligence mathematically, in particular by the methods of computer science. I’ve written some comments here about my own variant of the Orseau-Ring framework, something I wished to expand into a full article but didn’t have the karma for. Maybe I’ll post it on LW Discussion.
My personal philosophy: As I said, I’m a consequentialist. I define my utility function not on the basis of hedonism or anything close to hedonism, but on the basis of long-term scientific / technological / autoevolutional (transhumanist) progress. I don’t believe in the innate value of H. sapiens but rather in the innate value of intelligent beings (in particular, the more intelligence, the more value). I can imagine scenarios in which a strong AI destroys humanity that are, from my point of view, strongly positive: this is my disagreement with the friendly AI thesis. However, I’m not sure whether any strong AI scenario will be positive, so I agree it is a concern. I also consider myself a deist rather than an atheist. Thus I believe in God, but the meaning I ascribe to the word “God” is very different from the meaning most religious people ascribe to it (I choose to still use the word “God” since there are a few things in common). For me God is the (unknowable) reason for the miraculous beauty of the universe, perceived by us as the beauty of mathematics and science and the amazing plethora of interesting natural phenomena. God doesn’t punish/reward good/bad behavior, doesn’t perform divine intervention (in the sense of occasional violations of natural law) and doesn’t write/dictate scriptures and prophecies (except by inspiring scientists to make mathematical and scientific discoveries). I consider the human brain to be a machine, with no magic “soul” behind the scenes. However, I believe in immortality in a stranger metaphysical sense, which is probably too long to detail here.
I’m 29.9 years old, married with a child (a boy, 2.8 years old). I have lived in Israel since the age of 7, but I was born in the USSR. Ethnically I’m an Ashkenazi Jew. I enjoy science fiction, good cinema (though no time to see any since my son was born :) ) and many sorts of music (rock is probably my favorite). Glad to be here!
Welcome! You should probably join the MAGIC list. Orseau and others hang out there, and Orseau will probably comment on your two posts if you ask for feedback on that list. Also, if you ever visit California then you should visit MIRI and do some math with us.
Welcome! We’re all 29.9 years old, here. I look forward to your comments; hopefully you’ll find the time for that post on your Orseau-Ring variant.
Regarding your redefinition of god, allow me just a small comment: calling an unknowable reason “god” (without believing in that reason’s personhood, volition, or mind) invites a lot of unneeded baggage and historical connotations that muddle both the discussion and your self-identification, because what you apparently mean by the term is so different from the usual definitions of “god” that you could just as well call yourself a spiritual atheist (or something related).
Speak for yourself, youngster! Why, back in my day, we didn’t have these “internets” you whippersnappers are always going on about, what with the cats and the memes and the facetubes and the whatnot. We had to make our own networks, by hand, out of floppies and acoustic modems, and we liked it. Why, there’s nothing like an invigorating morning hike with a box of 640K floppies (formatted to 800K) in your backpack, uphill in the snow both ways. Builds character, it does. Mumble mumble mumble get off my lawn!
Maybe from a consequentialist point of view, it’s best to use the word “God” when arguing my philosophy with theists and some other word when arguing my philosophy with atheists :) I’m thinking of “The Source”. However, there is a closely related construct which has a sort of personhood. I named it “The Asymptote”: I think that the universe (in the broadest possible sense of the word) contains a sequence of intelligences of unbounded increasing power and “The Asymptote” is a formal limit of this sequence. Loosely speaking, “The Asymptote” is just any intelligence vastly more powerful than our own. This idea comes from the observation that the known history of the universe can be regarded as a process of forming more and more elaborate forms of existence (cosmological structure formation → geological structure formation → biological evolution → sentient life → evolution of civilization) and therefore my guess is that there is something about “The Source” which guarantees an indefinite process of this kind. Some sort of fundamental Law of Evolution which should be complementary, in a way, to the Second Law of Thermodynamics.
This idea comes from the observation that the known history of the universe can be regarded as a process of forming more and more elaborate forms of existence (cosmological structure formation → geological structure formation → biological evolution → sentient life → evolution of civilization)
I disagree that they are necessarily more elaborate. I don’t think we (as humanity) fully appreciate the complexity of cosmological structures yet (and I don’t think we will until we get out there and take a closer look at them; we can only see coarse features from several lightyears away). And civilisation seems less elaborate than sentience, to me.
Well, civilization is a superstructure of sentience and is more elaborate in this sense (i.e. sentience + civilization is more elaborate than “wild” sentience).
I take your point. However, I can turn it about and point out that cosmological structures (a category that includes the planet Earth) must by the same token be more elaborate than geological structures.
Sure. Perhaps I chose careless wording, but when I said “cosmological structure formation → geological structure formation” my intent was to describe the process whereby a universe initially filled with homogeneous gas develops inhomogeneities, which condense to form galaxies, stars and planets, which in turn undergo further processes (galaxy collisions, supernova explosions, collisions within stellar systems, geologic / atmospheric processes within planets) that produce more and more complex structure over time.
I see.

Doesn’t that whole chain require the entropy of the universe to be negative? Or am I missing something?

You mean that this process has the appearance of decreasing entropy? In truth it doesn’t. For example, gravitational collapse (the basic mechanism of galaxy and star formation) decreases entropy by reducing the spatial spread of matter but increases entropy by heating matter up. Thus we end up with a total entropy gain. On the cosmic scale, I think the process is exploiting a sort of temperature difference between gravity and matter, namely that initially the temperature of matter was much higher than the Unruh temperature associated with the cosmological constant. Thus even though the initial state had little structure, it was very far from equilibrium and therefore very low in entropy compared to the final equilibrium it will reach.

Huh. I don’t think that I know enough physics to argue this point any further.
I think that the universe (in the broadest possible sense of the word) contains a sequence of intelligences of unbounded increasing power...
I strongly doubt the existence of any truly unbounded entity. Even a self-modifying transhuman AI would eventually run out of atoms to convert into computronium, and out of energy to power itself. Even if our Universe was infinite, the AI would be limited by the speed of light.
…and “The Asymptote” is a formal limit of this sequence.
Wait, so is it bounded or isn’t it? I’m not sure what you mean.
cosmological structure formation → geological structure formation → biological evolution → sentient life → evolution of civilization
There are plenty of planets where biological evolution has not happened, and most likely never will—take Mercury, for example, or Pluto (yes, yes, I know it’s not technically a planet). As far as we can tell, most if not all exoplanets we have detected so far are lifeless. What leads you to believe that biological evolution is inevitable?
I strongly doubt the existence of any truly unbounded entity. Even a self-modifying transhuman AI would eventually run out of atoms to convert into computronium, and out of energy to power itself. Even if our Universe was infinite, the AI would be limited by the speed of light.
In an infinite universe, the speed-of-light limit is not a problem. Surely it limits the speed of computing but any computation can be performed eventually. Of course you might argue that our universe is asymptotically de Sitter. This is true, but it is also probably metastable and can collapse into a universe with other properties. In http://arxiv.org/abs/1105.3796 the authors present the following line of reasoning: there must be a way to perform an infinite sequence of measurements, since otherwise the probabilities of quantum mechanics would be meaningless. In a similar vein, I speculate it must be possible to perform an infinite number of computations (or even all possible computations). The authors then go on to explore cosmological explanations of how that might be feasible.
Wait, so is it bounded or isn’t it? I’m not sure what you mean.
The sequence is unbounded in the sense that any possible intelligence is eventually superseded. The Asymptote is something akin to infinity. The Asymptote is “like an intelligence but not quite” in the same way infinity is “like a number but not quite”
There are plenty of planets where biological evolution has not happened, and most likely never will—take Mercury, for example, or Pluto (yes, yes, I know it’s not technically a planet). As far as we can tell, most if not all exoplanets we have detected so far are lifeless. What leads you to believe that biological evolution is inevitable?
Good point. Indeed it seems that life formation is a rare event. So I’m not sure whether there really is a “Law of Evolution” or we’re just seeing the anthropic principle at work. It would be interesting to understand how to distinguish these scenarios
In an infinite universe, the speed-of-light limit is not a problem. Surely it limits the speed of computing but any computation can be performed eventually.
Does this hold in a universe that is also expanding (like ours)? Such a scenario makes the ‘infinite’ property largely moot given that any point within has an ‘observable universe’ that is not infinite. That would seem to rule out computations of anything more complicated than what can be represented within the Hubble volume.
Yes, this was exactly my point regarding the universe being asymptotically de Sitter. The problem is that the universe is not merely expanding; it’s expanding with acceleration. But there are possible solutions to this, like escaping to an asymptotic region with a non-positive cosmological constant via false vacuum collapse.
In an infinite universe, the speed-of-light limit is not a problem. Surely it limits the speed of computing but any computation can be performed eventually.
wedrifid already replied better than I could; but I’d still like to add that “eventually” is a long time. For example, if the problem that you are solving is NP-complete, then you won’t be able to grow your hardware quickly enough to make any practical difference. In addition, if our universe is not eternal (which it most likely is not), then it makes no sense to talk about an “infinite series of computations”.
The sequence is unbounded in the sense that any possible intelligence is eventually superseded. The Asymptote is something akin to infinity. The Asymptote is “like an intelligence but not quite” in the same way infinity is “like a number but not quite”
Sorry, but I literally have no idea what this means. I don’t think that infinity is “like a number but not quite” at all, so the analogy doesn’t work for me.
It would be interesting to understand how to distinguish these scenarios
Well, so far, we have observed one instance of “evolution”, and thousands of instances of “no evolution”. I’d say the evidence is against the “Law of Evolution” so far...
In an infinite universe, the speed-of-light limit is not a problem. Surely it limits the speed of computing but any computation can be performed eventually.
wedrifid already replied better than I could; but I’d still like to add that “eventually” is a long time. For example, if the problem that you are solving is NP-complete, then you won’t be able to grow your hardware quickly enough to make any practical difference. In addition, if our universe is not eternal (which it most likely is not), then it makes no sense to talk about an “infinite series of computations”.
For algorithms with exponential complexity, you will have to wait for exponential time, yes. But eternity is enough time for everything. I think the universe is eternal. Even an asymptotically de Sitter region is eternal (but useless, since it reaches thermodynamic equilibrium); however, the universe contains other asymptotic regions. See http://arxiv.org/abs/1105.3796
Sorry, but I literally have no idea what this means. I don’t think that infinity is “like a number but not quite” at all, so the analogy doesn’t work for me.
A more formal definition is given in my comment http://lesswrong.com/lw/do9/welcome_to_less_wrong_july_2012/8kt7 . Less formally, infinity is “like a number but not quite” because many expressions into which a number can be meaningfully plugged also work for infinity. For example:

infinity > 5
infinity + 7 = infinity
infinity + infinity = infinity
infinity * 2 = infinity

However, not all such expressions make sense:

infinity - infinity = ?
infinity * 0 = ?

Formally, adding infinity to the field of real numbers doesn’t yield a field (or even a ring).
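As a side note for readers who want to poke at this, IEEE-754 floating-point infinity happens to mirror exactly this behavior, so the distinction can be checked in a few lines of Python (a minimal illustrative sketch, not part of the original exchange):

    import math

    inf = math.inf

    # Expressions that stay meaningful when a number is replaced by infinity:
    assert inf > 5
    assert inf + 7 == inf
    assert inf + inf == inf
    assert inf * 2 == inf

    # Expressions that stop making sense: IEEE-754 yields NaN ("not a number"),
    # mirroring the fact that the extended reals are not even a ring.
    assert math.isnan(inf - inf)
    assert math.isnan(inf * 0)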
Well, so far, we have observed one instance of “evolution”, and thousands of instances of “no evolution”. I’d say the evidence is against the “Law of Evolution” so far...
There is clearly at least one Great Filter somewhere between life creation (probably there is one exactly there) and the appearance of civilization with moderately supermodern technology: this follows from Fermi’s paradox. However, it feels as though there is a small number of such Great Filters, with nearly inevitable evolution between them. The real question is the expected number of instances of passing these Filters within the volume of a cosmological horizon. If this number is greater than 1, then the universe is more pro-evolution than what is anticipated from the anthropic principle alone. Fermi’s paradox puts an upper bound on this number, but I think this bound is much greater than 1.
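To make the quantitative shape of this argument explicit, here is a toy back-of-the-envelope sketch in Python; every number in it is an arbitrary placeholder (not an estimate anyone in the thread has made), so only the structure of the calculation matters:

    # Toy model of the "expected number of Filter passes per horizon volume" argument.
    # All values below are arbitrary placeholders, chosen only to illustrate the arithmetic.
    candidate_sites = 1e22                     # hypothetical: candidate sites within one cosmological horizon
    filter_pass_probs = [1e-10, 1e-3, 1e-2]    # hypothetical: per-site odds of passing each Great Filter

    expected_passes = candidate_sites
    for p in filter_pass_probs:
        expected_passes *= p

    # If expected_passes > 1, the universe is more "pro-evolution" than the anthropic
    # principle alone requires; Fermi's paradox only caps how many of these passes
    # also go on to become a civilization we could have detected by now.
    print(f"Expected number of sites passing all Filters: {expected_passes:.3g}")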
Why postulate that such a limit exists?

To really explain what I mean by the Asymptote, I need to explain another construct which I call “the Hypermind” (Kawoomba’s comment motivated me to invest in the terminology :) ).
What is identity? What makes you today the same person as you were yesterday? My conviction is that the essential relationship between the two is that the “you of today” shares the memories of “you of yesterday” and fully understands them. In a similar manner, if a hypothetical superintelligence Omega were to learn all of your memories and understand them (i.e., you) on the same level you understand yourself, Omega should be deemed a continuation of you, i.e. it has assimilated your identity into its own. Thus in the space of “moments of consciousness” in the universe we have a partial order where A < B means “B is a continuation of A” i.e. “B shares A’s memories and understands them”. The Hypermind hypothesis is that for any A and B in this space there is C s.t. C > A and C > B. This seems to me a likely hypothesis if you take into account that the Omega in the example above doesn’t have to exist in your physical vicinity but may exist anywhere in the (multi/)universe and have a simulation of you running on its laptop.
The Asymptote is then a formal limit of the Hypermind. That is, the semantics of “The Asymptote has property P” is “For any A there is B > A s.t. for any C > B, C has property P”. It is then an interesting problem to find non-trivial properties of the Asymptote. In particular, I suspect (without strong evidence yet) that the opposite of the Orthogonality Thesis is true, namely that the Asymptote has a well-defined preference / utility function
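The structure being described is a directed partial order of “moments of consciousness” together with an “eventually” quantifier. The sketch below (in Python, with an invented toy chain; none of it comes from the original comments) makes both definitions executable. Since no finite example can really satisfy the Hypermind hypothesis with strict inequalities, the toy uses an idealized unbounded chain indexed by the natural numbers and truncates the “for all C > B” check at a finite horizon:

    # Toy model: an unbounded chain of ever-more-powerful minds indexed 0, 1, 2, ...,
    # where "m is a continuation of n" simply means m > n.

    # Hypermind hypothesis: any two moments have a common continuation.
    # On this chain it holds trivially:
    def common_continuation(a: int, b: int) -> int:
        return max(a, b) + 1

    # "The Asymptote has property P": for any A there is B > A such that every C > B
    # has P. On a chain this reduces to "P holds from some index on", which is what
    # we check here, truncated at a finite horizon (the true definition concerns an
    # infinite sequence and cannot be verified exhaustively).
    def asymptote_has(prop, horizon: int = 10_000) -> bool:
        return any(all(prop(c) for c in range(b + 1, horizon)) for b in range(horizon - 1))

    # Purely illustrative "properties" of the mind with index n:
    def can_factor_huge_numbers(n: int) -> bool:
        return n >= 17            # eventually true, so it holds "at the Asymptote"

    def is_human(n: int) -> bool:
        return n < 3              # eventually false, so it fails "at the Asymptote"

    print(common_continuation(4, 11))               # 12
    print(asymptote_has(can_factor_huge_numbers))   # True
    print(asymptote_has(is_human))                  # False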
This seems like a rather simplistic view; see the counter-examples below.
My conviction is
“conviction” might not be a great term; maybe what you mean is a careful conclusion based on something.
that the essential relationship between the two is that the “you of today” shares the memories of “you of yesterday”
except that we forget most of them, and that our memories of the same event change in time, and often are completely fictional.
and fully understands them.
Not sure what you mean by understanding here; feel free to define it better. For example, we often “understand” our memories differently at different times in our lives.
Thus in the space of “moments of consciousness” in the universe we have a partial order where A < B means “B is a continuation of A” i.e. “B shares A’s memories and understands them”
So, if you forgot what you had for breakfast the other day, you today are no longer a continuation of you from yesterday?
“The Asymptote has property P” is “For any A there is B > A s.t. for any C > B, C has property P”
That’s a rather non-standard definition. If anything, it’s closer to monotonicity than to accumulation. If you mean the limit point, then you ought to define what you mean by a neighborhood.
To sum up, your notion of Asymptote needs a lot more fleshing out before it starts making sense.
the essential relationship between the two is that the “you of today” shares the memories of “you of yesterday”
except that we forget most of them, and that our memories of the same event change in time, and often are completely fictional.
Good point. The description I gave so far is just a first approximation. In truth, memory is far from ideal. However, if we assign weight to memories by their potential impact on our thinking and decision making, then I think we would find that most of the memories are preserved, at least on short time scales. So, from my point of view, the “you of today” is only a partial continuation of the “you of yesterday”. However, this doesn’t essentially change the construction of the Hypermind. It is possible to refine the hypothesis by stating that for every two “pieces of knowledge” a and b, there exists a “moment of consciousness” C s.t. C contains a and b.
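One way to picture “weight memories by their impact, then ask how much survives” is a weighted-overlap score. The sketch below is a purely hypothetical toy (the memories and weights are invented), but it shows why forgetting yesterday’s breakfast barely dents the continuation relation:

    # Toy "partial continuation" score: the impact-weighted fraction of A's memories
    # that survive in B. All memories and weights below are invented placeholders.
    def continuation_score(memories_a: dict, memories_b: set) -> float:
        total = sum(memories_a.values())
        preserved = sum(w for m, w in memories_a.items() if m in memories_b)
        return preserved / total if total else 1.0

    you_yesterday = {
        "native language": 10.0,              # large impact on thinking and decisions
        "career plans": 5.0,
        "what you had for breakfast": 0.01,   # negligible impact
    }
    you_today = {"native language", "career plans"}   # breakfast forgotten

    print(continuation_score(you_yesterday, you_today))   # ~0.999: still almost a full continuation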
“The Asymptote has property P” is “For any A there is B > A s.t. for any C > B, C has property P”
That’s a rather non-standard definition. If anything, it’s closer to monotonicity than to accumulation. If you mean the limit point, then you ought to define what you mean by a neighborhood.
Actually, I overcomplicated the definition. The definition should read “There exists A s.t. for any B > A, B has property P”. The neighbourhoods are sets of the form {B | B > A}. This form of the definition implies the previous form, using the assumption that for any A and B there is C with C > A and C > B.
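For readers who like it spelled out, the two formulations and the claimed implication can be written compactly in standard notation (this just restates the definitions above; nothing is assumed beyond the directedness hypothesis already stated):

    % Corrected ("new") form:
    \exists A \;\; \forall B > A : \; P(B)

    % Original ("old") form:
    \forall A \;\; \exists B > A \;\; \forall C > B : \; P(C)

    % Directedness (the Hypermind hypothesis):
    \forall A, B \;\; \exists C : \; C > A \;\wedge\; C > B

    % New implies old: let A_0 witness the new form and let A be arbitrary.
    % By directedness pick B with B > A and B > A_0; then every C > B satisfies
    % C > A_0 by transitivity, hence P(C), so B witnesses the old form for A.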
Hmm, it seems like your definition of Asymptote is nearly that of a limit ordinal.