This is a very interesting part of an interview with Freeman Dyson where he talks about how computation could go on forever even if the universe faces a heat death scenario. https://www.youtube.com/watch?v=3qo4n2ZYP7Y
In the same vein, I would highly recommend John Maynard Smith’s “Evolution and the Theory of Games”. It has many well-motivated examples of game theory in biology, written by a real biologist. The later chapters get dense, but the first half is readable with a basic knowledge of calculus (which was in fact my background when I first picked up this book).
CellBioGuy, all your astrobiology posts are great; I’d be happy to read all of those. This may be off the astrobiology topic, but I would love to see a post with your opinion on the foom question. For example, do you agree with Gwern’s post about there being no complexity limitations preventing runaway self-improving agents?
Still reading. Minor nitpick: for point 2 you don’t want to say NP (since P is contained in NP). It is the NP-hard problems that people would say can’t be solved except for small instances (which, as you point out, is not a reasonable assumption).
So your first and second points make sense to me; together they make up the nominal interest rate. What I don’t understand is your point about growth. The price of a stock should be determined by the (risk-)adjusted future returns of the company, right? The growth you speak of should already be accounted for in our models of those future returns. So if the price goes up, doesn’t that mean the models were underestimating future returns?
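To make what I mean concrete, here is a minimal sketch (not anyone’s actual model, just the textbook constant-growth formula as a stand-in) of how expected growth is already baked into today’s price:

```python
# Toy constant-growth (Gordon) pricing: P = D / (r - g), valid only for r > g.
# If the expected growth rate g is already in the model, it is already in
# today's price, so growth by itself shouldn't make prices drift upward.
def price(next_dividend: float, discount_rate: float, growth: float) -> float:
    assert discount_rate > growth, "formula only valid for r > g"
    return next_dividend / (discount_rate - growth)

print(price(next_dividend=5.0, discount_rate=0.08, growth=0.03))  # 100.0
print(price(next_dividend=5.0, discount_rate=0.08, growth=0.05))  # ~166.7
```

Under this picture the price only moves when the inputs (expected dividends, the discount rate, or the growth rate) turn out to have been wrong, which is exactly why I’m asking about systematic underestimation.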
People in finance tend to believe (reasonably, I think) that the stock market trends upward. I believe they mean it trends upward even after you account for the value of the risk you take on by buying stock in a company (i.e. being in the stock market is not just selling insurance). So how does this mesh with the general belief that the market is at least fairly efficient? Why would we systematically underestimate the future returns of companies?
About 20⁄50; I don’t know if that can be unambiguously converted to diopters. I measure my performance by sitting at a constant 20 feet away, and when I am over 80% correct I shrink the font on the chart a little. I can currently read a slightly smaller font than the one corresponding to 20⁄50 on an eye chart.
Does anyone know of a good program for eye training? I would like to try to become a little less near-sighted by straining to make out things at the edge of my range of good vision. I know near-sightedness means my eyeball is elongated, but I am hoping my brain can fix a bit of the distortion in software. Currently I am using randomly generated printed-out eye charts, and I have gotten a bit better over time, but printing out the charts is tedious.
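For what it’s worth, this is the kind of throwaway chart generator I mean (a sketch assuming Pillow is installed; the font path is just an example and would need to point at a real TTF file on your system):

```python
# Generate a random acuity-style chart as an image, one shrinking row per line.
import random
from PIL import Image, ImageDraw, ImageFont

LETTERS = "CDHKNORSVZ"  # letters commonly used on acuity charts

img = Image.new("RGB", (800, 1000), "white")
draw = ImageDraw.Draw(img)

y = 50
for size in (120, 90, 70, 50, 35, 25, 18):  # progressively smaller rows
    # Assumed font path -- replace with any TTF font available locally.
    font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", size)
    row = " ".join(random.choice(LETTERS) for _ in range(5))
    draw.text((50, y), row, font=font, fill="black")
    y += size + 30

img.save("eye_chart.png")
```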
This is a really fascinating idea, particularly the aspect that we can influence the likelihood we are in a simulation by making it more likely that simulations happen.
To boil it down to a simple thought experiment: suppose I am in the future, where we have a ton of computing power, and I know something bad will happen tomorrow (say I’ll be fired) barring some 1/1000-likelihood quantum event. No problem: I’ll just make millions of simulations of the world with me in my current state, so that tomorrow the 1/1000 event happens and I’m saved, since I’m almost certainly in one of the simulations I’m about to make!
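A back-of-the-envelope version of the counting, assuming naive observer counting over the two branches (the numbers are just the ones from the thought experiment):

```python
p = 1 / 1000      # chance the lucky quantum event actually happens
M = 1_000_000     # simulations I commit to running if it does

# One "real" copy of my current state always exists; the M simulated copies
# exist only in the branch where the event happens.
measure_saved = p * (1 + M)       # copies that go on to see the event happen
measure_not_saved = (1 - p) * 1   # copies that don't

print(measure_saved / (measure_saved + measure_not_saved))  # roughly 0.999
```

On that (very debatable) way of counting, committing to run the simulations pushes my expectation of being saved from 1/1000 to nearly 1, which is exactly what makes the idea feel suspicious to me.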
I agree with your sentiment. I am hoping, though, that one can formally define what a computation is, given a physical system. Perhaps you are on to something with the causal requirement, but I think this is hard to pin down precisely. The noise is still being caused by the previous state of the system, so how can we sensibly talk about cause in a physical system? It seems like we would be more interested in ‘causes’ associated with more agent-like objects, such as an engine, than with formless things like the previous state of a cloud of gas. Actually, I think Caspar’s article was trying to formalize something like this, but I don’t understand it that well: http://lesswrong.com/r/discussion/lw/msg/publication_on_formalizing_preference/
Take the thermal noise generated in part of the circuit. By setting a threshold we can interpret it as a sequence 110101011, etc. Now if this sequence were enormous, we would eventually have a pixel-by-pixel description of any picture, a letter-by-letter description of every book, a state-after-state description of the tape of any Turing machine, etc. (basically a Library of Babel situation). Of course we would need a crazy long sequence for this, but there is similar noise associated with the motion of every atom in the circuit; likewise, the noise is far more complex if we don’t truncate it to 0s and 1s; and finally, there are many, many encodings of the resulting strings (does 110 represent the letter A, 0101 a blue pixel, and so on).
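A toy illustration of the thresholding I have in mind, with simulated Gaussian noise standing in for the thermal noise (the 8-bit “encoding” of the letter A is just one arbitrary choice among the many encodings I mentioned):

```python
import random

random.seed(0)

# Threshold a stream of simulated "thermal noise" samples into bits.
noise = [random.gauss(0, 1) for _ in range(10_000)]
bits = "".join("1" if x > 0 else "0" for x in noise)

# Any short pattern shows up somewhere in a long enough stream --
# the Library of Babel point.
pattern = format(ord("A"), "08b")
print(pattern, pattern in bits)
```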
If I chose ahead of time the procedure for how the thermal noise fluctuates, and I seed in two instances of noise I think of as representing 2 and 3, and after a while it outputs noise I think of as 5, then I am okay calling that a computation. But why should my naming of the noise and dictating how the system develops be required for computation to occur?
It is interesting to compare the Less Wrong and Wikipedia articles on recursive self-improvement: http://wiki.lesswrong.com/wiki/Recursive_self-improvement https://en.wikipedia.org/wiki/Recursive_self-improvement I still find the anti-foom arguments based on diminishing returns in the Wikipedia article compelling. Has there been any progress on modelling recursively self-improving systems beyond what we can find in the foom debate?
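The diminishing-returns point can at least be caricatured with a toy model (my own sketch, not something from either article), where capability feeds back into its own growth rate and the exponent decides whether you get a foom:

```python
# Toy model: capability I improves itself at rate dI/dt = c * I**alpha.
# alpha > 1 -> finite-time blow-up ("foom"); alpha <= 1 -> merely
# exponential or polynomial growth (the diminishing-returns regime).
def simulate(alpha, c=1.0, I0=1.0, dt=1e-3, t_max=10.0, cap=1e12):
    I, t = I0, 0.0
    while t < t_max and I < cap:
        I += c * I**alpha * dt  # crude Euler step
        t += dt
    return t, I

for alpha in (0.5, 1.0, 1.5):
    t, I = simulate(alpha)
    print(f"alpha={alpha}: stopped at t={t:.2f} with I={I:.3g}")
```

With alpha below 1 you get polynomial growth, at 1 exponential growth, and above 1 a finite-time blow-up; the anti-foom argument is essentially the claim that the effective exponent for self-improvement is below 1.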
If there really are infinite instances of conscious computations, then I don’t think it is unreasonable to believe that there is no notion of more/less measure, and that we simply have no reason at all to be more surprised to be living in one type of simulation than another. I guess my interest in the question was whether there is any way to avoid throwing the baby out with the bathwater, by having a reasonable, more restrictive notion of what a computation is.
My question is simply: do we have any reason to believe that the uncertainty introduced by quantum mechanics will preclude the level of precision with which two agents have to model each other in order to engage in acausal trade?
What is a computation? Intuitively some (say binary) states of the physical world are changed, voltage gates switched, rocks moved around (https://xkcd.com/505/), whatever.
Now, in general, if these physical changes were done with some intention, as in my CPU or the guy moving the rocks in the xkcd comic, then I think of this as a computation, and consequently I would care, for example, about whether the computation I performed simulated a conscious entity. However, surely my or my computer’s intention can’t be what makes the physical state changes count as a computation. But then how do we get around the slippery slope where everything is computing everything imaginable? There are billions of states I can interpret as 1s and 0s which get transformed in countless different ways every time I stir my coffee. Even worse, in quantum mechanics the state of a point is given by a potentially infinitely wiggly function. What stops me from interpreting all of this as computation which, under some encoding, gives rise to countless Boltzmann-brain-type conscious entities and simulated worlds?
Yes, I understand the point of acausal trading. The point of my question was to speculate on how likely it is that quantum mechanics prohibits modeling accurate enough to make acausal trading actually work. My intuition is based on the fact that, in general, faster-than-light transmission of information is prohibited. For example, even though entangled particles update on each other’s state when they are outside each other’s light cones, it is known that it is not possible to transmit information faster than light using this fact.
Now, does mutually enhancing each other’s utility count as information? I don’t think so. But my instinct is that acausal trade protocols will not be possible, due to the level of modelling required and the noise introduced by quantum mechanics.
Do we know whether quantum mechanics could rule out acausal trade between partners outside each other’s light cones? Perhaps it is impossible to model someone so far away precisely enough to get a utility gain out of an acausal trade? I started thinking about this after reading the wiki article on the ‘Free will theorem’: https://en.wikipedia.org/wiki/Free_will_theorem .
Where can I find the most coherent anti-FOOM argument (outside of the FOOM debate)? [That is, I’m looking for arguments for the possibility of not having an intelligence explosion if we reach near-human-level AI; the other side is pretty well covered on LW.]
If we obtained a good understanding of the beginning of life and found that the odds of life occurring at some point in our universe were one in a million, then what exactly would follow from that? Sure, the Fermi paradox would be settled, but would this give credence to multiverse/big-world theories, or does the fact that the information is anthropically biased tell us nothing at all? Finally, if we don’t have to suppose a multiverse to account for a vanishingly small probability of life, then wouldn’t it be surprising if there were not a lot of hugely improbable jumps in the formation of intelligent life?
I believe Dyson is saying there could indeed be an infinite amount. Here is a Wikipedia article about it: https://en.wikipedia.org/wiki/Dyson%27s_eternal_intelligence and the article itself: http://www.aleph.se/Trans/Global/Omega/dyson.txt