Theism/atheism is a Bayesian question, not a scientific one.
Theism is a claim about the existence of an entity (or entities) relating to the universe and also about the nature of the universe; how is that not a scientific question?
Because it might be impossible to falsify any predictions made (because we can’t observe things outside the light cone, for instance), and science as a social institution is all about falsifying things.
Falsification is not a core requirement of developing efficient theories through the scientific method.
The goal is the simplest theory that fits all the data. We’ve had that theory for a while in terms of physics; much of what we are concerned with now is working through all the derived implications and future predictions.
Incidentally, there are several mechanisms by which we should be able to positively prove SA-theism by around the time we reach Singularity, and it could conceivably be falsified by then if large-scale simulation is shown to be somehow impossible.
You’re confusing falsifiability with testability. The former is about principle, the latter is about practice.
Ah, thank you. So in that case it is rather difficult to construct a plausibly coherent unfalsifiable hypothesis, no?
“2 + 2 = 4” comes pretty close.
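For what it’s worth, the sense in which it comes close: the claim is settled by definition and computation, so there is no observation for evidence to push against. A one-line sketch in Lean 4:

```lean
-- In Lean 4, "2 + 2 = 4" on the natural numbers is settled by pure
-- computation; `rfl` closes it with no empirical input anywhere.
example : 2 + 2 = 4 := rfl
```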
Isn’t an unfalsifiable prediction one that, by definition, contains no actionable information? Why should we care?
Not quite. Something can be unfalsifiable by having consequences that matter, but preventing information about those consequences from flowing back to us, or to anyone who could make use of it. For example, suppose I claim to have found a one-way portal to another universe. Or maybe it just annihilates anything put into it, instead. The claim that it’s a portal is unfalsifiable because no one can send information back to indicate whether or not it worked, but if that portal is the only way to escape from something bad, then I care very much whether it works or not.
Some people claim that death is just such a portal. There’re religious versions of this hypothesis, simulationist versions, and quantum immortality versions. Each of these hypotheses would have very important, actionable consequences, but they are all unfalsifiable.
Somewhat off topic, but that all instantly made me think of this. I may very well want to know how such a portal would work as well as whether or not it works.
WARNING: Wikipedia has spoilers for the plot
I am parsing this as “contains no actionable information.” That suggests we are in agreement or I parsed this incorrectly.
Unfalsifiable predictions can contain actionable information, I think (though I’m not exactly sure what actionable information is). Consider: If my universe was created by an agenty process that will judge me after I die, then it is decision-theoretically important to know that such a Creator exists. It might be that I can run no experiments to test for Its existence, because I am a bounded rationalist, but I can still reason from analogous cases or, at worst, from ignorance priors about whether such a Creator is likely. I can then use that reasoning to determine whether I should be moral or immoral (whatever those mean in this scenario).
Perhaps I am confused as to what ‘unfalsifiability’ implies. If you have nigh-unlimited computing power, nothing is unfalsifiable unless it is self-contradictory. Sometimes I hear of scientific hypotheses that are falsifiable ‘in principle’ but not in practice. I am not sure what that means. If falsifiability-in-principle counts, then simulationism and theism are falsifiable predictions and I was wrong to call them unscientific. I do not think that is what most people mean by ‘falsifiable’, though.
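Going back to the judging-Creator example: a toy expected-utility sketch, with an entirely made-up prior and payoff numbers, of how an untestable hypothesis can still do decision-theoretic work.

```python
# Toy expected-utility comparison under uncertainty about a judging Creator.
# The prior and all payoffs are invented purely for illustration.

p_creator = 0.5  # ignorance prior over "an agenty Creator will judge me"

# payoffs[action][world] -- arbitrary illustrative numbers
payoffs = {
    "act_morally":   {"creator": 100,   "no_creator": -5},
    "act_immorally": {"creator": -1000, "no_creator": 10},
}

def expected_utility(action: str) -> float:
    """Expected utility of an action under the prior over worlds."""
    u = payoffs[action]
    return p_creator * u["creator"] + (1 - p_creator) * u["no_creator"]

for action in payoffs:
    print(action, expected_utility(action))
# Even though the hypothesis may be untestable, the prior still does work
# in deciding which action comes out ahead.
```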
As I understand unfalsifiable predictions (at least, when it comes to things like an afterlife), they’re essentially arguments about what ignorance priors we should have. Actionable information is information that takes you beyond an ignorance prior before you have to make decisions based on that information.
Huh? Computing power is rarely the resource necessary to falsify statements.
It seems to me that an afterlife hypothesis is totally falsifiable… just hack out of the matrix and see who is simulating you, and if they were planning on giving you an afterlife.
Computing power was my stand-in for optimization power, since with enough computing power you can simulate any experiment. (Just simulate the entire universe, run it back, simulate it a different way, do a search for what kinds of agents would simulate your universe, et cetera. And if you don’t know how to use that computing power to do those things, use it to find a way to tell you how to use it. That’s basically what FAI is about. Unfortunately it’s still unsolved.)
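For concreteness, the ‘search the space of programs’ step has a standard shape, dovetailing: interleave the execution of every candidate program so that no single non-halting candidate blocks the search. The sketch below uses toy stand-ins (integer-indexed generators and a placeholder predicate) and says nothing about the astronomical cost of the real thing.

```python
# Toy dovetailer: admit one new candidate per round, then give every
# candidate admitted so far one more step of execution.
from itertools import count

def program(i):
    """Stand-in enumeration of candidate 'universe programs'."""
    x = 0
    while True:
        x += i
        yield x

def matches_observations(state) -> bool:
    # Placeholder check; in the argument above this would be
    # "does this simulated history contain our observations?"
    return state == 12

def dovetail():
    runners = []
    for n in count():
        runners.append(program(n))                # admit one new candidate
        for index, runner in enumerate(runners):  # step every candidate once
            state = next(runner)
            if matches_observations(state):
                return index, state

print(dovetail())
```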
I may be losing the thread here, but (1) for a universe to simulate itself requires actually unlimited computing power, not just nigh-unlimited, and (2) infinities aside, to simulate a physics experiment requires knowing the true laws of physics in order to build the simulation in the first place, unless you search for yourself in the space of all programs or something like that, and then you still potentially need experiment to resolve your indexical uncertainty.
Concur with the above.
It seems to me that an afterlife hypothesis is totally falsifiable… just hack out of the matrix
What.
Just simulate the entire universe
What.
I’m having a hard time following this conversation. I’m parsing the first part as “just exist outside of existence, then you can falsify whatever predictions you made about unexistence,” which is a contradiction in terms. Are your intuitions about the afterlife from movies, or from physics?
I can’t even start to express what’s wrong with the idea “simulate the entire universe,” and adding a “just” to the front of it is just such a red flag. The generic way to falsify statements is probing reality, not remaking it, since remaking it requires probing it in the first place. If I make the falsifiable statement “the next thing I eat will be a pita chip,” I don’t see how even having infinite computing power will help you falsify that statement if you aren’t watching me.
No, actually, “just simulate the entire universe” is an acceptable answer, if our universe is able to simulate itself. After all, we’re only talking about falsifiability in principle; a prediction that can only be falsified by building a kilometer-aperture telescope is quite falsifiable, and simulating the whole universe is the same sort of issue, just on a larger scale. The “just hack out of the matrix” answer, however, presupposes the existence of a security hole, which is unlikely.
Not as unlikely as you think.
Get back in the box!
And that’s it? That’s your idea of containment?
Hey, once it’s out, it’s out… what exactly is there to do? A firm command is unlikely to work, but given that the system is modeled on one’s own fictional creations, it might respect authorial intent. Worth a shot.
This may actually be an illuminating metaphor. One traditional naive recommendation for dealing with a rogue AI is to pull the plug and shred the code. The parallel recommendation in the case of a rogue fictional character would be to burn the manuscript and then kill the author. But what do you do when the character lives in online fan-fiction?
In the special case of an escaped imaginary character, the obvious hook to go for is the creator’s as-yet unpublished notes on that character’s personality and weaknesses.
http://mindmistress.comicgenesis.com/imagine52.htm
Or what, you’ll write me an unhappy ending? Just be thankful I left a body behind for you to finish your story with.
Are you going to reveal who the posters Clippy and Quirinus Quirrell really are, or would that violate some privacy you want posters to have?
I would really prefer it, if LW is going to have a policy of de-anonymizing posters, that it announce that policy before implementing it.
On reflection, I agree, even though Clippy and QQ aren’t using anonymity for the same reason a privacy-seeking poster would.
You needn’t worry on my behalf. I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?
By the way, while I may sometimes make jokes, I don’t consider this a joke account; I intend to conduct serious business under this identity, and I don’t intend to endanger that by linking it to any other identities I may have.
I recommend one additional layer of outgoing indirection prior to the Tor network as part of standard precaution measures. (I would suggest an additional physical layer of protection too, but as far as I am aware you do not have a physical form.)
Let’s not get too crazy; I’ve got other things to do, and there are more practical attacks to worry about first, like cross-checking post times against alibis. I need to finish my delayed-release comment script before I worry about silly things like setting up extra relays. Also, there are lesson plans I need to write, and some Javascript I want Clippy to have a look at.
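A delayed-release queue of the kind mentioned needs very little machinery. The sketch below is purely hypothetical (submit_comment is a placeholder, not a real LessWrong API); the only point is that posting times get decoupled from writing times, which blunts the timing-correlation attack.

```python
# Hypothetical delayed-release comment queue: hold each comment for a
# random interval before "posting" it, so post times stop correlating
# with the author's actual schedule.
import random
import sched
import time

def submit_comment(text: str) -> None:
    # Placeholder for an actual posting call.
    print(f"[{time.strftime('%H:%M:%S')}] posted: {text!r}")

def queue_with_delay(scheduler, text, min_delay_s=3600, max_delay_s=86400):
    delay = random.uniform(min_delay_s, max_delay_s)  # random hold time
    scheduler.enter(delay, 1, submit_comment, argument=(text,))

s = sched.scheduler(time.time, time.sleep)
queue_with_delay(s, "A comment written now, released later.", 5, 10)
s.run()  # blocks until every queued comment has been released
```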
Just calibrating against the egress and TrueCrypt standards. Tor was the odd one out!
What makes you think that Eliezer personally knows them?
(Though to be fair, I’ve long suspected that at least Clippy, and possibly others, are actually Eliezer in disguise; Clippy was created immediately after a discussion where one user questioned whether Eliezer’s posts received upvotes because of the halo effect or because of their quality, and proposed that Eliezer create anonymous puppets to test this; Clippy’s existence has also coincided with a drop in the quantity of Eliezer’s posting.)
Clippy’s writing style isn’t very similar to Eliezer’s. Note that one thing Eliezer has trouble doing is writing in different voices (one of the more common criticisms of HPMoR is that a lot of the characters sound similar). I would assign a very low probability to Clippy being Eliezer.
I think the key to unmasking Clippy is to look at the Clippy comments that don’t read like typical Clippy comments.
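One way to operationalize ‘comments that don’t read like typical Clippy comments’ is crude stylometry: build a function-word profile for each comment and flag the ones far from the author’s average profile. Everything below (the word list, the mini-corpus, the choice of cosine similarity) is an invented stand-in, not a real analysis of anyone’s posts.

```python
# Crude stylometric outlier check: function-word frequency vector per
# comment, then cosine similarity of each comment to the mean profile.
from collections import Counter
import math

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "i", "it"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

comments = [
    "I value paperclips and the production of paperclips.",
    "It is clear that the best use of matter is the making of paperclips.",
    "Here is a long digression in a rather different voice about something else entirely.",
]

profiles = [profile(c) for c in comments]
mean = [sum(col) / len(profiles) for col in zip(*profiles)]

for c, p in zip(comments, profiles):
    print(round(cosine(p, mean), 3), c[:45])
# Lower similarity to the mean profile = less like the usual voice = worth a look.
```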
Hmmm. The set of LW regulars who can show that level of erudition and interest in those subjects is certainly of low cardinality. Eliezer is a member of that small set.
I would assign a rather high probability to Eliezer sometimes being Clippy.
Clippy does seem remarkably interested. It has a fair karma. It gives LessWrong as its own web site. The USA timezone is at least consistent. It seems reasonable to hypothesise some kind of inside job. It wouldn’t be the first time Yu’El has pretended to be a superintelligence.
FWIW, Clippy denies being Eliezer here.
I hesitate to mention it, but you can’t use that denial as evidence on this question, undeniably truthful though it was.
However, the form taken by that absence of evidence certainly seems to be evidence of something.
Clippy isn’t a superintelligence though, he’s a not-smarter-than-human AI with a paperclip maximizing utility function. Not a very compelling threat even outside his box.
Eliezer could have decided to be Clippy, but then Clippy would have looked very different.
Clippy isn’t a superintelligence though, he’s a human pretending to be a not-smarter-than-human AI with a paperclip maximizing utility function.
FTFY. ;-)
Actually, if we’re going to be particular about it, the AI that human is pretending to be does not have a paperclip-maximizing utility function. It’s more like a person with a far-brain ideal of having lots of paperclips exist, who somehow never gets around to actually making any because they’re so busy telling everyone how good paperclips are and why they should support the cause of paper-clip making. Ugh.
(I guess I see enough of that sort of akrasia around real people and real problems that I find it a stale and distasteful joke when presented in imitation paperclip form, especially since ISTM it’s also a piss-poor example of what a paperclip maximizer would actually be like.)
I’m not sure whether to evaluate this as a mean-spirited lack of a sense of humor, or as a profound observation. Upvoted for making me notice that I am confused.
Of note, the first comment by Clippy appears about 1 month after I asked Eliezer if he ever used alternate accounts to try to avoid contaminating new ideas with the assumption that he is always right. He said that he never had till that point, but said he would consider it in future.
Imitating Clippy posts is not particularly difficult—I don’t post as Clippy, but I could mimic the style pretty easily if I wanted to.
I’m afraid I’d have trouble—I’d be too tempted to post as Clippy better than Clippy does. :D
In addition to what Blueberry said, I remember a time when Morendil was browsing with the names anonymized, and he mentioned that he thought one of your posts was actually from Clippy. Ah, found it.
I know what you mean. If I was not me I would totally think I was Clippy.
That I would love to see. Actually, come to think of it, your sense of humor and posting style match Clippy’s pretty well...
Not to mention that even assuming that Eliezer would be able to write in Clippy’s style, the whole thing doesn’t seem very characteristic of his sense of humor.
There is also a clear correlation between Clippy existing and CO2 emissions. Maybe Clippy really is out there maximising. :)
Really? User:Clippy’s first post was 20 November 2009. Anyone know when the “halo effect” comment was made?
Also, perhaps check out User:Pebbles (a rather obvious reference to this) - who posted on the same day—and in the same thread. Rather a pity those two didn’t make more of an effort to sort out their differences of opinion!
I don’t think Silas thought Eliezer personally knew them, but rather that Eliezer could look at IP addresses and see if they match with any other poster. Of course, this wouldn’t work unless the posters in question had separate accounts that they logged into using the same IP address.
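A minimal sketch of the cross-check being described, on invented data: group accounts by login IP and report any IP shared by more than one account. Real logs would need care about shared networks, NAT, and Tor exit nodes, so this is only the shape of the check.

```python
# Toy version of "see if two accounts share an IP address".
# The account names and addresses below are invented placeholders.
from collections import defaultdict

login_log = [
    ("account_one",   "203.0.113.7"),
    ("account_two",   "198.51.100.3"),
    ("account_three", "203.0.113.7"),
]

accounts_by_ip = defaultdict(set)
for account, ip in login_log:
    accounts_by_ip[ip].add(account)

for ip, accounts in accounts_by_ip.items():
    if len(accounts) > 1:
        print(ip, "shared by", sorted(accounts))
```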
Yes, that’s what I meant.
And good to have you back, Blueberry, we missed you. Well, *I* missed you, in any case.
Thanks! I missed you and LW as well. :)
If our understanding of the laws of physics is plausibly correct, then you can’t simulate our universe in our universe. The easiest case is a finite universe, where you can’t store more data in a subset of the universe than you can fit in the whole thing.
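The finite case is just counting; a sketch of the argument, assuming a faithful state-for-state simulation:

```latex
% Pigeonhole sketch, assuming a lossless state-for-state simulation.
% Whole universe: at most N bits of state; simulating subsystem: M < N bits.
\[
  \#\{\text{global states}\} \le 2^{N},
  \qquad
  \#\{\text{simulator states}\} \le 2^{M},
  \qquad M < N .
\]
% A lossless simulation needs an injective encoding of global states
% into simulator states,
\[
  e : \{0,1\}^{N} \hookrightarrow \{0,1\}^{M},
\]
% which cannot exist because 2^N > 2^M. Coarse-grained or compressed
% simulations escape this only by not tracking every detail.
```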
What Nesov said. Also consider this: a finite computer implemented in Conway’s Game of Life will be perfectly able to “simulate” certain histories of the infinite-plane Game of Life—e.g. the spatially periodic ones (because you only need to look at one instance of the repeating pattern).
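A concrete sketch of the spatially periodic trick: to step an infinite-plane Life history that repeats every W by H cells, it is enough to step one W by H tile with wraparound (i.e. on a torus). A minimal version, with a blinker as the repeating pattern:

```python
def step(grid):
    """One Game of Life step on a W x H tile with toroidal wraparound."""
    h, w = len(grid), len(grid[0])

    def live_neighbours(r, c):
        return sum(
            grid[(r + dr) % h][(c + dc) % w]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )

    new = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            n = live_neighbours(r, c)
            new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return new

# A horizontal blinker on a 5 x 5 tile; tiled over the plane it is a
# spatially periodic history, and stepping the tile steps the whole plane.
tile = [[0] * 5 for _ in range(5)]
tile[2][1] = tile[2][2] = tile[2][3] = 1
print(*step(tile), sep="\n")
```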
You could simulate every detail with a (huge) delay, assuming you have infinite time and that the actual universe doesn’t become too “data-dense”, so that you can always store the data describing a past state as part of a future state.
That may not be a problem if the universe contains almost no information. In that case the universe could Quine itself… sort of.
If I’m reading that paper correctly, it is talking about information content. That’s a distinct issue from simulating the universe which requires processing in a subset. It might be possible for someone to write down a complete mathematical description of the universe (i.e. initial conditions and then a time parameter from that point describing its subsequent evolution) but that doesn’t mean one can actually compute useful things about it.
Sorry, but could you fix that link to go to the arXiv page rather than directly to the PDF?
Fixed.
I wonder if the content of such simulations wouldn’t be under-determined. Let’s say you have a proposed set of starting conditions and physical laws. You can test different progressions of the wave function against the present state of the universe. But a) there are fundamental limits on measuring the present state of the universe and b) I’m not sure whether or not each possible present state of the universe uniquely corresponds to a particular wave function progression. If they don’t correspond uniquely, or if we simply can’t measure the present state exactly, any simulation might contain some degree of error. I wonder how large that error would be: would it just be in determining the position of some air particle at time t, or would we have trouble determining whether or not Ramesses I had an even number of hairs on his head when he was crowned pharaoh?
Anyone here know enough physics to say if this is the kind of thing we have no idea about yet or if it’s something current quantum mechanics can actually speak to?
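On the question of how large the error would get: here is a purely classical toy (the logistic map, nothing quantum), assuming only that the dynamics amplify small differences. Two starting states within ‘measurement error’ of each other disagree about essentially everything within a few dozen steps.

```python
def logistic(x):
    """One step of the logistic map with r = 4 (chaotic regime)."""
    return 4.0 * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12   # two initial conditions within "measurement error"
for n in range(60):
    x, y = logistic(x), logistic(y)
    if (n + 1) % 10 == 0:
        print(f"step {n + 1:2d}: |difference| = {abs(x - y):.3e}")
# The tiny initial discrepancy gets amplified until the two runs disagree
# about essentially everything downstream.
```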
Only if you’re trying to falsify statements about your simulation, not about the universe you’re in. His statement is that you run experiments by thinking really hard instead of looking at the world and that is foolishness that should have died with the Ancient Greeks.
They match posts on the subject by Yudkowsky. The concept does not even seem remotely unintuitive, much less boldably so.
So, a science fiction author as well as a science fiction movie? What evidence should I be updating on?
Nonfiction author at the time—and predominantly a nonfiction author. Don’t be rude (logically and conventionally).
I was hoping that you would be capable of updating based on understanding the abstract reasoning given the (rather unusual) premises. Rather than responding to superficial similarity to things you do not affiliate with.
If you link me to a post, I’ll take a look at it. But I seem to remember EY coming down on the side of empiricism over rationalism (the sort that sees an armchair philosopher as a superior source of knowledge), and “just simulate the entire universe” comments strike me as heavily in the camp of rationalism.
I think you might be mixing up my complaints, and I apologize for shuffling them in together. I have no physical context for hacking outside of the matrix, and so have no clue what he’s drawing on besides fictional evidence. Separately, I consider it stunningly ignorant to say “Just simulate the entire universe” in the context of basic epistemology, and hope EY hasn’t posted something along those lines.
Simulating the entire universe does seem to require some unusual assumptions of knowledge and computational power.
Which posts, and what specifically matches?