Obviously the key lies in their definition of “ideal explanation.”
That seems non-obvious to me. It’s highly problematic, sure—but not “key”. “Key” is “adequate range of data”. That cannot be an objective measure. It occurs to me that Bayes’ theorem has no such problem; it simply takes additional input and revises its conclusions as they come—it makes no presumption of its conclusions necessarily being representative of absolute truth.
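For concreteness, the update rule being described can be written out; a minimal statement of sequential Bayesian updating (notation mine, not fixed by the discussion):

```latex
% Bayes' theorem: revise belief in hypothesis H after observing data D.
%   P(H)      prior probability of H before seeing D
%   P(D|H)    likelihood of the data under H
%   P(H|D)    posterior, which becomes the prior for the next datum
\[
  P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}
\]
% Sequential form: yesterday's posterior is today's prior, so no
% conclusion is ever treated as final, absolute truth:
\[
  P(H \mid D_1, D_2) \propto P(D_2 \mid H, D_1)\, P(H \mid D_1)
\]
```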
I also, personally, take objection to:
(2) It is not highly unlikely that a world-creator exists.
I find it highly unlikely that “a world-creator” exists, for two reasons. 1) Our universe necessarily possesses an infinite history (the Big Bang plus General Relativity says this). 2) Any ruleset which allows for the spontaneous manifestation of an agentless system is by definition less unlikely than the rulesets which allow for the spontaneous manifestation of an agent that can itself manifest rulesets. (The latter is a subset of the former, and possessed of greater ‘complexity’—an ugly term, but there just isn’t a better one I am familiar with; in this case I use it to mean “more pieces that could go wrong if not assembled precisely so.”)
I can’t say, as a person who is still neutral on this whole “Bayesian theory” thing (i.e., I feel no special attachment to the idea, and can’t say I entirely agree with the notion that our universe in no way truly behaves probabilistically), that this topic as presented is at all convincing.
Can you clarify? The Big Bang is usually put a little more than 13 billion years ago; that’s a lot of time, but not infinity.
Here’s a thought experiment for you: Imagine that you’ve decided to take a short walk to the black hole at the corner 7-11 / Circle-K / ‘Kwik-E-Mart’. How long will it take you to reach the event horizon? (The answer, of course, is that you never will.)
As you approach the event horizon of a black hole, time is distorted until it reaches an infinitesimal rate of progression. The Big Bang theory states that the entire universe inflated from a single point: a singularity. The same rules thus govern in reverse; the first instants of the universe took an infinitely long period of time to progress.
It helps if you think of this as a two-dimensional graph, with the history of the universe as a line. As we approach the origin, the graph of history curves; the “absolute zero instant” of the universe is thus shown to be an asymptotic limit: a point that can only ever be approached but never reached.
If you decide to really walk inside, you could be well behind the horizon before you remember to check your watch and hit the singularity not long afterwards.
There are different times in general relativistic problems. There is the coordinate time, which is what one usually plots on the vertical axis of a graph. This is (with usual choice of coordinates) infinite when any object reaches the horizon, but it also lacks immediate physical meaning, since GR is invariant with respect to (almost) arbitrary coordinate changes. Then there may be times measured by individual observers. A static observer looking at an object falling into a black hole will never see the object cross the horizon, apparently it takes infinite time to reach it. But the proper time of a falling observer (the time measured by the falling observer’s clocks) is finite and nothing special happens at the horizon.
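To make the two times concrete, here is a minimal sketch in Schwarzschild coordinates (my choice of metric; the comment above doesn’t fix one):

```latex
% Schwarzschild line element, radial motion only; r_s = Schwarzschild radius.
\[
  c^2 \, d\tau^2
    = \left(1 - \frac{r_s}{r}\right) c^2 \, dt^2
    - \left(1 - \frac{r_s}{r}\right)^{-1} dr^2
\]
% Coordinate time t: for an infalling object dt/dr diverges as r -> r_s,
% so the horizon crossing sits at t = infinity (the static observer's view).
% Proper time tau: integrating d\tau along the infalling worldline gives a
% finite value, and nothing locally special happens at r = r_s.
```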
Correct, but since the entire universe was at that singularity, the distortion of time is relevant.
How exactly? It is the physical proper time since the Big Bang which is 13.7 billion years, isn’t it?
Yes and no, since the first second took an infinitely long period of time to occur.
What does that mean? Do you say that proper time measured along geodesics was infinite between the Big Bang and the moment denoted as “first second” by the coordinate time, or that the coordinate time difference between those events is infinite while the proper time is one second?
The latter statement conforms to my understanding of the topic.
I agree. But now, how does that justify talking about infinite history? Coordinate time has no physical meaning; it’s an arbitrary artifact of our description, and it’s possible to choose the coordinates in such a way as to make the time difference finite.
But now, how does that justify talking about infinite history?
How does it not? It’s a true statement: the graph of our history is infinitely long.
Coordinate time has no physical meaning,
I can’t agree with that statement.
and it’s possible to choose the coordinates in such a way as to make the time difference finite.
That much is true, but it fails to make explicable why the question “What happened before the Big Bang?” is as meaningless as “What’s further north than the North Pole?”
It’s a true statement: the graph of our history is infinitely long.
A graph of our history is not our history. Saying that our history is infinitely long because in some coordinates its beginning may have t = −∞ is like saying the North Pole is infinitely far away because it is drawn there on Mercator projection maps. Anyway, it’s not the graph of our history; there are many graphs, and only some of them are infinitely long.
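A concrete version of the “many graphs” point, assuming a standard cosmological time coordinate (the example is mine, not from the thread): a simple relabeling of the time axis moves a finite beginning out to minus infinity without changing any physics.

```latex
% Cosmic proper time t: Big Bang at t = 0, today at t_0 ~ 13.7 Gyr (finite).
% Relabel the axis logarithmically:
\[
  T = \ln\!\left(\frac{t}{t_0}\right)
  \quad\Longrightarrow\quad
  t \to 0^{+} \;\text{ corresponds to }\; T \to -\infty .
\]
% Same history, two graphs: one axis shows a finite beginning, the other an
% infinitely long past. The elapsed proper time (~13.7 Gyr) is unchanged.
```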
Coordinate time has no physical meaning,
I can’t agree with that statement.
It would actually be helpful if you also said why.
and it’s possible to choose the coordinates in such a way as to make the time difference finite.
That much is true, but it fails to make explicable why the question “What happened before the Big Bang?” is as meaningless as “What’s further north than the North Pole?”
We aren’t discussing the question “what happened before the Big Bang”, but rather “how long ago the Big Bang happened”.
It is currently unknown how to apply special relativity (SR) and general relativity (GR) to quantum systems, and it appears likely that they break down at this level. Thus applying SR or GR to black holes or to the very beginning of the universe is unlikely to result in a perfectly accurate description of how the universe works.
That seems non-obvious to me. It’s highly problematic, sure—but not “key”. “Key” is “adequate range of data”.
I can see where you’re coming from. I may have mistaken “adequate range of data” for simply “range of data.” Thus it read more like, “I have this set of data. Which hypothesis is most closely like the ‘ideal explanation’ of this data?” So the key piece of information will be in how you define “ideal explanation.”
Re-reading, I think both are critical. How you define the ideal still matters a great deal, but you’re absolutely right… the definition of an “adequate range” is also huge. I also don’t recall them talking about this, so that may be another reason why it didn’t strike me as strongly.
...and can’t say I entirely agree with the notion that our universe in no way truly behaves probabilistically
Could you explain this? I thought that the fact that our universe did behave probabilistically was the whole point of Bayes’ theorem. If you have no rules of probability, why would you have need for a formula that says if you have 5 balls in a bucket and one of them is green, you will pull out a green one 20% of the time? If the universe weren’t probabilistic, shouldn’t that number be entirely unpredictable?
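The 20% figure is easy to check empirically; a minimal simulation (mine, not from the thread):

```python
import random

# Bucket of 5 balls, exactly one of them green.
bucket = ["green", "red", "blue", "yellow", "white"]

# Draw (with replacement) many times and track how often green comes up.
trials = 100_000
greens = sum(random.choice(bucket) == "green" for _ in range(trials))

print(f"empirical frequency of green: {greens / trials:.3f}")  # ~0.200 = 1/5
```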
Critical I can agree to. “Key” is a more foundational term than “critical” in my ‘gut response’.
The below might help:
In other words: a Bayesian believes that each trial will have a set outcome that isn’t ‘fuzzy’ even at the time the trial is initiated. The frequentist, on the other hand, believes that probability makes reality itself fuzzy until the trial concludes. If you had a sufficiently accurate predicting robot, to the Bayesian, it would be ‘right’ in one million out of one million coin flips by a robotic arm. To the frequentist, on the other hand, that sort of accuracy is impossible.
Now, I believe Bayesian statistical modeling to be vastly more effective at modeling our reality. However, I don’t think that belief is incompatible with a foundational belief that our universe is probabilistic rather than deterministic.
I can dig.
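One way to see how a fixed, non-fuzzy outcome coexists with probabilistic modeling: the probability lives in the model’s state of knowledge and is revised flip by flip. A minimal sketch using a standard Beta–Bernoulli model (my example; nothing in the thread specifies one):

```python
import random

# Beta(a, b) prior over the coin's bias p = P(heads).
# Start uninformed: a = b = 1 is the uniform distribution on [0, 1].
a, b = 1.0, 1.0

TRUE_BIAS = 0.7  # hidden from the model; only the flips are observed

for _ in range(1000):
    heads = random.random() < TRUE_BIAS
    # Conjugate update: each flip revises the belief; the conclusion is
    # never declared to be absolute truth, only sharpened.
    if heads:
        a += 1
    else:
        b += 1

print(f"posterior mean of the bias: {a / (a + b):.3f}")  # ~0.7 after 1000 flips
```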
If you had a sufficiently accurate predicting robot, to the Bayesian, it would be ‘right’ in one million out of one million coin flips by a robotic arm. To the frequentist, on the other hand, that sort of accuracy is impossible.
My initial response was, “No way Bayesians really believe that.” My secondary response was, “Well, if ‘sufficiently accurate’ means knowing the arrangement of things down to quarks, the initial position, initial angle, force applied, etc… then, sure, you’d know what the flip was going to be.”
If you meant the second thing, then I guess we disagree. If you meant something else, you’ll probably have to clarify things. Either way, what you mean by “sufficiently accurate” might need some explaining.
Thanks for the dialog.
My initial response was, “No way Bayesians really believe that.”
When I was first introduced to the concept of Bayesian statistics, I had rather lengthy conversations on just this very example.
Either way, what you mean by “sufficiently accurate” might need some explaining.
“Sufficiently accurate” means “sufficiently accurate,” in this case. Sufficient: being as much as needed. Accurate: free from error; precise. Synthesize the two and you have “being as free from error and as precise as needed.” Can’t get more clear than that, I fear.
Now, if I can read into the question you’re tending to with the request—well… let’s put it this way; there is a phenomenon called stochastic resonance. We know that quantum-scale spacetime events do not have precise locations despite being discrete phenomena (wave-particle duality): this is why we don’t talk about ‘location’ but rather ‘configuration space’.
Now, which portion of the configuration space will interact with which other portion in which way is an entirely probabilistic process. To the Bayesians I’ve discussed the topic with at any length, this is where we go ‘sideways’; they believe as you espoused: know enough points of fact and you can make inerrant predictions; what’s really going to happen is set in stone before the trial is even conducted. Replay it a trillion, trillion times with the same exact original conditions and you will get the same results every single time. You just have to get the parameters EXACTLY the same.
I don’t believe that’s a true statement. I believe that there is and does exist material randomness and pseudorandomness; and I believe further that while we as humans cannot ever truly exactly measure the world’s probabilities, and instead only take measurements and make estimates, those probabilities are real.
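The “replay with the same exact original conditions” claim is literally true of pseudorandomness, which is worth seeing concretely. A minimal sketch (mine), using a seeded generator as a stand-in for identical initial conditions:

```python
import random

def run_trial(seed: int, flips: int = 10) -> list[str]:
    # Same seed = same "initial conditions": the whole sequence is fully
    # determined before the trial is even conducted.
    rng = random.Random(seed)
    return ["H" if rng.random() < 0.5 else "T" for _ in range(flips)]

# Replaying the trial under identical conditions gives identical results,
# every single time.
assert run_trial(seed=42) == run_trial(seed=42)

# Material randomness, if it exists, is exactly what this property denies:
# no seed, no exact replay. (Quantum measurement outcomes are the usual
# candidate for that kind of randomness.)
print(run_trial(seed=42))
```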
Your “read into where I was tending with the request” was more like it. Sorry if I was unclear. I was more interested in what phenomena such a machine would have at its disposal—anything we can currently know/detect (sensors on the thumb, muscle contraction detection of some sort, etc.), only a prior history of coin flips, or all-phenomena-that-can-ever-be-known-even-if-we-don’t-currently-know-how-to-know-it? By “accurate” I was more meaning, “accurate given what input information?” Then again, perhaps your addition of “sufficiently” should have clued me in on the fact that you meant a machine that could know absolutely everything.
I’ll probably have to table this one as I really don’t know enough about all of this to discuss further, but I do appreciate the food for thought. Very interesting stuff. I’m intuitively drawn to say that there is nothing actually random… but I am certainly not locked into that position, nor (again) do I know what I’m talking about were I to try and defend that with substantial evidence/argument.
Then again, perhaps your addition of “sufficiently” should have clued me in on the fact that you meant a machine that could know absolutely everything.
Funny thing. Just a few hours ago, I was having a conversation with someone who said, “I need to remember, {Logos01}, that you use words in their literal meaning.”
I’m intuitively drawn to say that there is nothing actually random...
It’s a common intuition. I have the opposite intuition. As a layman, however, I don’t know enough to get our postulates in line with one another. So I’ll leave you to explore the topic yourself.
Indeed. Whether I should have caught on, didn’t think through what you wrote, or am perhaps trained not to take things precisely literally… something went awry :)
To my credit (if I might), we were talking fairly hypothetically, so I don’t know that it was apparent that the prediction machine mentioned would have access to all hypothetical knowledge we can conceive of. To be explicitly literal, it might have helped to just bypass to your previous comment:
know enough points of fact and you can make inerrant predictions; what’s really going to happen is set in stone before the trial is even conducted...I believe that there is and does exist material randomness and pseudorandomness; and I believe further that while we as humans cannot ever truly exactly measure the world’s probabilities.
That would have done it more easily than a reference to a prediction machine, for me at least. But again, I’m more of a noob, so mentioning this to a more advanced LWer might have automatically lit up the right association.
So I’ll leave you to explore the topic yourself.
Sounds good. Thanks again for taking the time to walk through that with me!