The basic form of the atheistic argument found in the Sequences is as follows: “The theistic hypothesis has high Kolmogorov complexity compared to the atheistic hypothesis. The absence of evidence for God is evidence for the absence of God. This in turn suggests that the large number of proponents of religion is more likely due to God being an improperly privileged hypothesis in our society than to Less Wrong and the atheist community in general missing key pieces of evidence in favour of the theistic hypothesis.”
Now, you could make a counterpoint along the lines of “But what about ‘insert my evidence for God here’? Doesn’t that suggest the opposite, and that God IS real?” There is almost certainly some standard rebuttal to that particular piece of evidence which most of us have already previously seen. God is a very well discussed topic, and most of the points anyone will bring up have been brought up elsewhere. And so, Less Wrong as a community has for the most part elected to not entertain these sorts of arguments outside of the occasional discussion thread, if only so that we can discuss other topics without every thread becoming about religion (or politics).
“There is almost certainly some standard rebuttal to that particular piece of evidence...”
Evidence is not something that needs “rebuttal.” There is valid evidence both for and against a claim, regardless of whether the claim is true or false.
That’s fair. Though, I’d put my mistake less on the word “rebuttal” and more on the word “evidence.” The particular examples I had in mind when writing that post were non-evidence “evidences” of God’s existence, like the complexity of the human eye or the fine structure of the universe: cases where things are pointed to as evidence despite being just as likely, and often more likely, to exist if God doesn’t exist as they would be if he did.
Yes, the debate here is well worn: the only novelty is Less Wrong’s degree of confidence that they have the right answer. Might that be what is attracting debate, as opposed to “most of us are atheists, but whatever”?
The theistic hypothesis has high Kolmogorov complexity compared to the atheistic hypothesis.
I find this unconvincing. The basic theistic hypothesis is a description of an omnipotent, omniscient being; together with the probable aims and suspected intentions of such a being. The laws of physics would then derive from this.
The basic atheistic hypothesis is, as far as I understand it, the laws of physics themselves, arising from nothing, simply existing.
I am not convinced that the Kolmogorov complexity of the first is higher than the Kolmogorov complexity of the second. (Mind you, I haven’t really compared them all that thoroughly—I could be wrong about that. But it, at the very least, is not obviously higher.)
Before seeing this I thought you rejected all priors based on Kolmogorov complexity, as that seemed like the only way to save your position. (From what you said before you’ve read at least some of what Eliezer wrote on the difficulty of writing an AGI program. Hopefully you’ve read about the way that an incautious designer could create levers which do nothing, since the human brain is inclined to underestimate its own complexity.)
While guessing is clearly risky, it seems like you’re relying on the idea that a program to simulate the right kind of “omnipotent, omniscient being” would necessarily show it creating our laws of physics. Otherwise it would appear absurd to compare the complexity of the omni-being to that of physics alone. (It also sounds like you’re talking about a fundamentally mental entity, not a kind of local tyrant existing within physics.) But you haven’t derived any of our physics from even a more specific theistic hypothesis, nor did the many intelligent people who thought about the logical implications of God in the Middle Ages! Do you actually think they just failed to come up with QM or thermodynamics because they didn’t think about God enough?
Earlier when you tried to show that assuming any omni-being implied an afterlife, you passed over the alternative of an indifferent omni^2 without giving a good reason. You also skipped the idea of an omni-being not having people die in the first place. In general, a habit of ignoring alternatives will lead you to overestimate the prior probability of your theory. And in this case, if you want to talk about an omni^2 that has an interest in humans, we would naively expect it to create some high-level laws of physics which mention humans. You have not addressed this. It seems like in practice you’re taking a scientific model of the world and adding the theistic hypothesis as an additional assumption, which—in the absence of evidence for your theory over the simpler one—lowers the probability by a factor of 2^(something on the order of MIRI’s whole reason for being). Or at least it does by assumptions which you seem to accept.
Maybe the principle will be clearer if we approach it from the evidence side. Insofar as an omni^2 seems meaningful, I’d expect its work to be near optimal for achieving its goals. I say that literally nothing in existence which we didn’t make is close to optimal for any goal, except a goal that overfits the data in a way that massively lowers that goal’s prior probability. Show me an instance. And please remember what I said about examining alternatives.
While guessing is clearly risky, it seems like you’re relying on the idea that a program to simulate the right kind of “omnipotent, omniscient being” would necessarily show it creating our laws of physics.
Yes, I think so.
It also sounds like you’re talking about a fundamentally mental entity, not a kind of local tyrant existing within physics.
Yes, that is correct.
But you haven’t derived any of our physics from even a more specific theistic hypothesis, nor did the many intelligent people who thought about the logical implications of God in the Middle Ages! Do you actually think they just failed to come up with QM or thermodynamics because they didn’t think about God enough?
A few seconds’ googling suggests (article here) that a monk by the name of Udo of Aachen figured out the Mandelbrot set some seven hundred years before Mandelbrot did by, essentially, thinking about God. (EDIT: It turns out Udo was an April Fools’ hoax from 1999. See here for details.)
Mind you, simply starting from a random conception of God and attempting to derive a universe will essentially lead to a random universe. To start from the right conception of God necessarily requires some sort of observation—and I do think it is easier to derive the laws of physics from observation of the universe than it is to derive the mindset of an omniscient being (since the second seems to require first deriving the laws of physics in order to check your conclusions).
Earlier when you tried to show that assuming any omni-being implied an afterlife, you passed over the alternative of an indifferent omni^2 without giving a good reason. You also skipped the idea of an omni-being not having people die in the first place.
You are right. I skipped over the idea of an entirely indifferent omni-being; that case seems to have minimal probability of an afterlife (as does the atheist universe; in fact, they seem to have the same minimal probability). Showing that the benevolent case increases the probability of an afterlife is then sufficient to show that the probability of an afterlife is higher in the theistic universe than the atheistic universe (though the difference is less than one would expect from examining only the benevolent case).
I also skipped the possibility of there being no death at all; I skipped this due to the observation that this is not the universe in which we live. (I could argue that the process of evolution requires death, but that raises the question of why evolution is important, and the only answer I can think of there—i.e. to create intelligent minds—seems very self-centred.)
And in this case, if you want to talk about an omni^2 that has an interest in humans, we would naively expect it to create some high-level laws of physics which mention humans.
I question whether it has an interest in humans specifically, or in intelligent life as a whole. (And there is at least a candidate for a high-level law of physics which mentions humans in particular—“humans have free will”. It is not proven, despite much debate over the centuries, but it is not disproven either, and it is hard to see how it could derive from other physical laws.)
Insofar as an omni^2 seems meaningful, I’d expect its work to be near optimal for achieving its goals.
This seems likely. It implies that the universe is the optimal method for achieving said goals, and therefore that said goals can be derived from a sufficiently close study of the universe.
It should also be noted that aesthetics may be a part of the design goals; in the same way that a dance is generally a very inefficient way of moving from point A to point B, the universe may have been designed in part to fulfill some (possibly entirely alien) sense of aesthetics.
I say that literally nothing in existence which we didn’t make is close to optimal for any goal, except a goal that overfits the data in a way that massively lowers that goal’s prior probability. Show me an instance. And please remember what I said about examining alternatives.
I can’t seem to think of one off the top of my head. (Mind you, I’m not sure that the goal of the universe has been reached yet; it may be something that we can’t recognise until it happens, which may be several billion years away.)
Do you actually think they just failed to come up with QM or thermodynamics because they didn’t think about God enough?
A few seconds’ googling suggests (article here) that a monk by the name of Udo of Aachen figured out the Mandelbrot set some seven hundred years before Mandelbrot did by, essentially, thinking about God.
Took me a while to check this, because of course it would have been evidence for my point. (By the way, throughout this conversation, you’ve shown little awareness of the concept or the use of evidence in Bayesian thought.)
the probable aims and suspected intentions of such a being
The general opinion around here (which I share) is that the complexity of those is much higher than you probably think it is. “Human-level” concepts like “mercy” and “adultery” and “benevolence” and “cowardice” feel simple to us, which means that e.g. saying “God is a perfectly good being” feels like a low-complexity claim; but saying exactly what they mean is incredibly complicated, if it’s possible at all. Whereas, e.g., saying “electrons obey the Dirac equation” feels really complicated to us but is actually much simpler.
Of course you’re at liberty to say: “No! Actually, human-level concepts really are simple, because the underlying reality of the universe is the mind of God, which entertains such concepts as easily as it does the equations of quantum physics”. And maybe the relative plausibility of that position and ours ultimately depends on one’s existing beliefs about gods and naturalism and so forth. I suggest that (1) the startling success of reductionist mathematics-based science in understanding, explaining and predicting the universe and (2) the total failure of teleological purpose-based thinking in the same endeavour (see e.g., the problem of evil) give good reason to prefer our position to yours.
The general opinion around here (which I share) is that the complexity of those is much higher than you probably think it is.
That is possible. I have no idea how to specify such things in a minimum number of bits of information.
Whereas, e.g., saying “electrons obey the Dirac equation” feels really complicated to us but is actually much simpler.
This is true; yet there may be fewer human-level concepts and more laws of physics. I am still unconvinced which complexity is higher; mainly because I have absolutely no idea how to measure the complexity of either in the first place. (One can do a better job of estimating the complexity of the laws of physics because they are better known, but they are not completely known).
But let us consider what happens if you are right, and the complexity of my hypothesis is higher than the complexity of yours. Then that would form a piece of probabilistic evidence in favour of the atheist hypothesis, and the correct action to take would be to update—once—in that direction by an appropriate amount. I’m not sure what an appropriate amount is; that would depend on the ratio of the complexities (but is capped by the possibility of getting that ratio wrong).
This argument does not, and cannot, in itself, give anywhere near the amount of certainty implied by this statement (quoted from here):
...would rather push a button that would destroy the world if God exists, than a button that had a known probability of one in a billion of destroying the world.
I should also add that the existence of God does not invalidate reductionist mathematics-based thinking in any way.
there may be fewer human-level concepts and more laws of physics
Well, I suppose in principle there might. But would you really want to bet that way?
update—once—in that direction by an appropriate amount
Yes, I completely agree.
capped by the possibility of getting that ratio wrong
Almost, but not exactly. It makes a difference how wrong, and in which direction.
can not [...] give anywhere near the amount of certainty [...] one in a billion
One in a billion is only about 30 bits. I don’t think it’s at all impossible for the complexity-based calculation, if one could do it, to give a much bigger odds ratio than that. The question then is what to do about the possibility of having got the complexity-based calculation (or actually one’s estimate of it) badly wrong. I’m inclined to agree that when one takes that into account it’s not reasonable to use an odds ratio as large as 10^9:1.
But it’s not as if this complexity argument is the only reason anyone has for not believing in God. (Some people consider it the strongest reason, but “strongest” is not the same as “only”.)
Incidentally, I offer the following (not entirely serious) argument for pressing the boom-if-God button rather than the boom-with-small-probability button: the chances of the world being undestroyed afterwards are presumably better if God exists.
Well, I suppose in principle there might. But would you really want to bet that way?
Insufficient information to bet either way.
The question then is what to do about the possibility of having got the complexity-based calculation (or actually one’s estimate of it) badly wrong. I’m inclined to agree that when one takes that into account it’s not reasonable to use an odds ratio as large as 10^9:1.
Yes, that’s what I meant by “capped”—if I did that calculation (somehow working out the complexities) and it told me that there was a one-in-a-billion chance, then there would be a far, far better than a one-in-a-billion chance that the calculation was wrong.
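A minimal sketch of that “cap”, with made-up numbers (the 99% confidence in the calculation and the fifty-fifty fallback are purely illustrative assumptions, not figures from this discussion):

```python
# Made-up illustration of the "cap": a huge calculated odds ratio gets swamped by
# even a small chance that the calculation itself is badly wrong.
p_calc_correct = 0.99   # hypothetical confidence that the complexity calculation is right
p_if_correct = 1e-9     # what the calculation says, taken at face value
p_if_wrong = 0.5        # if the calculation is junk, fall back to even odds (an assumption)

p_overall = p_calc_correct * p_if_correct + (1 - p_calc_correct) * p_if_wrong
print(p_overall)        # ~0.005 -- far, far better than one in a billion
```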
But it’s not as if this complexity argument is the only reason anyone has for not believing in God. (Some people consider it the strongest reason, but “strongest” is not the same as “only”.)
Noted.
If I assume that the second-strongest reason is (say) 80% as strong as the strongest reason (by which I mean, 80% as many bits of persuasiveness), the third-strongest reason is 80% as strong as that, and so on, then the strengths of this whole (potentially infinite) series of reasons, added together, come to five times the strength of the strongest reason.
Thus, for a thirty-bit strength from all the reasons, the strongest reason would need a six-bit strength—it would need to be worth one in sixty-four (approximately).
Of course, there’s a whole lot of vague assumptions and hand-waving in here (particularly that 80% figure, which I just pulled out of nowhere; see the quick sketch below), but, well, I haven’t seen any reason to think it at all likely that the complexity argument is worth even three bits, never mind six.
(Mind you, I can see how a reasonable and intelligent person might disagree with me about that.)
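A quick sketch of that back-of-the-envelope arithmetic, using the same made-up 80% decay figure and the thirty-bit target from above:

```python
# Back-of-the-envelope check of the geometric-series reasoning above.
# The 0.8 decay factor and the 30-bit target are the made-up figures from the comment.
decay = 0.8
total_over_strongest = 1 / (1 - decay)      # 1 + 0.8 + 0.8**2 + ... = 5
print(total_over_strongest)                 # all reasons together are about 5x the strongest one

target_bits = 30                            # roughly one in a billion, since 2**30 is about 1.07e9
strongest_bits = target_bits / total_over_strongest
print(strongest_bits, 2 ** strongest_bits)  # about 6 bits, i.e. odds of about 1 in 64
```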
Incidentally, I offer the following (not entirely serious) argument for pressing the boom-if-God button rather than the boom-with-small-probability button: the chances of the world being undestroyed afterwards are presumably better if God exists.
...serious or not, that is a point worth considering. I’m not sure that it’s true, but it could be interesting to debate.
I would expect heavier tails than that. (For other questions besides that of gods, too.) I’d expect that there might be dozens of reasons providing half a bit or so.
I haven’t seen any reason to think it at all likely that the complexity argument is worth even three bits, never mind six.
For what it’s worth, I might rate it at maybe 7 bits. Whether I’m a reasonable and intelligent person isn’t for me to say :-).
I would expect heavier tails than that. (For other questions besides that of gods, too.) I’d expect that there might be dozens of reasons providing half a bit or so.
Fair enough. That 80% figure was kind of pulled out of nowhere, really.
For what it’s worth, I might rate it at maybe 7 bits. Whether I’m a reasonable and intelligent person isn’t for me to say :-).
You think the theistic explanation might be as much as a hundred times more complex?
...there may be some element of my current position biasing my estimate, but that does seem a little excessive.
Whether I’m a reasonable and intelligent person isn’t for me to say :-).
So far as this debate goes, my impression is that you either are both reasonable and intelligent or you’re really good at faking it.
No, as much as seven bits more complex. (More precisely, I think it’s probably a lot more more-complex than that, but I’m quite uncertain about my estimates.)
really good at faking it
Damn, you caught me. (Seriously: I’m pretty sure that being really good at faking intelligence requires intelligence. I’m not so sure about reasonable-ness.)
Seven bits are two-to-the-seven times as likely, which is 128 times.
...surely?
(Seriously: I’m pretty sure that being really good at faking intelligence requires intelligence. I’m not so sure about reasonable-ness.)
I can think of a few ways to fake greater intelligence than you have. Most of them require a more intelligent accomplice, in one way or another. But yes, reasonableness is probably easier to fake.
128x more unlikely but not 128x more complex; for me, at least, complexity is measured in bits rather than in number-of-possibilities.
[EDITED to add: If anyone has a clue why this was downvoted, I’d be very interested. It seems so obviously innocuous that I suspect it’s VoiceOfRa doing his thing again, but maybe I’m being stupid in some way I’m unable to see.]
...I thought that the ratio of likeliness due to the complexity argument would be the inverse of the ratio of complexity. Thus, something twice as complex would be half as likely. Is this somehow incorrect?
All else being equal, something that takes n bits to specify has probability proportional to 2^-n. So if hypothesis A takes 110 bits and hypothesis B takes 100, then A is about 1000x less probable.
Exactly what “all else being equal” means is somewhat negotiable.
If you are using a Solomonoff prior, it means: in advance of looking at any empirical evidence at all, the probability you assign to a hypothesis should be proportional to 2^-n where n is the number of bits in a minimal computer program that specifies the hypothesis, in a language satisfying some technical conditions. Exactly how this cashes out depends on the details of the language you use, and there’s no way of actually computing the numbers n in general, and there’s no law that says you have to use a Solomonoff prior anyway.
More generally, whatever prior you use, there are 2^n hypotheses of length n (and if you describe them in a language satisfying those technical conditions, then they are all genuinely different and as n varies you get every computable hypothesis) so (handwave handwave) on average for large n an n-bit hypothesis has to have probability something like 2^-n.
Anyway, the point is that the natural way to measure complexity is in bits, and probability varies exponentially, not linearly, with number of bits.
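A tiny sketch of that arithmetic (nothing new, just the figures already used in this thread, restated as a check):

```python
# Under a Solomonoff-style prior, an n-bit hypothesis gets prior probability ~2**-n,
# so an extra k bits of description length costs a factor of 2**k in probability.
def odds_penalty(extra_bits):
    return 2 ** extra_bits

print(odds_penalty(10))  # 1024 -- why a 110-bit hypothesis is ~1000x less probable than a 100-bit one
print(odds_penalty(7))   # 128  -- the "7 bits more complex" guess from earlier in the thread
```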
So if hypothesis A takes 110 bits and hypothesis B takes 100, then A is about 1000x less probable.
Yes, and hypothesis A is also 1024x as complex—since it takes ten more bits to specify.
Anyway, the point is that the natural way to measure complexity is in bits, and probability varies exponentially, not linearly, with number of bits.
...it seems that our disagreement here is in the measure of complexity, and not the measure of probability. My measure of complexity is pretty much the inverse of probability, while you’re working on a log scale by measuring it in terms of a number of bits.
Yes, apparently we’re using the word “complexity” differently.
So, getting back to what I said that apparently surprised you: Yes, I think it is very plausible that the best theistic explanation for everything we observe around us is what I call “7 bits more complex” and you call “128x more complex” than the best non-theistic explanation; just to be clear what that means, I mean that if we could somehow write down a minimal-length complete description of what we see (compressing it via computer programs / laws of physics / etc.) subject to the constraint “must not make essential use of gods”, and another subject instead to the constraint “must make essential use of gods”, then my guess at the length of the second description is >= 7 bits longer than my guess at the length of the first. Actually I think the second description would have to be much longer than that, but I’m discounting because this is confusing stuff and I’m far from certain that I’m right.
And you, if I’m understanding you correctly, are objecting not so much “no, the theistic description will be simpler” as “well, maybe you’re right that the nontheistic description will be simpler, but we should expect it to be simpler by less than one random ASCII character’s worth of description length”.
Of course the real difficulty here is that we aren’t in a position to say what a minimal length theistic or nontheistic description of the universe would look like. We have a reasonable set of laws of physics that might form the core of the nontheistic description, but (1) we know the laws we have aren’t quite right, and (2) it seems likely that the vast bulk of the complexity needed is not in the laws but in whatever arbitrary-so-far-as-we-know boundary conditions[1] need to be added to get our universe rather than a completely different one with the same laws, and we’ve no idea how much information that takes or even whether it’s finite. And on the theistic side we have at most a pious hope that something like “this is the best of all possible worlds” might suffice, but no clear idea of how to specify what notion of “best” is appropriate, and the world looks so much unlike the best of all possible worlds according to any reasonable notion that this fact is generally considered one of the major reasons for disbelieving in gods. So what hope have we of figuring out which description is shorter?
[1] On some ways of looking at the problem, what needs specifying is not so much boundary conditions as our location within a vast universe or multiverse. Similar problem.
Actually I think the second description would have to be much longer than that, but I’m discounting because this is confusing stuff and I’m far from certain that I’m right.
It is confusing. I’m still not even convinced that the theist’s description would be longer, but my estimation is so vague and has such massively large error bars that I can’t say you’re wrong, even if what you’re saying is surprising to me.
And you, if I’m understanding you correctly, are objecting not so much “no, the theistic description will be simpler” as “well, maybe you’re right that the nontheistic description will be simpler, but we should expect it to be simpler by less than one random ASCII character’s worth of description length”.
More or less. I’m saying I would find it surprising if the existence of God made the universe significantly more complex. (In the absolutely minimal-length description, I expect it to work out shorter, but like I say above, there are massive error bars on my estimates).
the world looks so much unlike the best of all possible worlds according to any reasonable notion
While I’ve heard this argued before, I have yet to see an idea for a world that (a) is provably better, (b) cannot be created by sufficient sustained human effort (in an “if everyone works together” kind of way), and (c) cannot be taken apart by sustained human effort into a world vaguely resembling ours (in an “if there are as many criminals and greedy people as in this world” kind of way).
I’m not saying that there isn’t nasty stuff in this world. I’m just not seeing a way that it can be removed without also removing things like free will.
what hope have we of figuring out which description is shorter?
If we get seriously into discussing arguments from evil we could be here all year :-), so I’ll just make a few points and leave it.
(1) Many religious believers, including (I think) the great majority of Christians, anticipate a future state in which sin and suffering and death will be no more. I’m pretty sure they see this as a good thing, whether they anticipate losing their free will to get it or not.
(2) I don’t know whether I can see any way to make a world with nothing nasty in it at all without losing other things we care about, but it doesn’t seem difficult to envisage ways in which omnipotence in the service of perfect goodness could improve the world substantially. For instance, consider a world exactly like this one except that whenever any cell in any animal’s body (human or other) gets into a state that would lead to a malignant tumour, God magically kills it. Boom, no more cancer. (And no effect at all on anyone who wouldn’t otherwise be getting cancer.) For an instance of a very different kind, imagine that one day people who pray actually start getting answers. Consistently. I don’t mean obliging answers to petitionary prayers, I mean communication. Suddenly anyone who prays gets a response; the responses are consistent and, for some categories of public prayer, public. There is no longer any more scope for wars about whose vision of God is right than there is for wars about whose theory of gravity is right, and anyone who tries to recruit people to blow things up in the name of God gets contradicted by a message from God himself. There might still be scope for fights between people who think it’s God doing this and people who think it’s a super-powerful evil being, but I don’t think it’s credible that this wouldn’t decrease religious strife. And if you think that being badly wrong about God is a serious problem (whether just because it’s bad to be wrong about important things, or because it leads to worse actions, or because it puts one in danger of damnation) then I hope you’ll agree that almost everyone on earth having basically correct beliefs about God would be a gain. And no, it wouldn’t mean abolishing free will; do we lack free will because we find it difficult to believe the sky is green on account of seeing it ourselves?
(3) I think your conditions a,b,c are too strict, in that I see no reason why candidate better worlds need to satisfy them all in order to be evidence that our actual world isn’t the best possible. Perhaps, e.g., a better world is possible that could be created by sustained human effort with everyone working together but won’t because actually, in practice, in the world we’ve got, everyone doesn’t work together. So, OK, you can blame humanity for the fact that we haven’t created that world, and maybe doing so makes you feel better, but more than one agent can be rightly blamed for the same thing and the fact that it’s (kinda) our fault doesn’t mean it isn’t God’s. Do you suppose he couldn’t have encouraged us more effectively to do better? If not, doesn’t the fact that not even the most effective encouragement infinite wisdom could devise would lead us to do it suggest that saying we could is rather misleading? And (this is a point I think is constantly missed) whyever should we treat human nature, as it now is, as a given? Could your god really not have arranged for humanity to be a little nicer and smarter? In terms of your condition (c), why on earth should we, when considering what better worlds there might be, only consider candidates in which “there are as many criminals and greedy people as in this world”?
Many religious believers, including (I think) the great majority of Christians, anticipate a future state in which sin and suffering and death will be no more. I’m pretty sure they see this as a good thing, whether they anticipate losing their free will to get it or not.
I’ve heard arguments that we’ve already reached that state—think about what would happen if you went back in time about two thousand years and described modern medical technology and lifestyles. (I don’t agree with those arguments, mind you, but I do think that such a future state is going to have to be something that we build, not something that we are given.)
it doesn’t seem difficult to envisage ways in which omnipotence in the service of perfect goodness could improve the world substantially.
It’s difficult to be certain.
For instance, consider a world exactly like this one except that whenever any cell in any animal’s body (human or other) gets into a state that would lead to a malignant tumour, God magically kills it. Boom, no more cancer.
Now I’m imagining a lot of scientists studying and trying to figure out why some cells just mysteriously vanish for no good reason—and this becoming the greatest unsolved question in medical science and taking all the attention of people who might otherwise be figuring out cures for TB or various types of flu. (In this hypothetical universe, they wouldn’t know about malignant tumours, of course).
And if someone would otherwise develop a LOT of cancer, then Sudden Cell Vanishing Syndrome could, in itself, become a major problem...
Mind, I’m not saying it’s certain that that universe would be worse, or even that it’s probable. It’s just easy to see how that universe could be worse.
For an instance of a very different kind, imagine that one day people who pray actually start getting answers. Consistently. I don’t mean obliging answers to petitionary prayers, I mean communication
That would be interesting. And you raise a lot of good points—there would be a lot of positive effects. But, at the same time… I think HPMOR showed quite nicely that sometimes, having a list of instructions with regard to what to do is a good deal less valuable than being able to understand the situation, take responsibility, and do it yourself.
People would still have free will, yes. But how many people would voluntarily abdicate their decision-making processes to simply do what the voice in the sky tells them to do (except the bits where it says “THINK FOR YOURSELVES”)?
...this is something which I think would probably be a net benefit. But I can’t be certain.
I think your conditions a,b,c are too strict
...very probably.
Perhaps, e.g., a better world is possible that could be created by sustained human effort with everyone working together but won’t because actually, in practice, in the world we’ve got, everyone doesn’t work together.
That just means that a better world needs to be designed that can be created under the constraints of not everyone working together. It’s a hard problem, but I don’t think it’s entirely insoluble.
And (this is a point I think is constantly missed) whyever should we treat human nature, as it now is, as a given? Could your god really not have arranged for humanity to be a little nicer and smarter?
That is a good question. I have no good answers for it.
why on earth should we, when considering what better worlds there might be, only consider candidates in which “there are as many criminals and greedy people as in this world”?
...fewer criminals and greedy people would make things a lot easier, but I’m not quite sure how to arrange that without either (a) reducing free will or (b) mass executions, which could cause other problems.
I’ve heard arguments that we’ve already reached that state
Then I suggest that you classify the people making those arguments as Very Silly and don’t listen to them in future.
I do think that such a future state is going to have to be something that we build, not that we are given.
You’re welcome to think that; my point is simply that if such a thing is possible and desirable then either one can have a better world than this without abrogating free will, or else free will isn’t as important as theists often claim it is when confronted with arguments from evil.
(Perhaps your position is that the world could indeed be much better, but that the only way to make such a better world without abrogating free will is to have us do it gradually starting with a really bad world. I hope I will be forgiven for saying that that doesn’t seem like a position anyone would adopt for reasons other than a desperate attempt to avoid the conclusion of the argument from evil.)
I’m imagining a lot of scientists studying and trying to figure out why some cells just mysteriously vanish for no good reason
Again, you’re welcome to imagine whatever you like, but if you’re suggesting that this would be a likely consequence of the scenario I proposed then I think you’re quite wrong (and again wonder whether it would occur to you to imagine that if you weren’t attempting to justify the existence of cancer to defend your god’s reputation). Under what circumstances would they notice this? Cells die all the time. We don’t have the technology to monitor every cell—or more than a tiny fraction of cells—in a living animal and see if it dies. We don’t have the technology or the medical understanding to be able to say “huh, that cell died and I don’t know why; that’s really unusual”. Maybe some hypothetical super-advanced medical science would be flummoxed by this, but right now I’m pretty sure no one would come close to noticing.
(Also, you could combine this with my second proposal, and then what happens is that someone says “hey, God, would you mind telling us why these cells are dying?” and God says “oh, yeah, those are ones that were going wrong and would have turned into runaway growths that could kill you. I zap those just before they do. You’re welcome.”.)
And if someone would otherwise develop a LOT of cancer [...]
Please, think about that scenario for thirty seconds, and consider whether you can actually envisage a situation where having those cancerous cells self-destruct would be worse than having them turn into tumours.
sometimes, having a list of instructions with regard to what to do [...]
But that was no part of the scenario I described. In that scenario, it could be that when people ask God for advice he says “Sorry, it’s going to be better for you to work this one out on your own.”
without either (a) reducing free will [...]
So here’s the thing. Apparently “reducing free will” is a terrible awful thing so bad that its spectre justifies the Holocaust and child sex abuse and all the other awful things that bad people do without being stopped by God. So … how come we don’t have more free will than we do? Why are we so readily manipulated by advertisements, so easily entrapped by habits, so easily overwhelmed by the desire for food or sex or whatever? It seems to me that if we take this sort of “free will defence” seriously enough for it to work, then we replace (or augment) the argument from evil with an equally fearsome argument from un-freedom.
Then I suggest that you classify the people making those arguments as Very Silly and don’t listen to them in future.
...perhaps I have failed to properly convey that argument. I did not intend to say that our world now is in a state of perfection. I intended to point out that, if you were to go back in time a couple of thousand years and talk to a random person about our current society, then he would be likely to imagine it as a state of perfection. Similarly, if a random person in this era were to describe a state of perfection, then that might be a description of society a couple of thousand years from now—and the people of that time would still not consider their world in a state of perfection.
In short, “perfection” may be a state that can only be approached asymptotically. We can get closer to it, but never reach it; we can labour to reduce the gap, but never fully eliminate it.
my point is simply that if such a thing is possible and desirable then either one can have a better world than this without abrogating free will, or else free will isn’t as important as theists often claim it is when confronted with arguments from evil.
You mean, just kind of starting up the universe at the point where all the major social problems have already been solved, with everyone having a full set of memories of how to keep the solutions working and what happens if you don’t?
...I have little idea why the universe isn’t like that (and the little idea I have is impractically speculative).
Perhaps your position is that the world could indeed be much better, but that the only way to make such a better world without abrogating free will is to have us do it gradually starting with a really bad world.
The only way? No. Starting a universe at the point where the answers to society’s problems are known is a possible way to do that.
...the thing is, I don’t know what the goal, the purpose of the universe is. Free will is clearly a very important part of those aims—either a goal in itself, or strictly necessary in order to achieve some other goal or goals—but I’m fairly sure it’s not the only one. It may be that other ways of making a better world without abrogating free will all come at the cost of some other important thing that is somehow necessary for the universe.
Though this is all very speculative, and the argument is rather shaky.
Under what circumstances would they notice this? Cells die all the time. We don’t have the technology to monitor every cell—or more than a tiny fraction of cells—in a living animal and see if it dies.
Okay, if the cells just die and don’t vanish, then that makes it a whole lot less physics-breaking. (Alternatively, if they are simply replaced with healthy cells, then it becomes even harder to spot).
(Also, you could combine this with my second proposal, and then what happens is that someone says “hey, God, would you mind telling us why these cells are dying?” and God says “oh, yeah, those are ones that were going wrong and would have turned into runaway growths that could kill you. I zap those just before they do. You’re welcome.”.)
...you know, combining those would be interesting as well. (Then the next logical question asked would be “Why don’t you zap all diseases?”)
Please, think about that scenario for thirty seconds, and consider whether you can actually envisage a situation where having those cancerous cells self-destruct would be worse than having them turn into tumours.
No, I can’t. This guy’s in massive trouble either way.
But that was no part of the scenario I described. In that scenario, it could be that when people ask God for advice he says “Sorry, it’s going to be better for you to work this one out on your own.”
A fair point.
Some people would be discouraged by this, others would work harder...
So here’s the thing. Apparently “reducing free will” is a terrible awful thing so bad that its spectre justifies the Holocaust and child sex abuse and all the other awful things that bad people do without being stopped by God.
Yes, and I’m not quite sure that I get the whole of the why either.
So … how come we don’t have more free will than we do? Why are we so readily manipulated by advertisements, so easily entrapped by habits, so easily overwhelmed by the desire for food or sex or whatever?
...huh. That’s… that’s a very good question, really.
Hmmm. It seems logical that it must be possible to talk someone into (or out of) a course of action. “Here is some information that shows why it is to your benefit to do X” has to be possible, or there is no point to communication and we might as well all be alone.
And given that that is possible, advertising is an inevitable consequence—tell a million people to buy Tasty Cheese Snax or whatever, and some of them will listen. (More complex use of advertising is merely a refinement of technique). I don’t really see any logical alternative—either advertising, which is a special case of persuasion, has to work to some degree, or persuasion must be impossible. (If persuasion of a specific type proves impossible, advertisers will simply use a form of persuasion that is effective).
Habits… as far as I can tell, habits are a consequence of the structure of the human brain (we’re pattern-recognition machines, and almost all biases and problems in human thought come from this). A habit is merely a pattern of action; something that we find ourselves doing by default. Avoiding habits would require a pretty much total rewrite of the human brain. Which may be a good or a bad thing, but is a completely unknown thing.
Desires for food and stuff? …I have no idea. You could probably base an argument from unfreedom around that. (It’s clear enough where the desires come from—people without those desires would have been outcompeted by people with them, so there’s evolutionary pressure to have those biases. Is this an inevitable consequence of an evolutionary development?)
I realise that I said “I’ll just make a few points and leave it” and then, er, failed to do so. And lo, this looks like it could be the beginning of a lengthy discussion of evil and theism, for which LW probably really isn’t the best venue. So I’m going to ignore all the object-level issues aside from giving a couple of clarifications (see below) and make the following meta-point:
You seem to be basically agreeing with my arguments and conceding that your counterproposals are shaky and speculative; my point isn’t to declare victory nor to suggest you should be abandoning theism immediately :-) but just that I think this indicates that you agree with me that whether or not the world turns out to be somehow the best that omnipotence coupled with perfect wisdom and goodness can achieve, it doesn’t look much like it is. In which case I don’t think you can credibly make an argument of the form “the world is well explained by the hypothesis that it’s a morally-optimal world, which is a nice simple hypothesis, so we should consider that highly probable”. I’ve argued before that it’s not so simple a hypothesis, but it’s also a really terrible explanation for the world we actually see.
The promised clarifications: 1. The reason why my cancer-zapping proposal didn’t involve curing all diseases was that it’s easier to see that a change is a clear improvement if it’s reasonably small and simple. Curing all diseases is a really big leap, it probably makes a huge difference to typical lifespans and hence to all kinds of other things in society, it probably would get noticed which, for good or ill, could make a big difference to people’s ideas about science and gods and whatnot. I would in fact expect the overall effect to be substantially more positive than that of just zapping incipient cancers, but it’s more complicated and therefore less clear. I’m not trying to describe an optimal world, merely one that’s clearly better than this one. 2. My point about habits and advertising and the like wasn’t that if free will matters then those things should have no effect, still less that it’s a mystery why we have them; but that if our world is being optimized by a superbeing who values free will so much that, e.g., Hitler’s free will matters more than the murder of six million Jews, then we should expect much less impairment of our free will than we actually seem to have.
I realise that I said “I’ll just make a few points and leave it” and then, er, failed to do so.
...to be fair, I think I also deserve part of the blame for this digression. I have a tendency to run away with minor points on occasion.
whether or not the world turns out to be somehow the best that omnipotence coupled with perfect wisdom and goodness can achieve, it doesn’t look much like it is.
I agree that this is a position which can reasonably be held and for which very strong arguments can be made.
The reason why my cancer-zapping proposal didn’t involve curing all diseases was that it’s easier to see that a change is a clear improvement if it’s reasonably small and simple.
Makes sense. (It does raise the question of how we would know whether or not it is already happening for an even more virulent disease...)
if our world is being optimized by a superbeing who values free will so much that, e.g., Hitler’s free will matters more than the murder of six million Jews,
To be fair, it wasn’t just Hitler; there were a whole lot of people working under his command whose free will was also involved. And several million other people trying to help or hinder one side or the other...
I think if you hypothetically let Hitler rise to power but then magically prevent him from giving orders to persecute Jews more severely than (say) requiring them to live in ghettos, you probably prevent the Holocaust without provoking a coup in which someone more viciously antisemitic takes over. Or killing him in childhood or letting his artistic career succeed better would probably suffice (maybe Germany would then have been taken over by different warmongers blaming their troubles on other groups, but is it really credible that in every such version of the world we get something as bad as Hitler?).
Of course it might turn out that WW2 was terribly beneficial to the world because it led to technological advances and the Holocaust was beneficial because it led to the establishment of the state of Israel, or something. But that’s an entirely different defence from the one we’re discussing here, and I can’t say it seems like a very credible one anyway. (If we really need those technological advances and the state of Israel, aren’t there cheaper ways for omnipotence to arrange for us to get them?)
I think if you hypothetically let Hitler rise to power but then magically prevent him from giving orders to persecute Jews more severely than (say) requiring them to live in ghettos, you probably prevent the Holocaust without provoking a coup in which someone more viciously antisemitic takes over.
I, too, think that this is extremely likely. This would show that Hitler’s orders were necessary for the Holocaust, but it would not show that they were sufficient—there’s probably at least a half-dozen or so people whose orders were also necessary for the Holocaust, and then of course there’s a lot of ways to prevent the Holocaust by affecting more than one person in some or other manner.
Of course it might turn out that WW2 was terribly beneficial to the world because it led to technological advances and the Holocaust was beneficial because it led to the establishment of the state of Israel, or something.
I doubt the political ramifications had much to do with it. The effect of thousands of people being placed in difficult moral situations and having to decide what to do might have been a factor, though; a sort of a stress testing of thousands of people’s free will, in a way which by and large strengthens their ability to think for themselves (because they’re now well aware of how bad things get when others think for them).
It doesn’t need to. The more ways there are to prevent the Holocaust, the more morally unimpressive not doing so becomes. Or, at least, the better the countervailing reasons need to be.
a sort of a stress testing of thousands of people’s free will, in a way which by and large strengthens their ability to think for themselves
Six million Jews and six million others died in the Holocaust. It is not so easy to think for yourself when you are dead.
(Or: It is very easy to think for yourself when you are dead because you have then transcended the confusions and temptations of this mortal life. But I don’t think anyone’s going to argue seriously that the Holocaust was a good thing because the people murdered in it were thereby enabled to make better decisions post mortem. The only reason for this paragraph is to forestall complaints that the one before it assumes atheism.)
So, um… to go back along this line of argument a few posts, then...
it wasn’t just Hitler; there were a whole lot of people working under his command whose free will was also involved.
...this means you’re in agreement with what I wrote here, right?
I’m not sure exactly what point you’re trying to make.
Six million Jews and six million others died in the Holocaust. It is not so easy to think for yourself when you are dead.
And several million other people hid Jews in their attics; attempted (at great personal risk) to smuggle Jews to safe places; helped Jews across the borders; or, on the other side, hunted Jews down, deciding to obey evil orders; arrested people and sent them to death camps; ran or even built said camps… and were, in one or another way, put through the wringer.
I can’t find the reference now, but I do seem to recall reading—somewhere—that Holocaust survivors were significantly less likely to fall victim to the Milgram experiment or similar things.
(I’m not talking about the people who were killed at all.)
I’m not sure exactly what point you’re trying to make.
1. The Holocaust could probably have been prevented, with no extra adverse consequences of similar severity, by an intervention that didn’t interfere with more than one person’s free will. 2. Therefore, a “free will” defence of (the compatibility of theism with) the world’s evil needs to consider that one person’s free will to be of comparable importance to all the suffering and death of the Holocaust. 3. If free will is that important, then in place of (or in addition to) the “problem of evil” we have a “problem of unfreedom”; we are all less free than we might have been, in many ways, and even if that unfreedom is only one millionth as severe as what it would have taken to stop the Holocaust, a billion people’s unfreedom is like a thousand Holocausts. 4. This seems to me to be a fatal objection to this sort of “free will” theodicy. (The real problem is clearly in step 2; we all know, really, that Hitler’s free will—or that of any of the other people whose different decisions would have sufficed to prevent the Holocaust—isn’t more important than millions of horrible deaths.)
And several million other people [...]
I’m pretty sure the number who hid Jews in their attics, helped them escape, etc., was a lot less than six million. And, please, actually think about this for a moment. Consider (1) what the Nazis did to the Jews and (2) what some less-corrupted Germans did to help the Jews. Do you really, truly, want to suggest that #2 was a greater good than #1 was an evil? And are you seriously suggesting that the fact that a whole lot of other Germans had the glorious opportunity to exercise their free will and decide to go along with the extermination of the Jews makes this better?
I think I recall reading of a Christian/atheist debate in which someone—Richard Swinburne? -- made a similar suggestion, and his opponent—Peter Atkins? Christopher Hitchens? -- was heard to growl “May you rot in hell”. I personally think hell is too severe a punishment even for the likes of Hitler, and did even when I was a Christian, but I agree with the overall sentiment.
...you know, a lot of what you’ve been saying over the past few days makes so much more sense now. In effect, you’re looking for the minimum intervention to prevent the Holocaust. (And it should have been possible to do that without taking control of Hitler’s actions; a sudden stroke, bolt of lightning, or well-timed meteor strike could have prevented Hitler from ever doing anything again without removing free will). Considering how much importance the universe seems to put on free will, this might be considered an even more minimal intervention (and no matter how much importance free will is assigned, one life is less than six million lives).
Which leads us directly to the question of why lightning doesn’t strike sufficiently evil people, preferably just before they do something sufficiently evil.
To which the answer, expressed in the simplest possible form, is “I don’t know”. (At best, I can theorise out loud, but it’s all going to end up circling back round to “I don’t know” in the end).
I’m pretty sure the number who hid Jews in their attics, helped them escape, etc., was a lot less than six million.
Well, if each one was helped by one person, refused help by one person, and arrested by one person, then that’s eighteen million moral dilemmas being faced. (Presumably one person could face several of these dilemmas).
Consider (1) what the Nazis did to the Jews and (2) what some less-corrupted Germans did to help the Jews. Do you really, truly, want to suggest that #2 was a greater good than #1 was an evil?
No. I don’t. I’m very sure that it’s nowhere near a complete picture of all the consequences of the Holocaust, but (2) is nowhere near (1).
...and neither (2) nor (1) (nor both of them together) are a complete accounting of all the consequences of the Holocaust.
I personally think hell is too severe a punishment even for the likes of Hitler, and did even when I was a Christian,
I have it on good authority (from a parish priest’s sermon; unfortunately he does not publish his sermons to the internet, so I can’t link it) that the RCC agrees with you on this point.
by an intervention that didn’t interfere with more than one person’s free will.
That implies the “great people” approach to human history (history is shaped by actions of individual great people, not by large and diffuse economic/social/political/etc. forces) -- are you willing to accept it?
I think it implies only a rather weak version of the “great people” approach: some things of historical significance are down to individual people. (Who might be “great” in some sense, but might instead simply have been in a critical place at a critical time.) And yes, I’m perfectly willing to accept that; is there a reason why you would expect me not to?
Without Hitler, Germany would still have been unstable and at risk of being swayed by some sort of extremist demagogue willing to blame its troubles on Someone Else. So I’d assign a reasonable probability to something not entirely unlike the Nazi regime arising even without Hitler. It might even have had the National Socialists in charge. But their rhetoric wouldn’t have been the same, their policies wouldn’t have been the same, their tactics in war (if a war happened) wouldn’t have been the same, and many things would accordingly have been different. The extermination of millions of Jews doesn’t seem particularly inevitable, and I would guess that in (so to speak) most possible worlds in which Hitler is somehow taken out of the picture early on, there isn’t anything very much like the Holocaust.
It’s not my fault if the nearest correct thing to the “great people” theory that actually follows from my opinions happens not to be falsifiable. (It’s not even clear that strong forms of the “great people” theory are falsifiable, actually.)
Of course an opinion can be useful without being falsifiable. “Human life as we see it is not utterly worthless and meaningless,” is probably not falsifiable (how would you falsify it?), but believing it is very useful for avoiding suicide and the like.
Opinions are not falsifiable by their nature (well, maybe by revealed preferences). But, hopefully, core approaches to the study of history (e.g. “great people” vs “impersonal forces”) are more than mere opinions.
Quite possibly not. Is that a problem? (The way this conversation feels to me: You claim that X follows from my opinions. I say: no, only the much weaker X’ does. You then complain that X’ is unfalsifiable and useless. Quite possibly, but so what? I expect everyone has beliefs from which one can deduce unfalsifiable and useless things.)
A recap from my side: I didn’t claim that X follows from your opinions—I asked if you subscribe to the theory. You said yes, to a weak version. I pointed out that the weak version is unfalsifiable and useless. You said “so what?”
I don’t think that a version of a theory that has been sufficiently diluted and hedged to be unfalsifiable and so useless can be said to be a meaningful version of a theory. It’s just generic mush.
I’m not trying to trap you. I was interested in whether you actually believe in the “great people” theory (homeopathic versions don’t count). It now seems that you don’t. That is perfectly fine.
I didn’t claim that X follows from your opinions—I asked
Actually, you did both:
That implies the “great people” approach to human history [...] -- are you willing to accept it?
(An aside: where I come from, saying “Your opinion implies X; are you willing to accept X?” is a more adversarial move than simply saying “Your opinion implies X” since it carries at least a suggestion that maybe they believe things that imply X without accepting X, hence inconsistency or insincerity or something.)
It’s just generic mush.
My point, in case it wasn’t clear, is that the nearest thing to the “great people” theory that actually follows from anything I’ve said is what you are describing as “generic mush”. (Perhaps next time I will be less polite and just say “No, that’s bullshit, no such thing follows from anything I’ve said” rather than trying to find the nearest thing I can that does follow. I was hoping that you would either explain why it would be interesting if I accepted the “generic mush” or else explain why you think something stronger than the “generic mush” follows from what I wrote, and confess myself rather taken aback at the tack you have actually taken.)
As to the “great people” theory: I believe that some historical events are down to the actions of individuals (who may or may not be great in any other sense) while some are much more the result of large and diffuse phenomena involving many people. That isn’t a statement that has a lot of readily evaluable observable consequences, but it’s the best answer I can give to the question you asked. (As I said above, I’m not sure that the “great people” theory itself, even in strong forms, actually fares any better in terms of verifiability or falsifiability.)
I’m pretty sure I wasn’t thinking of infinite entities as very large finite entities, nor was I claiming that infinite sets must have infinite complexity or anything of the kind. What I was claiming high complexity for is the concept of “good”, not God or “perfectly good” as opposed to “merely very good”.
Yes, but the point is that the “perfectly” part (1) isn’t what I’m blaming for the complexity and (2) doesn’t appear to me to make the complexity go away by its presence.
I don’t see how you can be sure about that, when there is so much disagreement about the meaning of good. Human preferences are complex because they are idiosyncratic, but why would a deity, particularly a “philosopher’s god”, have idiosyncratic preferences? And an omniscient deity could easily be a 100% accurate consequentialist: the difficult part of consequentialism, having reliable knowledge of the consequences, has been granted; all you need to add to omniscience is a Good Will.
IOW, regarding both atheism and consequentialism as slam-dunks is a bit of a problem, because if you follow through the consequences of consequentialism, many of the arguments for atheism unravel: a consequentialist deity is fully entitled to destroy two cities to save 10; that would be his version of a trolley problem.
It seems to me that no set of preferences that can be specified very simply without appeal to human-level concepts is going to be close enough to what we call “good” to deserve that name.
a consequentialist deity is fully entitled to destroy two cities to save 10
I entirely agree, but I don’t see how this makes a substantial fraction of the arguments for atheism unravel; in particular, most thoughtful statements of the argument from evil say not “bad things happen, therefore no god” but “bad things happen without any sign that they are necessary to enable outweighing gains, therefore probably no god”.
No idea where you get that from. Theories don’t get a complexity penalty for the complexity of things that appear in universes governed by the theories, but for the complexity of their assumptions. If you have an explanation of the universe that has “there is a good god” as a postulate, then whatever complexity is hidden in the words “good” and “god” counts against that explanation.
If I’m correctly understanding what you’re claiming, it’s something like this: “One can postulate a supremely good being without needing human-level concepts that turn out to be really high-complexity, by defining ‘good’ in very general game-theoretic terms”. (And, I assume from the context in which you’re making the claims: ”… And this salvages the project, mentioned above by CCC, of postulating God as an explanation for the world we see, the idea being that ultimately the details of physical law follow from God’s commitment to making the best possible world or something of the kind”.)
I’m very pessimistic about the prospects for defining “good” in abstract game-theoretic terms with enough precision to carry out any project like this. You’d need your definition to pick out what parts of the world are to count as agents that can be involved in game-like interactions, and to identify what their preferences are, and to identify what counts as a move in each game, and so forth. That seems really difficult (and high-complexity) to me, whether you focus on identifying human agents or whether you try to do something much more general. Evidently you think otherwise. Could you explain why?
(I’ll mention two specific difficulties I anticipate if you’re aiming for simplicity through generality. First: how do you avoid identifying everything as an agent and everything that happens as an action? Second: if the notion of goodness that emerges from this is to resemble ours enough for the word “good” actually to be appropriate, it will have to give different weight to different agents’s interests—humans should matter more than ducks, etc. How will it do that?)
I’m very pessimistic about the prospects for defining “good” in abstract game-theoretic terms with enough precision to carry out any project like this. You’d need your definition to pick out what parts of the world are to count as agents that can be involved in game-like interactions, and to identify what their preferences are, and to identify what counts as a move in each game, and so forth. That seems really difficult (and high-complexity) to me, whether you focus on identifying human agents or whether you try to do something much more general. Evidently you think otherwise. Could you explain why?
So it would be difficult for a finite being that is figuring out some facts that it doesn’t already know on the basis of other facts that it does know. Now... how about an omniscient being?
I think you may be misunderstanding what the relevance of the “difficulty” is here.
The context is the following question:
If we are comparing explanations for the universe on the basis of hypothesis-complexity (e.g., because we are using something like a Solomonoff prior), what complexity should we estimate for notions like “good”?
If some notion like “perfectly benevolent being of unlimited power” turns out to have very low complexity, so much the better for theistic explanations of the universe. If it turns out to have very high complexity, so much the worse for such explanations.
(Of course that isn’t the only relevant question. We also need to estimate how likely a universe like ours is on any given hypothesis. But right now it’s the complexity we’re looking at.)
In answering this question, it’s completely irrelevant how good some hypothetical omniscient being might be at figuring out what parts of the world count as “agents” and what their preferences are and so on, even though ultimately hypothetical omniscient beings are what we’re interested in. The atheistic argument here isn’t “It’s unlikely that the world was created by a god who wants to satisfy the preferences of agents in it, because identifying those agents and their preferences would be really difficult even for a god” (to which your question would be an entirely appropriate rejoinder). It’s something quite different: “It’s not a good explanation for the universe to say that it was created by a god who wants to satisfy the preferences of agents in it, because that’s a very complex hypothesis, because the notions of ‘agent’ and ‘preferences’ don’t correspond to simple computer programs”.
(Of course this argument will only be convincing to someone who is on board with the general project of assessing hypotheses according to their complexity as defined in terms of computer programs or something roughly equivalent, and who agrees with the claim that human-level notions like ‘agent’ and ‘preference’ are much harder to write programs for than physics-level ones like ‘electron’. Actually formalizing all this stuff seems like a very big challenge, but I remark that in principle—if execution time and computer memory are no object—we basically already know how to write a program that implements physics-so-far-as-we-understand-it, but we seem to be some way from writing one that implements anything much like morality-as-we-understand-it.)
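To make the kind of comparison I have in mind slightly more concrete, here is a toy sketch. It uses compressed length as a very crude stand-in for Kolmogorov complexity (which is uncomputable), and the two “hypotheses” are just illustrative strings I have made up for the occasion, not serious formalisations of physics or of theology; the point is only how a 2^-K-style prior turns a difference in description length into a difference in odds.

```python
import zlib

def description_length_bits(hypothesis: str) -> int:
    """Crude stand-in for Kolmogorov complexity: bits in the compressed text.
    K itself is uncomputable, so a computable proxy like this is only an upper bound."""
    return 8 * len(zlib.compress(hypothesis.encode("utf-8"), 9))

# Illustrative toy "hypotheses" (made-up strings, not real physics or theology):
physics_only = "state[t+1] = F(state[t])  # one fixed update rule plus initial conditions"
good_god = ("an omniscient, omnipotent agent that identifies every agent in the world, "
            "infers each agent's preferences, weighs them against one another, "
            "and steers history so that those preferences are best satisfied")

k_physics = description_length_bits(physics_only)
k_god = description_length_bits(good_god)
print(f"proxy K(physics_only) = {k_physics} bits, proxy K(good_god) = {k_god} bits")
# Under a 2^-K prior, the log2 of the prior odds ratio is just the difference in bits:
print(f"prior odds (physics_only : good_god) ~ 2^{k_god - k_physics} : 1")
```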
It’s not surprising that one particular parsimony principle can be used to overturn one particular form of theism.
After all, most theists disagree with most theisms... and most believers in a Weird Science hypothesis (MUH, Matrix, etc.) don’t believe in the others.
The question is: where is the slam dunk against theism... the one that works against all forms of theism, that works only against theism and not against similar scientific ideas like Matrix Lords, that works against the strongest arguments for theism (not just biblically literalist creationist protestant Christianity), and that doesn’t rest on cherry-picking particular parsimony principles?
There are multiple principles of parsimony, multiple Occam’s razors.
Some focus on ontology, on the multiplication of entities, as in the original razor; others on epistemology, the multiplication of assumptions. The Kolmogorov complexity measure is more aligned with the latter.
Smaller universes are favoured by the ontological razor, but disfavoured by the epistemological razor, because they are more arbitrary. Maximally large universes can have low epistemic complexity (because you have to add information specifying what has been left out in order to arrive at smaller universes), and low K-complexity (because short programmes can generate infinite bitstrings, e.g. an expansion of pi).
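To make the “short programmes can generate infinite bitstrings” point concrete, a standard example (my own illustration, not anything specific to this thread) is Gibbons’ unbounded spigot, which streams the decimal digits of pi from a few lines of code; the program stays the same size no matter how many digits you draw from it.

```python
from itertools import islice

def pi_digits():
    """Stream the decimal digits of pi (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # the next digit is now certain
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(list(islice(pi_digits(), 15)))  # 3, 1, 4, 1, 5, 9, ...
```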
we basically already know how to write a program that implements physics-so-far-as-we-understand-it, but we seem to be some way from writing one that implements anything much like morality-as-we-understand-it.
Morality as we know it evolved from physics plus starting conditions. When you say that physics is soluble but morality isn’t, I suppose you mean that the starting conditions are absent.
Morality as we know it evolved from physics plus starting conditions. When you say that physics is soluble but morality isn’t, I suppose you mean that the starting conditions are absent.
You need to know not just the starting conditions, but also the position where morality evolves. That position can theoretically have huge complexity.
Well, obviously we should pick the simplest one :-).
Seriously: I wouldn’t particularly expect there to be a single all-purpose slam dunk against all varieties of theism. Different varieties of theism are, well, very different. (Even within, say, protestant Christianity, one has the fundamentalists and the super-fuzzy liberals, and since they agree on scarcely any point of fact I wouldn’t expect any single argument to be effective against both positions.)
ontology [...] epistemology
I’m pretty sure that around these parts the “epistemological” sort (minimize description / program rather than size of what it describes / produces) is much, much more widely held than the “ontological” sort.
I suppose you mean that the starting conditions are absent.
That’s one reasonable way of looking at it, but if the best way we can find to compute morality-as-we-understand-it is to run a complete physical simulation of our universe then the outlook doesn’t look good for the project of finding a simpler-than-naturalism explanation of our universe based on the idea that it’s the creation of a supremely good being.
I’m pretty sure that around these parts the “epistemological” sort (minimize description / program rather than size of what it describes / produces) is much, much more widely held than the “ontological” sort.
That seems really difficult (and high-complexity) to me, whether you focus on identifying human agents or whether you try to do something much more general. Evidently you think otherwise.
This is motivated stopping. You don’t want to admit any evidence for theism so you declare the problem impossible instead of thinking about it for 10 seconds.
Here are some hints: If you were dropped into an alien planet or even an alien universe you would have no trouble identifying the most agenty things.
You’d need your definition to pick out what parts of the world are to count as agents that can be involved in game-like interactions,
Well there you go, agents are things that can be involved in game-like interactions.
[...] motivated [...] don’t want [...] instead of thinking about it
This is the third time in the last few weeks that you have impugned my integrity on what seems to me to be zero evidence. I do wish you would at least justify such claims when you make them. (When I have asked you to do so in the past you have simply ignored the requests.)
Would it kill you to entertain some other hypotheses—e.g., “the other guy is simply failing to notice something I have noticed” and “I am simply failing to notice something the other guy has noticed”? Perhaps it would; your consistent strategy of downvoting everyone who disagrees with you doesn’t exactly suggest that you’re here for a collaborative search for truth as opposed to fighting a war with arguments as soldiers.
[EDITED to add: I didn’t, in fact, declare anything impossible; and before declaring it very difficult I did in fact think about it for more than ten seconds. I see little evidence that you’ve given as much thought to anything I’ve said in this discussion.]
you would have no trouble
I have agent-identifying hardware in my brain. It is, I think, quite complicated. I don’t know how to make a computer identify agents, and so far as I know no one else does either. The best automated things I know of for tasks remotely resembling agent-identification are today’s state-of-the-art image classifiers, which typically involve large mysterious piles of neural network weights, which surely count as high-complexity if anything does.
agents are things that can be involved in game-like interactions
Identifying game-like interactions is also (so far as I can tell) a problem no one has any inkling how to solve, especially if we don’t have the prior ability to identify the agents.
Perhaps it would; your consistent strategy of downvoting everyone who disagrees with you
No, but I do downvote people who appear to be completely mind-killed.
Identifying game-like interactions is also (so far as I can tell) a problem no one has any inkling how to solve, especially if we don’t have the prior ability to identify the agents.
Rather, identifying agents using algorithms with reasonable running time is a hard problem.
Also, consider the following relatively uncontroversial beliefs around here:
1) The universe has low Kolmogorov complexity.
2) An AGI is likely to be developed and when it does it’ll take over the universe.
Now let’s consider some implications of these beliefs:
3) An AGI has low Kolmogorov complexity since it can be specified as “run this low Kolmogorov complexity universe for a sufficiently long period of time”.
Also, for the AGI to be successful it is going to have to be good at detecting agents, so it can dedicate sufficient resources to defeating/subverting them. Thus detecting agents must have low Kolmogorov complexity.
I do downvote people who appear to be completely mind-killed
I think your mindkill detection algorithms need some tuning; they have both false positives and false negatives.
Rather [...] with reasonable running time
I know of no credible way to do it with unreasonable running time either. (Unless you count saying “AIXI can solve any solvable problem, in principle, so use AIXI”, but I see no reason to think that this leads you to a solution with low Kolmogorov complexity.)
I don’t think your argument from superintelligent AI works; exactly where it fails depends on some details you haven’t specified, but the trouble is some combination of the following.
For your first premise to be uncontroversial around here, I think you need to either take it as applying only to the form of the laws of physics and not to initial conditions, arbitrary constants, etc. (in which case you can’t identify “this universe” and still have it be of low complexity) or adopt something like Tegmark’s MUH that amounts to running every version of the universe (all boundary conditions, all values for the constants, etc.) in parallel (in which case what gets taken over by a superintelligent AI is no longer the whole thing but a possibly-tiny part, and specifying that part costs a lot of complexity).
You need to say where in the universe the AGI is, which imposes a large complexity cost—unless …
… unless you are depending on it taking over the whole universe so that you can just point at the whole caboodle and say “that thing”—but then presumably its agent-detection facilities are a tiny part of the whole (not necessarily a spatially localized part, of course), and singling those out so you can say “agents are things that that identifies as agents” again has a large complexity cost from locating them.
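To spell out the counting behind those last two points (a rough upper-bound decomposition in the usual notation, my own sketch and nothing stronger):

```latex
K(\text{agent-detector})
  \;\le\; \underbrace{K(\text{dynamics})}_{\text{plausibly small}}
  \;+\; \underbrace{K(\text{seed / boundary conditions})}_{\text{small only with extra assumptions}}
  \;+\; \underbrace{K(\text{locating and querying the detector inside the evolved state})}_{\text{no reason given to think this is small}}
  \;+\; O(1)
```

The argument being offered silently treats the third term as negligible, and that is exactly the step I am not granting.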
For your first premise to be uncontroversial around here, I think you need to either take it as applying only to the form of the laws of physics and not to initial conditions, arbitrary constants, etc. (in which case you can’t identify “this universe” and still have it be of low complexity)
Doesn’t that undermine the premise of the whole “a godless universe has low Kolmogorov complexity” argument that you’re trying to make?
adopt something like Tegmark’s MUH that amounts to running every version of the universe (all boundary conditions, all values for the constants, etc.) in parallel (in which case what gets taken over by a superintelligent AI is no longer the whole thing but a possibly-tiny part, and specifying that part costs a lot of complexity).
Well, all the universes that can support life are likely to wind up taken over by AGIs.
unless you are depending on it taking over the whole universe so that you can just point at the whole caboodle and say “that thing”—but then presumably its agent-detection facilities are a tiny part of the whole (not necessarily a spatially localized part, of course), and singling those out so you can say “agents are things that that identifies as agents” again has a large complexity cost from locating them.
But, the AGI can. Agentiness is going to be a very important concept for it. Thus it’s likely to have a short referent to it.
Doesn’t that undermine the premise of the whole “a godless universe has low Kolmogorov complexity” argument that you’re trying to make?
Again, there is a difference between the complexity of the dynamics defining state transitions, and the complexity of the states themselves.
But, the AGI can. Agentiness is going to be a very important concept for it. Thus it’s likely to have a short referent to it.
What do you mean by “short referent?” Yes, it will likely be an often-used concept, so the internal symbol signifying the concept is likely to be short, but that says absolutely nothing about the complexity of the concept itself. If you want to say that “agentiness” is a K-simple concept, perhaps you should demonstrate that by explicating a precise computational definition for an agent detector, and show that it doesn’t fail on any conceivable edge-cases.
Saying that it’s important doesn’t mean it’s simple. “For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must have low Kolmogorov complexity.”
Saying that it’s important doesn’t mean it’s simple.
You’re confusing the intuitive notion of “simple” with “low Kolmogorov complexity”. For example, the Mandelbrot set is “complicated” in the intuitive sense, but has low Kolmogorov complexity since it can be constructed by a simple process.
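To make that example concrete (a throwaway illustration of my own): the whole set is generated by iterating one quadratic map, so a complete membership test, up to a finite iteration cutoff, fits in a few lines, even though the picture it draws is famously intricate.

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Iterate z -> z*z + c from zero; call c a member if the orbit stays bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# Crude ASCII rendering: intricate-looking output from a very short program.
for row in range(21):
    line = ""
    for col in range(61):
        c = complex(-2.0 + 2.5 * col / 60, -1.2 + 2.4 * row / 20)
        line += "#" if in_mandelbrot(c) else " "
    print(line)
```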
What do you mean by “short referent?” Yes, it will likely be an often-used concept, so the internal symbol signifying the concept is likely to be short, but that says absolutely nothing about the complexity of the concept itself.
It does if you look at the rest of my argument.
If you want to say that “agentiness” is a K-simple concept, perhaps you should demonstrate that by explicating a precise computational definition for an agent detector,
Step 1: Simulate the universe for a sufficiently long time.
Step 2: Ask the entity now filling up the universe “is this an agent?”.
Thus reducing entropy globally must have low Kolmogorov complexity.
What do you mean by that statement? Kolmogorov complexity is a property of a concept. Well “reducing entropy” as a concept does have low Kolmogorov complexity.
You’re confusing the intuitive notion of “simple” with “low Kolmogorov complexity”
I am using the word “simple” to refer to “low K-complexity.” That is the context of this discussion.
It does if you look at the rest of my argument.
The rest of your argument is fundamentally misinformed.
Step 1: Simulate the universe for a sufficiently long time.
Step 2: Ask the entity now filling up the universe “is this an agent?”.
Simulating the universe to identify an agent is the exact opposite of a short referent. Anyway, even if simulating a universe were tractable, it does not provide a low complexity for identifying agents in the first place. Once you’re done specifying all of and only the universes where filling all of space with computronium is both possible and optimal, all of and only the initial conditions in which an AGI will fill the universe with computronium, and all of and only the states of those universes where they are actually filled with computronium, you are then left with the concept of universe-filling AGIs, not agents.
You seem to be attempting to say that a descriptor of agents would be simple because the physics of our universe is simple. Again, the complexity of the transition function and the complexity of the configuration states are different. If you do not understand this, then everything that follows from this is bad argumentation.
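If it helps, here is a toy version of that distinction (my own example, using an elementary cellular automaton rather than real physics): the update rule below is one line, but the configurations it produces look irregular, and singling out any particular structure within them costs bits over and above the rule.

```python
def rule30_step(cells: list[int]) -> list[int]:
    """One step of elementary cellular automaton Rule 30 (wrap-around boundary)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

# The transition rule is one line; the configurations it generates are another matter.
cells = [0] * 31 + [1] + [0] * 31
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```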
What do you mean by that statement? Kolmogorov complexity is a property of a concept. Well “reducing entropy” as a concept does have low Kolmogorov complexity.
It is framed after your own argument, as you must be aware. Forgive me, for I too closely patterned it after your own writing. “For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must be possible.” That is false, just as your own argument for a K-simple general agent specification is false. It is perfectly possible that an AGI will not need to be good at recognizing agents to be successful, or that an AGI that can recognize agents generally is not possible. To show that it is, you have to give a simple algorithm, which your universe-filling algorithm is not.
the whole “a godless universe has low Kolmogorov complexity” argument that you’re trying to make.
It might, perhaps, if I were actually trying to make that argument. But so far as I can see no one is claiming here that the universe has low komplexity. (All the atheistic argument needs is for the godless version of the universe to have lower komplexity than the godded one.)
all the universes that can support life are likely to wind up taken over by AGIs.
Even if so, you still have the locate-the-relevant-bit problem. (Even if you can just say “pick any universe”, you have to find the relevant bit within that universe.) It’s also not clear to me that locating universes suitable for life within something like the Tegmark multiverse is low-komplexity.
the AGI can. [...] it’s likely to have a short referent to it.
An easy-to-use one, perhaps, but I see no guarantee that it’ll be something easy to identify for others, which is what’s relevant.
Consider humans; we’re surely much simpler than a universe-spanning AGI (and also more likely to have a concept that nicely matches the human concept of “agent”; perhaps a universe-spanning AGI would instead have some elaborate range of “agent”-like concepts making fine distinctions we don’t see or don’t appreciate; but never mind that). Could you specify how to tell, using a human brain, whether something is an agent? (Recall that for komplexity-measuring purposes, if you do so by means of language or something then the komplexity of that language is part of the cost you pay. In fact, it’s worse; you need to specify how to work out that language by looking at human brains. Similarly, if you want to say “look at the neurons located here”, the thing you need to pay the komplexity-cost of is not just specifying “here” but specifying how to find “here” in a way that works for any possible human-like thing.)
Even if so, you still have the locate-the-relevant-bit problem.
What part of “universe taken over by AGI” is causing your reading comprehension to fail?
It’s also not clear to me that locating universes suitable for life within something like the Tegmark multiverse is low-komplexity.
You haven’t played with cellular automata much, have you?
Could you specify how to tell, using a human brain, whether something is an agent?
Ask it.
Recall that for komplexity-measuring purposes, if you do so by means of language or something then the komplexity of that language is part of the cost you pay.
The cost of specifying a language is the cost of specifying the entity that can decode it, and we’ve already established that a universe spanning AGI has low Kolmogorov complexity.
What part of “universe taken over by AGI” is causing your reading comprehension to fail?
No part. I already explained why I don’t think “universe taken over by AGI” implies “no need for lots of bits to locate what we need within the universe”; I really shouldn’t have to do so again two comments downthread.
You haven’t played with cellular automata much, have you?
Fair comment (though, as ever, needlessly obnoxiously expressed); I agree that there are low-komplexity things that surely contain powerful intelligences. But now take a step back and look at what you’re arguing. I paraphrase thus: “A large instance of Conway’s Life, seeded pseudorandomly, will surely end up taken over by a powerful AI. A powerful AI will be good at identifying agents and their preferences. Therefore the notions of agent and preference are low-komplexity.” Is it not obvious that you’re proving too much on the basis of too little here, and therefore that something must have gone wrong? I mean, if this argument worked it would appear to obliterate differences in komplexity between any two concepts we might care about, because our hypothetical super-powerful Life AI should also be good at identifying any other kind of pattern.
I’ve already indicated one important thing that I think has gone wrong: saying how to use whatever (doubtless terribly complicated) AI may emerge from running “Life” on a board of size 10^100 for 10^200 ticks to identify agents may require a great many bits. I think I see a number of other problems, but it’s 2.30am local time so I’ll leave you to look for them, if you choose to do so.
The cost of specifying a language is the cost of specifying the entity that can decode it
No. It is the cost of specifying that entity and indicating somehow that it is to decode that language rather than some other.
Let’s make this a little more concrete. You are claiming that the likely emergence of universe-spanning AGIs able to detect agency means that the notion of “agent” has low komplexity. Could you please sketch what a short program for identifying agents would look like? I gather that it begins with something like “Make a size-10^100 Life instance, seeded according to such-and-such a rule, and run it for 10^200 ticks”, which I agree is low-komplexity. But then what? How, in genuinely low-komplexity terms, are you then going to query this thing so as to identify agents in our universe?
I am not expecting you to actually write the program, of course. But you seem sure that it can be done and doesn’t need many bits, so you surely ought to be able to outline how it would work in general terms, without any points where you have to say “and then a miracle happens”.
Could you please sketch what a short program for identifying agents would look like? I gather that it begins with something like “Make a size-10^100 Life instance, seeded according to such-and-such a rule, and run it for 10^200 ticks”, which I agree is low-komplexity. But then what? How, in genuinely low-komplexity terms, are you then going to query this thing so as to identify agents in our universe?
Hard code the question in the AI’s language directly into the simulation. (This is what is known in the computational complexity world as a non-constructive existence proof.)
OK, so first let me check I’ve understood how your proposal works. I’ve rolled the agent-identifying bit into a rough attempt at a “make the universe a god would make” algorithm, since of course that’s what we’re actually after. It isn’t necessarily exactly what you have in mind, but it seems like a reasonable extrapolation.
Make a simulated universe of size N operating according to algorithm A, initially seeded according to algorithm B, and run it for time T. (Call the result U.)
Here N and T are large and A and B are simple algorithms with the property that when we do this we end up with a superintelligent AI occupying a large fraction of U.
It is a presupposition of this approach that such algorithms exist.
Now let X be a complete description of any candidate universe. Modify U to make U(X), which is like U but has somehow incorporated whatever one needs to do in universe U to ask the superintelligent AI “In the universe described by X, to what extent do the agents it contains have their preferences satisfied?”.
I’m assuming something like preference utilitarianism here; one could adapt the procedure for other notions of ethics.
It is a presupposition of this approach that there is a way to ask such a question and be confident of getting an answer within a reasonable time.
Run our simulation for a further time T’ and decode the resulting changes in U to get an answer to our question.
Now our make-a-universe algorithm goes like this: Consider all possible X below some (large but fixed) complexity bound. Do the above for each. Identify the X that gives the largest answer to our question.
Congratulations! We have now identified the Best Possible World. Predict that whatever happens in the Best Possible World is what actually happens.
And now—if in fact this is the best of all possible worlds—we have an algorithm that predicts everything, and doesn’t need any particular (perhaps-complex) laws of physics built into it. In which case, the simplest explanation for our world is that it is the best possible world and was made with that as desideratum.
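Restated as placeholder pseudocode, the proposal as I understand it has roughly this shape; both callables passed in are stand-ins for steps the proposal simply assumes can be carried out, and the toy invocation at the end exists only to show the structure of the loop.

```python
def best_possible_world(candidate_descriptions, grow_super_ai, ask_preference_score):
    """Grow a super-AI in a cheaply specified simulated universe, then have it score
    every candidate world-description and keep the best one, as described above."""
    oracle = grow_super_ai()                      # run algorithm A from seed B for time T
    best_x, best_score = None, float("-inf")
    for x in candidate_descriptions:              # every X below some fixed complexity bound
        score = ask_preference_score(oracle, x)   # embed the question in U, run for T', decode
        if score > best_score:
            best_x, best_score = x, score
    return best_x                                 # predict: whatever happens in best_x happens

# Toy invocation with trivial stand-ins, purely to show that the loop is well-formed:
print(best_possible_world(["world-1", "world-22"],
                          grow_super_ai=lambda: None,
                          ask_preference_score=lambda oracle, x: len(x)))
```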
So, first of all: Yes, I kinda-agree that something kinda like this could in principle kinda work, and that if it did we would have good reason to believe in a god or something like one, and that this shows that there are kinda-conceivable worlds, perhaps even rather complex ones, in which belief in a god is not absurd on the basis of Kolmogorov complexity. Excellent!
None the less, I find it unconvincing even on those terms, and considerably less convincing still as an argument that our world might be such a world. I’ll explain why.
(But first, a preliminary note. I have used the same superintelligent AI for agent-identification and universe-assessment. We don’t have to do that; we could use different ones for those two problems or something. I don’t see any particular advantage to doing so, but it was only the agent-identification problem that we were specifically discussing and for all I know you may have some completely different approach in mind for the universe-assessment. If so, some of what follows may miss the mark.)
First, there are some technical difficulties. For instance, it’s one thing to say that almost all universes (hence, hopefully, at least one very simple one) eventually contain a superintelligent AI; but it’s another to say that they eventually contain a superintelligent AI that we can induce to answer arbitrary questions and understand the answers of, by simply-specifiable diddling with its universe. It could be, e.g., that AIs in very simple universes always have very complicated implementations, in which case specifying how to ask it our question might take as much complexity as specifying how our existing world works. And it seems very unlikely that a superintelligent universe-dominating AI is going to answer whatever questions we put to it just because we ask. And there’s no particular reason to expect one of these things to have a language in any sense we can use. (If it’s a singleton, what need has it of language?)
Second, this works only when our world is in fact best-possible according to some very specific criterion. (As described above, the algorithm fails disastrously if our world isn’t exactly the best-possible world according to that criterion. We can make it more robust by making it not a make-a-universe machine but a what-happens-next machine: in any given situation it feeds a description of that to the AI and asks “what happens next, to maximize agents’ preference satisfaction?”. Or maybe it iterates over possible worlds again, looks only at those that at some point closely resemble the situation whose sequel it’s trying to predict, and chooses the one of those for which the AI gives the best rating. These both have problems of their own, and this comment is too long as it is so I won’t expand on them here. Let’s just suppose that we do somehow at least manage to make something that makes not-completely-absurd predictions about whatever situations we may encounter in the real world, using techniques closely resembling the above.)
Anyway: the point here is twofold. Even supposing our universe is best-possible according to some god’s preferences, there is no particular reason to think that the simplest superintelligent AI we find will have the exact same preferences, and predictions for what happens may well depend in a very sensitive and fiddly manner on exactly what preferences the god in question has. I see absolutely no reason to think that specifying those preferences accurately enough to enable prediction doesn’t require as many bits as just describing our physical universe does. And: in any case our universe looks so hilariously unlike a world that’s best-possible according to any simple criterion (unless the criterion is, e.g., “follows the actual world’s laws of physics”) that this whole exercise seems to have little chance of producing good predictions of our world.
An AGI has low Kolmogorov complexity since it can be specified as “run this low Kolmogorov complexity universe for a sufficiently long period of time”.
That’s a fundamental misunderstanding of complexity. The laws of physics are simple, but the configurations of the universe that runs on it can be incredibly complex. The amount of information needed to specify the configuration of any single cubic centimeter of space is literally unfathomable to human minds. Running a simulation of the universe until intelligences develop inside of it is not the same as specifying those intelligences, or intelligence in general.
Also the AGI to be successful is going to have to be good at detecting agents so it can dedicated sufficient resources to defeating/subverting them. Thus detecting agents must have low Kolmogorov complexity.
The convenience of some hypothetical property of intelligence does not act as a proof of that property. Please note that we are in a highly specific environment, where humans are the only sapients around, and animals are the only immediately recognizable agents. There are sci-fi stories about your “necessary” condition being exactly false; where humans do not recognize some intelligence because it is not structured in a way that humans are capable of recognizing.
The Second Law of Thermodynamics causes the Kolmogorov complexity of the universe to increase over time. What you’ve actually constructed is an argument against being able to simulate the universe in full fidelity.
This is not right, K(.) is a function that applies to computable objects. It either does not apply to our Universe, or is a constant if it does (this constant would “price the temporal evolution in”).
I sincerely don’t think it works that way. Consider the usual relationship between Shannon entropy and Kolmogorov complexity: H(X) ∝ E[K(X)]. We know that the Gibbs, and thus Shannon, entropy of the universe is nondecreasing, and that means that the distribution over universe-states is getting more concentrated on more complex states over time. So the Kolmogorov complexity of the universe, viewed at a given instant in time but from a “god’s eye view”, is going up.
You could try to calculate the maximum possible entropy in the universe and “price that in” as a constant, but I think that dodges the point in the same way as AIXI_{tl} does by using an astronomically large “constant factor”. You’re just plain missing information if you try to simulate the universe from its birth to its death from within the universe. At some point, your simulation won’t be identical to the real universe anymore, it’ll diverge from reality because you’re not updating it with additional empirical data (or rather, because you never updated it with any empirical data).
Hmmm… is there an extension of Kolmogorov complexity defined to describe the information content of probabilistic Turing machines (which make random choices) instead of deterministic ones? I think that would better help describe what we mean by “complexity of the universe”.
What does this mean? What is the expectation taken with respect to? I can construct an example where the above is false. Let x1 be the first n bits of Chaitin’s omega, x2 be the (n+1)th, …, 2nth bits of Chaitin’s omega. Let X be a random variable which takes the value x1 with probability 0.5 and the value x2 with probability 0.5. Then E[K(X)] = 0.5·K(x1) + 0.5·K(x2), which is of order n because the bits of Chaitin’s omega are incompressible, but H(X) = 1.
edit: Oh, I see, this is a result on non-adversarial sample spaces, e.g. {0,1}^n, in Li and Vitanyi.
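For reference, the statement I think is being invoked (quoting from memory of Li and Vitanyi, so treat the exact constants with caution) says that for a computable distribution P with finite entropy, the expected complexity and the entropy agree up to roughly the complexity of the distribution itself:

```latex
0 \;\le\; \Big(\sum_x P(x)\,K(x)\Big) - H(P) \;\le\; K(P) + O(1)
```

This is consistent with the Chaitin-omega example above, since there the distribution P itself has complexity of order n.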
This is not and can not be true. I mean, for one the universe doesn’t have a Kolmogorov complexity*. But more importantly, a hypothesis is not penalized for having entropy increase over time as long as the increases in entropy arise from deterministic, entropy-increasing interactions specified in advance. Just as atomic theory isn’t penalized for having lots of distinct objects, thermodynamics is not penalized for having seemingly random outputs which are secretly guided by underlying physical laws.
*If you do not see why this is true, consider that there can be multiple hypotheses which would output the same state in their resulting universes. An obvious example would be one which specifies our laws of physics and another which specifies the position of every atom, without compression in the form of physical law.
*If you do not see why this is true, consider that there can be multiple hypotheses which would output the same state in their resulting universes. An obvious example would be one which specifies our laws of physics and another which specifies the position of every atom, without compression in the form of physical law.
This is exactly the sort of thing for which Kolmogorov complexity exists: to specify the length of the shortest hypothesis which outputs the correct result.
Just as atomic theory isn’t penalized for having lots of distinct objects
Atomic theory isn’t “penalized” because it has lots of distinct but repeated objects. It actually has very few things that don’t repeat. Atomic theory, after all, deals with masses of atoms.
The Second Law of Thermodynamics causes the Kolmogorov complexity of the universe to increase over time. What you’ve actually constructed is an argument against being able to simulate the universe in full fidelity.
Um, you appear to be trying to argue that the universe has infinite Kolmogorov complexity. Well, if it does, it kind of undermines the whole “we must reject God because a godless universe has lower Kolmogorov complexity” argument.
Um, you appear to be trying to argue that the universe has infinite Kolmogorov complexity.
Not infinite, just growing over time. This just means that it’s impossible to simulate the universe with full fidelity from inside the universe, as you would need a bigger universe to do it in.
Not sure anyone is dumb enough to think the visible universe has low Kolmogorov complexity. That’s actually kind of the reason why we keep talking about a universal wavefunction, and even larger Big Worlds, none of which an AGI could plausibly control.
No, but it does mean that if you want to argue that humans exist you must provide strong positive evidence, perhaps telling us an address where we can meet a real live human ;)
The basic theistic hypothesis is a description of an omnipotent, omniscient being; together with the probable aims and suspected intentions of such a being. The laws of physics would then derive from this.
“Omnipotent”, “omniscient”, and “being” are packing a whole shit-ton of complexity, especially “being”. They’re definitely packing more than a model of particle physics, since we know that all known “beings” are implemented on top of particle physics.
I don’t think mind designs are dependent on their underlying physics. The physics is a substrate, and as long as it provides general computation, intelligence would be achievable in a configuration of that physics. The specifics of those designs may depend on how those worlds function, like how jellyfish-like minds may be different from bird-like minds, but not the common elements of induction, analysis of inputs, and selection of outputs. That would mean the simplest a priori mind would have to be computed by the simplest provision of general computation, however. An infinitely divine Turing Machine, if you will.
That doesn’t mean a mind is more basic than physics, though. That’s an entirely separate issue. I haven’t ever seen a coherent model of God in the first place, so I couldn’t begin to judge the complexity of its unproposed existence. If God is a mind, then what substrate does it rest on?
We don’t know that beings require particle physics—if the only animal I’ve ever seen is a dog, that is not proof that zebras don’t exist.
I’m not saying that there isn’t complexity in the word “being”, just that I’m not convinced that your argument in favour of there being more complexity than particle physics is good.
Kolmogorov complexity is, in essence, “How many bits do you need to specify an algorithm which will output the predictions of your hypothesis?” A hypothesis which gives a universally applicable formula is of lower complexity than one which specifies each prediction individually. More simple formulas are of lower complexity than more complex formulas. And so on and so forth.
The source of the high Kolmogorov complexity for the theistic hypothesis is God’s intelligence. Any religious theory which involves the laws of physics arising from God has to specify the nature of that God as an algorithm which specifies God’s actions in every situation with mathematical precision and without reference to any physical law which would (under this theory) later arise from God. As you can imagine, doing so would take very, very many bits to do successfully. This leads to very high complexity as a result.
The number of bits required to specify an agent with free will (insofar as free will is a meaningful term when discussing a deterministic universe) is definitely finite. Very large, but finite. Which is a good thing, since Kolmogorov priors specify a prior of 0 for a hypothesis with infinite complexity and assigning a prior of 0 to a hypothesis is a Bad Thing for a variety of reasons.
The length (in bits for a program in a universal Turing machine) of the smallest algorithm which will output the same outputs as the agent if the agent were given the same inputs as the algorithm.
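Written out in the usual notation (my own formalisation of the sentence above, with U a fixed universal Turing machine and the agent idealised as an input-output function, which is of course already a simplification):

```latex
K(\text{agent}) \;=\; \min\bigl\{\, |p| \;:\; U(p, x) = \text{agent}(x) \ \text{for every input } x \,\bigr\}
```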
Do note that I said “insofar as free will is a meaningful term when discussing a deterministic universe”. Many definitions of free will are defined around being non-deterministic, or non-computable. Obviously you couldn’t write a deterministic computer program which has those properties. But there are reasons presented on this site to think that once you pare down the definition to the basic essentials of what is really meant and stop being confused by the language used to traditionally describe free will, that you should in principle be able to have a deterministic agent who does, in fact, have free will for all meaningful purposes.
once you pare down the definition to the basic essentials of what is really meant and stop being confused by the language used to traditionally describe free will, that you should in principle be able to have a deterministic agent who does, in fact, have free will for all meaningful purposes.
I don’t read it this way. The approach you linked to basically says that free will does not exist and is just a concept humans came up with to confuse themselves. If you accept this, then you should not use the “free will” terminology at all because there is no point to it. So I still don’t understand that concept.
The only reason I’m using the free will terminology at all here is because the hypothesis under consideration (an entity with free will which resembles the Abrahamic God is responsible for the creation of our universe) was phrased in those terms. In order to evaluate the plausibility of that claim, we need a working definition of free will which is amiable to being a property of an algorithm rather than only applying to agents-in-abstract. I see no conflict between the basic notion of a divinely created universe and the framework for free will provided in the article hairyfigment links. One can easily imagine God deciding to make a universe, contemplating possible universes which They could create, using Their Godly foresight to determine what would happen in each universe and then ultimately deciding that the one we’re in is the universe They would most prefer to create. There’s many steps there, and many possible points of failure, but it is a hypothesis which you could, in principle, assign an objective Solomonoff prior to.
(Note: This post should not be taken as saying that the theistic hypothesis is true. Only that its likelihood can successfully be evaluated. I know it is tempting to take arguments of the form “God is a hypothesis which can be considered” to mean “God should be considered” or even “God is real” due to arguments being foot soldiers and it being really tempting to decry religion as not even coherent enough to parse successfully.)
Would you care to demonstrate? Preferably starting with explaining how the Solomonoff prior is relevant (note that a major point in theologies of all Abrahamic religions is that God is radically different from everything else (=universe)).
No, I would not care to demonstrate. A proof that a solution exists is not the same thing as a procedure for obtaining a solution. And this isn’t even a formal proof: it’s a rough sketch of how you’d go about constructing one, informally posted in a blog’s comment section as part of a pointless and unpleasant discussion of religion.
If you can’t follow how “It is possible-in-principle to calculate a Solomonoff prior for this hypothesis” relates to “We are dismissive of this hypothesis because it has high complexity and little evidence supporting it.” I honestly can’t help. This is all very technical and I don’t know what you already know, so I have no idea what explanation would be helpful to close that inferential distance. And the comments section of a blog really isn’t the best format. And I’m certainly not the best person to teach about this topic.
And yet here we have someone talking about “free will” as if it meant something, and CCC’s usage seems entirely consistent with the meaning described here. (The link is a spoiler for the questions linked in the grandparent, but I’ve already tried to direct CCC’s attention to the computable kind of “free will” in the hope of clarifying the discussion. That user claimed to have read a large part of the Sequences.)
The number of bits required to specify an agent with free will (insofar as free will is a meaningful term when discussing a deterministic universe) is definitely finite. Very large, but finite.
...could you elaborate on this point a bit more? I’d really like to know how you prove that.
Ok, everyone. LessWrong has now descended to actually arguing over the Kolmogorov complexity of the Christian God, as if this was a serious question. The Slate Star Codex readers demanding “charity” for this, that, and everything else have taken over.
LessWrong is now officially a branch of /r/philosophy. The site is dead, and everyone who actually wanted LessWrongian things can now migrate somewhere else.
LessWrong has now descended to actually arguing over the Kolmogorov complexity of the Christian God, as if this was a serious question.
Well, there is a lot of motivated cognition on that topic (relevant disclaimer: I’m an atheist in the conventional sense of the word) and it seems deceptively straightforward to answer (mostly to KC-dabblers), but it is in fact anything but. The non-triviality arises from technical considerations, not some philosophical obscurantism.
This may be the wrong comment chain to get into it, and your grandstanding doesn’t exactly signal an immediate willingness to engage in medias res, so I won’t elaborate for the moment (unless you want me to).
The non-triviality arises from technical considerations
The laws of physics as we know them are very simple, and we believe that they may actually be even simpler. Meanwhile, a mind existing outside of physics is somehow a more consistent and simple explanation than humans having hardware in the brain that promotes hypotheses involving human-like agents behind everything, which explains away every religion ever? Minds are not simpler than physics. This is not a technical controversy.
Go on and elaborate, but unless you can show some very thorough technical considerations, I just don’t see how you’re able to claim a mind has low Kolmogorov complexity.
“Mind” is a high-level concept; on a base level it is just a subset of specific physical structures. The precise arrangement of water molecules in a waterfall, over time, matches if not dwarfs the KC of a mind.
That is, if you wanted to recreate precisely this or that waterfall as it precisely happened (with the orientation of each water molecule preserved with high fidelity), the strict computational complexity would be way higher than for a comparatively more ordered and static mind.
The data doesn’t care what importance you ascribe to it. It’s not as if, say, “power” automatically comes with “hard to describe computationally”. On the contrary, allowing for a function to do arbitrary code changes is easier to implement than defining precise power limitations (see constraining an AI’s utility function).
Then there’s the sheer number of mind-phenomena: are you suggesting that adding one by necessity increases complexity? In fact, removing one can increase it as well: if I were to describe a reality in which ceteris is paribus, with the exception of your mind not actually being a mind, then by removing a mind I would have increased overall complexity. Not even taking into account that there are plenty of mind-templates around already (implicitly, since KC, even though uncomputable, is optimal), and that for complexity considerations, adding another instance of a template isn’t necessarily adding much (I’m aware that adding just a few bits already comes with a steep penalty; this comment isn’t meant to be exhaustive). See also the alphabet example further on.
Then there’s the illusion that somehow our universe is of low complexity just because the physical laws governing the transition between time-steps are simple. That is mistaken. If we just look at the laws, and start with a big bang that is not precisely informationally described, we get a multiverse host of possible universes with our universe not at the beginning, which goes counter to the KC demands. You may say “I don’t care, as long as our universe is somewhere in the output, that’s fine”. But then I propose an even simpler theory of everything: output a long enough sequence of pi, and you eventually get our universe somewhere down the line as well. So our universe’s actual complexity is enormous, down to atoms in a stone on a hill on some moon somewhere in the next galaxy. There exists a clear trade-off between explanatory power and conciseness. I used to link an old Hutter lecture on that latter topic a few years ago; I can dig it out if you’d like. (ETA: See for example the paragraph labeled “A” on page 6 in this paper of his).
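The trade-off can be made roughly precise (my sketch of the standard point, not a quotation from the Hutter paper): to recover a particular n-bit object from a low-complexity stream such as the digits of pi, you must also supply its position, and for a typical object the first occurrence sits somewhere around index 2^n, so the locating information eats up essentially all of the apparent saving:

```latex
K(x) \;\le\; K(\text{stream}) + \log_2 i + O(\log\log i),
\qquad i \approx 2^{n} \;\Rightarrow\; \log_2 i \approx n
```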
The old argument that |”universe + mind”| > |”universe”| is simplistic and ill-applied. Unlike with probabilities, the sequence ABCDABCDABCDABCD can be less complex than ABCDABCDABCDABC.
The list goes on, if you want to focus on some aspect of it we can go into greater depth on that. Bottom line is, if there’s a slam dunk case, I don’t see it.
Because rationality isn’t about following reason where it takes you, it’s about sticking as dogmatically as possible to the 39 articles of lwrationality as laid down in the seq-tures.
Rationality is indeed about following reason where it takes you. This is very different from following wherever someone would have their feelings hurt if you didn’t go. Of course, rationality also involves the use of priors, evidence, and accumulated information over your entire lifetime. You are not merely allowed but required to assign a very low prior, in the range of “bloody ridiculous”, to propositions which contradict all your available information, or require some massively complex rationalization to be compatible with all your available information.
This is very different from following wherever someone would have their feelings hurt if you didn’t go.
What did you have in mind specifically?
Of course, rationality also involves the use of priors, evidence, and accumulated information over your entire lifetime.
Rationality also involves paradigm shifts, revolutions and inversions. “Use priors” is not, should not be, a call for fundamental conservatism.
You are not merely allowed but required to assign a very low prior, in the range of “bloody ridiculous”, to propositions which contradict all your available information, or require some massively complex rationalization to be compatible with all your available information.
One person’s complex rationalisation is another’s paradigm shift.
Evolution, relativity and quantum physics are paradigm shifts. Some people still aren’t aboard with some of them, finding them against “logic”, “reason”, “common sense”, etc. The self-professed rationalist Ayn Rand rejected all three: do you want to be another Ayn Rand?
The conservative incremental paradigm, applied retroactively, would lead lwrationalists to reject good science. So they kind of don’t believe in it as the only paradigm. But they also kind of do, since it is the only paradigm they use when discussing theology, or other things they don’t like.
Evolution, relativity and quantum physics are paradigm shifts.
Not sure what “paradigm shift” is supposed to mean, but it sounds to me like “nobody had the slightest suspicion, then came a prophet, told something completely unexpected, and everyone’s mind was blown”. Well, if it is supposed to be anything like that, then evolution and relativity are poor examples (not completely sure about quantum physics).
With evolution, people already had millennia of experience with breeding. Darwin’s new idea was, essentially: “if human breeders can achieve some changes by selecting individuals with certain traits… couldn’t the forces of nature, by automatically selecting individuals who have a greater chance to survive or a greater chance to reproduce, have ultimately a similar effect on the species?”
With relativity, people already had many equations, already did the experiments that disproved the aether, etc. A large part of the puzzle was already known, Einstein “only” had to connect a few pieces together in a creative way. And then it was experimentally tested and confirmed.
By “paradigm shift”, I mean a certain amount of unlearning, overturning previously established beliefs—the fixity of species in the case of evolution, absolute simultaneity in the case of relativity, determinism in the case of quantum mechanics.
ETA:
You are not merely allowed but required to assign a very low prior, in the range of “bloody ridiculous”, to propositions which contradict all your available information, or require some massively complex rationalization to be compatible with all your available information.
Note the contradictions to “available information” listed above.
Earlier when you tried to show that assuming any omni-being implied an afterlife, you passed over the alternative of an indifferent omni^2 without giving a good reason. You also skipped the idea of an omni-being not having people die in the first place. In general, a habit of ignoring alternatives will lead you to overestimate the prior probability of your theory. And in this case, if you want to talk about an omni^2 that has an interest in humans, we would naively expect it to create some high-level laws of physics which mention humans. You have not addressed this. It seems like in practice you’re taking a scientific model of the world and adding the theistic hypothesis as an additional assumption, which—in the absence of evidence for your theory over the simpler one—lowers the probability by a factor of 2^(something on the order of MIRI’s whole reason for being). Or at least it does by assumptions which you seem to accept.
Maybe the principle will be clearer if we approach it from the evidence side. Insofar as an omni^2 seems meaningful, I’d expect its work to be near optimal for achieving its goals. I say that literally nothing in existence which we didn’t make is close to optimal for any goal, except a goal that overfits the data in a way that massively lowers that goal’s prior probability. Show me an instance. And please remember what I said about examining alternatives.
Yes, I think so.
Yes, that is correct.
A few seconds’ googling suggests (article here) that a monk by the name of Udo of Aachen figured out the Mandelbrot set some seven hundred years before Mandelbrot did by, essentially, thinking about God. (EDIT: It turns out Udo was an April Fools’ hoax from 1999. See here for details.)
Mind you, simply starting from a random conception of God and attempting to derive a universe will essentially lead to a random universe. To start from the right conception of God necessarily requires some sort of observation—and I do think it is easier to derive the laws of physics from observation of the universe than it is to derive the mindset of an omniscient being (since the second seems to require first deriving the laws of physics in order to check your conclusions).
You are right. I skipped over the idea of an entirely indifferent omni-being; that case seems to have minimal probability of an afterlife (as does the atheist universe; in fact, they seem to have the same minimal probability). Showing that the benevolent case increases the probability of an afterlife is then sufficient to show that the probability of an afterlife is higher in the theistic universe than the atheistic universe (though the difference is less than one would expect from examining only the benevolent case).
I also skipped the possibility of there being no death at all; I skipped this due to the observation that this is not the universe in which we live. (I could argue that the process of evolution requires death, but that raises the question of why evolution is important, and the only answer I can think of there—i.e. to create intelligent minds—seems very self-centred)
I question whether it has an interest in humans specifically, or in intelligent life as a whole. (And there is at least a candidate for a high-level law of physics which mentions humans in particular—“humans have free will”. It is not proven, despite much debate over the centuries, but it is not disproven either, and it is hard to see how it can derive from other physical laws)
This seems likely. It implies that the universe is the optimal method for achieving said goals, and therefore that said goals can be derived from a sufficiently close study of the universe.
It should also be noted that aesthetics may be a part of the design goals; in the same way as a dance is generally a very inefficient way for moving from point A to point B, the universe may have been designed in part to fulfill some (possibly entirely alien) sense of aesthetics.
I can’t seem to think of one off the top of my head. (Mind you, I’m not sure that the goal of the universe has been reached yet; it may be something that we can’t recognise until it happens, which may be several billion years away)
Took me a while to check this, because of course it would have been evidence for my point. (By the way, throughout this conversation, you’ve shown little awareness of the concept or the use of evidence in Bayesian thought.)
Are you trolling us?
...no, I am not intentionally trolling you. Thank you for finding that.
This is the danger of spending only a few seconds googling on a topic; on occasion, one finds oneself being fooled by a hoax page.
The general opinion around here (which I share) is that the complexity of those is much higher than you probably think it is. “Human-level” concepts like “mercy” and “adultery” and “benevolence” and “cowardice” feel simple to us, which means that e.g. saying “God is a perfectly good being” feels like a low-complexity claim; but saying exactly what they mean is incredibly complicated, if it’s possible at all. Whereas, e.g., saying “electrons obey the Dirac equation” feels really complicated to us but is actually much simpler.
Of course you’re at liberty to say: “No! Actually, human-level concepts really are simple, because the underlying reality of the universe is the mind of God, which entertains such concepts as easily as it does the equations of quantum physics”. And maybe the relative plausibility of that position and ours ultimately depends on one’s existing beliefs about gods and naturalism and so forth. I suggest that (1) the startling success of reductionist mathematics-based science in understanding, explaining and predicting the universe and (2) the total failure of teleological purpose-based thinking in the same endeavour (see e.g., the problem of evil) give good reason to prefer our position to yours.
That sounds really optimistic.
That is possible. I have no idea how to specify such things in a minimum number of bits of information.
This is true; yet there may be fewer human-level concepts and more laws of physics. I am still unconvinced which complexity is higher; mainly because I have absolutely no idea how to measure the complexity of either in the first place. (One can do a better job of estimating the complexity of the laws of physics because they are better known, but they are not completely known).
But let us consider what happens if you are right, and the complexity of my hypothesis is higher than the complexity of yours. Then that would form a piece of probabilistic evidence in favour of the atheist hypothesis, and the correct action to take would be to update—once—in that direction by an appropriate amount. I’m not sure what an appropriate amount is; that would depend on the ratio of the complexities (but is capped by the possibility of getting that ratio wrong).
This argument does not, and cannot, in itself, give anywhere near the amount of certainty implied by this statement (quoted from here):
I should also add that the existence of God does not invalidate reductionist mathematics-based thinking in any way.
Well, I suppose in principle there might. But would you really want to bet that way?
Yes, I completely agree.
Almost, but not exactly. It makes a difference how wrong, and in which direction.
One in a billion is only about 30 bits. I don’t think it’s at all impossible for the complexity-based calculation, if one could do it, to give a much bigger odds ratio than that. The question then is what to do about the possibility of having got the complexity-based calculation (or actually one’s estimate of it) badly wrong. I’m inclined to agree that when one takes that into account it’s not reasonable to use an odds ratio as large as 10^9:1.
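(Spelling out that arithmetic: log2(10^9) = 9 log2(10) ≈ 9 × 3.32 ≈ 29.9 bits, so “about 30 bits” is right.)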
But it’s not as if this complexity argument is the only reason anyone has for not believing in God. (Some people consider it the strongest reason, but “strongest” is not the same as “only”.)
Incidentally, I offer the following (not entirely serious) argument for pressing the boom-if-God button rather than the boom-with-small-probability button: the chances of the world being undestroyed afterwards are presumably better if God exists.
Insufficient information to bet either way.
Yes, that’s what I meant by “capped”—if I did that calculation (somehow working out the complexities) and it told me that there was a one-in-a-billion chance, then there would be a far, far better than a one-in-a-billion chance that the calculation was wrong.
Noted.
If I assume that the second-strongest reason is (say) 80% as strong as the strongest reason (by which I mean, 80% as many bits of persuasiveness), the third-strongest reason is 80% as strong as that, and so on; if the strength of all this (potentially infinite) series of reasons is added together, it would come to five times as strong as the strongest reason.
Thus, for a thirty-bit strength from all the reasons, the strongest reason would need a six-bit strength—it would need to be worth one in sixty-four (approximately).
Of course, there’s a whole lot of vague assumptions and hand-waving in here (particularly that 80% figure, which I just pulled out of nowhere) but, well, I haven’t seen any reason to think it at all likely that the complexity argument is worth even three bits, never mind six.
(Mind you, I can see how a reasonable and intelligent person might disagree with me about that).
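Spelling out the arithmetic behind that estimate (taking the 80% ratio as given): the reasons sum to s × (1 + 0.8 + 0.8^2 + …) = s / (1 - 0.8) = 5s, where s is the strength of the strongest reason; so for 30 bits in total, s = 6 bits, and 2^6 = 64, hence “one in sixty-four”.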
...serious or not, that is a point worth considering. I’m not sure that it’s true, but it could be interesting to debate.
I would expect heavier tails than that. (For other questions besides that of gods, too.) I’d expect that there might be dozens of reasons providing half a bit or so.
For what it’s worth, I might rate it at maybe 7 bits. Whether I’m a reasonable and intelligent person isn’t for me to say :-).
Fair enough. That 80% figure was kind of pulled out of nowhere, really.
You think the theistic explanation might be as much as a hundred times more complex?
...there may be some element of my current position biasing my estimate, but that does seem a little excessive.
So far as this debate goes, my impression is that you either are both reasonable and intelligent or you’re really good at faking it.
No, as much as seven bits more complex. (More precisely, I think it’s probably a lot more than seven bits more complex, but I’m quite uncertain about my estimates.)
Damn, you caught me. (Seriously: I’m pretty sure that being really good at faking intelligence requires intelligence. I’m not so sure about reasonable-ness.)
One bit is twice as likely.
Seven bits are two-to-the-seven times as likely, which is 128 times.
...surely?
I can think of a few ways to fake greater intelligence than you have. Most of them require a more intelligent accomplice, in one way or another. But yes, reasonableness is probably easier to fake.
128x more unlikely but not 128x more complex; for me, at least, complexity is measured in bits rather than in number-of-possibilities.
[EDITED to add: If anyone has a clue why this was downvoted, I’d be very interested. It seems so obviously innocuous that I suspect it’s VoiceOfRa doing his thing again, but maybe I’m being stupid in some way I’m unable to see.]
...I thought that the ratio of likeliness due to the complexity argument would be the inverse of the ratio of complexity. Thus, something twice as complex would be half as likely. Is this somehow incorrect?
(I have no idea why it was downvoted)
All else being equal, something that takes n bits to specify has probability proportional to 2^-n. So if hypothesis A takes 110 bits and hypothesis B takes 100, then A is about 1000x less probable.
Exactly what “all else being equal” means is somewhat negotiable.
If you are using a Solomonoff prior, it means: in advance of looking at any empirical evidence at all, the probability you assign to a hypothesis should be proportional to 2^-n where n is the number of bits in a minimal computer program that specifies the hypothesis, in a language satisfying some technical conditions. Exactly how this cashes out depends on the details of the language you use, and there’s no way of actually computing the numbers n in general, and there’s no law that says you have to use a Solomonoff prior anyway.
More generally, whatever prior you use, there are 2^n hypotheses of length n (and if you describe them in a language satisfying those technical conditions, then they are all genuinely different and as n varies you get every computable hypothesis) so (handwave handwave) on average for large n an n-bit hypothesis has to have probability something like 2^-n.
Anyway, the point is that the natural way to measure complexity is in bits, and probability varies exponentially, not linearly, with number of bits.
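As a toy illustration of that 2^-n scaling (the function below is my own restatement of the examples already given in this thread, not anything from it):

```python
def prior_odds(bits_a: int, bits_b: int) -> float:
    """Prior odds of hypothesis A against hypothesis B when each n-bit
    hypothesis gets probability proportional to 2**-n (all else equal)."""
    return 2.0 ** (bits_b - bits_a)

print(prior_odds(110, 100))  # ~0.00098, i.e. A is about 1000x less probable (10 extra bits)
print(prior_odds(107, 100))  # 0.0078125 = 1/128, the "seven bits more complex" case
```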
Yes, and hypothesis A is also 1024x as complex—since it takes ten more bits to specify.
...it seems that our disagreement here is in the measure of complexity, and not the measure of probability. My measure of complexity is pretty much the inverse of probability, while you’re working on a log scale by measuring it in terms of a number of bits.
Yes, apparently we’re using the word “complexity” differently.
So, getting back to what I said that apparently surprised you: Yes, I think it is very plausible that the best theistic explanation for everything we observe around us is what I call “7 bits more complex” and you call “128x more complex” than the best non-theistic explanation; just to be clear what that means, I mean that if we could somehow write down a minimal-length complete description of what we see (compressing it via computer programs / laws of physics / etc.) subject to the constraint “must not make essential use of gods”, and another subject instead to the constraint “must make essential use of gods”, then my guess at the length of the second description is >= 7 bits longer than my guess at the length of the first. Actually I think the second description would have to be much longer than that, but I’m discounting because this is confusing stuff and I’m far from certain that I’m right.
And you, if I’m understanding you correctly, are objecting not so much “no, the theistic description will be simpler” as “well, maybe you’re right that the nontheistic description will be simpler, but we should expect it to be simpler by less than one random ASCII character’s worth of description length”.
Of course the real difficulty here is that we aren’t in a position to say what a minimal length theistic or nontheistic description of the universe would look like. We have a reasonable set of laws of physics that might form the core of the nontheistic description, but (1) we know the laws we have aren’t quite right, (2) it seems likely that the vast bulk of the complexity needed is not in the laws but in whatever arbitrary-so-far-as-we-know boundary conditions[1] need to be added to get our universe rather than a completely different one with the same laws, and we’ve no idea how much information that takes or even whether it’s finite. And on the theistic side we have at most a pious hope that something like “this is the best of all possible worlds” might suffice, but no clear idea of how to specify what notion of “best” is appropriate, and the world looks so much unlike the best of all possible worlds according to any reasonable notion that this fact is generally considered one of the major reasons for disbelieving in gods. So what hope have we of figuring out which description is shorter?
[1] On some ways of looking at the problem, what needs specifying is not so much boundary conditions as our location within a vast universe or multiverse. Similar problem.
It is confusing. I’m still not even convinced that the theist’s description would be longer, but my estimation is so vague and has such massively large error bars that I can’t say you’re wrong, even if what you’re saying is surprising to me.
More or less. I’m saying I would find it surprising if the existence of God made the universe significantly more complex. (In the absolutely minimal-length description, I expect it to work out shorter, but like I say above, there are massive error bars on my estimates).
While I’ve heard this argued before, I have yet to see an idea for a world that (a) is provably better, (b) cannot be created by sufficient sustained human effort (in an “if everyone works together” kind of way) and (c) cannot be taken apart by sustained human effort into a world vaguely resembling ours (in an “if there are as many criminals and greedy people as in this world” kind of way).
I’m not saying that there isn’t nasty stuff in this world. I’m just not seeing a way that it can be removed without also removing things like free will.
Very little, really. There’s a lot of unknowns.
If we get seriously into discussing arguments from evil we could be here all year :-), so I’ll just make a few points and leave it.
(1) Many religious believers, including (I think) the great majority of Christians, anticipate a future state in which sin and suffering and death will be no more. I’m pretty sure they see this as a good thing, whether they anticipate losing their free will to get it or not.
(2) I don’t know whether I can see any way to make a world with nothing nasty in it at all without losing other things we care about, but it doesn’t seem difficult to envisage ways in which omnipotence in the service of perfect goodness could improve the world substantially. For instance, consider a world exactly like this one except that whenever any cell in any animal’s body (human or other) gets into a state that would lead to a malignant tumour, God magically kills it. Boom, no more cancer. (And no effect at all on anyone who wouldn’t otherwise be getting cancer.) For an instance of a very different kind, imagine that one day people who pray actually start getting answers. Consistently. I don’t mean obliging answers to petitionary prayers, I mean communication. Suddenly anyone who prays gets a response; the responses are consistent and, for some categories of public prayer, public. There is no longer any more scope for wars about whose vision of God is right than there is for wars about whose theory of gravity is right, and anyone who tries to recruit people to blow things up in the name of God gets contradicted by a message from God himself. There might still be scope for fights between people who think it’s God doing this and people who think it’s a super-powerful evil being, but I don’t think it’s credible that this wouldn’t decrease religious strife. And if you think that being badly wrong about God is a serious problem (whether just because it’s bad to be wrong about important things, or because it leads to worse actions, or because it puts one in danger of damnation) then I hope you’ll agree that almost everyone on earth having basically correct beliefs about God would be a gain. And no, it wouldn’t mean abolishing free will; do we lack free will because we find it difficult to believe the sky is green on account of seeing it ourselves?
(3) I think your conditions a,b,c are too strict, in that I see no reason why candidate better worlds need to satisfy them all in order to be evidence that our actual world isn’t the best possible. Perhaps, e.g., a better world is possible that could be created by sustained human effort with everyone working together but won’t because actually, in practice, in the world we’ve got, everyone doesn’t work together. So, OK, you can blame humanity for the fact that we haven’t created that world, and maybe doing so makes you feel better, but more than one agent can be rightly blamed for the same thing and the fact that it’s (kinda) our fault doesn’t mean it isn’t God’s. Do you suppose he couldn’t have encouraged us more effectively to do better? If not, doesn’t the fact that not even the most effective encouragement infinite wisdom could devise would lead us to do it suggest that saying we could is rather misleading? And (this is a point I think is constantly missed) whyever should we treat human nature, as it now is, as a given? Could your god really not have arranged for humanity to be a little nicer and smarter? In terms of your condition (c), why on earth should we, when considering what better worlds there might be, only consider candidates in which “there are as many criminals and greedy people as in this world”?
I’ve heard arguments that we’ve already reached that state—think about what would happen if you went back in time about two thousand years and described modern medical technology and lifestyles. (I don’t agree with those arguments, mind you, but I do think that such a future state is going to have to be something that we build, not something that we are given.)
It’s difficult to be certain.
Now I’m imagining a lot of scientists studying and trying to figure out why some cells just mysteriously vanish for no good reason—and this becoming the greatest unsolved question in medical science and taking all the attention of people who might otherwise be figuring out cures for TB or various types of flu. (In this hypothetical universe, they wouldn’t know about malignant tumours, of course).
And if someone would otherwise develop a LOT of cancer, then Sudden Cell Vanishing Syndrome could, in itself, become a major problem...
Mind, I’m not saying it’s certain that universe would be worse, or even that it’s probable. It’s just easy to see how that universe could be worse.
That would be interesting. And you raise a lot of good points—there would be a lot of positive effects. But, at the same time… I think HPMOR showed quite nicely that sometimes, having a list of instructions with regard to what to do is a good deal less valuable than being able to understand the situation, take responsibility, and do it yourself.
People would still have free will, yes. But how many people would voluntarily abdicate their decision-making processes to simply do what the voice in the sky tells them to do (except the bits where it says “THINK FOR YOURSELVES”)?
...this is something which I think would probably be a net benefit. But I can’t be certain.
...very probably.
That just means that a better world needs to be designed that can be created under the constraints of not everyone working together. It’s a hard problem, but I don’t think it’s entirely insoluble.
That is a good question. I have no good answers for it.
...fewer criminals and greedy people would make things a lot easier, but I’m not quite sure how to arrange that without either (a) reducing free will or (b) mass executions, which could cause other problems.
Then I suggest that you classify the people making those arguments as Very Silly and don’t listen to them in future.
You’re welcome to think that; my point is simply that if such a thing is possible and desirable then either one can have a better world than this without abrogating free will, or else free will isn’t as important as theists often claim it is when confronted with arguments from evil.
(Perhaps your position is that the world could indeed be much better, but that the only way to make such a better world without abrogating free will is to have us do it gradually starting with a really bad world. I hope I will be forgiven for saying that that doesn’t seem like a position anyone would adopt for reasons other than a desperate attempt to avoid the conclusion of the argument from evil.)
Again, you’re welcome to imagine whatever you like, but if you’re suggesting that this would be a likely consequence of the scenario I proposed then I think you’re quite wrong (and again wonder whether it would occur to you to imagine that if you weren’t attempting to justify the existence of cancer to defend your god’s reputation). Under what circumstances would they notice this? Cells die all the time. We don’t have the technology to monitor every cell—or more than a tiny fraction of cells—in a living animal and see if it dies. We don’t have the technology or the medical understanding to be able to say “huh, that cell died and I don’t know why; that’s really unusual”. Maybe some hypothetical super-advanced medical science would be flummoxed by this, but right now I’m pretty sure no one would come close to noticing.
(Also, you could combine this with my second proposal, and then what happens is that someone says “hey, God, would you mind telling us why these cells are dying?” and God says “oh, yeah, those are ones that were going wrong and would have turned into runaway growths that could kill you. I zap those just before they do. You’re welcome.”.)
Please, think about that scenario for thirty seconds, and consider whether you can actually envisage a situation where having those cancerous cells self-destruct would be worse than having them turn into tumours.
But that was no part of the scenario I described. In that scenario, it could be that when people ask God for advice he says “Sorry, it’s going to be better for you to work this one out on your own.”
So here’s the thing. Apparently “reducing free will” is a terrible awful thing so bad that its spectre justifies the Holocaust and child sex abuse and all the other awful things that bad people do without being stopped by God. So … how come we don’t have more free will than we do? Why are we so readily manipulated by advertisements, so easily entrapped by habits, so easily overwhelmed by the desire for food or sex or whatever? It seems to me that if we take this sort of “free will defence” seriously enough for it to work, then we replace (or augment) the argument from evil with an equally fearsome argument from un-freedom.
...perhaps I have failed to properly convey that argument. I did not intend to say that our world now is in a state of perfection. I intended to point out that, if you were to go back in time a couple of thousand years and talk to a random person about our current society, then he would be likely to imagine it as a state of perfection. Similarly, if a random person in this era were to describe a state of perfection, then that might be a description of society a couple of thousand years from now—and the people of that time would still not consider their world in a state of perfection.
In short, “perfection” may be a state that can only be approached asymptotically. We can get closer to it, but never reach it; we can labour to reduce the gap, but never fully eliminate it.
You mean, just kind of starting up the universe at the point where all the major social problems have already been solved, with everyone having a full set of memories of how to keep the solutions working and what happens if you don’t?
...I have little idea why the universe isn’t like that (and the little idea I have is impractically speculative).
The only way? No. Starting a universe at the point where the answers to society’s problems are known is a possible way to do that.
...the thing is, I don’t know what the goal, the purpose of the universe is. Free will is clearly a very important part of those aims—either a goal in itself, or strictly necessary in order to achieve some other goal or goals—but I’m fairly sure it’s not the only one. It may be that other ways of making a better world without abrogating free will all come at the cost of some other important thing that is somehow necessary for the universe.
Though this is all very speculative, and the argument is rather shaky.
Okay, if the cells just die and don’t vanish, then that makes it a whole lot less physics-breaking. (Alternatively, if they are simply replaced with healthy cells, then it becomes even harder to spot).
...you know, combining those would be interesting as well. (Then the next logical question asked would be “Why don’t you zap all diseases?”)
No, I can’t. This guy’s in massive trouble either way.
A fair point.
Some people would be discouraged by this, others would work harder...
Yes, and I’m not quite sure that I get the whole of the why either.
...huh. That’s… that’s a very good question, really.
Hmmm. It seems logical that it must be possible to talk someone into (or out of) a course of action. “Here is some information that shows why it is to your benefit to do X” has to be possible, or there is no point to communication and we might as well all be alone.
And given that that is possible, advertising is an inevitable consequence—tell a million people to buy Tasty Cheese Snax or whatever, and some of them will listen. (More complex use of advertising is merely a refinement of technique). I don’t really see any logical alternative—either advertising, which is a special case of persuasion, has to work to some degree, or persuasion must be impossible. (If persuasion of a specific type proves impossible, advertisers will simply use a form of persuasion that is effective).
Habits… as far as I can tell, habits are a consequence of the structure of the human brain (we’re pattern-recognition machines, and almost all biases and problems in human thought come from this). A habit is merely a pattern of action; something that we find ourselves doing by default. Avoiding habits would require a pretty much total rewrite of the human brain. Which may be a good or a bad thing, but is a completely unknown thing.
Desires for food and stuff? …I have no idea. You could probably base an argument from unfreedom around that. (It’s clear enough where the desires come from—people without those desires would have been outcompeted by people with them, so there’s evolutionary pressure to have those biases. Is this an inevitable consequence of an evolutionary development?)
I realise that I said “I’ll just make a few points and leave it” and then, er, failed to do so. And lo, this looks like it could be the beginning of a lengthy discussion of evil and theism, for which LW probably really isn’t the best venue. So I’m going to ignore all the object-level issues aside from giving a couple of clarifications (see below) and make the following meta-point:
You seem to be basically agreeing with my arguments and conceding that your counterproposals are shaky and speculative; my point isn’t to declare victory nor to suggest you should be abandoning theism immediately :-) but just that I think this indicates that you agree with me that whether or not the world turns out to be somehow the best that omnipotence coupled with perfect wisdom and goodness can achieve, it doesn’t look much like it is. In which case I don’t think you can credibly make an argument of the form “the world is well explained by the hypothesis that it’s a morally-optimal world, which is a nice simple hypothesis, so we should consider that highly probable”. I’ve argued before that it’s not so simple a hypothesis, but it’s also a really terrible explanation for the world we actually see.
The promised clarifications: 1. The reason why my cancer-zapping proposal didn’t involve curing all diseases was that it’s easier to see that a change is a clear improvement if it’s reasonably small and simple. Curing all diseases is a really big leap, it probably makes a huge difference to typical lifespans and hence to all kinds of other things in society, it probably would get noticed which, for good or ill, could make a big difference to people’s ideas about science and gods and whatnot. I would in fact expect the overall effect to be substantially more positive than that of just zapping incipient cancers, but it’s more complicated and therefore less clear. I’m not trying to describe an optimal world, merely one that’s clearly better than this one. 2. My point about habits and advertising and the like wasn’t that if free will matters then those things should have no effect, still less that it’s a mystery why we have them; but that if our world is being optimized by a superbeing who values free will so much that, e.g., Hitler’s free will matters more than the murder of six million Jews, then we should expect much less impairment of our free will than we actually seem to have.
...to be fair, I think I also deserve part of the blame for this digression. I have a tendency to run away with minor points on occasion.
I agree that this is a position which can reasonably be held and for which very strong arguments can be made.
Makes sense. (It does raise the question of how we would know whether or not it is already happening for an even more virulent disease...)
To be fair, it wasn’t just Hitler; there were a whole lot of people working under his command whose free will was also involved. And several million other people trying to help or hinder one side or the other...
I think if you hypothetically let Hitler rise to power but then magically prevent him from giving orders to persecute Jews more severely than (say) requiring them to live in ghettos, you probably prevent the Holocaust without provoking a coup in which someone more viciously antisemitic takes over. Or killing him in childhood or letting his artistic career succeed better would probably suffice (maybe Germany would then have been taken over by different warmongers blaming their troubles on other groups, but is it really credible that in every such version of the world we get something as bad as Hitler?).
Of course it might turn out that WW2 was terribly beneficial to the world because it led to technological advances and the Holocaust was beneficial because it led to the establishment of the state of Israel, or something. But that’s an entirely different defence from the one we’re discussing here, and I can’t say it seems like a very credible one anyway. (If we really need those technological advances and the state of Israel, aren’t there cheaper ways for omnipotence to arrange for us to get them?)
I, too, think that this is extremely likely. This would show that Hitler’s orders were necessary for the Holocaust, but it would not show that they were sufficient—there’s probably at least a half-dozen or so people whose orders were also necessary for the Holocaust, and then of course there’s a lot of ways to prevent the Holocaust by affecting more than one person in some or other manner.
I doubt the political ramifications had much to do with it. The effect of thousands of people being placed in difficult moral situations and having to decide what to do might have been a factor, though; a sort of a stress testing of thousands of peoples’ free will, in a way which by and large strengthens their ability to think for themselves (because they’re now well aware of how bad things get when others think for them).
It doesn’t need to. The more ways there are to prevent the Holocaust, the more morally unimpressive not doing so becomes. Or, at least, the better the countervailing reasons need to be.
Six million Jews and six million others died in the Holocaust. It is not so easy to think for yourself when you are dead.
(Or: It is very easy to think for yourself when you are dead because you have then transcended the confusions and temptations of this mortal life. But I don’t think anyone’s going to argue seriously that the Holocaust was a good thing because the people murdered in it were thereby enabled to make better decisions post mortem. The only reason for this paragraph is to forestall complaints that the one before it assumes atheism.)
So, um… to go back along this line of argument a few posts, then...
...this means you’re in agreement with what I wrote here, right?
I’m not sure exactly what point you’re trying to make.
And several million other people hid Jews in their attics; attempted (at great personal risk) to smuggle Jews to safe places; helped Jews across the borders; or, on the other side, hunted Jews down, deciding to obey evil orders; arrested people and sent them to death camps; ran or even built said camps… and were, in one or another way, put through the wringer.
I can’t find the reference now, but I do seem to recall reading—somewhere—that Holocaust survivors were significantly less likely to fall victim to the Milgram experiment or similar things.
(I’m not talking about the people who were killed at all.)
1. The Holocaust could probably have been prevented, with no extra adverse consequences of similar severity, by an intervention that didn’t interfere with more than one person’s free will.
2. Therefore, a “free will” defence of (the compatibility of theism with) the world’s evil needs to consider that one person’s free will to be of comparable importance to all the suffering and death of the Holocaust.
3. If free will is that important, then in place of (or in addition to) the “problem of evil” we have a “problem of unfreedom”; we are all less free than we might have been, in many ways, and even if that unfreedom is only one millionth as severe as what it would have taken to stop the Holocaust, a billion people’s unfreedom is like a thousand Holocausts.
4. This seems to me to be a fatal objection to this sort of “free will” theodicy. (The real problem is clearly in step 2; we all know, really, that Hitler’s free will—or that of any of the other people whose different decisions would have sufficed to prevent the Holocaust—isn’t more important than millions of horrible deaths.)
I’m pretty sure the number who hid Jews in their attics, helped them escape, etc., was a lot less than six million. And, please, actually think about this for a moment. Consider (1) what the Nazis did to the Jews and (2) what some less-corrupted Germans did to help the Jews. Do you really, truly, want to suggest that #2 was a greater good than #1 was an evil? And are you seriously suggesting that the fact that a whole lot of other Germans had the glorious opportunity to exercise their free will and decide to go along with the extermination of the Jews makes this better?
I think I recall reading of a Christian/atheist debate in which someone—Richard Swinburne? -- made a similar suggestion, and his opponent—Peter Atkins? Christopher Hitchens? -- was heard to growl “May you rot in hell”. I personally think hell is too severe a punishment even for the likes of Hitler, and did even when I was a Christian, but I agree with the overall sentiment.
Oh.
...you know, a lot of what you’ve been saying over the past few days makes so much more sense now. In effect, you’re looking for the minimum intervention to prevent the Holocaust. (And it should have been possible to do that without taking control of Hitler’s actions; a sudden stroke, bolt of lightning, or well-timed meteor strike could have prevented Hitler from ever doing anything again without removing free will). Considering how much importance the universe seems to put on free will, this might be considered an even more minimal intervention (and no matter how much importance free will is assigned, one life is less than six million lives).
Which leads us directly to the question of why lightning doesn’t strike sufficiently evil people, preferably just before they do something sufficiently evil.
To which the answer, expressed in the simplest possible form, is “I don’t know”. (At best, I can theorise out loud, but it’s all going to end up circling back round to “I don’t know” in the end).
Well, if each one was helped by one person, refused help by one person, and arrested by one person, then that’s eighteen million moral dilemmas being faced. (Presumably one person could face several of these dilemmas).
No. I don’t. I’m very sure that it’s nowhere near a complete picture of all the consequences of the Holocaust, but (2) is nowhere near (1).
...and neither (2) nor (1) (nor both of them together) are a complete accounting of all the consequences of the Holocaust.
I have it on good authority (from a parish priest’s sermon; unfortunately he does not publish his sermons to the internet, so I can’t link it) that the RCC agrees with you on this point.
That implies the “great people” approach to human history (history is shaped by actions of individual great people, not by large and diffuse economic/social/political/etc. forces) -- are you willing to accept it?
I think it implies only a rather weak version of the “great people” approach: some things of historical significance are down to individual people. (Who might be “great” in some sense, but might instead simply have been in a critical place at a critical time.) And yes, I’m perfectly willing to accept that; is there a reason why you would expect me not to?
Without Hitler, Germany would still have been unstable and at risk of being swayed by some sort of extremist demagogue willing to blame its troubles on Someone Else. So I’d assign a reasonable probability to something not entirely unlike the Nazi regime arising even without Hitler. It might even have had the National Socialists in charge. But their rhetoric wouldn’t have been the same, their policies wouldn’t have been the same, their tactics in war (if a war happened) wouldn’t have been the same, and many things would accordingly have been different. The extermination of millions of Jews doesn’t seem particularly inevitable, and I would guess that in (so to speak) most possible worlds in which Hitler is somehow taken out of the picture early on, there isn’t anything very much like the Holocaust.
That, of course, is not a falsifiable statement :-)
It’s not my fault if the nearest correct thing to the “great people” theory that actually follows from my opinions happens not to be falsifiable. (It’s not even clear that strong forms of the “great people” theory are falsifiable, actually.)
I am not talking about faults, but if it’s not falsifiable, can it be of any use?
Of course an opinion can be useful without being falsifiable. “Human life as we see it is not utterly worthless and meaningless,” is probably not falsifiable (how would you falsify it?), but believing it is very useful for avoiding suicide and the like.
Opinions are not falsifiable by their nature (well, maybe by revealed preferences). But, hopefully, core approaches to the study of history (e.g. “great people” vs “impersonal forces”) are more than mere opinions.
Quite possibly not. Is that a problem? (The way this conversation feels to me: You claim that X follows from my opinions. I say: no, only the much weaker X’ does. You then complain that X’ is unfalsifiable and useless. Quite possibly, but so what? I expect everyone has beliefs from which one can deduce unfalsifiable and useless things.)
A recap from my side: I didn’t claim that X follows from your opinions—I asked if you subscribe to the theory. You said yes, to a weak version. I pointed out that the weak version is unfalsifiable and useless. You said “so what?”
I don’t think that a version of a theory that has been sufficiently diluted and hedged to be unfalsifiable and so useless can be said to be a meaningful version of a theory. It’s just generic mush.
I’m not trying to trap you. I was interested in whether you actually believe in the “great people” theory (homeopathic versions don’t count). It now seems that you don’t. That is perfectly fine.
Actually, you did both:
(An aside: where I come from, saying “Your opinion implies X; are you willing to accept X?” is a more adversarial move than simply saying “Your opinion implies X” since it carries at least a suggestion that maybe they believe things that imply X without accepting X, hence inconsistency or insincerity or something.)
My point, in case it wasn’t clear, is that the nearest thing to the “great people” theory that actually follows from anything I’ve said is what you are describing as “generic mush”. (Perhaps next time I will be less polite and just say “No, that’s bullshit, no such thing follows from anything I’ve said” rather than trying to find the nearest thing I can that does follow. I was hoping that you would either explain why it would be interesting if I accepted the “generic mush” or else explain why you think something stronger than the “generic mush” follows from what I wrote, and confess myself rather taken aback at the tack you have actually taken.)
As to the “great people” theory: I believe that some historical events are down to the actions of individuals (who may or may not be great in any other sense) while some are much more the result of large and diffuse phenomena involving many people. That isn’t a statement that has a lot of readily evaluable observable consequences, but it’s the best answer I can give to the question you asked. (As I said above, I’m not sure that the “great people” theory itself, even in strong forms, actually fares any better in terms of verifiability or falsifiability.)
Note that infinite sets can have very low informational complexity; that’s why complexity isn’t a slam-dunk against MUH.
Don’t think of infinite entities as very large finite entities.
I’m pretty sure I wasn’t thinking of infinite entities as very large finite entities, nor was I claiming that infinite sets must have infinite complexity or anything of the kind. What I was claiming high complexity for is the concept of “good”, not God or “perfectly good” as opposed to “merely very good”.
Wouldn’t “perfectly good” be the appropriate concept here?
Yes, but the point is that the “perfectly” part (1) isn’t what I’m blaming for the complexity and (2) doesn’t appear to me to make the complexity go away by its presence.
I don’t see how you can be sure about that, when there is so much disagreement about the meaning of good. Human preferences are complex because they are idiosyncratic, but why would a deity, particularly a “philosopher’s god”, have idiosyncratic preferences? And an omniscient deity could easily be a 100% accurate consequentialist: the difficult part of consequentialism, having reliable knowledge of the consequences, has been granted; all you need to add to omniscience is a Good Will.
IOW, regarding both atheism and consequentialism as slam-dunks is a bit of a problem, because if you follow through the consequences of consequentialism, many of the arguments for atheism unravel: a consequentialist deity is fully entitled to destroy two cities to save ten; that would be his version of a trolley problem.
It seems to me that no set of preferences that can be specified very simply without appeal to human-level concepts is going to be close enough to what we call “good” to deserve that name.
I entirely agree, but I don’t see how this makes a substantial fraction of the arguments for atheism unravel; in particular, most thoughtful statements of the argument from evil say not “bad things happen, therefore no god” but “bad things happen without any sign that they are necessary to enable outweighing gains, therefore probably no god”.
Not if the deity is omnipotent.
That’s debatable, at which point it is no longer a slam dunk.
They can be derived from simple game theory as applied to humans.
I’m not entirely convinced, but in any case even “human” is a really complicated concept.
I guess that means humans don’t exist. Oh, wait.
No idea where you get that from. Theories don’t get a complexity penalty for the complexity of things that appear in universes governed by the theories, but for the complexity of their assumptions. If you have an explanation of the universe that has “there is a good god” as a postulate, then whatever complexity is hidden in the words “good” and “god” counts against that explanation.
Yes, and God would care about game theory concepts and apply them to whatever beings exist.
If I’m correctly understanding what you’re claiming, it’s something like this: “One can postulate a supremely good being without needing human-level concepts that turn out to be really high-complexity, by defining ‘good’ in very general game-theoretic terms”. (And, I assume from the context in which you’re making the claims: ”… And this salvages the project, mentioned above by CCC, of postulating God as an explanation for the world we see, the idea being that ultimately the details of physical law follow from God’s commitment to making the best possible world or something of the kind”.)
I’m very pessimistic about the prospects for defining “good” in abstract game-theoretic terms with enough precision to carry out any project like this. You’d need your definition to pick out what parts of the world are to count as agents that can be involved in game-like interactions, and to identify what their preferences are, and to identify what counts as a move in each game, and so forth. That seems really difficult (and high-complexity) to me, whether you focus on identifying human agents or whether you try to do something much more general. Evidently you think otherwise. Could you explain why?
(I’ll mention two specific difficulties I anticipate if you’re aiming for simplicity through generality. First: how do you avoid identifying everything as an agent and everything that happens as an action? Second: if the notion of goodness that emerges from this is to resemble ours enough for the word “good” actually to be appropriate, it will have to give different weight to different agents’ interests—humans should matter more than ducks, etc. How will it do that?)
So it would be difficult for a finite being that is figuring out some facts that it doesn’t already know on the basis of other facts that it does know. Now: how about an omniscient being?
I think you may be misunderstanding what the relevance of the “difficulty” is here.
The context is the following question:
If we are comparing explanations for the universe on the basis of hypothesis-complexity (e.g., because we are using something like a Solomonoff prior), what complexity should we estimate for notions like “good”?
If some notion like “perfectly benevolent being of unlimited power” turns out to have very low complexity, so much the better for theistic explanations of the universe. If it turns out to have very high complexity, so much the worse for such explanations.
(Of course that isn’t the only relevant question. We also need to estimate how likely a universe like ours is on any given hypothesis. But right now it’s the complexity we’re looking at.)
In answering this question, it’s completely irrelevant how good some hypothetical omniscient being might be at figuring out what parts of the world count as “agents” and what their preferences are and so on, even though ultimately hypothetical omniscient beings are what we’re interested in. The atheistic argument here isn’t “It’s unlikely that the world was created by a god who wants to satisfy the preferences of agents in it, because identifying those agents and their preferences would be really difficult even for a god” (to which your question would be an entirely appropriate rejoinder). It’s something quite different: “It’s not a good explanation for the universe to say that it was created by a god who wants to satisfy the preferences of agents in it, because that’s a very complex hypothesis, because the notions of ‘agent’ and ‘preferences’ don’t correspond to simple computer programs”.
(Of course this argument will only be convincing to someone who is on board with the general project of assessing hypotheses according to their complexity as defined in terms of computer programs or something roughly equivalent, and who agrees with the claim that human-level notions like ‘agent’ and ‘preference’ are much harder to write programs for than physics-level ones like ‘electron’. Actually formalizing all this stuff seems like a very big challenge, but I remark that in principle—if execution time and computer memory are no object—we basically already know how to write a program that implements physics-so-far-as-we-understand-it, but we seem to be some way from writing one that implements anything much like morality-as-we-understand-it.)
It’s not surprising that one particular parsimony principle can be used to overturn one particular form of theism. After all, most theists disagree with most theisms, and most believers in a Weird Science hypothesis (MUH, Matrix, etc.) don’t believe in the others.
The question is: where is the slam dunk against theism? The one that works against all forms of theism, that works only against theism and not against similar scientific ideas like Matrix Lords, that works against the strongest arguments for theism (not just biblically literalist creationist protestant Christianity), and that doesn’t rest on cherry-picking particular parsimony principles?
There are multiple principles of parsimony, multiple Occam’s razors.
Some focus on ontology, on the multiplication of entities, as in the original razor; others on epistemology, the multiplication of assumptions. The Kolmogorov complexity measure is more aligned with the latter.
Smaller universes are favoured by the ontological razor, but disfavoured by the epistemological razor, because they are more arbitrary. Maximally large universes can have low epistemic complexity (because you have to add information specifying what has been left out to arrive at smaller universes), and low K-complexity (because short programmes can generate infinite bitstrings, e.g. an expansion of pi).
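To illustrate that last point with a minimal sketch (Python; the choice of pi is just the example named above, and the code is Gibbons’ unbounded spigot algorithm): a program of about a dozen lines streams the decimal expansion of pi indefinitely, which is the sense in which an infinite bitstring can have very low Kolmogorov complexity.

```python
from itertools import islice

def pi_digits():
    """Stream the decimal digits of pi using Gibbons' unbounded spigot algorithm."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # the next digit has been pinned down
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```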
Morality as we know it evolved from physics plus starting conditions. When you say that physics is soluble but morality isn’t, I suppose you mean that the starting conditions are absent.
You need to know not just the starting conditions, but also the position where morality evolves. That position can theoretically have huge complexity.
Well, obviously we should pick the simplest one :-).
Seriously: I wouldn’t particularly expect there to be a single all-purpose slam dunk against all varieties of theism. Different varieties of theism are, well, very different. (Even within, say, protestant Christianity, one has the fundamentalists and the super-fuzzy liberals, and since they agree on scarcely any point of fact I wouldn’t expect any single argument to be effective against both positions.)
I’m pretty sure that around these parts the “epistemological” sort (minimize description / program rather than size of what it describes / produces) is much, much more widely held than the “ontological” sort.
That’s one reasonable way of looking at it, but if the best way we can find to compute morality-as-we-understand-it is to run a complete physical simulation of our universe then the outlook doesn’t look good for the project of finding a simpler-than-naturalism explanation of our universe based on the idea that it’s the creation of a supremely good being.
So you don’t think we’re mostly solipsists? :)
This is motivated stopping. You don’t want to admit any evidence for theism so you declare the problem impossible instead of thinking about it for 10 seconds.
Here are some hints: If you were dropped into an alien planet or even an alien universe you would have no trouble identifying the most agenty things.
Well there you go, agents are things that can be involved in game-like interactions.
This is the third time in the last few weeks that you have impugned my integrity on what seems to me to be zero evidence. I do wish you would at least justify such claims when you make them. (When I have asked you to do so in the past you have simply ignored the requests.)
Would it kill you to entertain some other hypotheses—e.g., “the other guy is simply failing to notice something I have noticed” and “I am simply failing to notice something the other guy has noticed”? Perhaps it would; your consistent strategy of downvoting everyone who disagrees with you doesn’t exactly suggest that you’re here for a collaborative search for truth as opposed to fighting a war with arguments as soldiers.
[EDITED to add: I didn’t, in fact, declare anything impossible; and before declaring it very difficult I did in fact think about it for more than ten seconds. I see little evidence that you’ve given as much thought to anything I’ve said in this discussion.]
I have agent-identifying hardware in my brain. It is, I think, quite complicated. I don’t know how to make a computer identify agents, and so far as I know no one else does either. The best automated things I know of for tasks remotely resembling agent-identification are today’s state-of-the-art image classifiers, which typically involve large mysterious piles of neural network weights, which surely count as high-complexity if anything does.
Identifying game-like interactions is also (so far as I can tell) a problem no one has any inkling how to solve, especially if we don’t have the prior ability to identify the agents.
No, but I do downvote people who appear to be completely mind-killed.
Rather, identifying agents using algorithms with reasonable running time is a hard problem.
Also, consider the following relatively uncontroversial beliefs around here:
1) The universe has low Kolmogorov complexity.
2) An AGI is likely to be developed and when it does it’ll take over the universe.
Now let’s consider some implications of these beliefs:
3) An AGI has low Kolmogorov complexity since it can be specified as “run this low Kolmogorov complexity universe for a sufficiently long period of time”.
Also, for the AGI to be successful it is going to have to be good at detecting agents, so that it can dedicate sufficient resources to defeating/subverting them. Thus detecting agents must have low Kolmogorov complexity.
I think your mindkill detection algorithms need some tuning; they have both false positives and false negatives.
I know of no credible way to do it with unreasonable running time either. (Unless you count saying “AIXI can solve any solvable problem, in principle, so use AIXI”, but I see no reason to think that this leads you to a solution with low Kolmogorov complexity.)
I don’t think your argument from superintelligent AI works; exactly where it fails depends on some details you haven’t specified, but the trouble is some combination of the following.
For your first premise to be uncontroversial around here, I think you need to either take it as applying only to the form of the laws of physics and not to initial conditions, arbitrary constants, etc. (in which case you can’t identify “this universe” and still have it be of low complexity) or adopt something like Tegmark’s MUH that amounts to running every version of the universe (all boundary conditions, all values for the constants, etc.) in parallel (in which case what gets taken over by a superintelligent AI is no longer the whole thing but a possibly-tiny part, and specifying that part costs a lot of complexity).
You need to say where in the universe the AGI is, which imposes a large complexity cost—unless …
… unless you are depending on it taking over the whole universe so that you can just point at the whole caboodle and say “that thing”—but then presumably its agent-detection facilities are a tiny part of the whole (not necessarily a spatially localized part, of course), and singling those out so you can say “agents are things that that identifies as agents” again has a large complexity cost from locating them.
Doesn’t that undermine the premise of the whole “a godless universe has low Kolmogorov complexity” argument that you’re trying to make?
Well, all the universes that can support life are likely to wind up taken over by AGIs.
But, the AGI can. Agentiness is going to be a very important concept for it. Thus it’s likely to have a short referent to it.
Again, there is a difference between the complexity of the dynamics defining state transitions, and the complexity of the states themselves.
What do you mean by “short referent?” Yes, it will likely be an often-used concept, so the internal symbol signifying the concept is likely to be short, but that says absolutely nothing about the complexity of the concept itself. If you want to say that “agentiness” is a K-simple concept, perhaps you should demonstrate that by explicating a precise computational definition for an agent detector, and show that it doesn’t fail on any conceivable edge-cases.
Saying that it’s important doesn’t mean it’s simple. “For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must have low Kolmogorov complexity.”
You’re confusing the intuitive notion of “simple” with “low Kolmogorov complexity”. For example, the Mandelbrot set is “complicated” in the intuitive sense, but has low Kolmogorov complexity since it can be constructed by a simple process.
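A minimal sketch of that point: the Mandelbrot set, for all its visual intricacy, is generated by iterating one short rule, so a few lines of code suffice to decide membership and draw a coarse picture of it.

```python
def in_mandelbrot(c, max_iter=100):
    """Mandelbrot membership test: iterate z -> z^2 + c from z = 0 and watch for escape."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# Coarse ASCII rendering of the set over roughly -2.1..0.6 (real) by -1..1 (imaginary)
for im in range(20, -21, -2):
    print("".join("#" if in_mandelbrot(complex(re / 20, im / 20)) else " "
                  for re in range(-42, 13)))
```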
It does if you look at the rest of my argument.
Step 1: Simulate the universe for a sufficiently long time.
Step 2: Ask the entity now filling up the universe “is this an agent?”.
What do you mean by that statement? Kolmogorov complexity is a property of a concept. Well “reducing entropy” as a concept does have low Kolmogorov complexity.
I am using the word “simple” to refer to “low K-complexity.” That is the context of this discussion.
The rest of your argument is fundamentally misinformed.
Simulating the universe to identify an agent is the exact opposite of a short referent. Anyway, even if simulating a universe were tractable, it does not provide a low complexity for identifying agents in the first place. Once you’re done specifying all of and only the universes where filling all of space with computronium is both possible and optimal, all of and only the initial conditions in which an AGI will fill the universe with computronium, and all of and only the states of those universes where they are actually filled with computronium, you are then left with the concept of universe-filling AGIs, not agents.
You seem to be attempting to say that a descriptor of agents would be simple because the physics of our universe is simple. Again, the complexity of the transition function and the complexity of the configuration states are different. If you do not understand this, then everything that follows from this is bad argumentation.
It is framed after your own argument, as you must be aware. Forgive me, for I too closely patterned it after your own writing. “For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must be possible.” That is false, just as your own argument for a K-simple general agent specification is false. It is perfectly possible that an AGI will not need to be good at recognizing agents to be successful, or that an AGI that can recognize agents generally is not possible. To show that it is, you have to give a simple algorithm, which your universe-filling algorithm is not.
It might, perhaps, if I were actually trying to make that argument. But so far as I can see no one is claiming here that the universe has low komplexity. (All the atheistic argument needs is for the godless version of the universe to have lower komplexity than the godded one.)
Even if so, you still have the locate-the-relevant-bit problem. (Even if you can just say “pick any universe”, you have to find the relevant bit within that universe.) It’s also not clear to me that locating universes suitable for life within something like the Tegmark multiverse is low-komplexity.
An easy-to-use one, perhaps, but I see no guarantee that it’ll be something easy to identify for others, which is what’s relevant.
Consider humans; we’re surely much simpler than a universe-spanning AGI (and also more likely to have a concept that nicely matches the human concept of “agent”; perhaps a universe-spanning AGI would instead have some elaborate range of “agent”-like concepts making fine distinctions we don’t see or don’t appreciate; but never mind that). Could you specify how to tell, using a human brain, whether something is an agent? (Recall that for komplexity-measuring purposes, if you do so by means of language or something then the komplexity of that language is part of the cost you pay. In fact, it’s worse; you need to specify how to work out that language by looking at human brains. Similarly, if you want to say “look at the neurons located here”, the thing you need to pay the komplexity-cost of is not just specifying “here” but specifying how to find “here” in a way that works for any possible human-like thing.)
Krusty’s Komplexity Kalkulator!
Kolmogorov’s, which is of course the actual reason for my initial “k”s.
It reminded me of reading Simpsons comics, is all.
What part of “universe taken over by AGI” is causing your reading comprehension to fail?
You haven’t played with cellular automata much, have you?
Ask it.
The cost of specifying a language is the cost of specifying the entity that can decode it, and we’ve already established that a universe spanning AGI has low Kolmogorov complexity.
No part. I already explained why I don’t think “universe taken over by AGI” implies “no need for lots of bits to locate what we need within the universe”; I really shouldn’t have to do so again two comments downthread.
Fair comment (though, as ever, needlessly obnoxiously expressed); I agree that there are low-komplexity things that surely contain powerful intelligences. But now take a step back and look at what you’re arguing. I paraphrase thus: “A large instance of Conway’s Life, seeded pseudorandomly, will surely end up taken over by a powerful AI. A powerful AI will be good at identifying agents and their preferences. Therefore the notions of agent and preference are low-komplexity.” Is it not obvious that you’re proving too much on the basis of too little here, and therefore that something must have gone wrong? I mean, if this argument worked it would appear to obliterate differences in komplexity between any two concepts we might care about, because our hypothetical super-powerful Life AI should also be good at identifying any other kind of pattern.
I’ve already indicated one important thing that I think has gone wrong: saying how to use whatever (doubtless terribly complicated) AI may emerge from running “Life” on a board of size 10^100 for 10^200 ticks to identify agents may require a great many bits. I think I see a number of other problems, but it’s 2.30am local time so I’ll leave you to look for them, if you choose to do so.
No. It is the cost of specifying that entity and indicating somehow that it is to decode that language rather than some other.
Let’s make this a little more concrete. You are claiming that the likely emergence of universe-spanning AGIs able to detect agency means that the notion of “agent” has low komplexity. Could you please sketch what a short program for identifying agents would look like? I gather that it begins with something like “Make a size-10^100 Life instance, seeded according to such-and-such a rule, and run it for 10^200 ticks”, which I agree is low-komplexity. But then what? How, in genuinely low-komplexity terms, are you then going to query this thing so as to identify agents in our universe?
I am not expecting you to actually write the program, of course. But you seem sure that it can be done and doesn’t need many bits, so you surely ought to be able to outline how it would work in general terms, without any points where you have to say “and then a miracle happens”.
Hard code the question in the AI’s language directly into the simulation. (This is what is known in the computational complexity world as a non-constructive existence proof.)
OK, so first let me check I’ve understood how your proposal works. I’ve rolled the agent-identifying bit into a rough attempt at a “make the universe a god would make” algorithm, since of course that’s what we’re actually after. It isn’t necessarily exactly what you have in mind, but it seems like a reasonable extrapolation.
Make a simulated universe of size N operating according to algorithm A, initially seeded according to algorithm B, and run it for time T. (Call the result U.)
Here N and T are large and A and B are simple algorithms with the property that when we do this we end up with a superintelligent AI occupying a large fraction of U.
It is a presupposition of this approach that such algorithms exist.
Now let X be a complete description of any candidate universe. Modify U to make U(X), which is like U but has somehow incorporated whatever one needs to do in universe U to ask the superintelligent AI “In the universe described by X, to what extent do the agents it contains have their preferences satisfied?”.
I’m assuming something like preference utilitarianism here; one could adapt the procedure for other notions of ethics.
It is a presupposition of this approach that there is a way to ask such a question and be confident of getting an answer within a reasonable time.
Run our simulation for a further time T’ and decode the resulting changes in U to get an answer to our question.
Now our make-a-universe algorithm goes like this: Consider all possible X below some (large but fixed) complexity bound. Do the above for each. Identify the X that gives the largest answer to our question.
Congratulations! We have now identified the Best Possible World. Predict that whatever happens in the Best Possible World is what actually happens.
And now—if in fact this is the best of all possible worlds—we have an algorithm that predicts everything, and doesn’t need any particular (perhaps-complex) laws of physics built into it. In which case, the simplest explanation for our world is that it is the best possible world and was made with that as desideratum.
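Restated as code, the proposal as I understand it looks roughly like the sketch below. It is purely illustrative: every helper function is a stand-in for a step nobody knows how to perform, and all the names are mine, not anything anyone has implemented.

```python
def run_simulation(algorithm, seed, size, ticks):
    # Stand-in: run a simulated universe (e.g. a huge Life board) until an AI takes it over.
    raise NotImplementedError

def encode_question(universe, world_description):
    # Stand-in: diddle the simulated universe so as to ask its resident AI
    # "in the world described by X, how well are the agents' preferences satisfied?"
    raise NotImplementedError

def decode_answer(universe):
    # Stand-in: read the AI's rating back out of the changed simulation state.
    raise NotImplementedError

def best_possible_world(A, B, N, T, T_prime, candidate_worlds):
    """The procedure described above: find the candidate world X that the simulated
    superintelligence rates highest, and predict that X is what actually happens."""
    U = run_simulation(algorithm=A, seed=B, size=N, ticks=T)
    best_score, best_world = float("-inf"), None
    for X in candidate_worlds:  # all descriptions below some fixed complexity bound
        queried = encode_question(U, X)
        later = run_simulation(algorithm=A, seed=queried, size=N, ticks=T_prime)
        score = decode_answer(later)
        if score > best_score:
            best_score, best_world = score, X
    return best_world
```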
So, first of all: Yes, I kinda-agree that something kinda like this could in principle kinda work, and that if it did we would have good reason to believe in a god or something like one, and that this shows that there are kinda-conceivable worlds, perhaps even rather complex ones, in which belief in a god is not absurd on the basis of Kolmogorov complexity. Excellent!
None the less, I find it unconvincing even on those terms, and considerably less convincing still as an argument that our world might be such a world. I’ll explain why.
(But first, a preliminary note. I have used the same superintelligent AI for agent-identification and universe-assessment. We don’t have to do that; we could use different ones for those two problems or something. I don’t see any particular advantage to doing so, but it was only the agent-identification problem that we were specifically discussing and for all I know you may have some completely different approach in mind for the universe-assessment. If so, some of what follows may miss the mark.)
First, there are some technical difficulties. For instance, it’s one thing to say that almost all universes (hence, hopefully, at least one very simple one) eventually contain a superintelligent AI; but it’s another to say that they eventually contain a superintelligent AI that we can induce to answer arbitrary questions and understand the answers of, by simply-specifiable diddling with its universe. It could be, e.g., that AIs in very simple universes always have very complicated implementations, in which case specifying how to ask it our question might take as much complexity as specifying how our existing world works. And it seems very unlikely that a superintelligent universe-dominating AI is going to answer whatever questions we put to it just because we ask. And there’s no particular reason to expect one of these things to have a language in any sense we can use. (If it’s a singleton, what need has it of language?)
Second, this works only when our world is in fact best-possible according to some very specific criterion. (As described above, the algorithm fails disastrously if our world isn’t exactly the best-possible world according to that criterion. We can make it more robust by making it not a make-a-universe machine but a what-happens-next machine: in any given situation it feeds a description of that to the AI and asks “what happens next, to maximize agents’ preference satisfaction?”. Or maybe it iterates over possible worlds again, looks only at those that at some point closely resemble the situation whose sequel it’s trying to predict, and chooses the one of those for which the AI gives the best rating. These both have problems of their own, and this comment is too long as it is so I won’t expand on them here. Let’s just suppose that we do somehow at least manage to make something that makes not-completely-absurd predictions about whatever situations we may encounter in the real world, using techniques closely resembling the above.)
Anyway: the point here is twofold. Even supposing our universe is best-possible according to some god’s preferences, there is no particular reason to think that the simplest superintelligent AI we find will have the exact same preferences, and predictions for what happens may well depend in a very sensitive and fiddly manner on exactly what preferences the god in question has. I see absolutely no reason to think that specifying those preferences accurately enough to enable prediction doesn’t require as many bits as just describing our physical universe does. And: in any case our universe looks so hilariously unlike a world that’s best-possible according to any simple criterion (unless the criterion is, e.g., “follows the actual world’s laws of physics”) that this whole exercise seems to have little chance of producing good predictions of our world.
That’s a fundamental misunderstanding of complexity. The laws of physics are simple, but the configurations of the universe that runs on it can be incredibly complex. The amount of information needed to specify the configuration of any single cubic centimeter of space is literally unfathomable to human minds. Running a simulation of the universe until intelligences develop inside of it is not the same as specifying those intelligences, or intelligence in general.
The convenience of some hypothetical property of intelligence does not act as a proof of that property. Please note that we are in a highly specific environment, where humans are the only sapients around, and animals are the only immediately recognizable agents. There are sci-fi stories about your “necessary” condition being exactly false; where humans do not recognize some intelligence because it is not structured in a way that humans are capable of recognizing.
The Second Law of Thermodynamics causes the Kolmogorov complexity of the universe to increase over time. What you’ve actually constructed is an argument against being able to simulate the universe in full fidelity.
This is not right, K(.) is a function that applies to computable objects. It either does not apply to our Universe, or is a constant if it does (this constant would “price the temporal evolution in”).
I sincerely don’t think it works that way. Consider the usual relationship between Shannon entropy and Kolmogorov complexity: H(X) ≈ E[K(X)], up to an additive constant. We know that the Gibbs, and thus Shannon, entropy of the universe is nondecreasing, and that means the distribution over universe-states is getting more concentrated on more complex states over time. So the Kolmogorov complexity of the universe, viewed at a given instant in time but from a “god’s eye view”, is going up.
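A crude empirical illustration of that relationship (my own toy example, with zlib’s compressed length standing in for Kolmogorov complexity; the absolute numbers include zlib’s framing overhead, but the average compressed size tracks the source entropy):

```python
import math
import random
import zlib

def bernoulli_entropy(p):
    """Shannon entropy, in bits per symbol, of a Bernoulli(p) source."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def avg_compressed_bits_per_symbol(p, n=8000, trials=20):
    """Average zlib-compressed size, in bits per source symbol, as a rough proxy for E[K(X)]."""
    total = 0
    for _ in range(trials):
        data = bytes(int(random.random() < p) for _ in range(n))
        total += len(zlib.compress(data, 9)) * 8
    return total / (trials * n)

for p in (0.05, 0.25, 0.5):
    print(f"p={p}: H={bernoulli_entropy(p):.2f}  compressed~{avg_compressed_bits_per_symbol(p):.2f} bits/symbol")
```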
You could try to calculate the maximum possible entropy in the universe and “price that in” as a constant, but I think that dodges the point in the same way as AIXI_{tl} does by using an astronomically large “constant factor”. You’re just plain missing information if you try to simulate the universe from its birth to its death from within the universe. At some point, your simulation won’t be identical to the real universe anymore, it’ll diverge from reality because you’re not updating it with additional empirical data (or rather, because you never updated it with any empirical data).
Hmmm… is there an extension of Kolmogorov complexity defined to describe the information content of probabilistic Turing machines (which make random choices) instead of deterministic ones? I think that would better help describe what we mean by “complexity of the universe”.
What does this mean? What is the expectation taken with respect to? I can construct an example where the above is false. Let x1 be the first n bits of Chaitin’s omega, x2 be the (n+1)th, …, 2nth bits of Chaitin’s omega. Let X be a random variable which takes the value x1 with probability 0.5 and the value x2 with probability 0.5. Then E[K(X)] = 0.5 O(n) + 0.5 O(n) = O(n), but H(X) = 1.
edit: Oh, I see, this is a result on non-adversarial sample spaces, e.g. {0,1}^n, in Li and Vitanyi.
Yep. I should have gone and cited it, actually.
This is not and can not be true. I mean, for one the universe doesn’t have a Kolmogorov complexity*. But more importantly, a hypothesis is not penalized for having entropy increase over time as long as the increases in entropy arise from deterministic, entropy-increasing interactions specified in advance. Just as atomic theory isn’t penalized for having lots of distinct objects, thermodynamics is not penalized for having seemingly random outputs which are secretly guided by underlying physical laws.
*If you do not see why this is true, consider that there can be multiple hypotheses which would output the same state in their resulting universes. An obvious example would be one which specifies our laws of physics and another which specifies the position of every atom without compression in the form of physical law.
This is exactly the sort of thing for which Kolmogorov complexity exists: to specify the length of the shortest hypothesis which outputs the correct result.
Atomic theory isn’t “penalized” because it has lots of distinct but repeated objects. It actually has very few things that don’t repeat. Atomic theory, after all, deals with masses of atoms.
Um, you appear to be trying to argue that the universe has infinite Kolmogorov complexity. Well, if it does, that kind of undermines the whole “we must reject God because a godless universe has lower Kolmogorov complexity” argument.
Not infinite, just growing over time. This just means that it’s impossible to simulate the universe with full fidelity from inside the universe, as you would need a bigger universe to do it in.
Not sure anyone is dumb enough to think the visible universe has low Kolmogorov complexity. That’s actually kind of the reason why we keep talking about a universal wavefunction, and even larger Big Worlds, none of which an AGI could plausibly control.
I think there’s a word missing there. (“trouble believing”? “trouble with”? “trouble recognizing”?)
Thanks, fixed.
No, but it does mean that if you want to argue that humans exist you must provide strong positive evidence, perhaps telling us an address where we can meet a real live human ;)
I could stand to meet a real-life human. I’ve heard they exist, but I’ve had such a hard time finding one!
“Omnipotent”, “omniscient”, and “being” are packing a whole shit-ton of complexity, especially “being”. They’re definitely packing more than a model of particle physics, since we know that all known “beings” are implemented on top of particle physics.
I don’t think mind designs are dependent on their underlying physics. The physics is a substrate, and as long as it provides general computation, intelligence would be achievable in a configuration of that physics. The specifics of those designs may depend on how those worlds function, like how jellyfish-like minds may be different from bird-like minds, but not the common elements of induction, analysis of inputs, and selection of outputs. That would mean the simplest a priori mind would have to be computed by the simplest provision of general computation, however. An infinitely divine Turing Machine, if you will.
That doesn’t mean a mind is more basic than physics, though. That’s an entirely separate issue. I haven’t ever seen a coherent model of God in the first place, so I couldn’t begin to judge the complexity of its unproposed existence. If God is a mind, then what substrate does it rest on?
We don’t know that beings require particle physics—if the only animal I’ve ever seen is a dog, that is not proof that zebras don’t exist.
I’m not saying that there isn’t complexity in the word “being”, just that I’m not convinced that your argument in favour of there being more complexity than particle physics is good.
“Being” surely does not have more complexity than particle physics. Particles are already beings.
“Being” in the sense of intelligent mind sure as hell does. Particles are not beings in that sense of the word, and that’s the common sense.
Kolmogorov complexity is, in essence, “How many bits do you need to specify an algorithm which will output the predictions of your hypothesis?” A hypothesis which gives a universally applicable formula is of lower complexity than one which specifies each prediction individually. More simple formulas are of lower complexity than more complex formulas. And so on and so forth.
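A toy illustration of that difference (nothing here is specific to the theism question; the printed numbers are just whatever the snippet reports): a hypothesis that lists each prediction individually grows with the data, while the generating rule stays a few dozen bytes long.

```python
import zlib

# "Specify each prediction individually": the first 1000 squares written out in full.
literal = ",".join(str(n * n) for n in range(1000)).encode()

# "A universally applicable formula": a tiny program that regenerates exactly the same data.
formula = b"print(','.join(str(n*n) for n in range(1000)))"

print(len(literal))                    # thousands of bytes of raw "predictions"
print(len(zlib.compress(literal, 9)))  # generic compression shrinks it, but it still scales with the data
print(len(formula))                    # the generating rule itself stays tiny
```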
The source of the high Kolmogorov complexity for the theistic hypothesis is God’s intelligence. Any religious theory which involves the laws of physics arising from God has to specify the nature of that God as an algorithm which specifies God’s actions in every situation with mathematical precision and without reference to any physical law which would (under this theory) later arise from God. As you can imagine, doing so would take very, very many bits to do successfully. This leads to very high complexity as a result.
If we assume that God is a free-willed agent, then that might even be impossible in a finite number of bits...
The number of bits required to specify an agent with free will (insofar as free will is a meaningful term when discussing a deterministic universe) is definitely finite. Very large, but finite. Which is a good thing, since Kolmogorov priors specify a prior of 0 for a hypothesis with infinite complexity and assigning a prior of 0 to a hypothesis is a Bad Thing for a variety of reasons.
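A one-screen sketch of why an infinitely complex hypothesis gets prior zero under a complexity-based prior (the 2^-K weighting below is the standard un-normalized Solomonoff-style form):

```python
def complexity_prior_weight(description_length_bits):
    """Un-normalized prior weight of a hypothesis whose shortest description has this many bits."""
    return 2.0 ** (-description_length_bits)

for k in (10, 100, 1000, float("inf")):
    print(k, complexity_prior_weight(k))
# 2^-inf evaluates to exactly 0.0: an infinitely complex hypothesis starts at probability zero,
# and no finite amount of evidence can ever raise a zero prior.
```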
I don’t understand the concept of specifying (in bits) an agent with free will.
The length (in bits, for a program on a universal Turing machine) of the smallest algorithm which, given the same inputs as the agent, will produce the same outputs as the agent.
Do note that I said “insofar as free will is a meaningful term when discussing a deterministic universe”. Many definitions of free will are defined around being non-deterministic, or non-computable. Obviously you couldn’t write a deterministic computer program which has those properties. But there are reasons presented on this site to think that once you pare down the definition to the basic essentials of what is really meant and stop being confused by the language used to traditionally describe free will, that you should in principle be able to have a deterministic agent who does, in fact, have free will for all meaningful purposes.
I don’t read it this way. The approach you linked to basically says that free will does not exist and is just a concept humans came up with to confuse themselves. If you accept this, then you should not use the “free will” terminology at all because there is no point to it. So I still don’t understand that concept.
Exactly so.
The only reason I’m using the free will terminology at all here is because the hypothesis under consideration (an entity with free will which resembles the Abrahamic God is responsible for the creation of our universe) was phrased in those terms. In order to evaluate the plausibility of that claim, we need a working definition of free will which is amenable to being a property of an algorithm rather than only applying to agents-in-abstract. I see no conflict between the basic notion of a divinely created universe and the framework for free will provided in the article hairyfigment links. One can easily imagine God deciding to make a universe, contemplating possible universes which They could create, using Their Godly foresight to determine what would happen in each universe and then ultimately deciding that the one we’re in is the universe They would most prefer to create. There’s many steps there, and many possible points of failure, but it is a hypothesis which you could, in principle, assign an objective Solomonoff prior to.
(Note: This post should not be taken as saying that the theistic hypothesis is true. Only that its likelihood can successfully be evaluated. I know it is tempting to take arguments of the form “God is a hypothesis which can be considered” to mean “God should be considered” or even “God is real” due to arguments being foot soldiers and it being really tempting to decry religion as not even coherent enough to parse successfully.)
Would you care to demonstrate? Preferably starting with explaining how the Solomonoff prior is relevant (note that a major point in theologies of all Abrahamic religions is that God is radically different from everything else (=universe)).
No, I would not care to demonstrate. A proof that a solution exists is not the same thing as a procedure for obtaining a solution. And this isn’t even a formal proof: it’s a rough sketch of how you’d go about constructing one, informally posted in a blog’s comment section as part of a pointless and unpleasant discussion of religion.
If you can’t follow how “It is possible-in-principle to calculate a Solomonoff prior for this hypothesis” relates to “We are dismissive of this hypothesis because it has high complexity and little evidence supporting it.” I honestly can’t help. This is all very technical and I don’t know what you already know, so I have no idea what explanation would be helpful to close that inferential distance. And the comments section of a blog really isn’t the best format. And I’m certainly not the best person to teach about this topic.
Sure, that’s fine.
And yet here we have someone talking about “free will” as if it meant something, and CCC’s usage seems entirely consistent with the meaning described here. (The link is a spoiler for the questions linked in the grandparent, but I’ve already tried to direct CCC’s attention to the computable kind of “free will” in the hope of clarifying the discussion. That user claimed to have read a large part of the Sequences.)
...could you elaborate on this point a bit more? I’d really like to know how you prove that.
Ok, everyone. LessWrong has now descended to actually arguing over the Kolmogorov complexity of the Christian God, as if this was a serious question. The Slate Star Codex readers demanding “charity” for this, that, and everything else have taken over.
LessWrong is now officially a branch of /r/philosophy. The site is dead, and everyone who actually wanted LessWrongian things can now migrate somewhere else.
Of blessed memory, 2008-2015.
Two or three people confused about K-complexity doesn’t herald the death of LW.
Well, there is a lot of motivated cognition on that topic (relevant disclaimer, I’m an atheist in the conventional sense of the word) and it seems deceptively straightforward to answer (mostly by KC-dabblers), but it is in fact anything but. The non-triviality arises from technical considerations, not some philosophical obscurantism.
This may be the wrong comment chain to get into it, and your grandstanding doesn’t exactly signal an immediate willingness to engage in medias res, so I won’t elaborate for the moment (unless you want me to).
The laws of physics as we know them are very simple, and we believe that they may actually be even simpler. Meanwhile, a mind existing outside of physics is somehow a more consistent and simple explanation than humans having hardware in the brain that promotes hypotheses involving human-like agents behind everything, which explains away every religion ever? Minds are not simpler than physics. This is not a technical controversy.
Go on and elaborate, but unless you can show some very thorough technical considerations, I just don’t see how you’re able to claim a mind has low Kolmogorov complexity.
“Mind” is a high level concept; on a base level it is just a subset of specific physical structures. The precise arrangement of water molecules in a waterfall, over time, matches if not dwarfs the KC of a mind.
That is, if you wanted to recreate precisely this or that waterfall as it precisely happened (with the orientation of each water molecule preserved with high fidelity), the strict computational complexity would be way higher than for a comparatively more ordered and static mind.
The data doesn’t care what importance you ascribe to it. It’s not as if, say, “power”, automatically comes with “hard to describe computationally”. On the contrary, allowing for a function to do arbitrary code changes is easier to implement than defining precise power limitations (see constraining an AI’s utility function).
Then there’s the sheer number of mind-phenomena: are you suggesting that adding one necessarily increases complexity? In fact, removing one can increase it as well: if I were to describe a reality in which ceteris is paribus, with the exception of your mind not actually being a mind, then by removing a mind I would have increased overall complexity. Not even taking into account that there are plenty of mind-templates around already (implicitly, since KC, even though uncomputable, is optimal), and that for complexity considerations, adding another instance of a template isn’t necessarily adding much (I’m aware that adding just a few bits already comes with a steep penalty; this comment isn’t meant to be exhaustive). See also the alphabet example further on.
Then there’s the illusion that somehow our universe is of low complexity just because the physical laws governing the transition between time-steps are simple. That is mistaken. If we just look at the laws, and start with a big bang that is not precisely informationally described, we get a multiverse host of possible universes with our universe not at the beginning, which goes counter to the KC demands. You may say “I don’t care, as long as our universe is somewhere in the output, that’s fine”. But then I propose an even simpler theory of everything: output a long enough sequence of pi, and you eventually get our universe somewhere down the line as well. So our universe’s actual complexity is enormous, down to atoms in a stone on a hill on some moon somewhere in the next galaxy. There exists a clear trade-off between explanatory power and conciseness. I used to link an old Hutter lecture on that latter topic a few years ago; I can dig it out if you’d like. (ETA: See for example the paragraph labeled “A” on page 6 in this paper of his).
The old argument that |”universe + mind”| > |”universe”| is simplistic and ill-applied. Unlike with probabilities, the sequence ABCDABCDABCDABCD can be less complex than ABCDABCDABCDABC.
The list goes on, if you want to focus on some aspect of it we can go into greater depth on that. Bottom line is, if there’s a slam dunk case, I don’t see it.
Because rationality isn’t about following reason where it takes you, it’s about sticking as dogmatically as possible to the 39 articles of lwrationality as laid down in the seq-tures.
Rationality is indeed about following reason where it takes you. This is very different from following wherever someone would have their feelings hurt if you didn’t go. Of course, rationality also involves the use of priors, evidence, and accumulated information over your entire lifetime. You are not merely allowed but required to assign a very low prior, in the range of “bloody ridiculous”, to propositions which contradict all your available information, or require some massively complex rationalization to be compatible with all your available information.
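A toy odds-form Bayes calculation (my numbers, purely illustrative) of what “a very low prior” cashes out to: a hypothesis starting at one in a million needs roughly million-to-one evidence before it even reaches even odds.

```python
def posterior(prior, likelihood_ratio):
    """Posterior probability after one piece of evidence, via the odds form of Bayes' theorem."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

print(posterior(1e-6, 10))         # ~1e-5: ten-to-one evidence barely moves it
print(posterior(1e-6, 1_000_000))  # ~0.5: it takes million-to-one evidence to reach even odds
```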
What did you have in mind specifically?
Rationality also involves paradigm shifts, revolutions and inversions. “Use priors” is not, should not be, a call for fundamental conservatism.
One person’s complex rationalisation is another’s paradigm shift.
Evolution, relativity and quantum physics are paradigm shifts. Some people still aren’t aboard with some of them, finding them against “logic”, “reason”, “common sense”, etc. The self-professed rationalist Ayn Rand rejected all three: do you want to be another Ayn Rand?
The conservative incremental paradigm, applied retroactively, would lead lwrationalists to reject good science. So they kind of don’t believe in it as the only paradigm. But they also kind of do, since it is the only paradigm they use when discussing theology, or other things they don’t like.
Not sure what “paradigm shift” is supposed to mean, but it sounds to me like “nobody had the slightest suspicion, then came a prophet, told something completely unexpected, and everyone’s mind was blown”. Well, if it is supposed to be anything like that, then evolution and relativity are poor examples (not completely sure about quantum physics).
With evolution, people already had millennia of experience with breeding. Darwin’s new idea was, essentially: “if human breeders can achieve some changes by selecting individuals with certain traits… couldn’t the forces of nature, by automatically selecting individuals who have a greater chance to survive or a greater chance to reproduce, have ultimately a similar effect on the species?”
With relativity, people already had many equations, already did the experiments that disproved the aether, etc. A large part of the puzzle was already known, Einstein “only” had to connect a few pieces together in a creative way. And then it was experimentally tested and confirmed.
By “paradigm shift”, I mean a certain amount of unlearning, overturning previously established beliefs—the fixity of species in the case of evolution, absolute simultaneity in the case of relativity, determinism in the case of quantum mechanics.
ETA:
Note the contradictions to “available information” listed above.