You were not entirely clear, but you seem to be taking these as signals of things being Bad or Good in the morality sense, right? OK, so it feels like there is an objective morality. Let’s come up with hypotheses:
You have a morality that is the thousand shards of desire left over by an alien god. Things that it was a good idea (for game-theoretic and other reasons) to do in the ancestral environment tend to feel good, so that you would do them; things that feel bad are things you would have wanted to avoid. As we know, an objective morality is what a personal morality feels like from the inside. That is, you are feeling the totally natural feelings of morality that we all feel. As for why you attached special affect to the Bible, I suppose that’s the affect heuristic: you feel like the Bible is true and it is the center of your belief or something, and that goodness gets confused with moral goodness. This is all hindsight, but it seems pretty sound.
Or it could be Jesus-is-Son-of-a-Benevolent-Love-Agent-That-Created-the-Universe. I guess God is sending you signals to say what sort of things he likes/doesn’t like? Is that the proposed mechanism for morality? I don’t know enough about the theory to say much more.
OK, now let’s consider the prior. The complex loving god hypothesis is incredibly complicated. Minds are so complex we can’t even build one yet. It would take a hell of a lot more than your feeling-of-morality evidence to even raise this hypothesis to our attention; a lot more than any scientific hypothesis has ever collected, I would say. You must have other evidence, not only to overcome the prior, but also to overcome all the evidence against a loving god who intelligently arranged anything.
Anyway, it sounds like you were primarily a moral nihilist before your encounter with the god-prescribes-a-morality hypothesis. Have you read Eliezer’s metaethics sequence? It deals with the subject of morality in a neutral universe quite well.
I’m afraid I don’t see why you call your reward-signal-from-god an “objective morality.” It sounds like the best course of action would be to learn the mechanism and seize control of it, like AIXI would.
I (as a human) already have a strong morality, so if I figured out that the agent responsible for all of the evil in the universe were directly attempting to steer me with a subtle reward signal, I’d be pissed. It’s interesting that you didn’t have that reaction. I guess that’s the moral nihilism thing. You didn’t know you had your own morality.
There are two problems with this argument (that a complex loving god is too improbable a priori). First, each individual god might be very improbable, but that could be counterbalanced by the astronomical number of possible gods (e.g. consider all possible tweaks to the holy book), so you can argue a priori against specific flavors of theism but not against theism in general. Second, if Eliezer is right and AI can develop from a simple seed someone can code up in their garage, that means powerful minds don’t need high K-complexity. A powerful mind (or a program that blossoms into one) could even be simpler than physics as we currently know it, which is already quite complex and seems to have even more complexity waiting in store.
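To make the first point concrete, here is a rough back-of-the-envelope version (the split into K “generic powerful mind” bits and N “holy-book detail” bits is purely illustrative, not something from the comments above):

```latex
% Toy version of the "many possible gods" point.
% Suppose a fully specific god hypothesis costs K + N bits:
%   K bits for "some powerful creator mind exists",
%   N bits for the particular details (which holy book, which tweaks, ...).
\[
P(\text{one specific god}) \approx 2^{-(K+N)}
\]
% There are roughly 2^N equally detailed variants, so the total prior mass
% on "some god or other" collapses back to the generic cost alone:
\[
P(\text{theism in general}) \approx 2^{N} \cdot 2^{-(K+N)} = 2^{-K}
\]
```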
IMO a correct argument against theism should focus on the “loving” part rather than the “mind” part, and focus on evidence rather than complexity priors. The observed moral neutrality of physics is more probable if there’s no moral deity. Given what we know about evolution etc., it’s hard to name any true fact that makes a moral deity more likely.
I’m not sure that everything in my comment is correct. But I guess LW could benefit from developing an updated argument against (or for) theism?
Your argument about K-complexity is a decent shorthand, but it leads people to think that this “simplicity” thing is baked into the universe (a universal prior), as if we had direct access to that prior and its reference machine language, when it is really just another way of saying something is more probable after having updated on a ton of evidence. As you said, it should be about evidence, not priors. No one’s ever seen a prior; at best we have a brain’s frequentist judgment about which “priors” are good to use when.
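For reference, the “evidence, not priors” point can be written out in the odds form of Bayes’ theorem (a standard identity, nothing specific to the universal prior):

```latex
% Posterior odds after evidence E_1, ..., E_n:
\[
\frac{P(H_1 \mid E_{1:n})}{P(H_2 \mid E_{1:n})}
  = \frac{P(H_1)}{P(H_2)}
    \prod_{i=1}^{n} \frac{P(E_i \mid H_1, E_{1:i-1})}{P(E_i \mid H_2, E_{1:i-1})}
\]
% The prior ratio contributes one factor; the evidence contributes n factors.
% With enough evidence the product of likelihood ratios swamps any
% non-dogmatic prior, which is one reading of "it should be about evidence,
% not priors".
```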
That may be somewhat misleading. A seed AI, denied access to external information, will be a moron. Yet the more information it takes into memory, the higher the K-complexity of the thing taken as a whole.
You might be able to code a relatively simple AI in your garage, but if it’s going to be useful it can’t stay simple.
ETA: Also, if you take the computer system as a whole, with all of the programming libraries and hardware arrangements, even ‘hello world’ would have high K-complexity. If you’re assigning probability mass to whatever produces a given output on the screen, I’m not sure it’s reasonable to separate the two and treat K-complexity as simply a manifestation of high-level APIs.
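A standard textbook reply to the “do the libraries and hardware count?” worry is the invariance theorem: changing the reference machine (a bare universal Turing machine versus a whole OS-plus-API stack) shifts K-complexity only by an additive constant, roughly the length of an interpreter for one machine written on the other. A sketch:

```latex
% Invariance theorem (standard result): for any two universal machines U, V
% there is a constant c_{UV}, independent of x, such that
\[
\lvert K_U(x) - K_V(x) \rvert \le c_{UV} \quad \text{for all strings } x.
\]
% So "hello world" measured against the full software/hardware stack and
% measured against a bare machine differ by a fixed interpreter-sized
% overhead that does not grow with the object being described.
```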
Relevant LW post.
For every program that could be called a mind, there are very, very many that are not.
Eliezer’s “simple” seed AI is simple compared to an operating system (which people code up in their garages), not compared to laws of physics.
As long as we continue to accept Occam’s razor, there’s no reason to postulate fundamental gods.
Given that a god exists by other means (alien singularity), I would expect it to appear approximately moral, because it would have created me (or modified me) with approximately its own morality. I assume that god would understand the importance of friendly intelligence. So yeah, the apparent neutrality is evidence against the existence of anything like a god.
Fair point, but I think you need lots of code only if you want the AI to run fast, and K-complexity doesn’t care about speed. A slow naive implementation of “perfect AI” should be about the size of the math required to define a “perfect AI”. I’d be surprised if it were bigger than the laws of physics.
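As a toy illustration of the “slow naive implementation is short” point, here is a deliberately crippled Solomonoff-style predictor: it enumerates every program of a tiny made-up two-bit-opcode language up to a small size, runs each with a step bound, and weights the survivors by 2^(-program length in bits). The mini-language, the step bound, and the size cutoff are my own illustrative choices; a real AIXI-style agent is uncomputable, and even this toy blows up exponentially in the cutoff while the code itself stays a few dozen lines.

```python
"""Toy 'naive Solomonoff predictor'.

Programs are sequences of two-bit opcodes in a made-up mini-language:
    (0,0) emit the accumulator bit, (0,1) flip the accumulator,
    (1,0) jump back to the start, (1,1) halt.
Each program is weighted by 2^(-number of bits), and programs whose output
extends the observed bit string vote on the next bit.  The point is that
the brute-force predictor's *description* is short even though its runtime
grows exponentially with the program-length cutoff.
"""

from itertools import product


def run(program, max_steps=200):
    """Interpret a tuple of two-bit opcodes; return the emitted bits."""
    out, acc, pc, steps = [], 0, 0, 0
    while pc < len(program) and steps < max_steps:
        op = program[pc]
        if op == (0, 0):      # emit the current accumulator bit
            out.append(acc)
        elif op == (0, 1):    # flip the accumulator
            acc ^= 1
        elif op == (1, 0):    # jump back to the start (may loop forever)
            pc = -1
        else:                 # (1, 1): halt
            break
        pc += 1
        steps += 1
    return tuple(out)


def predict_next(observed, max_opcodes=6):
    """Mixture prediction for the next bit after `observed`."""
    weights = {0: 0.0, 1: 0.0}
    opcodes = [(0, 0), (0, 1), (1, 0), (1, 1)]
    for n in range(1, max_opcodes + 1):
        for program in product(opcodes, repeat=n):  # 4**n programs of n opcodes
            out = run(program)
            if len(out) > len(observed) and out[:len(observed)] == observed:
                weights[out[len(observed)]] += 2.0 ** (-2 * n)  # 2n bits long
    total = weights[0] + weights[1]
    return {bit: w / total for bit, w in weights.items()} if total else None


if __name__ == "__main__":
    # After seeing 0 1 0 1 0 1, the mixture should strongly favour 0 next.
    print(predict_next((0, 1, 0, 1, 0, 1)))
```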
You’re right; AIXI or whatever is probably around the same complexity as physics. I bet physics is a lot simpler than it appears right now, though.
Now I’m unsure that a fundamental intelligence even means anything. AIXI, for example, is IIRC based on Bayes and Occam induction, whose domain is cognitive engines within universes more or less like ours. What would a physics god optimising some morality even be able to see and do? It sure wouldn’t be constrained by Bayes and such. Why not just replace it with a universe that is whatever morality maximised? max(morality) is simpler than god(morality) almost no matter how simple god is. Assuming a physics god is even a coherent concept.

In our case, assuming a fundamental god is coherent, the “god did it” hypothesis is strictly defeated (same predictions, less theory) by the “god did physics” hypothesis, which is strictly defeated by the “physics” hypothesis. (Because physics is a simpler morality than anything else that would produce our world, and if we use physics, god doesn’t have to exist.)
That leaves us with only alien singularity gods, which are totally possible, but don’t exist here by the reasoning I gave in parent.
What did I miss?
That’s a reasonable bet. Another reasonable bet is that “laws of physics are about as complex as minds, but small details have too little measure to matter”.
Well, yeah. Then I guess the question is whether our universe is a byproduct of computing max(morality) for some simple enough “morality” that’s still recognizable as such. Will_Newsome seems to think so, or at least that’s the most sense I could extract from his comments...
Friendly intelligence is not particularly important when the intelligence in question is significantly less powerful an optimizer than its creator. I’m not really sure what would motivate a superintelligence to create entities like me, but given the assumption that one did so, it doesn’t seem more likely that it created me with (approximately) its own morality than that it created me with some different morality.
I take it you don’t think we have a chance of creating a superpowerful AI with our own morality?
We don’t have to be very intelligent to be a threat if we can create something that is.
I don’t think we have a chance of doing so if we have a superintelligent creator who has taken steps to prevent us from doing so, no. (I also don’t think it likely that we have such a creator.)
Bayesians don’t believe in evidence, silly goose, you know that. Anyway, User:cousin_it, you’re essentially right, though I think that LW would benefit less from developing updated arguments and more from reading Aquinas, at least in the counterfactual universe where LW knew how to read. Anyway, in the real world Less Wrong is hopeless. You’re not hopeless. As a decision theorist you’re trying to find God, so you have to believe in him in a sense, right? And if you’re not trying to find God, you should probably stay the hell away from FAI projects. Just sayin’.
A really intelligent response, so I upvoted you, even though, as I said, it surprised me by telling me (just as one example) that tarot cards are Bad when I had not even considered the possibility; so I doubt this came from inside me.
Well, you are obviously not able to predict the output of your own brain; that’s the whole point of the brain. If morality is in the brain and still too complex to understand, you would expect to encounter moral feelings that you had not anticipated.
Er, I thought it was overall pretty lame, e.g. the whole question-begging w.r.t. the ‘prior probability of omnibenevolent omnipowerful thingy’ thingy (nothing annoys me more than abuses of probability theory these days, especially abuses of algorithmic probability theory). Perhaps you are conceding too much in order to appear reasonable. Jesus wasn’t very polite.
By the way, in case you’re not overly familiar with the heuristics and biases literature, let me give you a hint: it sucks. At least the results that most folk around here cite have basically nothing to do with rationality. There’s some quite good stuff with tons of citations, e.g. Gigerenzer’s, but Eliezer barely mentioned it to Less Wrong (as fastandfrugal.com, which he endorsed), and therefore, as expected, Less Wrong doesn’t know about it. (Same with interpretations of quantum mechanics, as Mitchell Porter often points out. I really hope that Eliezer is pulling some elaborate prank on humanity. Maybe he’s doing it unwittingly.)
Anyway the upshot is that when people tell you about ‘confirmation bias’ as if it existed in the sense they think it does then they probably don’t know what the hell they’re talking about and you should ignore them. At the very least don’t believe them until you’ve investigated the literature yourself. I did so and was shocked at how downright anti-informative the field is, and less shocked but still shocked at how incredibly useless statistics is (both Bayesianism as a theoretical normative measure and frequentism as a practical toolset for knowledge acquisition). The opposite happened with the parapsychology literature, i.e. low prior, high posterior. Let’s just say that it clearly did not confirm my preconceptions; lolol.
Lastly, towards the esoteric end: All roads lead to Rome, if you’ll pardon a Catholicism. If they don’t, it’s not because the world is mad qua mad; it is because it is, alas, sinful. An easy way to get to hell is to fall into a fully-general-counterargument black hole, or a literal black hole maybe. Those things freak me out.
(P.S. My totally obnoxious arrogance is mostly just a passive aggressive way of trolling LW. I’m not actually a total douchebag IRL. /recursive-compulsive-self-justification)
Explain?
Explain?
Elaborate?
I love how Less Wrong basically thinks that all evidence that doesn’t support its favored conclusion is bad because it just leads to confirmation bias. “The evidence is on your side, granted, but I have a fully general counterargument called ‘confirmation bias’ that explains why it’s not actually evidence!” Yeah, confirmation bias, one of the many claimed cognitive biases that arguably doesn’t actually exist. (Eliezer knew about the controversy, which is why his post is titled “Positive Bias”, which arguably also doesn’t exist, especially not in a cognitively relevant way.) Then they talk about Occam’s razor while completely failing to understand what algorithmic probability is actually saying. Hint: It definitely does not say that naturalistic mechanistic universes are a priori more probable! It’s like they’re trolling and I’m not supposed to feed them but they look sort of like a very hungry, incredibly stupid puppy.
Explain?
http://library.mpib-berlin.mpg.de/ft/gg/gg_how_1991.pdf is exemplary of the stuff I’m thinking of. Note that that paper has about 560 citations. If you want to learn more then dig into the literature. I really like Gigerenzer’s papers as they’re well-cited and well-reasoned, and he’s a statistician. He even has a few papers about how to improve rationality, e.g. http://library.mpib-berlin.mpg.de/ft/gg/GG_How_1995.pdf has over 1,000 citations.
Searching and skimming, the first link does not seem to actually say that confirmation bias does not exist. It says that it does not appear to be the cause of “overconfidence bias”—it seems to take no position on whether it exists otherwise.
Okay, yeah, I was taking a guess. There are other papers that talk about confirmation/positive bias specifically, a lot of it in the vein of this kind of stuff. Maybe Kaj’s posts called ‘Heuristics and Biases Biases?’ from here on LW reference some relevant papers too. Sorry, I have limited cognitive resources at the moment; I’m mostly trying to point in the general direction of the relevant literature because there’s quite a lot of it.
Hard to know whether to agree or disagree without knowing “more probable than what?”
Sorry. More probable than supernaturalistic universes of the sort that the majority of humans finds more likely (where e.g. psi phenomena exist).
So I think you’re quite right that “supernatural” and “natural” are sets that contain possible universes of very different complexity, and that those two adjectives are not obviously relevant to the complexity of the universes they describe. I support tabooing those terms. But if you compare two universes, one of which is described most simply by the wave function and an initial state, and another which is described by the wave function, an initial state, and another section of code describing the psychic powers of certain agents, the latter universe is a priori less likely (bracketing for the moment the simulation issue). Obviously, if psi phenomena can be incorporated into the physical model without adding additional lines of code, that’s another matter entirely.
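In universal-prior terms that comparison is just an exponential penalty in the number of extra bits (the Δ below is of course not something anyone can actually compute):

```latex
% If universe U_2 needs everything U_1 needs (K_1 bits) plus an extra
% psi module of roughly Delta bits, the prior odds are
\[
\frac{P(U_2)}{P(U_1)} \approx \frac{2^{-(K_1+\Delta)}}{2^{-K_1}} = 2^{-\Delta},
\]
% unless, as noted above, the psi phenomena fall out of the existing
% physical code for free (Delta close to 0).
```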
Returning to the simulation issue, I take your position to be that there are conceivable “meta-physics” (meant literally; not necessarily referring to the branch of philosophy) which can make local complexities more common? Is that a fair restatement? I have a suspicion that this is not possible without paying the complexity back at the other end, though I’m not sure.
Boltzmann brain, maybe?
Explain?
What was said that’s a synonym for or otherwise invoked the confirmation bias?
It’s mentioned a few times in this thread re AspiringKnitter’s evidence for Christianity. I’m too lazy to link to them, especially as it’d be so easy to get the answer to your question with Ctrl+F “confirmation” that I’m not sure I’ve interpreted your question correctly?