I am going to write the same warning I have written to rationalist friends in relation to the Great Filter Hypothesis and almost everything on Overcoming Bias: BEWARE OF MODELS WITH NO CAUSAL COMPONENTS! I repeat: BEWARE NONCAUSAL MODELS!!! In fact, beware of nonconstructive mental models as well, while we’re at it! Beware classical logic, for it is nonconstructive! Beware noncausal statistics, for it is noncausal and nonconstructive! Even when these models contain true information, and move that information from belief to belief in strict accordance with the actual laws of statistical inference, they still often fail to contain coherent propositions to which belief-values can be assigned, and fail to correspond to the real world.
Now apply the above warning to virtue ethics.
Now let’s dissolve the above warning about virtue ethics and figure out what it really means anyway, since almost all of us real human beings use some amount of it.
It’s not enough to say that human beings are not perfectly rational optimizers moving from terminal goals to subgoals to plans to realized actions and back to terminal goals. We must also acknowledge that we are creatures of muscle and neural-net, and that the lower portions (i.e., almost all) of our minds work via reinforcement, repetition, and habit, just as our muscles are built via repeated strain.
Keep in mind that anything you consciously espouse as a “terminal goal” is in fact a subgoal: people were not designed to complete a terminal goal and shut off.
Practicing virtue just means that I recognize the causal connection between my present self and future self, and optimize my future self for the broad set of goals I want to be able to accomplish, while also recognizing the correlations between myself and other people, and optimizing my present and future self to exploit those correlations for my own goals.
Because my true utility function is vast and complex and only semi-known to me, I have quite a lot of logical uncertainty over what subgoals it might generate for me in the future. However, I do know some actions I can take to make my future self better able to address a broad range of subgoals I believe my true utility function might generate, perhaps even any possible subgoal. The qualities created in my future self by those actions are virtues, and inculcating them in accordance with the design of my mind and body is virtue ethics.
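If it helps to see the shape of this, here is a minimal sketch of the idea in code; every subgoal, virtue, and number below is invented purely for illustration. It just scores candidate virtues by expected usefulness across a probability distribution over the subgoals my true utility function might hand me.

```python
# Toy model: score candidate "virtues" by expected usefulness across an
# (invented) probability distribution over future subgoals.
subgoal_probs = {
    "help a friend move": 0.3,
    "finish a hard project": 0.4,
    "handle an emergency": 0.1,
    "negotiate a conflict": 0.2,
}

# How much cultivating each virtue is guessed to improve performance on each
# subgoal. These are illustrative guesses, not measurements of anything.
virtue_payoffs = {
    "reliability": {
        "help a friend move": 0.8, "finish a hard project": 0.7,
        "handle an emergency": 0.4, "negotiate a conflict": 0.3,
    },
    "physical fitness": {
        "help a friend move": 0.9, "finish a hard project": 0.1,
        "handle an emergency": 0.8, "negotiate a conflict": 0.0,
    },
    "patience": {
        "help a friend move": 0.2, "finish a hard project": 0.6,
        "handle an emergency": 0.3, "negotiate a conflict": 0.9,
    },
}

def expected_usefulness(virtue: str) -> float:
    """Average payoff of a virtue over the distribution of possible subgoals."""
    return sum(p * virtue_payoffs[virtue][g] for g, p in subgoal_probs.items())

# Rank virtues by how broadly useful they are across possible futures.
for v in sorted(virtue_payoffs, key=expected_usefulness, reverse=True):
    print(f"{v}: expected usefulness = {expected_usefulness(v):.2f}")
```

The ranking itself is meaningless; the point is only that a “virtue” is whatever scores well across the whole distribution of possible subgoals rather than against any single one.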
As an example, I helped a friend move his heavy furniture from one apartment to another because I want to maintain the habit of loyalty and helpfulness to my friends (cue House Hufflepuff) for the sake of present and future friends, despite this particular friend being a total mooching douchebag. My present decision will change the distribution of my future decisions, so I need to choose on behalf of both my present self and my potential future selves.
Not really that complicated, when you get past the philosophy-major stuff and look at yourself as a… let’s call it, a naturalized human being, a body and soul together that are really just one thing.
I will reframe this to make sure I understand it:
Virtue Ethics is like weightlifting. You gotta hit the gym if you want strong muscles. You gotta throw yourself into situations that cultivate virtue if you want to be able to act virtuously.
Consequentialism is like firefighting. You need to set yourself up somewhere with a firetruck and hoses and rebreathers and axes and a bunch of cohorts who are willing to run into a fire with you if you want to put out fires.
You can’t put out fires by weightlifting, but when the time comes to actually rush into a fire, bust through some walls, and drag people out, you really should have been hitting the gym consistently for the past several months.
That’s such a good summary I wish I’d just written that instead of the long spiel I actually posted.
Thanks for the compliment!
I am currently racking my brain to come up with a virtue-ethics equivalent of the “bro, do you even lift” shorthand: something pithy to remind people who want their System-1 responses to act in line with their System-2 goals that System-1 training matters.
Rationalists should win?
Maybe with a sidenote about how continuously recognizing, in detail, how you failed to win just now is not itself winning.
‘Do you even win [bro/sis/sib]?’
How about ‘Train the elephant’?
Here’s how I think about the distinction on a meta-level:
“It is best to act for the greater good (and acting for the greater good often requires being awesome).”
vs.
“It is best to be an awesome person (and awesome people will consider the greater good).”
where “acting for the greater good” means “having one’s own utility function in sync with the aggregate utility function of all relevant agents” and “awesome” means “having one’s own terminal goals in sync with ‘deep’ terminal goals (possibly inherent in being whatever one is)” (e.g. Sam Harris/Aristotle-style ‘flourishing’).
So arete, then?
Can you explain this part more?
With pleasure!
Ok, so the old definition of “knowledge” was “justified true belief”. Then it turned out that there were times when you could believe something true, but have the justification be mere coincidence. I could believe “Someone is coming to see me today” because I expect to see my adviser, but instead my girlfriend shows up. The statement as I believed it was correct, but for a completely different reason than I thought. So Alvin Goldman changed this to say, “knowledge is true belief caused by the truth of the proposition believed-in.” This makes philosophers very unhappy but Bayesian probability theorists very happy indeed.
Where do causal and noncausal statistical models come in here? Well, right here, actually: Bayesian inference is just a logic of plausible reasoning, a way of moving belief around from one proposition to another, which means it works on any set of propositions for which there exists a mutually consistent assignment of probabilities.
This means that quite often, even the best Bayesians (and frequentists as well) construct models (let’s switch to saying “map” and “territory”) which not only are not caused by reality, but don’t even contain enough causal machinery to describe how reality could have caused the statistical data.
This happens most often with propositions of the form “There exists X such that P(X)” or “X or Y” and so forth. These are the propositions where belief can be deduced without constructive proof: without being able to actually exhibit the object the proposition applies to. Unfortunately, if you can’t exhibit the object via constructive proof (note that constructive proofs are isomorphic to algorithms for actually generating the relevant objects), I’m fairly sure you cannot possess a proper description of the causal mechanisms producing the data you see. This means that not only might your hypotheses be wrong, your entire hypothesis space might be wrong, which could make your inferences Not Even Wrong, or merely confounded.
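To make the nonconstructive case concrete, here is a toy calculation with invented numbers: credence in an existential claim can approach certainty even though no particular witness can be exhibited.

```python
# Probability that at least one of n candidates has property X, when each
# individual candidate has it with only a small probability p.
# All numbers are invented for illustration.
p = 0.01   # credence that any *particular* candidate has the property
n = 1000   # number of candidates (assumed independent)

p_exists = 1 - (1 - p) ** n
print(f"P(at least one candidate has X) = {p_exists:.5f}")   # about 0.99996
print(f"P(any particular candidate has X) = {p:.5f}")        # 0.01

# Belief in the existential claim is nearly certain, yet no specific witness
# can be exhibited: the nonconstructive situation described above.
```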
(I can’t provide mathematics showing any formal tie between causation/causal modeling and constructive proof, but I think this might be because I’m too much an amateur at the moment. My intuitions say that in a universe where incomputable things don’t generate results in real-time and things don’t happen for no reason at all, any data I see must come from a finitely-describable causal process, which means there must exist a constructive description of that process—even if classical logic could prove the existence of and proper value for the data without encoding that constructive description!)
What can also happen, again particularly if you use classical logic, is that you perform sound inference over your propositions, but the propositions themselves are not conceptually coherent in terms of grounding themselves in causal explanations of real things.
So to use my former example of the Great Filter Hypothesis: sure, it makes predictions; sure, we can assign probabilities; sure, we can do updates. But nothing about the Great Filter Hypothesis is constructive or causal, nothing about it tells us what to expect the Filter to do or how it actually works. Which means it’s not actually telling us much at all, as far as I can tell.
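To make the complaint concrete, here is a perfectly legal Bayesian update over two Filter hypotheses, with priors and likelihoods I have simply invented. The bookkeeping goes through fine, and tells us nothing about how any Filter would actually operate.

```python
# A valid Bayesian update over two "Great Filter" hypotheses. Nothing here
# encodes any mechanism for how a Filter would work; the priors and
# likelihoods are pulled out of thin air, which is rather the point.
priors = {"Filter is behind us": 0.5, "Filter is ahead of us": 0.5}

# Likelihood of the one observation we actually have ("we detect no one")
# under each hypothesis (invented numbers).
likelihood_of_silence = {"Filter is behind us": 0.99, "Filter is ahead of us": 0.90}

evidence = sum(priors[h] * likelihood_of_silence[h] for h in priors)
posterior = {h: priors[h] * likelihood_of_silence[h] / evidence for h in priors}

for h, prob in posterior.items():
    print(f"P({h} | silence) = {prob:.3f}")
# The update is perfectly valid probability bookkeeping, yet nothing in it
# describes what the Filter is or how it would act.
```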
(In relation to Overcoming Bias, I’ve ranted similarly about explaining all possible human behaviors in terms of signalling, status, wealth, and power. Paging /u/Quirinus_Quirrell… If they see a man flirting with a woman at a party, Quirrell and Hanson will seem to explain it in terms of signalling and status, while I will deftly and neatly predict that the man wants to have sex with the woman. Their explanation sounds right until you try to read its source code, look at the causal machinery working, and find that it dissolves into a cloud around the edges. My explanation grounds itself in hormonal biology and previous observation of situations where similar things occurred.)
If I am insane and think I’m the Roman emperor Nero, and then reason “I know that according to the history books the emperor Nero is insane, and I am Nero, so I must be insane”, do I have knowledge that I am insane?
Note that this also messes up counterfactual accounts of knowledge as in “A is true and I believe A; but if A were not true then I would not believe A”. (If I were not insane, then I would not believe I am Nero, so I would not believe I am insane.)
We likely need some notion of “reliability” or “reliable processes” in an account of knowledge, like “A is true and I believe A and my belief in A arises through a reliable process”. Believing things through insanity is not a reliable process.
Gettier problems arise because processes that are usually reliable can become unreliable in some (rare) circumstances, but still (by even rarer chance) get the right answers.
The insanity example is not original to me (although I can’t seem to Google it up right now). Using reliable processes isn’t original, either, and if that actually worked, the Gettier Problem wouldn’t be a problem.
Interesting thought, but surely the answer is no. If I take the word “knowledge” in this context to mean having a model that reasonably depicts reality in its contextually relevant features, then the word “insane” in this specific instance picks out two very different, albeit related, brain patterns.
Simply put, the brain pattern (wiring + process) that makes the person think they are Nero is a different, though surely related, physical object from the brain pattern that depicts what that person thinks “Nero being insane” might actually look like in terms of beliefs and behaviors. In light of the context we can say the person doesn’t have any knowledge about being insane, since that person’s knowledge does not include (or take seriously) the belief that depicts the presumably correct model of reality: that they are not actually Nero.
Put even more simply, we use the same concept/word to model two related but fundamentally different things. Does that person have knowledge about being insane? It’s the tree-falling-in-the-forest problem: the word “insane” is describing two fundamentally different things yet is wrongly taken to mean the same thing. I’d claim any reasonable concept of the word “insane” leads you to conclude that the person does not have knowledge about being insane in the sense that is contextually relevant here, while they might have roughly true knowledge about how Nero was insane and how that manifested. But those are two different things, and the latter is not the contextually relevant knowledge about insanity.
I don’t think that explanation works. One of the standard examples of the Gettier problem is, as eli described, a case where you believe A, A is false, B is true, and the question is “do you have knowledge of (A OR B)”. The “caused by the truth of the proposition” definition is an attempt to get around this.
So your answer fails because it doesn’t actually matter that the word “insane” can mean two different things: A is “is insane like Nero”, B is “is insane in the sense of having a bad model”, and “A OR B” is just “is insane in either sense”. You can still ask if he knows he’s insane in either sense (that is, whether he knows “(A OR B)”), and in that case his belief in (A OR B) is caused by the truth of the proposition.
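To lay the structure out mechanically, here is a sketch of the Nero case as an explicit causal chain; the encoding and the Goldman-style check are my own framing, purely illustrative.

```python
# Propositions about the agent:
#   A = "he is insane in the way Nero was insane"                    (false)
#   B = "he is insane in the sense of having a broken world-model"   (true)
truth = {"A": False, "B": True, "A or B": True}

# The chain that actually produced his belief in "A or B":
# broken world-model (B) leads him to believe "I am Nero", hence to believe A,
# hence to believe "A or B".
causal_chain = ["B", 'believes "I am Nero"', "believes A", 'believes "A or B"']

# Goldman-style check: is the fact that makes "A or B" true (namely B)
# on the causal path that produced the belief in "A or B"?
truth_maker = "B"
caused_by_truth = truth_maker in causal_chain[:-1]

print("'A or B' is true:", truth["A or B"])                       # True
print("Belief in 'A or B' caused by its truth-maker:", caused_by_truth)  # True
# The causal criterion is satisfied, yet we hesitate to call this knowledge,
# which is exactly why the example is awkward for the causal account.
```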
Yes it is: the Great Filter Hypothesis is causal in the same sense that the mathematics of physical laws is causal.
You do realize the two explanations of the flirting example aren’t contradictory, and are in fact mutually reinforcing? In particular, the man wants to have sex with her and is engaging in status-signalling games to accomplish that goal. Also, his reasons for wanting to have sex with her may themselves include signalling and status.
?
If the Filter is real, then its effects are what causes us to think of it as a hypothesis. That makes it “true belief caused by the truth of the proposition believed-in”, conditional on it actually being true.
I don’t get it.
That could only be true if it lay in our past, or in the past of the Big Finite Number of other species in the galaxy it has already killed off. The actual outcome we see is just an absence of Anyone Else detectable to our instruments so far, despite a relative abundance of seemingly life-capable planets. We don’t see signs of any particular causal mechanism acting as a Great Filter, like a homogenizing swarm expanding across the sky because some earlier species built a UFAI or something.
When we don’t see signs of any particular causal mechanism, but we’re still not seeing what we expect to see, I personally would say the first and best explanation is that we are ignorant, not that some mysterious mechanism destroys things we otherwise expect to see.
Hm? Why doesn’t Rare Earth solve this problem? We don’t have the tech yet to examine the surfaces of exoplanets, so for all we know the foreign-Earth candidates we’ve got now will end up being just as inhospitable as the rest of them. “Seemingly life-capable” isn’t a very high bar at the moment.
Now, if we did have the tech, and saw a bunch of lifeless planets that as far as we know had nearly exactly the same conditions as pre-Life Earth, and people started rattling off increasingly implausible and special-pleading reasons why (“no planet yet found has the same selenium-tungsten ratio as Earth!”), then there’d be a problem.
I don’t see why you need to posit exotic scenarios when the mundane will do.
Neither do I, hence my current low credence in a Great Filter and my correspondingly high credence for, “We’re just far from the mean; sometimes that does happen, especially in distributions with high variance, and we don’t know the variance right now.”
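As a quick illustration of the variance point, with stand-in distributions rather than a model of anything astrophysical: how surprising the same-sized deviation looks depends heavily on tail behavior we do not actually know.

```python
import random

random.seed(0)
N = 500_000
threshold = 3.0  # a deviation that looks "extreme" by thin-tailed standards

# Thin-tailed stand-in: standard normal.
thin = sum(random.gauss(0, 1) > threshold for _ in range(N)) / N

# Higher-variance stand-in: usually narrow, occasionally much wider, so the
# overall variance is dominated by a rare wide component.
def wide_mixture() -> float:
    return random.gauss(0, 1) if random.random() < 0.95 else random.gauss(0, 5)

heavy = sum(wide_mixture() > threshold for _ in range(N)) / N

print(f"P(x > {threshold}) with thin tails:     {thin:.5f}")   # about 0.001
print(f"P(x > {threshold}) with a wide mixture: {heavy:.5f}")  # roughly an order of magnitude larger
```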
Well I agree with you on all of that. How is it non-causal?
Or have I misunderstood and you only object to the “aliens had FOOM AI go wrong” explanations but have no trouble with the “earth is just weird” explanation?
It isn’t. The people who affirmatively believe in the Great Filter being a real thing rather than part of their ignorance are, in my view, the ones who believe in a noncausal model.
The problem with the signaling hypothesis is that in everyday life there is essentially no observation you could possibly make that could disprove it. What’s that? This guy is not actually signaling right now? No way, he’s really just signaling that he is so über-cool that he doesn’t even need to signal to anyone. Wait, there’s not even anyone else in the room? Well, through this behavior he is signaling to himself how cool he is, so as to believe it even more.
I guess the only way to find out is if we can actually identify “the signaling circuit” and do functional brain scans. I would actually expect signaling to explain an obscene amount of human behavior… but really everything? As I said, I can’t think of any possible observation, outside of functional brain scans, that we could make that could disprove the signaling hypothesis of human behavior. (A brain scan where we actually know what we are looking at and are measuring the right construct, obviously.)
Thanks for pushing this. I nodded along to the grandparent post and then when I came to your reply I realized I had no idea what this part was talking about.
It is not enough to say we don’t move smoothly from terminal goal to subgoal. It is enough to say we are too messily constructed to have distinct terminal goals and subgoals.
It sounds like you’re thinking of the “true utility function’s” preferences as a serious attempt to model the future consequences of present actions, including their effect on future brain-states.
I don’t think that’s always how the brain works, even if you can tell a nice story that way.
I think that’s usually not how the brain works, but I also think that I’m less than totally antirational. That is, it’s possible to construct a “true utility function” that would dictate to me a life I will firmly enjoy living.
That statement has a large inferential distance from what most people know, so I should actually hurry up and write the damn LW entry explaining it.
I think you could probably construct several mutually contradictory utility functions which would dictate lives you enjoy living. I think it’s even possible that you could construct several which you’d perceive as optimal, within the bounds of your imagination and knowledge.
I don’t think we yet have the tools to figure out which one actually is optimal. And I’m pretty sure the latter aren’t a subset of the former; we see plenty of people convincing themselves that they can’t do better than their crappy lives.
Well that post happened.
Like I said: there’s a large inferential distance here, so I’m drafting an entire post on the subject covering notions of construction and optimality.