Even though the Wikipedia page for “meaning of life” is enormous, it all boils down to the very simple either/or statement I gave.
How do we know if something is answerable? Did a chicken just materialize 10 billion light years from Earth? We can’t answer that. Is the color blue the best color? We can’t answer that. We can answer questions that contact reality such that we can observe them directly or indirectly. Did a chicken just materialize in front of me? No. Is the color blue the most preferred color? I don’t know, but it can be well answered through reported preferences. I don’t know whether these currently unanswerable questions will always be unanswerable, but given what I know I can only say that they will almost certainly remain so (either because testing them is infeasible or because the question is nonsensical).
Wouldn’t science need to do conceptual analysis? Not really, though it could appear that way. Philosophy has “free will”; science has “volition.” Free will is a label for a continually argued concept. Volition is a label for an axiom that’s been nailed in stone. Science doesn’t really care about concepts; it just wants to ask questions such that it can answer them definitively.
Even though science might provide all the knowledge necessary to easily answer a question, it doesn’t actually answer it, right? My answer: so what? Science doesn’t answer a lot of trivial questions, like exactly what I should eat for breakfast, even though the answer is perfectly obvious (healthy food, as discovered by science, if I want to remain healthy).
Why still have the hard problem of consciousness if it’s answerable by science? Because the brain is hard to understand. Give it another century or so. We’ve barely explored the brain.
What if consciousness isn’t explainable by science? When we get to that point, we’ll be much better prepared to understand what direction we need to go to understand the brain. As it is now, philosophy is simply following science’s breadcrumbs. There is no point in doing philosophy unless there is a reasonable expectation that it will solve a problem better than anything else could.
A scientific theory of ethics? It wouldn’t contain any “you ought to do X because X is good”; it would take the form “science says X, Y, Z are healthy for you,” and then you would think, “hey, I want to be healthy, so I’m going to eat X, Y, Z.” This is actually how philosophy works now. You get a whole bunch of argumentation as evidence, and then you must enact it personally through hypothetical injunctions like “if I want to maximize well-being, then I should act as a utilitarian.”
Even though the Wikipedia page for “meaning of life” is enormous, it all boils down to the very simple either/or statement I gave.
Provided you ignore the enormous amount of substructure hanging off each option.
How do we know if something is answerable?
We generally perform some sort of armchair conceptual analysis.
Wouldn’t science need to do conceptual analysis? Not really,
Why not? Doesn’t it need to decide which questions it can answer?
Volition is a label for an axiom that’s been nailed in stone.
First I’ve heard of it. Who did that? Where was it published?
Why still have the hard problem of consciousness if it’s answerable by science? Because the brain is hard to understand.
Or impossible, or the brain isn’t solely responsible, or something else. It would have helped to have argued for your preferred option.
Give another century or so. We’ve barely explored the brain.
As it is now, philosophy is simply following science’s breadcrumbs. There is no point in doing philosophy, unless there is a reasonable expectation that it will solve a problem that can be more likely solved by something else.
Philosophy generally can’t solve scientific problems, and science generally can’t solve philosophical ones.
A scientific theory of ethics? It wouldn’t contain any “you ought to do X because X is good”; it would take the form “science says X, Y, Z are healthy for you,” and then you would think, “hey, I want to be healthy, so I’m going to eat X, Y, Z.”
And what about my interactions with others? Am I entitled to snatch an orange from a starving man because I need a few extra milligrams of vitamin C?