You raised a very interesting point in the last comment: that metaphilosophy already encompasses everything, or at least everything we could conceive of.
So a ‘solution’ is not tractable, due to various well-known issues such as the halting problem. (Though perhaps in the very distant future this could be different.)
However this leads to a problem, as exemplified by your phrasing here:
Fundamentally, I believe that good philosophy should make you stronger and allow you to make the world better, otherwise, why are you bothering …
‘good philosophy’ is not a sensible category since you already know you have not, and cannot, ‘solve’ metaphilosophy. Nor can any other LW reader do so.
In real practice, ‘good’ or ‘bad’ are, at best, whatever the current popular consensus is; at worst, just someone’s idiosyncratic opinions.
Very few concepts are entirely independent from any philosophical or metaphilosophical implications whatsoever, and ‘good philosophy’ is not one of them.
But you still felt the need to attach these modifiers, for a variety of reasons well analyzed on LW, so the pretense of a solved or solvable metaphilosophy is still needed for this part of the comment to make sense.
I don’t want to single out your comment too much, though; it’s just the most convenient example. This applies to most LW comments.
i.e. If everyone actually accepted the point (which I agree with), I dare say a huge chunk of LW comments are close to meaningless from a formal viewpoint, or at least very open to interpretation by anyone not immersed in 21st-century human culture.
“good” always refers to idiosyncratic opinions; I don’t take moral realism particularly seriously. I think there is “good” philosophy in the same way there are “good” optimization algorithms for neural networks, while also assuming there is no one optimizer that “solves” all neural network problems.
‘”good” optimization algorithms for neural networks’ also has no difference in meaning from ‘”glorxnag” optimization algorithms for neural networks’, or any random permutation, if your prior point holds.
I don’t understand what point you are trying to make, to be honest. There are certain problems that humans/I care about that we/I want NNs to solve, and some optimizers (e.g. Adam) solve those problems better or more tractably than others (e.g. SGD or second-order methods). You can claim that the “set of problems humans care about” is “arbitrary”, to which I would reply “sure?”
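To make the optimizer analogy concrete, here is a minimal sketch of SGD versus Adam on a toy ill-conditioned quadratic. This is pure Python; the loss function, learning rates, and step counts are all made-up illustrative choices, not a claim about real NN training:

```python
import math

def loss(w):
    # toy ill-conditioned quadratic: curvature is 100x larger along w[0]
    return 100.0 * w[0] ** 2 + w[1] ** 2

def grad(w):
    return [200.0 * w[0], 2.0 * w[1]]

def run_sgd(steps=500, lr=0.009):
    # plain gradient descent: one global step size for every coordinate,
    # which must stay small enough for the steepest direction
    w = [1.0, 1.0]
    for _ in range(steps):
        g = grad(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

def run_adam(steps=500, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: per-coordinate step sizes from running estimates of the
    # first and second moments of the gradient, with bias correction
    w = [1.0, 1.0]
    m, v = [0.0, 0.0], [0.0, 0.0]
    for t in range(1, steps + 1):
        g = grad(w)
        m = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, g)]
        v = [b2 * vi + (1 - b2) * gi ** 2 for vi, gi in zip(v, g)]
        mhat = [mi / (1 - b1 ** t) for mi in m]
        vhat = [vi / (1 - b2 ** t) for vi in v]
        w = [wi - lr * mi / (math.sqrt(vi) + eps)
             for wi, mi, vi in zip(w, mhat, vhat)]
    return w

print("SGD final loss: ", loss(run_sgd()))
print("Adam final loss:", loss(run_adam()))
```

Both optimizers reduce the loss here; neither “solves” optimization in general. Which one counts as “good” depends on which (arbitrary, if you like) family of problems you care about — which is exactly the analogy being made.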
Similarly, I want “good” “philosophy” to be “better” at “solving” “problems I care about.” If you want to use other words for this, my answer is again “sure?” I think this is a good use of the word “philosophy” that gets better at what people actually want out of it, but I’m not gonna die on this hill because of an abstract semantic disagreement.
That’s the thing, there is no definable “set of problems humans care about” without some kind of attached or presumed metaphilosophy, at least none that you, or anyone, could possibly figure out in the foreseeable future and prove to a reasonable degree of confidence to the LW readerbase.
It’s not even ‘arbitrary’, that string of letters is indistinguishable from random noise.
i.e. Right now your first paragraph is mostly meaningless if read completely literally by someone who accepts the claim. Such a hypothetical reader would think you’d gone nuts, because it would look as if you had taken a well-written comment and inserted strings of random keyboard-bashing in the middle.
Of course, it’s unlikely anyone would be so literal-minded, and so insistent on logical correctness, as to completely equate it with random keyboard-bashing. But it’s possible some portion of readers lean in that direction.
That is not a fact.