You should tell Google and academia, they will be most interested in your ideas. Don’t you think people already thought very hard about this? This is such a typical LW attitude.
This reply contributes nothing to the discussion of the problem at hand, and it is quite uncharitable. I hope such replies are discouraged; if downvoting were enabled, I would have downvoted it.
If thinking that they can solve the problem at hand (and making attempts at it) is a “typical LW attitude”, then it is an attitude I want to see more of and believe should be encouraged (thus, I’ll be upvoting /u/John_Maxwell_IV’s post). Assuming a priori that one cannot solve a problem (that hasn’t been proven/isn’t known to be unsolvable), and thus refraining from even attempting it, isn’t an attitude that I want to see become the norm on LessWrong. It’s not an attitude that I think is useful, productive, optimal, or efficient.
It is my opinion that we should encourage people to attempt problems of interest to the community. The potential benefits are vast (e.g. the problem is solved, or significant progress is made on it, giving future endeavours a better starting point), while the potential demerits are of lesser impact (our time, and that of whoever attempts the problem, is wasted on an unpromising solution).
Coming back to the topic that was being discussed: I think methods of costly signalling are promising (for example, when you upvote a post you transfer X karma to the user, and you yourself lose k*X, where k < 1).
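The costly-upvote scheme above can be sketched as a minimal toy model. The class name, method names, and the concrete cost factor are all invented for illustration; the comment only specifies that the voter pays k*X (with k < 1) to grant the author X karma:

```python
class KarmaLedger:
    """Toy model of costly-signalling upvotes (names are hypothetical)."""

    def __init__(self, k: float = 0.25):
        # k is the fraction of the transferred karma the voter pays.
        assert 0 < k < 1, "cost factor must be a proper fraction"
        self.k = k
        self.balances: dict[str, float] = {}

    def upvote(self, voter: str, author: str, x: float) -> None:
        """Voter grants `x` karma to `author` at a personal cost of k * x."""
        cost = self.k * x
        if self.balances.get(voter, 0.0) < cost:
            raise ValueError("voter cannot afford this upvote")
        self.balances[voter] = self.balances.get(voter, 0.0) - cost
        self.balances[author] = self.balances.get(author, 0.0) + x


ledger = KarmaLedger(k=0.25)
ledger.balances = {"alice": 10.0, "bob": 0.0}
ledger.upvote("alice", "bob", x=4.0)  # alice pays 1.0 karma, bob gains 4.0
```

The affordability check is what makes the signal costly: an upvote spends a scarce resource, so indiscriminate upvoting becomes expensive.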
I have been here for a few years; I think my model of “the LW mindset” is fairly good.
I suppose the general thing I am trying to say is: “speak less, read more.” But at the end of the day, this sort of advice is hopelessly entangled with status considerations, so it’s hard to give to a stranger and have it be received well. It only really works in the context of an existing apprenticeship relationship.
(“A priori” suggests lack of knowledge to temper an initial impression, which doesn’t apply here.)
There are problems one can’t solve by default, and a statement, standing on its own, that it is feasible to solve them is known to be wrong. A “useful attitude” of believing something wrong is a popular stance, but is it good? How, specifically, does its usefulness work (if it does), and can we get the benefits without the ugliness?
> that hasn’t been proven/isn’t known to be unsolvable
An optimistic attitude towards problems that are potentially solvable is instrumentally useful and, dare I argue, instrumentally rational. The drawbacks of encouraging an optimistic attitude towards open problems are far outweighed by the potential benefits.
(The quote markup in your comment designates a quote from your earlier comment, not my comment.)
You are not engaging with the distinction I’ve drawn. Saying “It’s useful” isn’t the final analysis; there are potential improvements that avoid the horror of intentionally holding and professing false beliefs (to the point of disapproving of other people pointing out their falsehood; this happened in your reply to Ilya).
The problem of improving over the stance of an “optimistic attitude” might be solvable.
> (The quote markup in your comment designates a quote from your earlier comment, not my comment.)
I know: I was quoting myself.
> Saying “It’s useful” isn’t the final analysis
I guess for me it is.
> there are potential improvements that avoid the horror of intentionally holding and professing false beliefs (to the point of disapproving of other people pointing out their falsehood; this happened in your reply to Ilya)
The beliefs aren’t known to be false. It is not clear to me that believing one can solve a problem (one that isn’t known, proven, or even strongly suspected to be unsolvable) constitutes a false belief.
What do you propose to replace the optimism I suggest?
Status games aside, the sentiments expressed in my reply are my real views on the matter.