If there’s some other weird attractor state of beliefs that also fulfills those requirements, I guess I risk falling into it. But then again, so do you—such beliefs would have to predict experience as successfully as the truth, which means they would have to give you the same widget-making capacity as true beliefs.
There are plenty of things like this—engineering models, heuristics, etc. You don’t have to have a “true” map to have a “useful” map. An idealized right-angle, not-to-scale map of a city which nonetheless allowed you to logically navigate from point A to point B would be “useful” even if not “true” or “accurate” in certain senses.
Meanwhile, if you wait around for a “true” map, you’re not going anywhere.
But such maps are only useful insofar as they are true. For example, the London Tube Map claims to be a useful representation of which stations are on which lines. It’s useful in doing that because it is correct in its domain—every station it says is on the Piccadilly Line really is on the Piccadilly Line. It doesn’t claim to accurately represent distance, and anyone who tried to use it to determine distances would quickly get some surprises.
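To make that “correct in its domain” point concrete, here is a minimal sketch in Python. The station lists are a simplified, illustrative slice of the network I’m supplying for the example, not a faithful or complete map:

```python
# A toy "tube map": it records only which stations sit on which line (its domain
# of validity) and carries no distance information at all.
# Station lists are a simplified illustrative excerpt, not an authoritative map.

TUBE_MAP: dict[str, list[str]] = {
    "Piccadilly": ["South Kensington", "Green Park", "Piccadilly Circus",
                   "Leicester Square", "Covent Garden", "Holborn"],
    "Victoria":   ["Brixton", "Pimlico", "Victoria", "Green Park", "Oxford Circus"],
}

def lines_serving(station: str) -> list[str]:
    """The question the map is 'true' about: which lines serve this station?"""
    return [line for line, stations in TUBE_MAP.items() if station in stations]

def distance_between(a: str, b: str) -> float:
    """The question the map is silent about: it simply has no distance data."""
    raise NotImplementedError("the map does not claim to represent distance")

print(lines_serving("Green Park"))   # ['Piccadilly', 'Victoria'] -- reliable within its domain
# distance_between("Green Park", "Holborn")  # would raise: outside the map's domain
```

It answers one class of question reliably and refuses the other outright, which is the sense in which its usefulness comes from being true about something.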
There the danger doesn’t seem to be getting something that isn’t the truth; the danger is stopping at something that’s just true enough for a certain purpose, and no more.
And a seeker of truth seems less likely to get stuck there than a seeker of win—witness classical mechanics, which is still close enough to be useful for everything practical, versus relativity, which exists because Einstein wouldn’t accept a theory which worked well enough but had a few little loose ends.
Why is that bad?
How has relativity made us better off? If you want to pursue truth because you like truth, that’s great—it’s a “win” for you. But if you only need the truth to get to something else, it’s not a win to add useless knowledge.
Are you sure that this isn’t all about signaling being a truth-seeker? (i.e. “Truth-Seeking Isn’t About The Truth”)
After all, credibly signaling that you value the truth could make you a valuable ally, be considered a neutral judge, etc. etc. For these reasons, credibly valuing the truth above all else might be beneficial… for reasons not having anything to do with actually getting to the truth.
So, if you’re saying we should seek truth just because it’s the truth, and not because it brings practical benefit or pleasure or sends good signals, then what is the use of seeking truth?
The lightspeed limit stops aliens from eating us.
What if it actually doesn’t, and their craft are really only limited by how fast their typical UFO discs can spin without killing the crew inside (apparently they are sponge-like inside), since unlike us they already know how to create anti-gravity to pull their ships forward? In that case, the reason we are not dead yet is that they still need to figure out how to construct motherships fast enough for a full-scale Earth invasion, after we apparently killed most of their messengers. Our strategy should then be to pool resources into defending the Earth against an alien invasion and make it so costly that they will consider a trade agreement with us instead, which may at some point be more attractive to them than an all-out war. Trade is the way forward. Of course that is only conjecture; I don’t really know if they exist, but assigning this literally zero probability may be stupid.
It can be bad if you mistakenly rest at a local maximum in your results.
You take a theory that is close enough to being true that it gives you results. Let’s say, you make $1000 a month from a certain theory of web advertising on your website. If you worked a little harder to uncover the truth, you might confuse yourself and go down to $500 a month. Yet if you worked even harder, you might make $2000 a month. The $1000 was a local maximum. If so, seeking the truth could help you find it out, if we assume that (on average at least) more truth leads to more results in solving real world problems.
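A toy sketch of that trap, in Python. The revenue curve and the numbers are invented for the illustration, not taken from the comment:

```python
import math

# Made-up revenue curve: a modest peak near low effort (the ~$1000 theory),
# a dip after it (where more digging temporarily hurts), and a higher peak
# further out (the ~$2000 theory).
def revenue(effort: float) -> float:
    local_bump  = 1000 * math.exp(-((effort - 2.0) ** 2))
    global_bump = 2000 * math.exp(-((effort - 6.0) ** 2))
    return local_bump + global_bump

def greedy(start: float = 0.0, step: float = 0.1) -> float:
    """Keep digging only while next month looks better than this one; otherwise stop."""
    effort = start
    while revenue(effort + step) > revenue(effort):
        effort += step
    return effort

stuck = greedy()
print(f"greedy stops at effort {stuck:.1f}, earning ${revenue(stuck):,.0f}/month")  # ~ $1,000
print(f"pressing on to effort 6.0 would earn ${revenue(6.0):,.0f}/month")           # ~ $2,000
```

The greedy rule is exactly “stopping at something that’s just true enough for a certain purpose”: it never finds out the higher peak exists, because the only way there runs through a stretch where things get worse before they get better.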
It’s not, if you know you’re doing it.
Pretty sure. If I wanted to signal, I’d be a lot more high-falutin about it. Actually, my comments do sound a bit high-falutin’ (I was looking for a better word than “truth seeker”, but couldn’t find one) but that wasn’t exactly what I wanted to express. The untangling-wires metaphor works a little better. Nominull’s “I only seek to be right because I hate being wrong.” works too. It’s less of a “I vow to follow the pure light of Truth though it lead me to the very pits of Hell” and more of an “Aaargh, my brain feels so muddled right now, how do I clear this up?”
Also, this would be a terrible community to signal truth-seeking in, considering how entrenched the “rationality as win” metaphor is. As I mentioned in the hair example, I think a lot more people here are signaling a burning interest in real-world application than really have one.
Um...this line of argument applies to everything, doesn’t it? What is the use of seeking money, if it doesn’t bring pleasure or send good signals? What is the use of seeking love, if it doesn’t bring pleasure or send good signals? What is the use of seeking ‘practical benefits’, if they don’t bring pleasure or send good signals?
Darned if I know. That’s the way my utility function works. And it certainly is mediated by pleasure and good signals, but I prefer not to say it’s about pleasure and good signals because I’d rather not be turned into orgasmium just yet.
Yeah, “rationalists WIN!” is the most widely misused EYism on all of LessWrong.com.
Yvain:
Do you really believe that you engage in Truth-Seeking for utilitarian reasons? I get the impression that you don’t really believe that.
Would you be willing to enter a computer simulation where you got to investigate higher math puzzles (or metaphysics) with no applications? Spend your days in a fantastic and never-ending Truth-Seeking project (we’ll throw great sex, food and housing into the holodeck for you as well)?
I liked this better at the beginning when you were prodding people who say that they see rationalism as a means to an end! You seem to be going back to consequentialism!
I don’t believe that rationalists WIN because I don’t believe that winning WINS
Maybe a few videogames (or other forms of entertainment in addition to sex) and this sounds like a very sweet deal.
And you must enjoy the signal value a little bit! You aren’t keeping your Less Wrong postings in your diary under lock and key!
logi:
That’s possible and probably partially accurate; if there were more posts taking the form “I believe X because...” on Less Wrong, I might be more open to the idea that people are doing that.
Ciphergoth:
I just wanted to get Yvain’s opinion about how much value from posting on Less Wrong was coming from signaling. Yvain suggested that this was not his or her main goal and that LW would be a uniquely poor place to attempt it. I personally doubt both of those points, but I was hoping to get some clarification since the comments about signaling and the nature of truth-seeking don’t seem to be part of a system of beliefs.
Are you worried that signaling truth-seeking is legitimate enough?
Sure, but it’s pretty clear that a lot of people are enjoying the WIN! signal too. Let’s try not to get too caught up in who is signalling what.
Even if he did not value the signal, surely you can conjecture a rational strategy of publishing beliefs in order to refine them.
I inherently value humanity’s success in understanding as much as we do, but I don’t discount the utility much in time; I don’t much mind if we learn something later rather than earlier.
As a result, it’s not that important to me to try to serve that end directly; I think it’s a bigger gain to serve it indirectly, by trying to reduce the probability of extinction in the next hundred years. This also serves several other goals I value.
This is an interesting debate. I believe all the truth we’ll ever get will be like the tube map: good for purpose X, and no more. Or at least, bad for purpose Y. Wanting more is surrendering to metaphysics, realism, Platonism, absolutism—whatever you wish to call it.
I believe Platonism shaped first the Hellenistic world, then Christianity (Paul was of Greek culture, the whole New Testament was written in Greek, and books like the Gospel of John are soaked in primary Platonic philosophy), and it still rules today. It also really sucks, because it makes people not want to be less wrong. They want to be completely, absolutely right, in a way you can never claim with the help of mere rationality. Only delusion can help with that.
The Truth Pilgrim’s progress goes like this:
Slightly Rational → Less Wrong → Delusional
Yep—and that’s probably as close to an “absolute truth” as you can get. Robert Anton Wilson’s “Quantum Psychology” (bad title, awesome book, some parts approach GEB in awesomeness) has some very good information along these lines, along with lots of “class exercises” that might be useful for developing an instrumental rationality group.
Good point! Though inasmuch as one can see the history of ideas as a conflict between Plato and Aristotle (not an entirely fruitless endeavor) it’s worth noting that Aristotle is still alive and kicking.
In a TV set, an electron gun fires electrons down a cathode-ray tube at the screen. An electromagnet bends these beams in a precisely timed manner to make them scan the screen. Since they’re travelling at relativistic speeds, they are time-dilated from our point of view, and you need to use relativity to bend them the right amount.
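For scale, here is a rough estimate of how relativistic those electrons are. The 25 kV anode voltage is my assumption of a typical figure, not something stated in the comment above:

```python
import math

ELECTRON_REST_ENERGY_KEV = 511.0   # electron rest-mass energy, ~511 keV
C = 299_792_458                    # speed of light, m/s

anode_kv = 25.0   # assumed accelerating voltage for a colour TV tube (hypothetical but typical order)

# An electron accelerated through V kilovolts gains V keV of kinetic energy.
gamma = 1.0 + anode_kv / ELECTRON_REST_ENERGY_KEV   # Lorentz factor, ~1.05
beta = math.sqrt(1.0 - 1.0 / gamma**2)              # speed as a fraction of c, ~0.30

print(f"gamma ≈ {gamma:.3f}  (clocks on the electron run ~{(1 - 1/gamma)*100:.0f}% slow)")
print(f"speed ≈ {beta:.2f} c ≈ {beta * C / 1e3:,.0f} km/s")
```

A few-percent correction is small, which is part of why the replies below ask whether engineers would have needed the theory itself or could have calibrated the effect away empirically.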
And engineers wouldn’t have discovered this fact without relativity theory?
I’m a bit stunned by this question, so maybe it was intended to be rhetorical. But if not: I believe things like GPS rely on relativity. And my life has been so much better ever since I got an iPhone with a GPS receiver. It integrates with Google Maps and the local public transportation system to tell me what time I should leave my house in order to arrive at another location on time, by cross-referencing the departure and arrival times of all the subway trains and buses in the system.
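Since GPS keeps coming up, here is the standard back-of-the-envelope version of that claim; the orbital numbers below are textbook approximations I am supplying, not figures from the comment:

```python
import math

GM_EARTH = 3.986e14        # Earth's gravitational parameter, m^3/s^2
C = 299_792_458            # speed of light, m/s
R_EARTH = 6.371e6          # mean Earth radius, m (approximate)
R_ORBIT = 26.57e6          # GPS orbital radius, m (~20,200 km altitude; approximate)
DAY = 86_400               # seconds per day

# Special relativity: the satellite moves at ~3.9 km/s, so its clock runs slow.
v = math.sqrt(GM_EARTH / R_ORBIT)
sr = -(v**2 / (2 * C**2)) * DAY                           # ~ -7 microseconds per day

# General relativity: the satellite sits higher in Earth's gravity well, so its clock runs fast.
gr = (GM_EARTH / C**2) * (1/R_EARTH - 1/R_ORBIT) * DAY    # ~ +46 microseconds per day

net = sr + gr
print(f"special relativity: {sr*1e6:+.1f} µs/day")
print(f"general relativity: {gr*1e6:+.1f} µs/day")
print(f"net clock drift:    {net*1e6:+.1f} µs/day")       # ~ +38 µs/day, i.e. ~11 km of ranging error if ignored
```

In practice the satellite clocks are tuned slightly off-frequency before launch so that this drift roughly cancels, which is the usual sense in which “GPS relies on relativity.”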
It was intended to point out that Einstein didn’t seek out relativity in order to produce useful results, and that, with the possible exception of nuclear energy and atomic bombs, it’s quite likely that, had Einstein not come along, most of the “practical” uses of relativity today would’ve prompted engineers to add time dilation fudge factors to their plans, and then inspired some not-Einstein physicists to figure out what the heck was going on.
In other words, there was really no danger of “stopping at something that’s just true enough for a certain purpose, and no more”, in a way that would actually produce a bad result, or deprive us of a good one for more than a limited time.
That is, Einstein’s truth-seeking was about his personal desire to “know God’s thoughts”, not to improve the lot of humanity by helping us get iPhone GPS receivers. And as I said earlier in this thread, wanting truth because you’re curious is all well and good, but in the end it’s the search for practical models that drives progress. Science having “true” models saves engineers time and mistakes when getting started, but they still have to work out practical models anyway… and sometimes need to be able to deal with things that the scientists haven’t even started figuring out yet.
Case in point: hypnotism. Scientists still don’t have a “true” model for it, AFAIK, but hypnotists have plenty of practical models for it.