p(double post | a quote is awesome and relevant) = 0.87
Which way do I need to update?
The quotes idea is pretty much wrong. And sadly sometimes used as an argument against life extension.
It took me a few minutes to see what you meant there. I read ‘quotes’ as a simple plural, which yields a parsing of your first sentence that, purely by accident, is a position of some merit.
Really? Well, I suppose that would actually make sense according to a certain not-outright-insane value system.
It would be bad even if the premise were true. Even then, the idea of ‘yeah, we have to let you all die because otherwise the shiny new ideas would not prosper’ is wildly out of proportion. Most people do not work in maintaining ideas anyway; they do pretty mundane jobs, or moonlight as grandparents.
Over time I notice the occasional instance of ageism in young people. It is very easy to ignore the collected experience of others, and in some cases that is harmful. It would be awesome to have people still around who lived through history. Instead, each generation to some degree forgets what came before.
It hurts me each time someone (my age or younger) claims that he does not care about history at all, because -
And in middle aged people and old people too. :)
The premise is true and generally accepted as such; a slightly more formal treatment was given by Kuhn, but it amounts roughly to “new scientists produce advancements, old scientists stick to dogma, the status of oldies is so powerful they have to die or retire for advancements to prosper.”
Shortly after “Structure of Scientific Revolutions”, there was a paradigm shift in geology: plate tectonics. It went from fringe to scientific consensus in, as I understand it, well under a decade, thanks to overwhelming evidence. Did unusually many geologists die that decade?
I hope there have been some changes in the way scientists work since the 1960s. Also I hope that it depends on the specific field.
As a conclusion of the initial argument one could add time limits to tenure, but please let’s not argue for killing off scientists just for being too old.
Nice way to put it! To phrase it another way:
To argue in favor of mortality because of fears of entrenched conservatives is to demand capital punishment where term limits would suffice.
Thank you!
Try to get someone to put it in those words. Usually no one demands the killing of professors, or even mentions that he would like old people to die of neglect.
If someone boldly states that he wants all these old people to die to free up space, or whatever, then you have probably found a person you do not actually want to have a discussion with.
I completely forgot about a very important point. If rejuvenation actually works, it might also make the brain work better and younger again. If it is true that great scientists do their most important work before reaching age X, then after rejuvenation they might be able to do even more with a good-as-new brain plus more experience. Then it would not be a matter of getting rid of the holders of old ideas, but of finding a way to deal with people who have an unreachable head start that cannot be made up. It would be good for society to keep experienced minds at work.
No real need to kill them off, as long as new ones are being born. Unanimity is nice, but simple majorities can usually get the job done.
As for your time limits idea, I might go further, and send everybody back to school to get a new PhD every 100 years: in a new field, at a different school, in a different language.
You’re only going to give me 100 years to study mathematics, uninterrupted?
B-b-but! That’s nowhere near enough time!
I am happy to see how it will turn out.
This might be the answer you are looking for.
Kuhn did not say that. His account of how paradigms advance involved much more than generational turnover. His canonical example of paradigm change (the Copernican revolution) had people actively changing their minds even in his narrative. And there are a lot of problems with his story of how things went; see for example this essay.
Furthermore, in many other shifts where new theories came into play, the overall trend happened with many old people accepting the new theory. Thus for example, Einstein’s special relativity was accepted by many older physicists.
...While Einstein himself rejected quantum mechanics!
(And, yes, I’m aware of the philosophical glitches in the Copenhagen Interpretation. But Einstein refused to accept QM on principle, and I’m not sure any evidence could have convinced him, which is rather poor form for one of the greatest thinkers of all time.)
This is probably wrong. If Einstein were transported to today we could almost certainly convince him of the correctness of quantum mechanics. Not only that, the guy did a lot of important quantum mechanics research, which should suggest that it’s not as simple as “he rejected it.” Wikipedia says that he initially thought matrix mechanics was wrong, but became convinced of it when it was shown to be equivalent to the Schrödinger formulation.
You are probably right on with this comment, but I think I may have misunderstood you on one point. Did you mean “it’s not as simple as ‘he rejected it.’ ”? The way it is now looks like it contradicts the rest of the post.
Also, I recall that Einstein did change his mind on at least one important point, the existence of the “cosmological constant.” So that implies he wasn’t especially close-minded.
Hah, yes. Typos strike again. Fixed.
because—there are not enough elves and wizardesses in that genre of story?
No. It is more a case of ‘history is old stuff that happened a long time ago, is done and over with, and does not matter any more’. Why care about the past when so much is happening right now?
I do not think the usual way history is dealt with is much better; to some degree, visiting historic museums or sites is just signaling.
That is basically the concept behind ‘costly signalling’: people will pay time and money to visit a museum in order to signal, and in doing so they accidentally learn something about history.
thx for the reminder
Yes.
Oops. To redeem my tarnished honor, I propose an algorithmic solution to the duplicate quote problem: a full list of quotes indexed by author (of the quote). Checking whether a quote has already been posted would then be a fast operation.
Your honour remains intact! I predicted that the quote had been used, based primarily on how much I like it. Google didn’t find it in a quotes thread. I suppose that would mean my honour is tarnished. How much honour does one lose by assigning greater than 0.5 probability to something that turns out to be incorrect? Is there some kind of algorithm for that? ;)
You add the log of the probability you gave for what happened, so add ln(1-0.87) = −2.04 honor. Unfortunately, there’s no way to make it go up, and it’s pretty much guaranteed to go down a lot.
Just don’t assign anything a probability of 0. If you’re wrong, you lose infinite honor.
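Here is a minimal sketch of that bookkeeping in Python; the function name and the running 0.87 example are purely illustrative, assuming the rule exactly as stated (honor changes by the natural log of the probability you assigned to whatever actually happened):

```python
import math

def honor_change(p_assigned_to_outcome: float) -> float:
    """Log-score honor update: add the log of the probability you gave
    to the outcome that actually happened (always <= 0)."""
    return math.log(p_assigned_to_outcome)

# wedrifid gave p = 0.87 to "double post"; it turned out not to be one,
# so the probability assigned to what actually happened was 1 - 0.87.
print(honor_change(1 - 0.87))  # ~ -2.04
print(honor_change(0.5))       # a coin-flip guess costs ~ -0.69 either way
# honor_change(0.0) raises a math domain error: the "infinite honor" loss above.
```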
I like it, but that ‘no way to make it go up’ is a problem. It feels like we should have some sort of logarithmic representation of honour too, allowing for increasing honour if you get something right, mostly when your honour is currently low.
To what extent do we want ‘honour’ to be a measure of calibration and to what extent a measure of predictive power?
A naive suggestion could be to take log(x) - log(p), where p is the probability given by MAXENT. That is, honor is how much better you do than the “completely uninformed” maximal entropy predictor. This would enable better-than-average predictors to make their honor go up.
This of course has the shortcoming that maximal entropy may not be practical to actually calculate in many situations. It also may or may not produce incentives to strategically make certain predictions and not others. I haven’t analysed that very much.
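As a rough sketch of that suggestion (assuming a binary question, so the maximum-entropy baseline probability is simply 0.5; the numbers reuse the 0.87 example from earlier in the thread):

```python
import math

def relative_honor(p_assigned: float, p_maxent: float) -> float:
    """Honor relative to the maximum-entropy baseline:
    log(probability you gave the outcome) - log(baseline probability).
    Positive when you beat the uninformed predictor, negative when you don't."""
    return math.log(p_assigned) - math.log(p_maxent)

# Binary question, so the maximum-entropy predictor assigns 0.5 to each outcome.
print(relative_honor(0.87, 0.5))      # ~ +0.55 if the predicted outcome happens
print(relative_honor(1 - 0.87, 0.5))  # ~ -1.35 if it doesn't
```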
I can’t remember the post I got that from. It wasn’t talking about honor.
This is the only possible system in which your expected reward is maximised by reporting your probabilities accurately, and your honor comes out the same regardless of how you count it. For example, predicting A and B together loses the same honor as predicting A and then predicting B given A.
Technically, you can use a different log base, but that just amounts to a scaling factor.
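A quick numerical illustration of that decomposition property (the 0.8 and 0.6 are arbitrary example probabilities, not anything from the thread):

```python
import math

# Chain rule: P(A and B) = P(A) * P(B | A), so the log scores add up exactly.
p_A, p_B_given_A = 0.8, 0.6
joint = p_A * p_B_given_A

one_prediction  = math.log(joint)                        # predict "A and B" in one go
two_predictions = math.log(p_A) + math.log(p_B_given_A)  # predict A, then B given A
assert math.isclose(one_prediction, two_predictions)
print(one_prediction, two_predictions)  # both ~ -0.73
```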
I agree; the typical human brain balks and runs away when faced with a scale of merit whose max-point is 0.
Yes.
In other words, my honor as an epistemic rationalist should be a mix of calibration and predictive power. An amusing but arbitrary formula might be just to give yourself 2x honor when your binary prediction with probability x comes true and to dock yourself ln (1-x) honor when it doesn’t. If you make 20 predictions each at p = 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, and 0.95 for a total of 200 predictions a day and you are perfectly calibrated, you would expect to lose about 3.4 honor each day.
There’s gotta be a way to fix this so that a perfectly calibrated person would gain a tiny amount of honor each day rather than lose it. It might not be elegant, though. Got any ideas?
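One way to explore that question is to compute the expected daily honor of a perfectly calibrated predictor under a candidate scoring rule. The sketch below uses the prediction schedule from the parent comment (20 predictions at each listed probability) but, rather than the 2x gain rule, compares the pure log rule with the baseline-offset variant suggested earlier in the thread; the exact figures depend entirely on which rule you pick:

```python
import math

probs = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
n_each = 20  # 20 predictions at each probability, 200 per day

def expected_daily_honor(score):
    """Expected honor per day for a perfectly calibrated predictor:
    with probability x the prediction comes true (you assigned x to the outcome),
    with probability 1-x it fails (you assigned 1-x to the outcome)."""
    return n_each * sum(x * score(x) + (1 - x) * score(1 - x) for x in probs)

log_rule    = lambda p: math.log(p)                  # pure log scoring: can only lose
offset_rule = lambda p: math.log(p) - math.log(0.5)  # offset by the 50/50 baseline

print(expected_daily_honor(log_rule))     # ~ -106: calibrated, yet always losing
print(expected_daily_honor(offset_rule))  # ~ +32: gains by beating the coin-flip baseline
```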
Zero does seem more appropriate either as a minimum or a midpoint. If everything is going to be negative then flip it around and say ‘less is good’! But the main problem I have with only losing honor based on making predictions is that it essentially rewards never saying anything of importance that could be contradicted. That sounds a bit too much like real life for some reason. ;)
The tricky part is not so much making up the equations but in determining what criteria to rate the scale against. We would inevitably be injecting something arbitrary.
You’re supposed to have a probability for everything. The closest you can do to not guessing is give every possibility equal probabilities, in which case you’d lose honor even faster than normal.
You could give yourself honor equal to the square of the probability you gave, but that means you’d have an incentive to phrase it as as many separate questions as possible. After all, if you gave a single probability for what happens over your entire life, you couldn’t get more than one point of honor. With the system I mentioned first, you’d lose exactly the same honor either way.
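A quick check of that incentive under the squared-probability reward (again with arbitrary illustrative numbers), assuming both predictions come true:

```python
# Squared-probability reward: splitting one prediction into several pays more.
p_A, p_B_given_A = 0.8, 0.6
bundled = (p_A * p_B_given_A) ** 2    # one prediction of "A and B": 0.48^2 ~ 0.23
split = p_A ** 2 + p_B_given_A ** 2   # two predictions: 0.64 + 0.36 = 1.0
print(bundled, split)  # the split phrasing earns over four times as much honor
```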
Honour I don’t know about; I feel like any honour lost you could gain back by giving us a costly signal that you are recalibrating. But it does let us determine how badly calibrated you are, and then we can make judgements like pr(wedrifid is wrong | wedrifid is badly calibrated).
:P
Particularly when the ‘prediction’ was largely my way of complimenting the quote in a non-boring way. :P
I was actually relieved when I found it wasn’t in the quotes thread. I wasn’t sure what I would update to if it was a double post. Slightly upward, but only a little; there were too many complications. I can even imagine lowering p(double post | a quote is awesome and relevant) based on finding that the instance is, in fact, a double post. (If the probability is particularly high and the underlying reasoning was such that I expected comments of that level of awesome to have been reposted half a dozen times.)
The tricky part now is to prevent my intuitive expectation from updating too much. I’ve paid particular attention to this instance, so by default I would expect my intuitions to base too much on the single case.
The hard part would then be making that list algorithmically. An easier algorithmic method would be to do approximate string matches with previous quote threads, using something like the Smith-Waterman algorithm for pairwise local sequence alignment. This is what biologists do when they have a gene sequence and want to know if something like it is already in the databases, and there’s no reason why the method shouldn’t also apply just as well to English text.
The way this would look to users is just a text box where you paste in the quote, and it’ll tell you if the quote has been posted before. Even easier to use than a full list of quotes.
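A minimal sketch of the kind of check being described; the function names, scoring parameters and the 0.8 cutoff are illustrative only, and a real quote database would want something faster than this quadratic dynamic program:

```python
def smith_waterman_score(a: str, b: str, match=2, mismatch=-1, gap=-1) -> int:
    """Best local-alignment score between strings a and b (Smith-Waterman).
    A high score relative to the new quote's length suggests a near-duplicate."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

def probably_posted_before(new_quote: str, old_quotes: list, threshold=0.8) -> bool:
    """Flag a likely duplicate if some earlier quote aligns with the pasted one
    almost end-to-end (the threshold is an arbitrary cutoff)."""
    max_possible = 2 * len(new_quote)  # score if the entire new quote matches exactly
    return any(
        smith_waterman_score(new_quote.lower(), old.lower()) >= threshold * max_possible
        for old in old_quotes
    )
```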
Actually, I’m not sure. “Max Planck” isn’t mentioned in a quotes thread. There is a sequence post essentially dedicated to the quote, and references elsewhere in posts.
Let’s see. About:
p(double post | a quote is awesome and relevant) = 0.82
I have updated p(quote is in the quotes section | quote is discussed on the site) and p(quote is attributed) somewhat too.
(And, pre-emptively, I do feel comfortable providing two digits of precision. Not because I have excessive confidence in my ability to quantise my subjective judgements but rather because using significant figures as a method of communicating confidence or accuracy is a terrible idea.)
This seems right but I’m not sure why. Can you articulate your reasons?
Let’s see. I need to purge my conclusion cache. (What’s the name for Eliezer’s post on not asking ‘why’ but asking ‘if’? I definitely needed to apply that.)
Yes, approximately what FAWS said. If I know I’m only accurate to plus or minus 0.1 and the value I calculate is 0.75, then it would be silly to round off to 0.8. Compressing the two pieces of information (number and precision) into one number is just lossy. It can become a problem when writing, say, 100, too. Although that can technically be avoided by always using scientific notation.
Not wedrifid, but you needlessly lose some small amount of information. The digits after the last significant one still are your best bet for the actual value, so you systematically do worse than you could.