I don’t know precisely what you mean by rational in this context. Given your invocation of Aumann’s theorem below, I presume you mean something like “close to perfect Bayesians.”
I have no such specially-tailored definition. The generics are sufficient for my context. I strongly dislike using specially-tailored definitions of widely used terms without first openly declaring that this is being done. It’s an “underhanded” sort of thing to do in conversation, and counterproductive.
This is a really bad idea. Humans can try to be more rational, but they are far from rational.
A. Far from ideally rational. “Rational” is not a binary state; it is a relative one. One can be, as you note, “more” or “less” rational.
B. You are badly failing to properly parse my statement. I make the axiomatic assumption of rationality and honesty in those I interact with until such time as they give me reason to believe otherwise of them.
I think you are misinterpreting a number of my comments and have other issues with what I’ve brought up,
I can see why you’d think I’m misinterpreting you. The trouble is, I don’t believe I am, and in each instance where you’ve raised an objection or I have, I have provided the reasoning and motivation for that instance (insofar as I can recall at this instant). On more than one occasion, contrastingly, I have made plain-language statements which you have either failed to parse or taken to mean something very different from what they actually said. You routinely introduce new and unnecessary assumptions into my statements, thereby altering their meanings, and you have left out elements of my statements, also thereby altering their meanings. One such example of this is my relating to you my axiomatic assumption of rationality and honesty, which I clearly related as being conditional on contra-evidence. You proceeded to lecture me on how bad a notion it is to not allow for irrationality in others. This can only mean that you did not, in fact, properly parse my statement… despite the clear and plain language I used to make it.
Please don’t tell me what my hypothesis was. The comment you were responding to was me raising the possibility that:
It could turn out that humans already do something to their cells that mimics most of the effects of resveratrol.
Notice that this statement says nothing at all about caloric restriction.
Not directly, but resveratrol itself is mimicking the effects of caloric restriction. Once again, this is something we had both already agreed was the case for this dialogue. So yes, I’m telling you what your hypothesis is. And I am right to do so, because I am using your own statements in a manner internally consistent with themselves.
This, by the way, represents another instance of you making an internally inconsistent statement.
Are we operating on the same data sets? Certainly they overlap a lot, but it looks like I’m operating with a much stronger attitude about general problems of overconfidence.
We are. You, however, are introducing the assumption (unnecessary and unwarranted based on the currently available datasets for this dialogue) that overconfidence is yet relevant to the conversation.
We simply haven’t gotten anywhere that justifies the belief or assumption of overconfidence.
If, as you claim, you are assuming that I’m rational, have you updated in my direction? If not, why not? If so, by how much?
Unfortunately, I’m being forced to update my beliefs in the direction of eliminating that assumption. There are too many internally inconsistent statements you continue to make, and you claim points of fact as evidence for positions that the datasets in question directly contradict. (Case in point: your continued use of Stipp’s text as justification for the belief in mainstream medical acceptance of antagapics research. Additionally: your notional rejection of the research done by gerontologists on the behaviors of aging as being relevant to the question of how aging occurs.)
What it means for someone to be rational doesn’t really have a good generic definition. There’s some intuition behind it that seems ok, but even that is simply a bad premise to use: it doesn’t apply to most humans. Assuming even an approximate degree of rationality is probably not a justified assumption. You seem to be making the point that you are willing to update toward concluding that someone isn’t that rational. But whether you are willing to update or not isn’t terribly relevant. I could assign a low probability to the sun rising each day and, every time it does rise, update accordingly. That wouldn’t be a good approach.
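A minimal worked illustration of that last point (my own numbers, chosen purely for concreteness; nothing in this thread computes them): put a $\mathrm{Beta}(1, 10)$ prior on $\theta$, the chance that the sun rises on any given day, so the prior predictive probability is $P(\text{rise}) = \frac{1}{11} \approx 9\%$. After observing $n$ sunrises and no failures, the posterior is $\mathrm{Beta}(1+n, 10)$ and the predictive probability becomes
$$P(\text{rise} \mid n \text{ sunrises}) = \frac{1+n}{11+n},$$
which after ten straight sunrises is still only $\frac{11}{21} \approx 52\%$. The updating is perfectly Bayesian; it is the badly chosen starting point that keeps the predictions miscalibrated for a long time, which is the sense in which "willing to update" does not rescue a bad prior.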
On more than one occasion, contrastingly, I have made plain-language statements which you have either failed to parse or taken to mean something very different from what they actually said.
This suggests that we have different ideas of what constitutes plain language, or severely different communication styles. In such contexts it helps to just spell everything out explicitly. It does seem that there’s a similar problem going both ways. Look, for example, at how you interpreted my comment about not assuming rationality in others. You seem to think that I parsed your statement as you insisting that everyone is rational. In contrast, if you reread what I wrote, you’ll see that I understood you perfectly and was objecting to the assumption as a starting assumption, whether or not you would then update given evidence. The fact that, as discussed in the other piece of this thread, you totally missed my remark about fusion power suggests that there are some serious communication gaps in this conversation.
Not directly, but resveratrol itself is mimicking the effects of caloric restriction. Once again, this is something we had both already agreed was the case for this dialogue.
I’m not sure where we agreed to this claim. Resveratrol has features that do look very similar to those of caloric restriction, but I don’t see anywhere we agreed to the claim that it is mimicking caloric restriction. Could you point me to where we agreed to that? I’ve just looked over the thread and I don’t see anywhere that I made that statement. Since you’ve stated that you are likely not going to be continuing this conversation, can someone else who is reading this comment say whether they thought I said so, and if so point to where I did? It is possible that I said something that is being interpreted that way. I’ve tried rereading the conversation to find such a point and there’s nothing that seems to fit. But this could very well be due to the illusion of transparency in reading my own words.
Your notional rejection of the research done by gerontologists on the behaviors of aging as being relevant to the question of how aging occurs.
So this seems again to be a failure to communicate between us. I didn’t reject that; their data is very important to how aging occurs. Their data as it exists simply doesn’t rule out the possibility I outlined to you for very old ages. That’s not the same thing at all.
We are. You, however, are introducing the assumption (unnecessary and unwarranted based on the currently available datasets for this dialogue) that overconfidence is yet relevant to the conversation.
We simply haven’t gotten anywhere that justifies the belief or assumption of overconfidence.
Ok. This is actually a major issue: the outside view for human predictions about technology is that people are almost always overconfident when they make such predictions. Humans in general have terrible overconfidence problems. Even if one had absolutely no specific issue with a given technology, one would want to move any prediction in the direction of being less confident. We don’t need to go anywhere to justify a suspicion of overconfidence. For an individual who is even marginally aware of human cognitive biases, that should be the default setting.