Responses below. As a meta-remark, your comment doesn’t steelman my argument, and I think that steelmanning arguments helps keep the conversation on track, so I’d appreciate it if you were to do so in the future.
Penrose is a worrisome case to bring as an example, since he is in fact wrong, and therefore you’re giving an example where your reasoning leads to the wrong conclusion.
The point of the example is that one shouldn’t decisively conclude that Penrose is wrong — one should instead hedge.
Perhaps a relevant analogy is that of using seat belts to guard against car accidents — one shouldn’t say “The claim that I’m going to get into a potentially fatal car accident is in fact wrong, so I’m not going to wear seat belts.” You may argue that the relevant probabilities are sufficiently different that the analogy isn’t a good one. If so, I disagree.
If you can’t easily find examples where your reasoning led you to a new correct conclusion instead of new sympathy toward a wrong conclusion, this is worrisome.
There are many such examples. My post extended to a length of eight pages without my going into them, and I wanted to keep the post to a reasonable length. I’m open to the possibility of writing another post with other examples. The reason that I chose the Penrose example is to vividly illustrate the shift in my epistemology.
In general, I tend to flag recounts of epistemological innovations which lead to new sympathy toward a wrong conclusion, as though one were displaying compassion for a previously hated enemy, for in epistemology this is not a virtue.
One would expect this sort of thing to sometimes happen by chance in the course of updating based on incoming evidence. So I don’t share your concern.
The Penrose example worries me for other reasons as well, namely that it seems like it would be possible to generate hordes and hordes of weak arguments against Penrose; so it’s as if, because the argument against Penrose is strong, you aren’t bothering to try to generate weak arguments. Reading this feels like you now prefer weak arguments to strong arguments and don’t try to find the many weak arguments once you see a strong argument, which is not good Bayesianism.
I can see how the example might seem disconsonant with my post, and will consider revising the post to clarify. [Edit: I did this.] The point that I intended to make is that I was previously unknowingly ignoring certain nontrivial weak lines of evidence, on the grounds that they weren’t strong enough, and that I’ve recognized this, and have been working on modifying my epistemological framework accordingly.
I don’t think that the hordes and hordes of weak arguments that you refer to are collectively strong enough to nullify the argument that one should trust Penrose because he’s one of the greatest physicists of the second half of the 20th century.
You also claim there’s a strong argument for Penrose, namely his authority (wasn’t this the kind of reasoning you were arguing against trusting?), but either we have very different domain models here, or you’re not using the Bayesian definition of strong evidence as “an argument you would be very unlikely to observe, in a world where the theory is false”.
I don’t remember arguing against trusting authority above – elaborate if you’d like.
I wasn’t saying that one should give nontrivial credence to Penrose’s views based on his authority. I was saying that one should give nontrivial credence to Penrose’s views based on the fact that he’s a deeper thinker than everybody who I know (in the sense that his accomplishments are deeper than anything that anyone who I know has ever accomplished).
As a meta-remark, your comment doesn’t steelman my argument, and I think that steelmanning arguments helps keep the conversation on track, so I’d appreciate it if you were to do so in the future.
Something has gone severely wrong with the ‘steelman’ concept if it is now being used offensively, to force social obligations onto others. This ‘meta-remark’ amounts to a demand that if JonahSinick says something stupid then it is up to others to search related concept space to find the nearest possible good argument for a better conclusion and act as if Jonah had said that instead of what he actually said. That is an entirely unreasonable expectation of his audience, and expecting all readers to come up with what amounts to content superior to the post author’s whenever they make a reply is just ridiculously computationally inefficient.
Responses below. As a meta-remark, your comment doesn’t steelman my argument, and I think that steelmanning arguments helps keep the conversation on track, so I’d appreciate it if you were to do so in the future.
I have a known problem with this (Anna Salamon told me so, therefore it is true) so Jonah’s remark above is a priori plausible. I don’t know if I can do so successfully, but will make an effort in this direction.
(It’s true that what Jonah means is technically ‘principle of charity’ used to interpret original intent, not ‘steelman’ used to repair original intent, but the principle of charity says we should interpret the request above as if he had said ‘principle of charity’.)
Something has gone severely wrong with the ‘steelman’ concept if it is now being used offensively,
No offense intended :-)
to force social obligations onto others
Request, not force
it is up to others to search related concept space to find the nearest possible good argument for a better conclusion
My remark that steelmanning keeps the discussion on track is genuine in intention. I agree that norms for steelmanning could conceivably become too strong for efficient discourse, but I think that at the margin, it would be better if people were doing much more steelmanning.
I think the concept you’re looking for is the principle of charity. Steel man is what you do to someone else’s argument in order to make sure yours is good, after you’ve defeated their actual argument. Principle of charity is what you do in discourse to make sure you’re having the best possible discussion.
If you think Eliezer should have steelmanned your argument then you think he has already defeated it—before he even commented!
I guess I didn’t mean that he didn’t steelman my argument; I meant that he didn’t steelman the things that he was objecting to. For example, he could have noted that I did give an example of the type that he seems to have been looking for, rather than focusing on the fact that the Penrose example isn’t of the type that he was looking for. I agree that there’s substantial overlap between this and the principle of charity.
It does make for higher quality discussions, especially when posters who command a larger audience are involved. Let’s also assume that Jonah knows his shizzle, and that if he wrote something which seems stupid at first glance, he may have merely used an unfortunate phraseology. Where’s the fun in shooting down the obvious targets? Most readers can do so themselves. Rather, skip to the subtle disagreements deep down, where true domina… where more refined and non-obvious counters may be revealed for the readers’ benefit.
Where’s the fun in shooting down the obvious targets? Most readers can do so themselves.
As one of those readers I would prefer not to have to. I appreciate the effort others put into keeping the garden well tended and saving me the trouble of reading low quality material myself.
Eliezer’s reply is the kind of reply that I want to see more of. I strongly oppose shaming ‘requests’ used to discourage such replies.
Personally I found the quantitative majors example a very vivid introduction to this style of argument, and much more vivid than the Penrose example. I think the quantitative majors example does a very good job of illustrating the kind of reasoning you are supporting, and why it is helpful. I don’t understand the relevance of many weak arguments to the Penrose debate—it seems like a case of some strong and some weak arguments vs. one weak argument or something. If others are like me, a different example might be more helpful.
In hindsight, my presentation in this article was suboptimal. I clarify in a number of comments on this thread.
The common thread that ties together the quantitative majors example and the Penrose example is “rather than dismissing arguments that appear to break down upon examination, one should recognize that such arguments often have a nontrivial chance of succeeding owing to model uncertainty, and one should count such arguments as evidence.”
In the case of the quantitative majors example, the point is that you can amass a large number of such arguments to reach a confident conclusion. In the Penrose example, the point is that one should hedge rather than concluding that Penrose is virtually certain to be wrong.
I can give more examples of the use of MWAs to reach a confident conclusion. They’re not sufficiently polished to post, so if you’re interested in hearing them, shoot me an email at jsinick@gmail.com.
Perhaps “hedging” is another term that also needs expanding here. One can reasonably assume that Penrose’s analysis has some definite flaws in it, given the number of probable flaws identified, while still suspecting (for the reasons you’ve explained) that it contains insights that may one day contribute to sounder analysis. Perhaps the main implication of your argument is that we need to keep arguments in our mind in more categories than just a spectrum from “strong” to “weak”. Some apparently weak arguments may be worth periodic re-examination, whereas many probably aren’t.
The point of the example is that one shouldn’t decisively conclude that Penrose is wrong — one should instead hedge.
It’s not at all clear to me why this is the case. The argument you give, as I understand it, is “weak arguments, if independent, add nonlinearly instead of linearly, and so we can’t safely ignore weak arguments.”* But in the case of Penrose, you have a weak argument in his favor (he’s really clever), and many strong arguments against him, of which several are independent. The arrow of consilience points against Penrose, and so you should update against Penrose if you’ve gained a new respect for consilience.
*The argument that we shouldn’t ignore arguments merely because they are below some evidence threshold, to me, falls under “proper epistemic hygiene” and so doesn’t seem novel or in need of justification.
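To make the “add nonlinearly” point concrete, here is a minimal sketch in Python (the helper function and the specific numbers are invented purely for illustration, not taken from the thread): in the odds form of Bayes’ theorem, independent likelihood ratios multiply, so a handful of individually weak arguments can move you from even odds to high confidence.

    def posterior_odds(prior_odds, likelihood_ratios):
        # Odds form of Bayes: independent pieces of evidence multiply the odds.
        odds = prior_odds
        for lr in likelihood_ratios:
            odds *= lr
        return odds

    # Hypothetical numbers: five independent arguments, each only 2:1 in favor.
    odds = posterior_odds(1.0, [2.0] * 5)   # start from a 1:1 prior
    print(odds, odds / (1 + odds))          # 32.0 and ~0.97: five weak arguments give ~97%

Run in the other direction, the same arithmetic is why one large ratio against can outweigh several small ratios in favor, which is the consilience point above.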
It appears that I didn’t express myself as clearly as I would have liked. Thanks for pointing this issue out.
My current epistemological framework is “give weight to all arguments, even the (non-negligibly) weak ones.” My prior epistemological framework had been “give weight to all arguments that stand up to scrutiny.” I agree that the arrow of consilience points against Penrose. My update comes from the change to also give weight to arguments that don’t stand up to scrutiny.
I added an edit to my post explaining this.
I don’t think that “Penrose is really clever” is an accurate description of my argument. Lots of people are really clever. I know hundreds of mathematicians who are really clever. Penrose is on a much higher level.
My current epistemological framework is “give weight to all arguments, even the (non-negligibly) weak ones.” My prior epistemological framework had been “give weight to all arguments that stand up to scrutiny.”
I’m not sure we’re using ‘scrutiny’ in the same way. One potential usage is “if I can think of a counterargument, I can exclude that argument from my analysis,” which is one I don’t endorse and it sounds like you no longer endorse.
What I think scrutiny is useful for is determining the likelihood ratio of an argument. To use the first argument given in support for the quantitative major, you might estimate the likelihood ratio to be, say, 2:1 in support, and then after correcting for the counterargument of native ability, estimate the effect to be 3:2 in support. (Previously, this would look like revising the 2:1 estimate down to a 1:1 estimate.)
And so in the Penrose example, his suggestion that quantum effects might have something to do with consciousness is, say, 10:1 evidence in favor, because of your esteem for Penrose’s ability to think. But when Tegmark comes along and runs the numbers, and finds that it doesn’t pan out, I would revise that down to the neighborhood of 101:100. Lots of smart people speculate things could be the case, and then the math doesn’t work out.
And so if you have a precise mathematical model of scrutiny, you can incorporate this evidence together without having to deal with rules of thumb like “give weight to arguments that don’t stand up to scrutiny,” which Eliezer is rightly complaining will often lead you astray.
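A rough numerical sketch of that bookkeeping, in Python (the prior and the ratios are invented for illustration, not estimates about the actual debate): scrutiny shrinks an argument’s likelihood ratio toward 1:1 rather than deleting it, and the shrunken ratio still gets multiplied into the posterior.

    def to_prob(odds):
        return odds / (1 + odds)

    prior_odds = 1 / 20             # illustrative prior odds on the Penrose-style view

    lr_naive = 10.0                 # Penrose's intuition taken at face value: 10:1 in favor
    lr_scrutinized = 101 / 100      # revised downward after Tegmark's calculation

    print(to_prob(prior_odds))                    # ~0.048: argument dropped entirely
    print(to_prob(prior_odds * lr_scrutinized))   # ~0.048, a hair higher: argument kept but discounted
    print(to_prob(prior_odds * lr_naive))         # ~0.333: argument taken at face value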
I don’t think that “Penrose is really clever” is an accurate description of my argument. Lots of people are really clever.
We’re using different standards for cleverness, but the reason I worded things that way is because everyone has access to the same logic. Penrose’s intuitions are much more honed than yours in particular areas, and so it’s reasonable to use his intuitions as evidence in those areas. But the degree that his intuitions are evidence depends on his skill in that particular area, and if he’s able to articulate the argument, then you can evaluate the argument on its own, and then it doesn’t matter who made it. I’m reminded of the student who wrote to Feynman complaining that she got a test question wrong because she followed his book, which contained a mistake. Feynman responded with “yep, I goofed, and you goofed by trusting me. You should have believed your teacher’s argument, because it’s correct.”
I’m not sure we’re using ‘scrutiny’ in the same way. One potential usage is “if I can think of a counterargument, I can exclude that argument from my analysis,” which is one I don’t endorse and it sounds like you no longer endorse.
Yes. I wasn’t literally discarding arguments whenever I thought of counterarguments, but I strongly tended in that direction, and I don’t endorse this.
What I think scrutiny is useful for is determining the likelihood ratio of an argument. To use the first argument given in support for the quantitative major, you might estimate the likelihood ratio to be, say, 2:1 in support, and then after correcting for the counterargument of native ability, estimate the effect to be 3:2 in support. (Previously, this would look like revising the 2:1 estimate down to a 1:1 estimate.)
I think that these likelihood ratios are too hard to determine with such high precision.
And so in the Penrose example, his suggestion that quantum effects might have something to do with consciousness is, say, 10:1 evidence in favor, because of your esteem for Penrose’s ability to think. But when Tegmark comes along and runs the numbers, and finds that it doesn’t pan out, I would revise that down to the neighborhood of 101:100.
Metaphorically, I agree with this, my skepticism about determining precise numerical estimates notwithstanding.
The confidence level in the range of ~0.5% sounds about right, up to an order of magnitude in either direction. The issue was that I was implicitly discarding that probability entirely, as if it were sufficiently small that it should play no role whatsoever in my thinking.
Lots of smart people speculate things could be the case, and then the math doesn’t work out.
As far as I know, Penrose hasn’t fully retracted his position. If so, this should be given some weight.
And so if you have a precise mathematical model of scrutiny, you can incorporate this evidence together without having to deal with rules of thumb like “give weight to arguments that don’t stand up to scrutiny,” which Eliezer is rightly complaining will often lead you astray.
I don’t think that it’s fruitful to numerically quantify things in this way, because I think that the initial estimates are poor, and that making up a number makes epistemology worse rather than better, because of anchoring biases. Certainly when I myself have tried to do this in the past, I’ve had this experience. But maybe I just haven’t seen it done right.
My impression from Eliezer’s comment is that he’s implicitly reasoning in the same way that I was (discarding arguments that have ~1% probability of being true, as if they were too unlikely to be worth giving any weight to).
We’re using different standards for cleverness, but the reason I worded things that way is because everyone has access to the same logic.
I think that the difference is significant. There’s a dearth of public knowledge concerning the depth of the achievements of the best mathematicians and physicists (as well as a dearth of public knowledge as to who the best mathematicians and physicists are). I think that the benefits to people’s epistemology if they appreciated this would be non-negligible.
But the degree that his intuitions are evidence depends on his skill in that particular area, and if he’s able to articulate the argument, then you can evaluate the argument on its own, and then it doesn’t matter who made it.
Here again lies the key point of contention. The point is that there’s a small but non-negligible probability that Penrose isn’t able to articulate the argument despite attempting to do so, or that he communicates under bad implicit assumptions about the language that his readers think in, or there’s another possibility that I haven’t thought of that’s consistent with his views being sound.
I’m reminded of the student who wrote to Feynman complaining that she got a test question wrong because she followed his book, which contained a mistake. Feynman responded with “yep, I goofed, and you goofed by trusting me. You should have believed your teacher’s argument, because it’s correct.”
I’m certainly not saying that one should believe Penrose’s views with 50+% probability (the level of confidence that the student in the story seems to have had). I’m saying that one should give the possibility enough credence so that one’s world view isn’t turned upside down if one learns that one of the hypotheticals that I give above prevails.
My claim is that “the chance that classical computers aren’t capable of intelligence is negligible” is an inferior epistemic position to “it seems extremely likely that classical computers are capable of intelligence, but Roger Penrose is one of the greatest scientists of the 20th century, has thought about these things, and disagrees, so one could imagine believing otherwise in the future.”