Regarding the “unpacking fallacy”: I don’t think you’ve pointed to a fallacy here. You have pointed to a particular causal pathway which seems to be quite specific, and I’ve claimed that this particular causal pathway has a tiny expected effect by virtue of its unlikeliness. The negation of this sequence of events simply can’t be unpacked as a conjunction in any natural way; it really is fundamentally a disjunction. You might point out that the competing arguments are weak, but they can be much stronger in the cases where they aren’t predicated on detailed stories about the future.
As you say, even events that actually happened can also be made to look quite unlikely. But those events were, for the most part, unlikely ex ante. This is like saying “This argument can suggest that any lottery number probably wouldn’t win the lottery, even the lottery numbers that actually won!”
If you had a track record of successful predictions, or if anyone who embraced this view had a track record of successful predictions, maybe you could say “all of these successful predictions could be unpacked, so you shouldn’t be so skeptical of unpackable arguments.” But I don’t know of anyone with a reasonably good predictive record who takes this view, and most smart people seem to find it ridiculous.
I don’t understand your argument here. Yes, future civilization builds AI. It doesn’t follow that the value of the future is first determined by what type of AI they build (they also build nanotech, but the value of the future isn’t determined by the type of nanotech they build, and you haven’t offered a substantial argument that discriminates between the cases). There could be any number of important events beforehand or afterwards; there could be any number of other important characteristics surrounding how they build AI which influence whether the outcome is positive or negative.
Do you think the main effects of economic progress in 1600 were on the degree of parallelization in AI work? 1800? The magnitude of the direct effects of economic progress on AI work depends on how close the economic progress is to the AI work; as the time involved gets larger, indirect effects come to dominate.
You have a specific view, that there is a set of problems which need to be solved in order to make AI friendly, and that these problems have some kind of principled relationship to the problems that seem important to you now. This is as opposed to e.g. “there are two random approaches to AI, one of which leads to good outcomes and one of which leads to bad outcomes,” or “there are many approaches to AI, and you have to think about it in advance to figure out which lead to good outcomes” or “there is a specific problem that you can’t have solved by the time you get to AI if you want to have a positive outcome” or an incredible variety of alternative models. The “parallelization is bad” argument doesn’t apply to most of these models, and in some you have “parallelization is good.”
Even granting that your picture of AI vs. FAI is correct, and there are these particular theoretical problems that need to be solved, it is completely unclear that more people working in the field makes things worse. I don’t know why you think this follows from 3 or can be sensibly lumped with 3, and you don’t provide an argument. Suppose I said “The most important thing about dam safety is whether you have a good theoretical understanding of the dam before building it” and you said “Yes, and if you increase the number of people working on the dam you are less likely to understand it by the time it gets built, because someone will stumble across an ad hoc way to build a dam.” This seems ridiculous both a priori and based on the empirical evidence. There are many possible models for the way that important problems in AI get solved, and you seem to be assuming a particular one.
Suppose that I airdrop in a million knowledge workers this year and they leave next year, corresponding to an exogenous boost in productivity this year. You are claiming that this obviously increases the degree of parallelization on relevant AI work. This isn’t obvious, unless a big part of the relevant work is being done today (which seems unlikely, at least on a casual assessment?)
I agree that I’ve only argued that your argument has a tiny impact; it could still dominate if there were literally nothing else going on. But even granting 1-5, there seem to be other big effects from economic growth.
The case in favor of growth seems to be pretty straightforward; I linked to a blog post in the last comment. Let me try to make the point more clearly:
Increasing economic activity speeds up a lot of things. Speeding up everything is neutral, so the important point is the difference between what it speeds up and what it doesn’t speed up. Most things people are actually trying to do get sped up, while a bunch of random things (aging and disease, natural disasters, mood changes) don’t get sped up. Lots of other things get sped up but significantly less than 1-for-1, because they have some inputs that get sped up and some that don’t (accidents of all kinds, conflicts of all kinds, resource depletion). Given that things people are trying to do get sped up, and the things that happen which they aren’t trying to do get sped up less, we should expect the effect to be positive, as long as people are trying to do good things.
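A minimal numerical sketch of that claim (the elasticities below are illustrative assumptions, not estimates of anything):

```python
# Toy model of the speed-up argument: intentional progress is assumed to scale
# roughly 1-for-1 with economic activity, while background harms (accidents,
# conflicts, resource depletion) are assumed to scale less than 1-for-1 because
# some of their inputs (aging, weather, etc.) don't speed up at all.

def speedups(activity_boost, progress_elasticity=1.0, harm_elasticity=0.5):
    """Fractional change in intended progress and in background harm for a
    given fractional boost in economic activity (elasticities are assumed)."""
    return activity_boost * progress_elasticity, activity_boost * harm_elasticity

progress, harm = speedups(0.01)  # a 1% boost in economic activity
print(f"intended progress: +{progress:.2%}, background harm: +{harm:.2%}")
# If what people intend is on net good, the gap between these two numbers is
# where the positive expected effect comes from.
```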
What’s a specific relevant example of something people are trying to speed up / not speed up besides AGI (= UFAI) and FAI? You pick out aging, disease, and natural disasters as not-sped-up but these seem very loosely coupled to astronomical benefits.
Increasing capital stocks, improving manufacturing, improving education, improving methodologies for discourse, figuring out important considerations. Making charity more efficient, ending poverty. Improving collective decision-making and governance. All of the social sciences. All of the hard sciences. Math and philosophy and computer science. Everything that everyone is working on, everywhere in the world.
I picked out conflict, accidents, and resource depletion as not being sped up 1-for-1, i.e. such that a 1% boost in economic activity corresponds to a <1% boost in those processes. Most people would say that war and accidents account for many bad things that happen. War is basically defined by people making decisions that are unusually misaligned with aggregate welfare. Accidents are basically defined by people not getting what they want. I could have lumped in terrorism, and then accounted for basically all of the ways that we can see things going really badly in the present day.
You have a particular story about how a bad thing might happen in the future. Maybe that’s enough to conclude the future will be entirely unlike the present. But it seems like (1) that’s a really brittle way to reason, however much you want to accuse its detractors of the “unpacking fallacy” (and most smart people seem to agree that it’s brittle), and (2) even granting almost all of your assumptions, it’s pretty easy to think of scenarios where war, terrorism, or accidents are inputs into AI going badly, or where better education, more social stability, or better decision-making are inputs into AI going well. People promoting these positive changes are also working against forces that wouldn’t be accelerated, like people growing old and dying and thereby throwing away their accumulated human capital, or infrastructure being stressed to keep people alive, etc. etc.
How is an increased capital stock supposed to improve our x-risk / astronomical benefit profile except by being an input into something else? Yes, computer science benefits, that’s putatively the problem. We need certain types of math for FAI, but does math benefit more from increased capital stocks than, say, computing power does? Which of these other things are supposed to save the world faster than computer science destroys it, and how? How the heck would terrorism be a plausible input into AI going badly? Terrorists are not going to be the most-funded organizations with the smartest researchers working on AGI (= UFAI), as opposed to MIT, Google, or Goldman Sachs.
Does your argument primarily reduce to “If there’s no local FOOM then economic growth is a good thing, and I believe much less than you do in local FOOM”? Or do you also think that in local FOOM scenarios higher economic growth now expectedly results in a better local FOOM? And if so, is there at least one plausible specific scenario that we can sketch out now for how that works, as opposed to general hopes that a higher economic growth exponent has vague nice effects which will outweigh the shortening of time until the local FOOM, with a correspondingly reduced opportunity to get FAI research done in time? Sketching out a specific scenario makes it possible to point out fragile links which conjunctively decrease the probability of that scenario, and often these fragile links generalize, which is why it’s a bad idea to keep things vague and avoid sketching any concrete scenarios for fear of the conjunction fallacy.
It seems to me that a lot of your reply, going by the mention of things like terrorism and poverty, must be either prioritizing near-term benefits over the astronomical future, or else being predicated on a very different model from local FOOM. We already have a known persistent disagreement on local FOOM. This is an important modular part of the disagreement, on which other MIRIfolk do not all line up on one side or the other. Thus I would like to know how much we disagree about the expected goodness of higher econ growth exponents given local FOOM, and whether there’s a big leftover factor where “Paul Christiano thinks you’re just being silly even assuming that a FOOM is local”, especially if this factor is not further traceable to a persistent disagreement about the competence of elites. It would then be helpful to sketch out a concrete scenario corresponding to this disagreement to see if it looks even more fragile and conjunctive.
(Note that e.g. Wei Dai also thought it was obviously true that faster econ growth exponents had a negative-sign effect on FAI, though, like me, this debate made him question (but not yet reject) the ‘obvious’ conclusion.)
I’m confused by the logic of this sentence (in particular how the ‘though’ and ‘like me’ fit together). Are you saying that you and Wei both at first accepted that faster econ growth meant less chance of FAI, but then were both caused to doubt this conclusion by the fact that others debated the claim?
Yep.
This was one of those cases where precisely stating the question helps you get to the answer. Thanks for the confirmation!
Even given a very fast local foom (to which I do assign a pretty small probability, especially as we make the situation more detailed and conclude that fewer things are relevant), I would still expect higher education and better discourse to improve the probability that people handle the situation well. It’s weird to cash this out as a concrete scenario, because that just doesn’t seem like how reasonable reasoning works.
But trying anyway: someone is deciding whether to run an AI or delay, and they correctly choose to delay. Someone is arguing that research direction X is safer than research direction Y, and others are more likely to respond selectively to correct arguments. Someone is more likely to notice there is a problem with a particular approach and they should do something differently, etc. etc.
Similarly, I expect war or external stressors to make things worse, but it seems silly to try to break this down into very specific situations. In general, people are making decisions about what to do, and if they have big alternative motivations (like winning a war, or avoiding social collapse, or what have you), I expect them to make decisions that are less aligned with aggregate welfare. They choose to run a less safe AI, they pursue a research direction that is less safe, etc. Similarly, I expect competent behavior by policy-makers to improve the situation across a broad distribution of scenarios, and I think that is less likely given other pressing issues. We nationalize AI projects, we effectively encourage coordination of AI researchers, we fund more safety-conscious research, etc. Similarly, I expect that an improved understanding of forecasting and decision-making would improve outcomes, and improved understanding of social sciences would play a small role in this. And so on.
But at any rate, my main question is how you can be so confident of local foom that you think this tiny effect given local foom scenarios dominates the effect given business as usual? I don’t understand where you are coming from there. The secondary objection is to your epistemic framework. I have no idea how you would have thought about the future if you lived in 1800 or even 1900; it seems almost certain that this framework reasoning would have led you to crazy conclusions, and I’m afraid that the same thing is true in 2000. You just shouldn’t expect to be able to think of detailed situations that determine the whole value of the universe, unless you are in an anomalous situation; but that doesn’t mean that your actions have no effect, or that you should condition on being in an anomalous situation.
How did this happen as a result of economic growth having a marginally greater exponent? Doesn’t that just take us to this point faster and give less time for serial thought, less time for deep theories, less time for the EA movement to spread faster than the exponent on economic growth, etcetera? This decision would ceteris paribus need to be made at some particular cumulative level of scientific development, which will involve relatively more parallel work and relatively less serial work if the exponent of econ growth is higher. How does that help it be made correctly?
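One toy way to make the serial-versus-parallel point concrete (this is a sketch assuming research capacity grows exponentially at the economic-growth exponent and that AI arrives once a fixed cumulative amount of work has been done; the numbers are arbitrary):

```python
# Toy model: capacity(t) = c0 * exp(g*t), and the relevant threshold is reached
# when the integral of capacity equals a fixed cumulative amount of work R.
# A higher growth exponent g reaches the same threshold in fewer calendar years
# (less serial time) with more capacity active at that moment (more parallelism).

import math

def threshold_stats(g, R=100.0, c0=1.0):
    """Calendar years until cumulative work reaches R, and capacity at that time."""
    T = math.log(1 + g * R / c0) / g   # solves c0 * (exp(g*T) - 1) / g = R
    capacity_at_T = c0 + g * R         # equals c0 * exp(g*T)
    return T, capacity_at_T

for g in (0.02, 0.03):
    T, cap = threshold_stats(g)
    print(f"growth exponent {g:.0%}: threshold reached after {T:.0f} years, "
          f"with {cap:.1f}x the starting capacity working in parallel")
```

In this sketch the same cumulative amount of work gets done before the threshold by construction; what a higher exponent changes is that less of it is spread out over calendar time and more of it is crowded in near the end.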
Exposing (and potentially answering) questions like this is very much the point of making the scenario concrete, and I have always held rather firmly, on meta-level epistemic grounds, that visualizing things out concretely is almost always a good idea in math, science, futurology, and anywhere else. You don’t have to make all your predictions based on that example, but you have to generate at least one concrete example and question it. I have espoused this principle widely and held to it myself in many cases apart from this particular dispute.
Procedurally, we’re not likely to resolve that particular persistent disagreement in this comment thread, which is why I want to factor it out.
I could make analogies about smart-people-will-then-decide and don’t-worry-the-elite-wouldn’t-be-that-stupid reasoning to various historical projections that failed, but I don’t think we can get very much mileage out of nonspecifically arguing which of us would have been more wrong about 2000 if we had tried to project it out while living in 1800. I mean, obviously a major reason I don’t trust your style of reasoning is that I think it wouldn’t have worked historically, not that I think your reasoning mode would have worked well historically but I’ve decided to reject it because I’m stubborn. (If I were to be more specific, when I listen to your projections of future events they don’t sound very much like recollections of past events as I have read about them in history books, where jaw-dropping stupidity usually plays a much stronger role.)
I think an important thing to keep in mind throughout is that we’re not asking whether this present world would be stronger and wiser if it were economically poorer. I think it’s much better to frame the question as whether we would be in a marginally better or worse position with respect to FAI today if we had the present level of economic development but the past century from 1913-2013 had taken ten fewer years to get there so that the current date were 2003. This seems a lot more subtle.
How sure are you that this isn’t hindsight bias, that if various involved historical figures had been smarter they would have understood the situation and not done things that look unbelievably stupid looking back?
Do you have particular historical events in mind?
We are discussing the relative value of two different things: the stuff people do intentionally (and the byproducts thereof), and everything else.
In the case of the negative scenarios I outlined this is hopefully clear: wars aren’t sped up 1-for-1, so there will be fewer wars between here and any relevant technological milestones. And similarly for other stressors, etc.
Regarding education: Suppose you made everything 1% more efficient. The amount of education a person gets over their life is 1% higher (because you didn’t increase the pace of aging / turnover between people, which is the thing people were struggling against, and so people do better at getting what they want).
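As a toy illustration of that education example (the numbers are arbitrary; the point is only where the 1% comes from):

```python
# The gain exists only because aging/turnover -- the constraint people are
# actually fighting -- is not sped up along with everything else.

years_of_life = 80.0   # assumed fixed lifespan (not sped up)
learning_rate = 1.0    # arbitrary units of education acquired per year
boost = 0.01           # everything intentional becomes 1% more efficient

baseline            = years_of_life * learning_rate
everything_sped_up  = (years_of_life / (1 + boost)) * learning_rate * (1 + boost)
only_intent_sped_up = years_of_life * learning_rate * (1 + boost)

print(f"baseline lifetime education: {baseline:.2f}")
print(f"if aging sped up too:        {everything_sped_up:.2f}  (a wash)")
print(f"if aging is not sped up:     {only_intent_sped_up:.2f}  (~1% more)")
```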
Other cases seem to be similar: some things are a wash, but more things get better than worse, because systematically people are pushing on the positive direction.
This discussion was useful for getting a more precise sense of what exactly it is you assign high probability to.
I wish you two had the time for a full-blown adversarial collaboration on this topic, or perhaps on some sub-problem within the topic, with Carl Shulman as moderator.