Increasing capital stocks, improving manufacturing, improving education, improving methodologies for discourse, figuring out important considerations. Making charity more efficient, ending poverty. Improving collective decision-making and governance. All of the social sciences. All of the hard sciences. Math and philosophy and computer science. Everything that everyone is working on, everywhere in the world.
I picked out conflict, accidents, and resource depletion as not being sped up 1-for-1, i.e. such that a 1% boost in economic activity corresponds to a <1% boost in those processes. Most people would say that war and accidents account for many bad things that happen. War is basically defined by people making decisions that are unusually misaligned with aggregate welfare. Accidents are basically defined by people not getting what they want. I could have lumped in terrorism, and then accounted for basically all of the ways that we can see things going really badly in the present day.
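One way to make the "1-for-1" language precise (my formalization and notation, not wording from the thread) is as an elasticity of each process's rate with respect to economic activity:

$$ \varepsilon_X = \frac{d \ln(\text{rate of } X)}{d \ln(\text{economic activity})} $$

"Sped up 1-for-1" corresponds to $\varepsilon_X = 1$; the claim is that conflict, accidents, and resource depletion have $\varepsilon_X < 1$, so measured against cumulative economic or technological progress rather than calendar time, they shrink as growth speeds up.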
You have a particular story about how a bad thing might happen in the future. Maybe that’s enough to conclude the future will be entirely unlike the present. But it seems like (1) that’s a really brittle way to reason, however much you want to accuse its detractors of the “unpacking fallacy” (and most smart people take this view), and (2) even granting almost all of your assumptions, it’s pretty easy to think of scenarios where war, terrorism, or accidents are inputs into AI going badly, or where better education, more social stability, or better decision-making are inputs into AI going well. People promoting these positive changes are also working against forces that wouldn’t be accelerated, like people growing old and dying and thereby throwing away their accumulated human capital, or infrastructure being stressed to keep people alive, etc. etc.
How is an increased capital stock supposed to improve our x-risk / astronomical benefit profile except by being an input into something else? Yes, computer science benefits; that’s putatively the problem. We need certain types of math for FAI, but does math benefit more from increased capital stocks compared to, say, computing power? Which of these other things are supposed to save the world faster than computer science destroys it, and how? How the heck would terrorism be a plausible input into AI going badly? Terrorists are not going to be the most-funded organizations with the smartest researchers working on AGI (= UFAI), as opposed to MIT, Google, or Goldman Sachs.
Does your argument primarily reduce to “If there’s no local FOOM then economic growth is a good thing, and I believe much less than you do in local FOOM”? Or do you also think that in local FOOM scenarios higher economic growth now expectedly results in a better local FOOM? And if so, is there at least one plausible specific scenario that we can sketch out now for how that works, as opposed to general hopes that a higher economic growth exponent has vague nice effects which will outweigh the shortening of time until the local FOOM, with a correspondingly reduced opportunity to get FAI research done in time? Sketching out a specific scenario makes it possible to point out fragile links which conjunctively decrease the probability of that scenario, and often these fragile links generalize, which is why it’s a bad idea to keep things vague and not sketch out any concrete scenarios for fear of the conjunction fallacy.
It seems to me that a lot of your reply, going by the mention of things like terrorism and poverty, must either be prioritizing near-term benefits over the astronomical future, or else be predicated on a very different model from local FOOM. We already have a known persistent disagreement on local FOOM. This is an important modular part of the disagreement, on which other MIRIfolk do not all line up on one side or another. Thus I would like to know how much we disagree about the expected goodness of higher econ growth exponents given local FOOM, and whether there’s a big leftover factor where “Paul Christiano thinks you’re just being silly even assuming that a FOOM is local,” especially if this factor is not further traceable to a persistent disagreement about the competence of elites. It would then be helpful to sketch out a concrete scenario corresponding to this disagreement to see if it looks even more fragile and conjunctive.
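To make the structure of this question explicit (a toy decomposition I am adding; neither party states it in these terms), split the marginal effect of a higher growth exponent $g$ on expected long-run value $V$ across the local-FOOM question:

$$ \frac{\partial \mathbb{E}[V]}{\partial g} = P(\text{local FOOM}) \cdot \frac{\partial \mathbb{E}[V \mid \text{local FOOM}]}{\partial g} + \big(1 - P(\text{local FOOM})\big) \cdot \frac{\partial \mathbb{E}[V \mid \text{no local FOOM}]}{\partial g} $$

On this reading the disagreement can live in $P(\text{local FOOM})$, in the sign or size of the FOOM-conditional term, in the no-FOOM term, or in some mixture; separating those is what the questions above are trying to do.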
(Note that e.g. Wei Dai also thought it was obviously true that faster econ growth exponents had a negative-sign effect on FAI, though, like me, this debate made him question (but not yet reject) the ‘obvious’ conclusion.)
I’m confused by the logic of this sentence (in particular how the ‘though’ and ‘like me’ fit together). Are you saying that you and Wei both at first accepted that faster econ growth meant less chance of FAI, but then were both caused to doubt this conclusion by the fact that others debated the claim?
Yep.
This was one of those cases where precisely stating the question helps you get to the answer. Thanks for the confirmation!
Even given a very fast local foom (to which I do assign a pretty small probability, especially as we make the situation more detailed and conclude that fewer things are relevant), I would still expect higher education and better discourse to improve the probability that people handle the situation well. It’s weird to cash this out as a concrete scenario, because that just doesn’t seem like how reasonable reasoning works.
But trying anyway: someone is deciding whether to run an AI or delay, and they correctly choose to delay. Someone is arguing that research direction X is safer than research direction Y, and others are more likely to respond selectively to correct arguments. Someone is more likely to notice there is a problem with a particular approach and they should do something differently, etc. etc.
Similarly, I expect war or external stressors to make things worse, but it seems silly to try to break this down into very specific situations. In general, people are making decisions about what to do, and if they have big alternative motivations (like winning a war, or avoiding social collapse, or what have you), I expect them to make decisions that are less aligned with aggregate welfare. They choose to run a less safe AI, they pursue a research direction that is less safe, etc. Similarly, I expect competent behavior by policy-makers to improve the situation across a broad distribution of scenarios, and I think that is less likely given other pressing issues. We nationalize AI projects, we effectively encourage coordination of AI researchers, we fund more safety-conscious research, etc. Similarly, I expect that an improved understanding of forecasting and decision-making would improve outcomes, and that an improved understanding of the social sciences would play a small role in this. And so on.
But at any rate, my main question is: how can you be so confident of local foom that you think this tiny effect given local foom scenarios dominates the effect given business as usual? I don’t understand where you are coming from there. My secondary objection is to your epistemic framework. I have no idea how you would have thought about the future if you lived in 1800 or even 1900; it seems almost certain that this framework reasoning would have led you to crazy conclusions, and I’m afraid that the same thing is true in 2000. You just shouldn’t expect to be able to think of detailed situations that determine the whole value of the universe, unless you are in an anomalous situation, but that doesn’t mean that your actions have no effect, nor that you should condition on being in an anomalous situation.
How did this happen as a result of economic growth having a marginally greater exponent? Doesn’t that just take us to this point faster and give less time for serial thought, less time for deep theories, less time for the EA movement to spread faster than the exponent on economic growth, etcetera? This decision would ceteris paribus need to be made at some particular cumulative level of scientific development, which will involve relatively more parallel work and relatively less serial work if the exponent of econ growth is higher. How does that help it be made correctly?
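A toy way to make the timing point explicit (my own illustration, assuming exponential growth, with my notation): if cumulative scientific development tracks the economy, $D(t) = D_0 e^{g t}$, then the calendar time to reach a fixed cumulative level $K$ is

$$ t^{*} = \frac{\ln(K / D_0)}{g}, $$

which is decreasing in $g$: raising the growth exponent by 1% (in relative terms) cuts the calendar years of serial thought available before that level is reached by roughly 1%.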
Exposing (and potentially answering) questions like this is very much the point of making the scenario concrete, and I have always held rather firmly, on meta-level epistemic grounds, that visualizing things out concretely is almost always a good idea in math, science, futurology, and anywhere else. You don’t have to make all your predictions based on that example, but you have to generate at least one concrete example and question it. I have espoused this principle widely and held to it myself in many cases apart from this particular dispute.
Procedurally, we’re not likely to resolve that particular persistent disagreement in this comment thread, which is why I want to factor it out.
I could make analogies about smart-people-will-then-decide and don’t-worry-the-elite-wouldn’t-be-that-stupid reasoning to various historical projections that failed, but I don’t think we can get very much mileage out of nonspecifically arguing which of us would have been more wrong about 2000 if we had tried to project it out while living in 1800. I mean, obviously a major reason I don’t trust your style of reasoning is that I think it wouldn’t have worked historically, not that I think your reasoning mode would have worked well historically but I’ve decided to reject it because I’m stubborn. (If I were to be more specific, when I listen to your projections of future events they don’t sound very much like recollections of past events as I have read about them in history books, where jaw-dropping stupidity usually plays a much stronger role.)
I think an important thing to keep in mind throughout is that we’re not asking whether this present world would be stronger and wiser if it were economically poorer. I think it’s much better to frame the question as whether we would be in a marginally better or worse position with respect to FAI today if we had the present level of economic development, but the past century from 1913 to 2013 had taken ten fewer years to get there, so that the current date were 2003. This seems a lot more subtle.
How sure are you that this isn’t hindsight bias, that if various involved historical figures had been smarter they would have understood the situation and not done things that look unbelievably stupid looking back?
Do you have particular historical events in mind?
We are discussing the relative value of two different things: the stuff people do intentionally (and the byproducts thereof), and everything else.
In the case of the negative scenarios I outlined, this is hopefully clear: wars aren’t sped up 1-for-1, so there will be fewer wars between here and any relevant technological milestones. And similarly for other stressors, etc.
Regarding education: suppose you made everything 1% more efficient. The amount of education a person gets over their life is then 1% higher (because you didn’t increase the pace of aging / turnover between people, which is the thing people were struggling against, and so people do better at getting what they want).
Other cases seem to be similar: some things are a wash, but more things get better than worse, because systematically people are pushing on the positive direction.
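A minimal simulation of the wars-per-milestone and education claims above (my own sketch; the growth rates, war rate, milestone threshold, and lifespan are made-up illustrative numbers, and the function names are mine, not anything from the thread):

```python
# Toy model of the "not sped up 1-for-1" argument (illustrative numbers only).
# A technological milestone is reached when cumulative economic output crosses
# a fixed threshold; wars arrive at a roughly constant rate per calendar year
# (i.e. they are not sped up by growth); lifespans are not extended by growth.

import math

def years_to_milestone(growth_rate, threshold=50.0, initial_output=1.0):
    """Calendar years until cumulative output of an exponentially growing
    economy, output(t) = initial_output * e**(growth_rate * t), reaches
    `threshold` (solve initial_output * (e**(g*T) - 1) / g = threshold for T)."""
    return math.log(1 + growth_rate * threshold / initial_output) / growth_rate

def expected_wars_before_milestone(growth_rate, wars_per_year=0.02):
    """Expected wars before the milestone, if wars occur at a fixed rate per
    calendar year rather than per unit of economic activity."""
    return wars_per_year * years_to_milestone(growth_rate)

def lifetime_education(efficiency, lifespan_years=80, education_per_year=1.0):
    """Education accumulated over a fixed lifespan; efficiency gains translate
    directly into more education because aging/turnover is not sped up."""
    return efficiency * education_per_year * lifespan_years

for g in (0.02, 0.03):  # baseline vs. marginally higher growth exponent
    print(f"g={g:.2f}: {years_to_milestone(g):5.1f} years to milestone, "
          f"{expected_wars_before_milestone(g):.2f} expected wars before it")

print("Lifetime education ratio from a 1% efficiency gain:",
      lifetime_education(1.01) / lifetime_education(1.00))
```

Under these assumptions the faster-growing world reaches the milestone in fewer calendar years and so sees fewer expected wars before it, while the fixed lifespan means a 1% efficiency gain shows up as 1% more lifetime education.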
This discussion was useful for getting a more precise sense of what exactly it is you assign high probability to.
I wish you two had the time for a full-blown adversarial collaboration on this topic, or perhaps on some sub-problem within the topic, with Carl Shulman as moderator.