Even given a very fast local foom (to which I do assign a pretty small probability, especially as we make the situation more detailed and conclude that fewer things are relevant), I would still expect higher education and better discourse to improve the probability that people handle the situation well. It’s weird to cash this out as a concrete scenario, because that just doesn’t seem like how reasonable reasoning works.
But trying anyway: someone is deciding whether to run an AI or delay, and they correctly choose to delay. Someone is arguing that research direction X is safer than research direction Y, and others are more likely to respond selectively to correct arguments. Someone is more likely to notice there is a problem with a particular approach and they should do something differently, etc. etc.
Similarly, I expect war or other external stressors to make things worse, but it seems silly to try to break this down into very specific situations. In general, people are making decisions about what to do, and if they have big competing motivations (like winning a war, or avoiding social collapse, or what have you), I expect them to make decisions that are less aligned with aggregate welfare. They choose to run a less safe AI, they pursue a research direction that is less safe, etc. Similarly, I expect competent behavior by policy-makers to improve the situation across a broad distribution of scenarios, and I think that is less likely given other pressing issues. We nationalize AI projects, we effectively encourage coordination of AI researchers, we fund more safety-conscious research, etc. Similarly, I expect that an improved understanding of forecasting and decision-making would improve outcomes, and that an improved understanding of the social sciences would play a small role in this. And so on.
But at any rate, my main question is: how can you be so confident of local foom that you think this tiny effect given local foom scenarios dominates the effect given business as usual? I don’t understand where you are coming from there. The secondary objection is to your epistemic framework. I have no idea how you would have thought about the future if you lived in 1800 or even 1900; it seems almost certain that this framework reasoning would have led you to crazy conclusions, and I’m afraid that the same thing is true in 2000. You just shouldn’t expect to be able to think of detailed situations that determine the whole value of the universe, unless you are in an anomalous situation, but that doesn’t mean that your actions have no effect, nor that you should condition on being in an anomalous situation.
> Even given a very fast local foom (to which I do assign a pretty small probability, especially as we make the situation more detailed and conclude that fewer things are relevant), I would still expect higher education and better discourse to improve the probability that people handle the situation well. It’s weird to cash this out as a concrete scenario, because that just doesn’t seem like how reasonable reasoning works.
> But trying anyway: someone is deciding whether to run an AI or delay, and they correctly choose to delay. Someone is arguing that research direction X is safer than research direction Y, and others are more likely to respond selectively to correct arguments. Someone is more likely to notice there is a problem with a particular approach and they should do something differently, etc. etc.
How did this happen as a result of economic growth having a marginally greater exponent? Doesn’t that just take us to this point faster and give less time for serial thought, less time for deep theories, less time for the EA movement to spread faster than the economy grows, etcetera? This decision would, ceteris paribus, need to be made at some particular cumulative level of scientific development, which will involve relatively more parallel work and relatively less serial work if the exponent of economic growth is higher. How does that help the decision be made correctly?
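To make the arithmetic behind that question concrete, here is a minimal sketch; the growth rates and the milestone threshold are invented for illustration, not numbers anyone in this exchange has endorsed. If cumulative development compounds exponentially and the pivotal decision has to be made once some fixed cumulative level is reached, a higher exponent shrinks the calendar years, and hence the serial thinking time, that pass before that level.

```python
import math

def years_to_milestone(annual_growth, milestone_multiple):
    """Calendar years until cumulative development reaches
    `milestone_multiple` times today's level, assuming smooth
    exponential growth at `annual_growth` per year."""
    return math.log(milestone_multiple) / math.log(1 + annual_growth)

# Purely hypothetical numbers: an 8x milestone under 3% vs. 4% annual growth.
baseline = years_to_milestone(0.03, 8)
faster = years_to_milestone(0.04, 8)

print(f"3% growth: {baseline:.0f} years of serial time before the milestone")
print(f"4% growth: {faster:.0f} years of serial time before the milestone")
# The same cumulative level is reached roughly 17 years sooner under the
# higher exponent, leaving fewer calendar years for serial thought, deep
# theories, or a movement that grows on its own clock.
```

The sketch also bakes in the assumption that the relevant progress speeds up one-for-one with the economy, which is itself part of what is disputed later in the exchange.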
Exposing (and potentially answering) questions like this is very much the point of making the scenario concrete, and I have always held rather firmly, on meta-level epistemic grounds, that visualizing things out concretely is almost always a good idea in math, science, futurology, and everywhere else. You don’t have to make all your predictions based on that example, but you do have to generate at least one concrete example and question it. I have espoused this principle widely and held to it myself in many cases apart from this particular dispute.
> But at any rate, my main question is: how can you be so confident of local foom that you think this tiny effect given local foom scenarios dominates the effect given business as usual?
Procedurally, we’re not likely to resolve that particular persistent disagreement in this comment thread, which is why I want to factor it out.
> My secondary objection is to your epistemic framework. I have no idea how you would have thought about the future if you lived in 1800 or even 1900; it seems almost certain that this framework reasoning would have led you to crazy conclusions, and I’m afraid that the same thing is true in 2000.
I could make analogies about smart-people-will-then-decide and don’t-worry-the-elite-wouldn’t-be-that-stupid reasoning to various historical projections that failed, but I don’t think we can get very much mileage out of nonspecifically arguing which of us would have been more wrong about 2000 if we had tried to project it out while living in 1800. I mean, obviously a major reason I don’t trust your style of reasoning is that I think it wouldn’t have worked historically, not that I think your reasoning mode would have worked well historically but I’ve decided to reject it because I’m stubborn. (If I were to be more specific, when I listen to your projections of future events they don’t sound very much like recollections of past events as I have read about them in history books, where jaw-dropping stupidity usually plays a much stronger role.)
I think an important thing to keep in mind throughout is that we’re not asking whether this present world would be stronger and wiser if it were economically poorer. I think it’s much better to frame the question as whether we would be in a marginally better or worse position with respect to FAI today if we had the present level of economic development, but the past century from 1913 to 2013 had taken ten fewer years to get there, so that the current date were 2003. This seems a lot more subtle.
> past events as I have read about them in history books, where jaw-dropping stupidity usually plays a much stronger role.
How sure are you that this isn’t hindsight bias, that if various involved historical figures had been smarter they would have understood the situation and not done things that look unbelievably stupid in retrospect?
Do you have particular historical events in mind?
We are discussing the relative value of two different things: the stuff people do intentionally (and the byproducts thereof), and everything else.
In the case of the negative scenarios I outlined, this is hopefully clear: wars aren’t sped up one-for-one, so there will be fewer wars between now and any relevant technological milestone. And similarly for other stressors, etc.
Regarding education: Suppose you made everything 1% more efficient. The amount of education a person gets over their life is then 1% higher (because you didn’t speed up aging or the turnover between people, which is the constraint people were struggling against, and so people do better at getting what they want).
Other cases seem to be similar: some things are a wash, but more things get better than worse, because people are systematically pushing in the positive direction.
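As a minimal sketch of the wars point, assume wars and other calendar-clocked stressors arrive at a roughly constant rate per year while the relevant technological milestone sits at a fixed level of cumulative progress; the war rate and speed-up factors below are made-up numbers, not estimates from either side.

```python
# Illustrative only: wars arrive at a fixed rate per calendar year, while
# the milestone is reached at a fixed level of cumulative progress, so a
# speed-up shortens the calendar wait without speeding wars up one-for-one.
WAR_RATE_PER_YEAR = 0.02            # hypothetical: one major war per 50 years
BASELINE_YEARS_TO_MILESTONE = 50.0  # hypothetical calendar time at baseline growth

def expected_wars_before_milestone(speedup):
    years = BASELINE_YEARS_TO_MILESTONE / speedup
    return WAR_RATE_PER_YEAR * years

for s in (1.00, 1.01, 1.10):
    print(f"speed-up x{s:.2f}: "
          f"{expected_wars_before_milestone(s):.3f} expected wars before the milestone")
# The expectation falls from 1.000 to 0.990 to 0.909: fewer expected wars
# (and, by the same logic, fewer other calendar-clocked stressors) between
# now and the milestone as the speed-up grows.
```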
> Procedurally, we’re not likely to resolve that particular persistent disagreement in this comment thread, which is why I want to factor it out.
This discussion was useful for getting a more precise sense of what exactly it is you assign high probability to.
I wish you two had the time for a full-blown adversarial collaboration on this topic, or perhaps on some sub-problem within the topic, with Carl Shulman as moderator.