How did you arrive at the conclusion that we’re not facing big expected costs with these questions?
There are lots of things we don’t know, and my default presumption is for errors to be non-astronomically-costly, until there are arguments otherwise.
I agree that philosophical problems have some stronger claim to causing astronomical damage, and so I am more scared of philosophical errors than of, e.g., our lack of effective public policy, our weak coordination mechanisms, global warming, or the dismal state of computer security.
But I don’t see really strong arguments for philosophical errors causing great damage, and so I’m skeptical that we are facing big expected costs (big compared to the biggest costs we can identify and intervene on, amongst them AI safety).
That is, there seems to be a pretty good case that AI may be built soon, that we lack the understanding to build AI systems that do what we want, that we will nevertheless build AI systems to help us get what we want in the short term, and that in the long run this will radically reduce the value of the universe. The cases for philosophical errors causing damage are overall much more speculative, have lower stakes, and are less urgent.
the construction of large nuclear arsenals and lack of sufficient safeguards against nuclear war have already caused a large expected cost, and may have been based on one or more incorrect philosophical understandings
I agree that philosophical progress would very slightly decrease the probability of nuclear trouble, but this looks like a very small effect. (Orders of magnitude smaller than the effects from, say, increased global peace and stability, which I'd probably list as a higher priority right now than resolving philosophical uncertainty.) It's possible we disagree about the mechanics of this particular situation.
Do you expect technological development to have plateaued by then (i.e., AIs will have invented essentially all technologies feasible in this universe)?
No. I think that 200 years of subjective time probably amounts to 5-10 more doublings of the economy, and that technological change is a plausible reason that philosophical error would eventually become catastrophic.
I said “best guess” but this really is a pretty wild guess about the relevant timescales.
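For concreteness, here is a rough back-of-the-envelope reading of the "5-10 doublings over 200 subjective years" guess, assuming simple exponential growth; the implied doubling times and growth rates below are my own arithmetic, not figures from the discussion:

```python
# Rough sketch: what "5-10 doublings in 200 subjective years" would imply,
# assuming simple exponential growth (the implied doubling times and annual
# growth rates are derived here for illustration, not stated in the text).
import math

years = 200
for doublings in (5, 10):
    doubling_time = years / doublings            # implied doubling time in years
    growth_rate = math.log(2) / doubling_time    # continuous annual growth rate
    total_growth = 2 ** doublings                # overall expansion factor
    print(f"{doublings} doublings: doubling time ~{doubling_time:.0f} yr, "
          f"~{growth_rate:.1%}/yr growth, economy grows ~{total_growth}x")
```

On this reading, the guess corresponds to doubling times of roughly 20-40 years, i.e. an economy somewhere between ~32x and ~1000x larger after 200 subjective years.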
intentionally or accidentally destroy civilization
As with the special case of nuclear weapons, I think that philosophical error is a relatively small input into world-destruction.
win a decisive war against the rest of the world
I don’t expect this to cause philosophical errors to become catastrophic. I guess the concern is that the war will be won by someone who doesn’t much care about the future, thereby increasing the probability that resources are controlled by someone who prefers not to undergo any further reflection? I’m willing to talk about this scenario more, but at face value the prospect of a decisive military victory wouldn’t bump philosophical error above AI risk as a concern for me.
I’m open to ending up with a more pessimistic view about the consequences of philosophical error, either by thinking through more possible scenarios in which it causes damage or by considering more abstract arguments.
But if I end up with a view more like yours, I don’t know if it would change my view on AI safety. It still feels like the AI control problem is a different issue which can be considered separately.