I think we should try to form correct beliefs about p(x-risk caused by AIs), and we should also try to form correct beliefs about p(x-risk avoided by AIs), and then we should make sensible decisions in light of those beliefs. I don’t see any reason to combine those two probabilities into a single “yay AI versus boo AI” axis—see Section 3 above! :)
For example, if p(x-risk caused by AIs) is high, and p(x-risk avoided by AIs) is even higher, then we should brainstorm ways to lower AI x-risk other than “stop doing AI altogether forever” (as if that were a feasible option in the first place!); and we should also talk about what those other x-risks are, and whether there are non-AI ways to mitigate them; and we should also talk about how both those probabilities might change if we somehow make AGI happen N years later (or N years sooner), etc. Whereas if p(x-risk caused by AIs) is ≈0, we would be asking different questions and facing different tradeoffs. Right?
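As a toy illustration of why the two probabilities shouldn't be collapsed onto a single axis (all numbers below are invented for the sake of the example, not anyone's actual estimates):

```python
# Two hypothetical worlds with the same naive "net" score but very different
# policy implications. All numbers are invented for illustration.

worlds = {
    "A": {"p_xrisk_caused_by_AI": 0.30, "p_xrisk_avoided_by_AI": 0.35},
    "B": {"p_xrisk_caused_by_AI": 0.02, "p_xrisk_avoided_by_AI": 0.07},
}

for name, p in worlds.items():
    net = p["p_xrisk_avoided_by_AI"] - p["p_xrisk_caused_by_AI"]
    print(f"World {name}: caused={p['p_xrisk_caused_by_AI']:.2f}, "
          f"avoided={p['p_xrisk_avoided_by_AI']:.2f}, naive net={net:+.2f}")

# Both worlds score +0.05 on a single "yay AI vs boo AI" axis, yet in world A,
# lowering AI x-risk (without stopping AI) is urgent, while in world B it is a
# much smaller priority. The single axis hides exactly what matters for
# deciding what to do.
```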
[on second thought, some material removed to be used better elsewhere]
I think we should try to form correct beliefs about p(x-risk caused by AIs), and we should also try to form correct beliefs about p(x-risk avoided by AIs), and then we should make sensible decisions in light of those beliefs.
Hear! Hear!
Whereas if p(x-risk caused by AIs) is ≈0, we would be asking different questions and facing different tradeoffs. Right?
I don’t know. It doesn’t feel like that’s why I chose to spend time looking into these questions in the first place.
You know that feeling when your plane is accelerating and you start to feel like something is about to happen, and then you remember: of course, it’s takeoff.
Well, imagine you still feel the acceleration after takeoff. At first, you figure it’s just too hard to evaluate the distance, but you know the acceleration must stop once you’re at cruising speed. Right? After an hour, you still feel acceleration, but that must be a mistake, because a Fermi estimate says you can’t possibly still be accelerating at this point. Right? Then it accelerates more. Right? That’s how I’ve felt ever since I found out about convolution, ReLU, AlphaGo, AlphaZero, “Attention Is All You Need”, and their descendants.
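For what it’s worth, the Fermi estimate in the analogy does check out. With made-up but plausible numbers (roughly 0.2 g of takeoff-style acceleration, sustained for an hour), the plane would be moving at close to orbital speed, which is why “still accelerating after an hour” can’t be right:

```python
# Rough Fermi check with assumed numbers (mine, not the original commenter's):
# if the "takeoff push" of about 2 m/s^2 (~0.2 g) never stopped, how fast
# would the plane be going after an hour?

accel = 2.0          # m/s^2, roughly how a takeoff roll feels (assumed)
one_hour = 3600.0    # seconds

speed = accel * one_hour          # ~7,200 m/s
cruise = 250.0                    # m/s, typical airliner cruise speed
orbital = 7_800.0                 # m/s, low-Earth orbital speed, for scale

print(f"speed after 1 h of constant acceleration: {speed:.0f} m/s")
print(f"that's ~{speed / cruise:.0f}x cruise speed, ~{speed / orbital:.1f}x orbital speed")
```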
Thanks!