[on second thought some material removed for better usage]
I think we should try to form correct beliefs about p(x-risk caused by AIs), and we should also try to form correct beliefs about p(x-risk avoided by AIs), and then we should make sensible decisions in light of those beliefs.
Hear! Hear!
Whereas if p(x-risk caused by AIs) is ≈0, we would be asking different questions and facing different tradeoffs. Right?
I don’t know. It doesn’t feel like that’s why I chose to spend some time checking these questions in the first place.
You know what it feels like when your plane is accelerating and you start to feel like something is about to happen, then you remember: of course, it’s the takeoff.
Well, imagine you still feel acceleration after the takeoff. At first, you figure it’s too hard to judge the distance, but you know the acceleration must stop once you’re at cruising speed. Right? After an hour, you still feel acceleration, but that must be a mistake, because a Fermi estimate says you can’t still be accelerating at this point. Right? Then it accelerates even more. Right? This is how I have felt since I found out about convolutions, ReLU, AlphaGo, AlphaZero, “Attention Is All You Need” and its children.