Don’t get confused now though: not being able to simulate the universe from the beginning doesn’t mean an AI can’t take over the world unexpectedly. It does mean the diamondoid concern is probably a bit overhyped, but there’s plenty of other concerning nanoscale life that could be generated by a superplanner and would wipe out humanity.
Oh yes, I don’t deny that, I think we agree. I simply think it is a good sanity practice to call bullshit on those overhyped plans. If people were more sceptical of those SciFi scenarios, they would also update toward lower P(doom) estimates.
I do not agree, and this is coming from someone who has much lower P(doom) estimates and has serious issues with Yudkowsky’s epistemics.
The real issue I have with the nanotechnology plans and fast takeoff plans is that they require more assumptions than Yudkowsky realizes, and he has a problem of overweighting their probabilities compared to what we actually see today.
They’re not magical, just way overweighted on their probability mass, IMO.
I don’t see how we disagree here? Maybe it’s the use of the word magical? I don’t intend to use it in the sense “not allowed by the laws of physics”. I am happy to replace that with “overweighted probability mass” if you think that’s more accurate.
Maybe it’s the use of the word magical? I don’t intend to use it in the sense “not allowed by the laws of physics”
Yes, that was my issue. From my vantage point, the tone of the comment implied that Yudkowskian foom scenarios were downright impossible, which wasn’t the case.
That stated, it looks like we came to an agreement here, so thanks for talking.