I attribute this behavior in part to the desire to preserve the possibility of universal provably Friendly AI
Well, that seems like the most dangerous instance of motivated cognition ever.
It seems like an issue that’s important to get right. Is there a test we could run to see whether it’s true?
Yes, but only once. ;)
Did you mean to link to this comment?
Thanks, fixed.