Hi Abram, thanks for commenting!
> I’m not sure this rule is sufficient. When you have a cool idea about AGI, there is strong emotional motivation to rationalize the notion that your idea decreases risk. “Think hard” might be a procedure that is not sufficiently trustworthy, since you’re running on corrupted hardware.