I don’t believe so. The policy we will follow attempts to line up with how it was explained at the workshop I attended. The official policy is to think hard about whether publication increases or decreases risk. The first approximation to this policy is to not publish information that is useful for AI in general, but to publish things that have specific application to FAI and don’t help other approaches.
I’m not sure this rule is sufficient. When you have a cool idea about AGI, there is a strong emotional motivation to rationalize the notion that your idea decreases risk. “Think hard” might not be a sufficiently trustworthy procedure, since you’re running on corrupted hardware.
Hi Abram, thanks for commenting!