I upvoted this back to zero because it's a reasonable idea, but I wouldn't upvote further right now: I don't see how this strongly impacts the superintelligence safety timeline. From my current context, this doesn't seem like a particularly high-impact path, since it isn't reliable in any meaningful sense even at current models' capability level.
I actually don't think it has much impact on superintelligence either. I shared this mostly because I thought it's a cool idea that we can implement now and later turn into a policy. Compared to existing policy proposals that don't limit training or usage, I think this could have a much larger impact.