I do not expect that any human brain would be safe if scaled up by that amount, because of a lack of robustness to relative scale. My intuition is that alignment is very hard, but I don't have an explicit reason for this right now.