You can argue all you want that any flying device will have to flap its wings, and that won't constrain airplane designs.
You can argue all you want that any thinking device will have to reflect on its thoughts, and that won’t constrain mind designs.
The prior is still really wide, so wide that a counting argument still more or less works.
And it also works for arguing that GPT-3 won't happen: there are more hacks that give you low loss than there are hacks that both give you low loss and are useful to humans.
So does your whole sense of a difference go out the window if we do something AutoGPT-ish?
I think it should be analyzed separately, but intuitively, if your GPT never thinks about killing humans, the plans built from its thoughts should be less likely to result in killing humans.