In the simplest case where the errors are Gaussian, this would probably be covered by standard regression lower bounds? You’d show that exponentials and sigmoids can be made close in L² over a restricted domain, then deduce it requires many samples / low noise to distinguish them.
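A minimal numeric sketch of the first step, under assumptions of my own choosing (a unit-scale logistic with midpoint t0 = 10, observed only on the early window [0, 5], and a log-linear least-squares fit of the exponential):

```python
import numpy as np

# Hypothetical setup: a logistic curve with midpoint t0 = 10, but data
# only on the early window [0, 5], well before the turning point.
t = np.linspace(0.0, 5.0, 200)
t0 = 10.0
sigmoid = 1.0 / (1.0 + np.exp(-(t - t0)))

# Fit a pure exponential a * exp(b * t) by least squares on log(y),
# which is linear in (log a, b).
b, log_a = np.polyfit(t, np.log(sigmoid), 1)
exponential = np.exp(log_a) * np.exp(b * t)

# Relative L2 distance between the two curves on this restricted domain.
rel_l2 = np.linalg.norm(sigmoid - exponential) / np.linalg.norm(sigmoid)
print(rel_l2)
```

The relative L² gap comes out far below the percent level, so with any appreciable observation noise the two models are essentially indistinguishable on that window.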
Or, as Aaro says above, maybe better to parametrise the sigmoid and take the Fisher information of the turning-point parameter — its inverse lower-bounds the variance of any unbiased estimate of where the turn happens, via Cramér–Rao.
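A sketch of that second route, again with assumed specifics (Gaussian noise with known σ, logistic s(t; t0) = 1/(1 + exp(-(t - t0))), midpoint t0 = 10). For this model the Fisher information of t0 is I(t0) = (1/σ²) Σᵢ (∂s/∂t0)², and ∂s/∂t0 = -s(1 - s):

```python
import numpy as np

# Fisher information of the turning point t0 for observations
# y_i = s(t_i; t0) + eps_i, eps_i ~ N(0, sigma^2), s logistic.
def fisher_info(t, t0, sigma=1.0):
    s = 1.0 / (1.0 + np.exp(-(t - t0)))
    ds_dt0 = -s * (1.0 - s)          # sensitivity of each observation to t0
    return np.sum(ds_dt0 ** 2) / sigma ** 2

t0 = 10.0
I_restricted = fisher_info(np.linspace(0.0, 5.0, 51), t0)   # window before the turn
I_full = fisher_info(np.linspace(5.0, 15.0, 101), t0)       # window straddling the turn

# Cramer-Rao: var(t0_hat) >= 1/I(t0), so the bound blows up on the
# restricted window -- early data barely constrain the turning point.
print(I_restricted, I_full)
```

Since s(1 − s) is exponentially small far from the turn, the information collected on the early window is orders of magnitude below what a window straddling t0 gives, which is the quantitative version of "you can't locate the turning point from pre-turn data".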