I think I’d expect PhD biologists at good universities (or, at least, those working with evolutionary systems) to be aware that hill-climbing processes often get stuck in local optima.
I would assume the same, but unfortunately… that’s a real-life thing I heard one say in a lecture. Well, not “Global maximum!” verbatim, but something with essentially identical meaning, and without any subtext acknowledging how big an error that is.
People may be aware of a lesson learned from math, but not propagate it through all their belief systems.
Even without propagating lessons from math, it’s generally taught that evolution doesn’t find optimal solutions, just solutions that are good enough.
It’s also worth noting that if you make an unbounded number of minor design changes, you can find the global maximum. If I remember right, the Metropolis–Hastings algorithm does get you to a global maximum, provided you tune the parameters right and wait long enough. It might take longer than trying every single possible value, but if you just wait long enough you will reach the maximum.
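To make the contrast concrete, here is a minimal sketch (everything here is made up for illustration: the toy landscape `f`, the step sizes, and the inverse-temperature `beta` are arbitrary choices, and this is Metropolis-style acceptance rather than full Metropolis–Hastings with asymmetric proposals). A greedy hill-climber only ever accepts uphill moves, so it stays on whatever peak it starts near; the Metropolis rule occasionally accepts downhill moves, so given enough iterations it can cross the valley to the higher peak.

```python
import math
import random

def f(x):
    """Toy fitness landscape: a local peak near x=2 (height ~1)
    and the global peak near x=8 (height ~2)."""
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-(x - 8) ** 2)

def hill_climb(x, step=0.1, iters=10_000):
    """Greedy hill-climbing: only uphill moves are ever accepted."""
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if f(cand) > f(x):
            x = cand
    return x

def metropolis(x, step=1.0, beta=2.0, iters=200_000):
    """Metropolis-style search: downhill moves are accepted with
    probability exp(beta * (f(cand) - f(x))), so the walker can
    cross valleys between peaks if you wait long enough."""
    best = x
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        # exp(...) > 1 for uphill moves, so those are always accepted
        if random.random() < math.exp(beta * (f(cand) - f(x))):
            x = cand
            if f(x) > f(best):
                best = x
    return best

random.seed(0)  # for reproducibility of the demo
hc = hill_climb(2.0)
mh = metropolis(2.0)
print(f"greedy search ends near x={hc:.1f}; Metropolis best is near x={mh:.1f}")
```

Starting both searches at the local peak, the greedy climber stays put while the Metropolis walker typically wanders over to the taller peak eventually, which is exactly the “wait long enough” caveat: the guarantee is asymptotic, not fast.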
Biologists also are often happy with solutions that aren’t 100% perfect. The standard for truth is often statistical significance.
Yes, I agree with everything you say (well, I don’t know the M-H algorithm, but I’ll take that on faith).
I mentioned this explicitly because it’s mind-blowingly bad to see someone with this background say it, when he says so many other smart things that clearly imply he understands the general principle that local optima aren’t global optima.
What he didn’t say is, “This enzyme works really well, and we can be pretty confident evolution has tried out most of the easy modifications on the current structure. It’s not perfect (admittedly), but it’s locally pretty good.”
It was more along the lines of, “We can be confident this is the best possible version of this enzyme.”
Anyway, a single human biologist isn’t the point. I’m much more interested in questions like: how often can I invoke local optima in an argument and have people know what I mean, rather than think I’m crazy for suggesting there are other hills one might stand on?
It was more along the lines of, “We can be confident this is the best possible version of this enzyme.”
That’s really bad. If you take any random enzyme shared by humans and chimpanzees, the two versions are going to differ slightly, and there’s no reason to strongly assume that the chimpanzee’s version is optimal for chimpanzees while the human’s version is optimal for humans.
There’s no reason to think a random enzyme without very strong selection pressure is even at a local maximum.