This is how you get Latin courses.
Because higher-class humans know it, and so it would “differentiate” them from less successful humans? Even though it is likely not causative? Yeah, you would need some method to detect causation.
It’s worse than that. If everyone is accurately judging which information is valuable and studying things in descending order of priority, then what distinguishes successful and unsuccessful people is that the successful ones got further down the list. So if you compare the two groups, the differences will be the things that people explicitly judged to be low priority. I.e., Latin classes.
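A toy simulation of that selection effect (my own illustration; the subjects, values, and success rule are invented, not taken from the discussion): everyone studies the same priority-ordered list, success is just accumulated value plus noise, and comparing successful with unsuccessful people still makes the low-priority subjects look like the differentiators.

```python
import random

# Priority-ordered curriculum: value per subject decreases down the list,
# so "Latin" is the subject everyone correctly judged to be worth the least.
CURRICULUM = [("arithmetic", 6), ("reading", 5), ("statistics", 4),
              ("programming", 3), ("rhetoric", 2), ("Latin", 1)]

def simulate_person():
    reached = random.randint(1, len(CURRICULUM))        # how far down the list they got
    studied = {name for name, _ in CURRICULUM[:reached]}
    payoff = sum(value for _, value in CURRICULUM[:reached])
    successful = payoff + random.gauss(0, 1.5) > 17      # success = accumulated value + noise
    return studied, successful

people = [simulate_person() for _ in range(100_000)]
winners = [s for s, ok in people if ok]
losers = [s for s, ok in people if not ok]

for name, _ in CURRICULUM:
    p_win = sum(name in s for s in winners) / len(winners)
    p_lose = sum(name in s for s in losers) / len(losers)
    print(f"{name:12s} studied by {p_win:4.0%} of successful vs {p_lose:4.0%} of unsuccessful")

# Arithmetic and reading show no gap at all (everyone studied them), even though they
# carry most of the value; the visible "differences" between the groups are concentrated
# in the subjects further down the list, Latin included, despite those being worth the least.
```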
Again, this requires Latin classes to pay rent. That is, for there to be a measurable difference in personal success that isn’t attributable to inherited opportunities and resources, and that isn’t found in people who skipped Latin but did everything else.
I think specifically they’re getting at that the “steady state” isn’t stable. As soon as everyone uses the algorithm uniformly, it falls apart in a few generations at most. You’d have to never stop A/B testing the importance of various subjects; your control group for “do/don’t teach this subject” could never shrink all the way to zero, for roughly the same reason that Bayesian probability updates don’t work on probabilities of 0 and 1 (see the sketch after this comment).
It sounds like you’re counting on natural human variation to temper that, but to the extent the algorithm actually worked with a large effect size, it’s not clear that would be sufficient. Undeniably good ideas do have a way of eventually getting fixed in a population.
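A minimal sketch of that Bayes point (my own toy numbers, not from the thread): a prior of exactly 0 or 1 never moves, no matter how strong the evidence, which is the analogue of letting the control group shrink all the way to zero.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    # Posterior for a binary hypothesis via Bayes' rule.
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

print(bayes_update(0.5, 0.99, 0.01))  # 0.99 -- a non-extreme prior updates on strong evidence
print(bayes_update(0.0, 0.99, 0.01))  # 0.0  -- certainty of "false" is immovable
print(bayes_update(1.0, 0.01, 0.99))  # 1.0  -- certainty of "true" is immovable
```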
It’s just a thought experiment; it’s unlikely to ever come up. Once we have a way to create lists of all the things humans should know in order to be the best possible humans, well...
An AI system (in 20-100 years, when we can do this) could probably consume the same list. And ‘think with’ massive chunks of digital machinery that are very close to error-free, don’t need to sleep, don’t age, don’t have agendas that aren’t inherent in their utility function, and run at 4-5 GHz instead of roughly 1 kHz. And learn in parallel from thousands or millions of instances of themselves. And their algorithms can be structured so that they objectively “know that they know” and “know that they don’t know” what to do to achieve the desired outcome (so they don’t act at all if they don’t know).
Anyway, with all these advantages, your AI software would have to be almost infinitely stupid not to beat humans at any task where it is possible to score whether or not you have succeeded.
(‘Scoring’ generally requires a way to simulate the task, both the actor and the environment, and to give the machine many tries to find a way to get a good score; a toy version of that loop is sketched at the end of this comment. So essentially any task on Earth where it is possible for a computer to accurately and more or less completely determine whether the task criterion was met. All tasks that involve manipulating some physical object with reasonably modelable properties fall into this class.)
The tasks that don’t fit are the ones with humans as the direct target, or that appeal to aspects only humans can perceive. Aka art, politics, teaching, counseling, cleaning tasks where the robot can’t perceive the thing being cleaned (such as bad smells), landscaping design, architecture for appearance (but not structural or electrical or plumbing work), and so on.
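A toy version of that simulate-and-score loop (my own sketch; the projectile ‘physics’, target, and tolerance are invented for illustration): the machine gets many cheap tries in a simulator, the success criterion is checked by the computer itself, and the ‘know if you know’ rule means it only acts when simulation actually found a verified solution.

```python
import math
import random

# Toy simulator: where a launched object lands for a given angle and speed.
def simulate(throw_angle_deg, throw_speed):
    return (throw_speed ** 2) * math.sin(2 * math.radians(throw_angle_deg)) / 9.81

# Automatic, computer-checkable success criterion: land within 0.5 m of the target.
def criterion_met(landing_x, target_x=25.0, tolerance=0.5):
    return abs(landing_x - target_x) <= tolerance

best = None
for _ in range(100_000):  # many cheap tries in simulation
    angle, speed = random.uniform(1, 89), random.uniform(1, 30)
    if criterion_met(simulate(angle, speed)):
        best = (angle, speed)
        break

# "Know if you know": only act in the real world if simulation verified a solution.
if best is None:
    print("No verified solution found; do not act.")
else:
    print(f"Act: angle={best[0]:.1f} degrees, speed={best[1]:.1f} m/s")
```

In this framing, a task is ‘scorable’ exactly when something like criterion_met can be computed by the machine itself; tasks where only a human can tell whether the criterion was met fall outside it, which is the class described in the previous paragraph.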