Suppose it were possible to write down a list of every fact or algorithm known by a living human being. This isn’t impossible: if you could use an AI system to transcribe audio recordings of someone’s entire life, plus everything they ever read, you would have it all in one file. Then you would map from [text] to [common fact or algorithm] by comparing thousands of these files against each other and against the facts and algorithms written down in sources like encyclopedias. Or, more likely, you would find the commonalities with a clustering algorithm.
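A toy sketch of what that clustering step could look like, assuming the transcripts have already been broken into candidate statements (the statements and cluster count below are invented for illustration):

```python
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical candidate statements pulled from several people's
# lifetime transcripts; in reality there would be billions of these.
statements = [
    "water boils at 100 degrees celsius at sea level",
    "water boils at one hundred celsius near sea level",
    "boiling water is 100 celsius at sea level",
    "compound interest makes savings grow exponentially",
    "interest that compounds grows your savings exponentially",
]

# Embed each statement, cluster, and count cluster sizes; the biggest
# clusters approximate the facts that the most people share.
vectors = TfidfVectorizer().fit_transform(statements)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(Counter(labels))  # cluster sizes ~ how common each fact is
```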
The “knowledge that pays the most rent” is either the most common fact or algorithm known by all humans, or the most common one that separates successful humans from unsuccessful ones (if there is a measurable difference).
Then an education system adds the most value by teaching those facts or algorithms in order from greatest value to least, or by reordering them into complexity tiers and, within each tier, teaching the highest-value elements first until the time allotted for education runs out.
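As a minimal sketch of that ordering rule (every value, tier, and hour count below is made up):

```python
# Each entry: (topic, estimated value, complexity tier, hours to teach).
topics = [
    ("reading",    12.0, 1, 80),
    ("arithmetic", 10.0, 1, 50),
    ("civics",      3.0, 2, 30),
    ("latin",       0.5, 3, 60),
]

def build_curriculum(topics, budget_hours):
    plan = []
    # Walk the complexity tiers in order; within a tier, take the
    # highest-value topics first, until the time budget is spent.
    for name, value, tier, hours in sorted(topics, key=lambda t: (t[2], -t[1])):
        if hours <= budget_hours:
            plan.append(name)
            budget_hours -= hours
    return plan

print(build_curriculum(topics, 200))  # ['reading', 'arithmetic', 'civics']
```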
If civics has any value at all, this algorithm would find it. (Though as described it is subject-agnostic.)
This is how you get Latin courses.
Because higher-class humans know it, and so it would “differentiate” them from less successful humans? Even though it is likely not causative? Yeah, you would need some method to detect causation.
It’s worse than that. If everyone is accurately judging which information is valuable and studying things in descending order of priority, then what distinguishes successful and unsuccessful people is that the successful ones got further down the list. So if you compare the two groups, the differences will be exactly the things that people explicitly judged to be low priority. I.e., Latin classes.
Again, this requires Latin classes to pay rent: that is, for there to be a measurable difference in personal success that isn’t attributable to inherited opportunities and resources, and that isn’t found in people who skipped Latin but did everything else.
I think specifically they’re getting at that the “steady state” isn’t stable. As soon as everyone uses the algorithm uniformly, it falls apart in a few generations, tops. You’d have to never stop A/B testing the importance of various subjects; your control group for “do/don’t teach this subject” could never shrink all the way to zero, for roughly the same reason that Bayesian probability updates don’t work on probabilities 0 and 1.
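A toy illustration of that last point (the numbers are arbitrary): a Bayesian update multiplies the prior odds, so a prior of exactly 0 or 1 is frozen no matter how strong the evidence, just as a control group of size zero can never tell you the subject mattered after all.

```python
def bayes_update(prior, likelihood_ratio):
    # Convert to odds, multiply by the evidence, convert back.
    # (Undefined at prior == 1 for the same reason it's frozen at 0.)
    odds = prior / (1.0 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

print(bayes_update(0.5, 10))  # ~0.909: a live prior moves on strong evidence
print(bayes_update(0.0, 10))  # 0.0: "this subject is useless" at p=0 never updates
```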
It sounds like you’re counting on natural human variation to temper that, but to the extent the algorithm actually worked with large effect size, it’s not clear that’d be sufficient. Undeniably good ideas do have a way of eventually getting fixed in a population.
It’s just a thought experiment; it’s improbable it will ever come up. Once we have a way to create lists of all the things humans should know in order to be the best possible humans, well...
An AI system (in the 20-100 years from now when we can do this) could probably consume the same list. And ‘think with’ massive chunks of digital machinery that are very close to error-free, don’t need to sleep, don’t age, don’t have agendas that aren’t inherent in their utility function, and run at 4-5 GHz instead of roughly 1 kHz. And learn in parallel from thousands or millions of instances of themselves. And their algorithms can be structured so that they objectively “know if they know” and “know if they don’t know” what to do to achieve the desired outcome (so they don’t act at all if they don’t know).
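One way to picture the “know if they know” property (everything here, including predict_with_confidence, is a hypothetical stand-in, not a real API): gate every action on the system’s own calibrated confidence, and abstain below a threshold.

```python
CONFIDENCE_THRESHOLD = 0.95  # arbitrary; would be tuned per task

def maybe_act(model, observation):
    # Hypothetical interface: the model reports an action plus a
    # calibrated probability that the action achieves the outcome.
    action, confidence = model.predict_with_confidence(observation)
    if confidence >= CONFIDENCE_THRESHOLD:
        return action
    return None  # abstain: the system knows it doesn't know
```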
Anyways, with all these advantages, your AI software would have to be almost infinitely stupid not to beat humans at any task where it is possible to score whether or not it has succeeded.
(‘Scoring’ generally requires a way to simulate the task, both the actor and the environment, and to give the machine many tries to find a way to get a good score. So: essentially any task on Earth where it is possible for a computer to accurately and mostly completely determine whether the task criterion was met. All tasks that involve manipulating some physical object with reasonably modelable properties fall into this class.)
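A bare-bones sketch of that scoring setup (the simulator and the target value are invented): a machine-checkable success criterion, a simulated environment, and many tries.

```python
import random

TARGET = 0.73  # stand-in for "the task criterion"

def simulate(action):
    # Stand-in simulator: did this action meet the criterion?
    return abs(action - TARGET) < 0.01

best = None
for attempt in range(1_000_000):  # the machine gets as many tries as it needs
    candidate = random.random()
    if simulate(candidate):       # the score is objective and automatic
        best = candidate
        break
print(best, "after", attempt + 1, "tries")
```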
The tasks that don’t fit are ones with humans as either the direct target or as judges of aspects only humans can perceive. That is: art, politics, teaching, counseling, cleaning tasks where the thing being cleaned is something a robot can’t perceive (such as bad smells), landscape design, architecture for appearance (but not for structure, electrical, or plumbing), and so on.
You may want a further filter: look specifically for facts/algorithms that people know because they received explicit instruction or training (or some measure of knowing them better and more deeply because of explicit instruction).
Otherwise you end up duplicating things that people were already learning informally, potentially taking those things into the “ownership” of formal teaching and convincing people that you need to be taught them in a classroom for it to count.
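A hedged sketch of that filter (the person records and the 2x threshold are hypothetical): keep only the items that the formally instructed know at a much higher rate than the informally exposed.

```python
def needs_classroom(item, people, min_ratio=2.0):
    # `people` is a hypothetical record: each has a set of things they
    # know and a set of things they were explicitly taught.
    instructed = [p for p in people if item in p["taught"]]
    informal = [p for p in people if item not in p["taught"]]

    def know_rate(group):
        return sum(item in p["knows"] for p in group) / max(len(group), 1)

    # Keep the item only if explicit instruction clearly moves the needle.
    return know_rate(instructed) >= min_ratio * max(know_rate(informal), 1e-9)
```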