It’s funny that this has recently been shown in a paper. I’ve been thinking a lot about this phenomenon regarding fields with little to no capacity for testable predictions, like history.
I got very into history over the last few years, and found there was a significant advantage to being unknowledgeable that was not available to the knowledgeable, and it was exactly what this paper is talking about.
By not knowing anything, I could entertain multiple bizarre ideas without immediately thinking “but no, that doesn’t make sense because of X.” And then, each of those ideas becomes in effect its own testable prediction. If there’s something to it, as I learn more about the topic I’m going to see significantly more samples of indications it could be true and few convincing indications to the contrary. But if it probably isn’t accurate, I’ll see few supporting samples and likely a number of counterexamples.
You kind of get to throw everything at the wall and see what sticks over time.
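To make that informal process concrete, here is a minimal sketch of what “throwing everything at the wall” looks like as rough Bayesian bookkeeping; the hypotheses and likelihood ratios are invented purely for illustration:

```python
import math

def update_log_odds(log_odds, likelihood_ratio):
    """Add the log of the likelihood ratio (evidence for / evidence against)."""
    return log_odds + math.log(likelihood_ratio)

# Two bizarre ideas, both starting at even odds (log-odds 0).
hypotheses = {"idea_A": 0.0, "idea_B": 0.0}

# Each new bit of reading supplies a rough likelihood ratio per idea:
# >1 means the observation fits the idea, <1 means it cuts against it.
observations = [
    {"idea_A": 3.0, "idea_B": 0.5},
    {"idea_A": 2.0, "idea_B": 0.8},
    {"idea_A": 4.0, "idea_B": 0.4},
]

for obs in observations:
    for name, lr in obs.items():
        hypotheses[name] = update_log_odds(hypotheses[name], lr)

for name, log_odds in hypotheses.items():
    prob = 1 / (1 + math.exp(-log_odds))  # convert log-odds back to a probability
    print(f"{name}: posterior ≈ {prob:.2f}")
```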
In particular, I found it especially powerful for identifying clustering trends in emerging cross-discipline research that is testable, such as archaeological finds and DNA results from just the past decade, which, despite being relevant to the field of textual history, is still largely ignored in the face of a consensus built on conviction.
It reminds me a lot of science historian John Heilbron’s quote, “The myth you slay today may contain a truth you need tomorrow.”
If you haven’t had the chance to slay any myths, you also haven’t preemptively killed off any truths along with them.
One of the interesting things about AI minds (such as LLMs) is that in theory, you can turn many topics into testable science while avoiding the ‘problem of old evidence’, because you can now construct artificial minds and mold them like putty. They know what you want them to know, and so you can see what they would predict in the absence of knowledge, or you can install in them false beliefs to test out counterfactual intellectual histories, or you can expose them to real evidence in different orders to measure biases or path dependency in reasoning.
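As a sketch of the evidence-ordering experiment (the helper callables `fresh_model`, `finetune_on`, and `predict` are hypothetical stand-ins for an actual training stack, not any real API):

```python
import random

def order_sensitivity(evidence_docs, question, fresh_model, finetune_on, predict, n_orders=10):
    """Train copies of the same base model on permutations of the same
    evidence, then measure how much their answers diverge."""
    answers = []
    for _ in range(n_orders):
        order = random.sample(evidence_docs, len(evidence_docs))  # a random permutation
        model = fresh_model()                 # identical starting weights each time
        for doc in order:
            model = finetune_on(model, doc)   # sequential exposure, one document at a time
        answers.append(predict(model, question))
    # Path independence would mean every permutation yields the same answer;
    # the spread across `answers` is the measured path dependency.
    return answers
```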
With humans, you can’t do that because they are so uncontrolled: even if someone says they didn’t know about a crucial piece of evidence X, there is no way for them to prove that, and they may be honestly mistaken and have already read about X and forgotten it (but humans never really forget so X has already changed their “priors”, leading to double-counting), or there is leakage. And you can’t get people to really believe things at the drop of a hat, so you can’t make people imagine, “suppose Napoleon had won Waterloo, how do you predict history would have changed?” because no matter how you try to participate in the spirit of the exercise, you always know that Napoleon lost and you have various opinions on that contaminating your retrodictions, and even if you have never read a single book or paper on Napoleon, you are still contaminated by expressions like “his Waterloo” (‘Hm, the general in this imaginary story is going to fight at someplace called Waterloo? Bad vibes. I think he’s gonna lose.’)
But with an LLM, say, you could simply train it on all timestamped texts up to Waterloo, like all surviving newspapers, and then simply have one version generate a bunch of texts about how ‘Napoleon won Waterloo’, train the other version on these definitely-totally-real French newspaper reports about his stunning victory over the monarchist invaders, and then ask it to make forecasts about Europe.
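A rough sketch of that two-model protocol might look like the following, where `train_lm`, `generate`, `finetune`, and `forecast` are hypothetical placeholders rather than real library calls:

```python
def waterloo_counterfactual(train_lm, generate, finetune, forecast,
                            pre_battle_corpus, questions, n_fake_reports=10_000):
    """Sketch only: compare forecasts from a model that knows nothing past
    the battle against a copy fed a fabricated 'Napoleon won' record."""
    # 1. Base model sees only texts timestamped before the battle.
    base = train_lm(pre_battle_corpus)

    # 2. One copy fabricates the counterfactual record.
    fake_reports = generate(
        base,
        prompt=("French newspaper report on Napoleon's stunning victory at "
                "Waterloo over the monarchist invaders"),
        n_samples=n_fake_reports,
    )

    # 3. A second copy is fine-tuned on the fabricated reports as if they were real news.
    counterfactual = finetune(base, fake_reports)

    # 4. Both versions answer the same forecasting questions; the divergence
    #    estimates the effect of the battle's outcome on contemporary expectations.
    return {q: (forecast(base, q), forecast(counterfactual, q)) for q in questions}
```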
Similarly, you can do ‘deep exploration’ of claims that human researchers struggle to take seriously. It is a common trope in stories of breakthroughs, particularly in math, that someone got stuck for a long time proving X is true and one day decides on a whim to try to instead prove X is false and does so in hours; this would never happen with LLMs, because you would simply have a search process which tries both equally. This can take an extreme form for really difficult outstanding problems: if a problem like the continuum hypothesis defies all efforts, you could spin up 1000 von Neumann AGIs which have been brainwashed into believing it is false, and a parallel effort by 1000 brainwashed into believing it is as true as 2+2=4, and let them pursue their research agendas for subjective centuries, and then bring them together to see what important new results they find and how they tear apart the hated enemies’ work, for seeding the next iteration.
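A minimal sketch of the “try both directions equally” search, with `attempt_proof` as a hypothetical prover interface that returns a proof object or None:

```python
from concurrent.futures import ThreadPoolExecutor

def search_both_ways(attempt_proof, statement, budget):
    """Spend equal effort trying to prove the statement and its negation,
    instead of anchoring on one direction; return whatever proofs turn up."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        pro = pool.submit(attempt_proof, statement, budget)
        con = pool.submit(attempt_proof, f"not ({statement})", budget)
        results = {"true": pro.result(), "false": con.result()}
    return {side: proof for side, proof in results.items() if proof is not None}
```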
(These are the sorts of experiments which are why one might wind up running tons of ‘ancestor simulations’… There’s many more reasons to be simulating past minds than simply very fancy versions of playing The Sims. Perhaps we are now just distant LLM personae being tested about reasoning about the Singularity in one particular scenario involving deep learning counterfactuals, where DL worked, although in the real reality it was Bayesian program synthesis & search.)
Beautifully illustrated and amusingly put, sir!
A variant of what you are saying is that AI may once and for all allow us to calculate the true counterfactual Shapley value of scientific contributions.
( re: ancestor simulations
I think you are onto something here. Compare the Q hypothesis:
https://twitter.com/dalcy_me/status/1780571900957339771
see also speculations about Zhuangzi hypothesis here )
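A toy illustration of the counterfactual-Shapley idea above: treat each researcher as a player, pick some (made-up) value function over coalitions of contributions, and exact Shapley values fall out of averaging marginal contributions over orderings. Everything here is invented for illustration:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values by averaging each player's marginal contribution
    over every ordering (fine for a handful of players)."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            totals[p] += value(coalition) - before
    return {p: t / len(orderings) for p, t in totals.items()}

# Invented example: the "discovery" needs theorist A plus at least one of the
# experimentalists B or C; the value function is purely illustrative.
def discovery_value(coalition):
    return 1.0 if "A" in coalition and ({"B", "C"} & coalition) else 0.0

print(shapley_values(["A", "B", "C"], discovery_value))
# A gets 2/3 of the credit, B and C get 1/6 each under this toy value function.
```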
Yup. Who knows but we are all part of a giant leave-one-out cross-validation computing counterfactual credit assignment on human history? Schmidhuber-em will be crushed by the results.
While I agree that the potential for AI (we probably need a better term than LLMs or transformers, as multimodal models with evolving architectures grow beyond those terms) to explore less testable topics as more testable ones is quite high, I’m not sure the air-gapping of information can be as clean as you might hope.
Does the AI generating the stories of Napoleon’s victory know about the historical reality of Waterloo? Is it using something like SynthID, such that the other AI might inadvertently pick up on a pattern across the stories of victory that distinguishes them from the genuine texts preceding them?
You end up with a turtles-all-the-way-down scenario in trying to control for information leakage, hoping to reach a threshold at which it no longer has an impact on the result; but given that we’re probably already seriously underestimating the degree to which correlations are mapped even in today’s models, I don’t have high hopes for tomorrow’s.
I think the greatest impact on fields like history will come from the property by which truth clusters across associated samples whereas fictions form counterfactual clusters. An AI mind that is not inhibited by specialization blindness or the rule of seven plus or minus two, and that is better trained at correcting for analytical biases, may be able to see patterns in the data, particularly cross-domain, that have eluded human academics to date (this has been my personal research interest in the area, and it does seem like there’s significant room for improvement).
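A toy sketch of that “truth clusters, fictions contradict” intuition, with invented data and a deliberately crude consistency measure:

```python
from itertools import combinations

# Each (invented) source asserts values for some claims; silence means the
# source simply does not mention the claim.
sources = {
    "chronicle":   {"battle_year": 1066, "victor": "William"},
    "dna_study":   {"victor": "William"},
    "late_legend": {"battle_year": 1067, "victor": "Harold"},
}

def claim_consistency(sources):
    """For each claim, the fraction of source pairs (that both mention it) which agree."""
    scores = {}
    claims = {c for assertions in sources.values() for c in assertions}
    for claim in claims:
        reporters = [s for s, asserted in sources.items() if claim in asserted]
        pairs = list(combinations(reporters, 2))
        if not pairs:
            scores[claim] = None  # only one source mentions it: no cross-check possible
            continue
        agree = sum(sources[a][claim] == sources[b][claim] for a, b in pairs)
        scores[claim] = agree / len(pairs)
    return scores

print(claim_consistency(sources))
# e.g. {'battle_year': 0.0, 'victor': 0.33...}: the shared 'William' claim gets
# some cross-source support, while the year is an outright contradiction.
```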
And yes, we certainly could be. If you’re a fan of cosmology at all, I’ve been following Neil Turok’s CPT-symmetric universe theory closely, which started with the baryon asymmetry problem and has tackled a number of the open cosmology questions since. That, paired with a QM interpretation like Everett’s, starts to look as though the symmetric universe is our reference and the MWI branches are variations of its modeling around quantization uncertainties.
(I’ve found myself thinking often lately about what kind of simulation, and at what scale, might be able to be run on a real computing neural network, given that our universe emulates a mathematically real universe at cosmic scales and, pre-interaction, at micro scales.)