Naive question: can you predict something without simulating it?
See “good regulator theorem,” and various LW discussion (esp. John Wentworth trying to fix it). For practical purposes, yes, you can predict things without simulating them. The more of the subject's detail your prediction has to capture, though, the closer to an isomorphism with a simulation you have to contain.
But when you say Simulator, with caps, people will generally take you to be talking about janus’ Simulators post, which is not about the AI predicting people by simulating them in detail, but is instead about the AI learning dynamics of text (analogous to how the laws of physics are dynamics of the state of the world), and predicting text by stepping forward these dynamics.
What you are probably asking is “can you predict something without simulating it faithfully?” The answer is yes and no and worse than no.
A generic sequence of symbols is not losslessly compressible. Lossy compression is relative to the set of salient features one wants to predict. For example, white noise is unpredictable if you want every point, but very predictable if you want its spectrum to a reasonable accuracy. There are special sequences masquerading as generic, such as pseudorandom number generators, which can be losslessly “predicted.” Whether it counts as a “simulation” depends on the definition, I guess. There are also sequences whose end state can be predicted without having to calculate every intermediate state. This probably unambiguously counts as “predicting without simulating”.
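A minimal sketch of the pseudorandom case, in Python (the generator and the seed 42 are arbitrary choices for illustration): a PRNG's output stream looks like a generic sequence, but an observer who knows the algorithm and the seed can "predict" it losslessly.

```python
import random

# The process producing the sequence, and the observer's copy of it.
# Whether the observer is "predicting" or "simulating" here is exactly
# the definitional question raised above.
source = random.Random(42)
predictor = random.Random(42)

stream = [source.randrange(256) for _ in range(10)]
guesses = [predictor.randrange(256) for _ in range(10)]
assert guesses == stream  # every symbol anticipated exactly
```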
Again, most finite sequences (i.e. numbers) are not like that. They cannot be predicted or even simulated without knowing the whole sequence first. That’s the “worse than no” part.
Could you give an example of this?
say, f(n) = exp(-n): the value at any n, and the limit (0), can be written down directly, without stepping through the earlier terms.
Thanks!
The Bailey-Borwein-Plouffe formula is a nice one.
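To make that concrete, here is a sketch of the BBP digit-extraction trick in Python: it returns the hex digit of π at position n without computing any of the digits before it. (Float precision limits this sketch to modest n.)

```python
def pi_hex_digit(n):
    """Hex digit of pi at position n (0-indexed after the point), via the
    Bailey-Borwein-Plouffe spigot: no earlier digits are ever computed."""
    def series(j, n):
        # Fractional part of sum_k 16**(n-k) / (8k + j).
        s = 0.0
        for k in range(n + 1):
            # Three-argument pow does the modular exponentiation, so the
            # integer part of each term is discarded cheaply.
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        # Tail terms (k > n) are already fractional and shrink fast.
        k = n + 1
        while True:
            term = 16 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            s = (s + term) % 1.0
            k += 1
        return s

    x = (4 * series(1, n) - 2 * series(4, n)
         - series(5, n) - series(6, n)) % 1.0
    return int(16 * x)

# pi = 3.243F6A88... in hex, so position 0 is 2 and position 1 is 4.
assert pi_hex_digit(0) == 2
assert pi_hex_digit(1) == 4
```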
Depends on what it is you are predicting, and what you mean by simulating. I am going to take “simulating” to mean “running a computation comparable to the one that produced the result in the other entity”.
You cannot reliably predict genuinely novel (!), intelligent actions without being intelligent. (If you can reliably solve novel math problems, this means you can do math.) But you can predict the repetition of an intelligent action you have seen before, or something very similar, even if you are not quite intelligent enough to understand why it is so common. This is especially plausible if there is a relatively small range of intelligent responses. (E.g. I can imagine someone accurately predicting whether a government will initiate a COVID lockdown this week, without having done an in-depth analysis of the data that hopefully led to the government's choice, provided they have experienced lockdowns, and the data and statements that preceded them, before.)
You can predict what a person with empathy would say, even if you have no empathy, provided you can still model other minds relatively accurately and have observed people with empathy. Running emotions is a very complex affair, but the range of results is still relatively predictable from the outside based on the input, even if you never run through those internal states. If I’ve seen a roomful of toddlers cry while watching Bambi, and then show them 100 other TV shows with parental deaths, I as a machine will likely be able to predict that they will cry again even if I don’t feel sad myself.