build any computational system which generates a range of actions, predicts the consequences of those actions relative to some ontology and world-model, and then selects among probable consequences using criterion X.
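To make the "simulate and choose" loop concrete, here is a minimal sketch of that naive architecture (the function names and arguments are illustrative placeholders, not from the original): enumerate candidate actions, predict each one's consequence with a world model, score the predictions with criterion X, and take the best.

```python
def choose_action(candidate_actions, world_model, state, criterion_x):
    """Naive 'simulate and choose': pick the action whose predicted
    consequence scores highest under criterion X."""
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        predicted_state = world_model(state, action)   # predict the consequence
        score = criterion_x(predicted_state)           # evaluate it under criterion X
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```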
Nothing mysterious here: this naive approach has an incredibly low payoff per unit of computation, and even if you start with such a system and get it smart enough to make improvements, the first thing it will improve is its own architecture.
If I gave you 10^40 flops, which could probably support a 'superintelligent' mind, your naive approach would still be dumber than a housecat on many tasks. For some world evolutions and utilities, you can invert the 'simulate and choose' problem far more efficiently (think towering exponents better) than the brute-force 'try different actions'; in general you can't. Some functions are much easier to invert than others. A lot easier.
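A toy illustration of the inversion point (the linear world model and numbers are my own assumptions, not from the original): when the world evolution is easy to invert, you can solve for the action that reaches a target outcome directly, instead of burning a million evaluations searching over candidate actions.

```python
def world(state, action):
    return 2.0 * state + 3.0 * action      # toy linear dynamics

def brute_force(state, target, n=10**6):
    # Scan many candidate actions, keep the one whose outcome lands closest to target.
    best, best_err = None, float("inf")
    for i in range(n):
        a = -100.0 + 200.0 * i / n
        err = abs(world(state, a) - target)
        if err < best_err:
            best, best_err = a, err
    return best

def inverse(state, target):
    # Solve target = 2*state + 3*action for action in one step.
    return (target - 2.0 * state) / 3.0

print(brute_force(1.0, 10.0))   # ~2.6667, after a million model evaluations
print(inverse(1.0, 10.0))       # 2.6667, immediately
```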