A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.
Given the lack of the necessary experimental data (the constancy of the speed of light, the mass-independence of free fall), I suspect that a "Bayesian superintelligence" powerful enough to throw oodles of Solomonoff induction at the problem would end up spawning a UFAI more powerful than itself, which would kill it in the process.
To quote EY:
Solomonoff induction, taken literally, would create countably infinitely many sentient beings, trapped inside the computations. All possible computable sentient beings, in fact. Which scarcely seems ethical. So let us be glad this is only a formalism.
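(For reference, the universal prior behind literal Solomonoff induction is, in the standard formulation,

M(x) = \sum_{p \,:\, U(p) \text{ begins with } x} 2^{-|p|},

a sum over every program p, run on a universal prefix machine U, whose output begins with the observed data x. Evaluating it literally means executing all of those programs, which is where "all possible computable sentient beings" come from.)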
What are the odds of them staying inside the computation?
Meta-computations would arise inside, into which some of those beings would upload themselves, and then meta-meta-computations, and so on, at the expense of slowing down.
Can they come here? That depends entirely on whether we have a hardware connection to the machine that runs said process.
Anyway, some of us are already here, whether or not this SI runs.