I think it is more that you can’t build a genuine General Intelligence before you have solved some intractable mathematical problems, such as finding practical algorithms for Solomonoff induction.
Why think that you need to use a Solomonoff Induction (SI) approximation to get AGI? Do you mean to take it so loosely that any ability to do wide-ranging sequence prediction counts as a practical algorithm for SI?
Well, Solomonoff induction is the general principle of Occamian priors in a mathematically simple universe. I would say that “wide-ranging sequence prediction” would mean you had already solved it with some elegant algorithm. I highly doubt something as difficult as AGI can be achieved with hacks alone.
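For concreteness, the standard formalization of that Occamian prior is the Solomonoff mixture (stated here purely for reference):

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

where U is a universal prefix Turing machine, the sum ranges over programs p whose output begins with the observed sequence x, and \ell(p) is the length of p in bits, so shorter (simpler) programs dominate the prediction. The catch is that M is incomputable, which is exactly why “practical algorithms for Solomonoff induction” is such a hard ask.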
But humans can’t reliably do that either, and we get by okay. I mean, it’ll need to be solved at some point, but we know for sure that something at least human-equivalent can exist without solving that particular problem.
Humans can’t reliably do what?
When I say information sequence prediction, I don’t mean some abstract and strange mathematics.
I mean predicting your sensory experiences with the help of your mental world model: when you see a glass get brushed off the table, you expect to see it fall down onto the floor.
You expect exactly that because your prior over your sensory organs includes there being a high correlation between your visual impressions and the state of the external world, and because your prior over the external world predicts things like gravity and the glass being affected by it.
From the inside it seems as if glasses fall down when brushed off the table, but that is the Mind Projection Fallacy. You only ever get information from the external world through your senses, and you only ever affect it through your motor-cortex’s interaction with your bio-kinetic system of muscle, bone and sinew.
Human brains are one hell of a powerful prediction engine.
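To make that concrete, here is a minimal, purely illustrative sketch of prediction as a Bayesian mixture of hypotheses weighted by an Occamian prior. The hypothesis class, description lengths, and probabilities are all invented for the example; this is not anyone’s actual proposal, and real Solomonoff induction is incomputable.

```python
# Toy "Occamian" sequence predictor: a Bayesian mixture over a tiny,
# hand-picked hypothesis class, with prior weight 2^(-description length).
# Purely illustrative; real Solomonoff induction is incomputable.

from typing import Callable, Dict, List, Tuple

# A hypothesis maps the history of bits seen so far to P(next bit = 1).
Hypothesis = Callable[[List[int]], float]

def glass_falls(history: List[int]) -> float:
    return 0.99   # "things keep falling": strongly predicts 1

def glass_floats(history: List[int]) -> float:
    return 0.01   # strongly predicts 0

def alternating(history: List[int]) -> float:
    # Predicts the opposite of the last observed bit (0.5 with no history).
    if not history:
        return 0.5
    return 0.9 if history[-1] == 0 else 0.1

# (hypothesis, made-up description length in bits): shorter gets more prior weight.
HYPOTHESES: List[Tuple[Hypothesis, int]] = [
    (glass_falls, 1),
    (glass_floats, 1),
    (alternating, 3),
]

def predict_next(observations: List[int]) -> Tuple[Dict[str, float], float]:
    """Return the posterior over hypotheses and the mixture's P(next bit = 1)."""
    # Occamian prior: weight proportional to 2^(-description length).
    weights = {h.__name__: 2.0 ** -bits for h, bits in HYPOTHESES}
    funcs = {h.__name__: h for h, _ in HYPOTHESES}

    history: List[int] = []
    for obs in observations:
        # Bayesian update: each hypothesis is rescored by how well it
        # predicted the observation that actually arrived.
        for name, h in funcs.items():
            p_one = h(history)
            weights[name] *= p_one if obs == 1 else 1.0 - p_one
        history.append(obs)

    total = sum(weights.values())
    posterior = {name: w / total for name, w in weights.items()}
    p_next = sum(posterior[name] * funcs[name](history) for name in funcs)
    return posterior, p_next

if __name__ == "__main__":
    # After watching the glass fall four times, the mixture expects it to keep falling.
    posterior, p_next = predict_next([1, 1, 1, 1])
    print(posterior, p_next)
```

The point is only that “expecting the glass to fall” can be cashed out as a mixture over world-models, with simpler models getting more prior weight and the observed history doing the updating.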
So… you just mean that in order to build AI, we’re going to have to solve AI, and it’s hard? I’m not sure the weakened version you’re stating here is useful.
We certainly don’t have to actually, formally solve the SI problem in order to build AI.
I really doubt an AI-like hack would even look like one if you don’t arrive at it by way of maths.
I am saying it is statistically unlikely to get GAI without maths, and a thermodynamic miracle to get FAI without maths. However, my personal intuition is that GAI isn’t as hard as, say, some of the other intractable problems we know of, like whether P = NP, the Riemann Hypothesis, and other famous problems.
Only Uploads offer a true alternative.