When I say information sequence prediction, I don’t mean some abstract and strange mathematics.
I mean predicting your sensory experiences with the help of your mental world model: when you see a glass get brushed off the table, you expect to see it fall off the table and down onto the floor.
You expect this precisely because your prior over your sensory organs includes a high correlation between your visual impressions and the state of the external world, and because your prior over the external world predicts things like gravity, and that the glass is affected by it.
From the inside it seems as if glasses fall down when brushed off the table, but that is the Mind Projection Fallacy. You only ever get information about the external world through your senses, and you only ever affect it through your motor cortex’s interaction with your biokinetic system of muscle, bone, and sinew.
The human brain is one hell of a powerful prediction engine.
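To make that concrete, here is a minimal sketch of the structure being described: two linked priors, one over external-world states and one over what your eyes report given those states, combined into a prediction of the next sensory impression. The state names and all the probabilities are invented purely for illustration.

```python
# Toy Bayesian sensory prediction. States, observations, and all
# probabilities below are invented for illustration only.

# Prior over external-world states, after seeing the glass brushed off.
prior = {"glass_falling": 0.95, "glass_still_on_table": 0.05}

# Likelihood P(visual impression | world state): the "high correlation
# between your visual impressions and the state of the external world".
likelihood = {
    "glass_falling":        {"see_fall": 0.98, "see_no_fall": 0.02},
    "glass_still_on_table": {"see_fall": 0.01, "see_no_fall": 0.99},
}

def predict(obs):
    """Predictive probability of a sensory impression:
    P(obs) = sum over states of P(obs | state) * P(state)."""
    return sum(likelihood[s][obs] * prior[s] for s in prior)

for obs in ("see_fall", "see_no_fall"):
    print(obs, predict(obs))
# You overwhelmingly expect to *see* the fall (~0.93 vs ~0.07) -- not
# because your expectation reaches the world directly, but because of
# these two linked priors.
```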
So… you just mean that in order to build AI, we’re going to have to solve AI, and it’s hard? I’m not sure the weakened version you’re stating here is useful.
We certainly don’t have to actually, formally solve the SI problem in order to build AI.
I really doubt an AI-like hack would even look like one if you don’t arrive at it by way of maths.
I am saying it is statistically unlikely that we get GAI without maths, and a thermodynamic miracle to get FAI without maths. However, my personal intuition is that GAI isn’t as hard as, say, some of the other famously hard open problems, like P = NP or the Riemann Hypothesis.
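For concreteness, the SI problem above is presumably Solomonoff induction: predict the next symbol with a Bayesian mixture over all programs, each weighted by 2^−length. The full mixture is incomputable; what follows is a toy, deliberately non-universal sketch over a tiny hand-picked hypothesis class, where the names, description lengths, and weights are all invented for illustration.

```python
# Toy Solomonoff-flavored sequence predictor: a Bayesian mixture over a
# tiny hand-picked hypothesis class instead of over all programs.

hypotheses = {
    # name: (description length in bits, function giving bit i)
    "all_zeros": (2, lambda i: 0),
    "all_ones":  (2, lambda i: 1),
    "alternate": (3, lambda i: i % 2),
    "period_3":  (5, lambda i: 1 if i % 3 == 0 else 0),
}

def predict_next(bits):
    """P(next bit = 1) under a 2**-length complexity-weighted mixture
    of every hypothesis still consistent with the observed bits."""
    n = len(bits)
    total = p_one = 0.0
    for length, f in hypotheses.values():
        if all(f(i) == b for i, b in enumerate(bits)):
            w = 2.0 ** -length          # simplicity prior
            total += w
            p_one += w * f(n)
        # hypotheses contradicted by the data get posterior weight zero
    return p_one / total if total else 0.5  # total-ignorance fallback

print(predict_next([0]))           # ~0.333: all_zeros and alternate both fit
print(predict_next([0, 1, 0, 1]))  # 0.0: only alternate fits; it predicts 0
```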
Humans can’t reliably do what?
Only uploads offer a true alternative.