SIAI types have withdrawn from reality even further. There are plenty of AI researchers who spend their days building models, analyzing data, and generally solving gritty engineering problems. But the SIAI view conveniently says this is all very dangerous, and that one shouldn’t even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.
I think it is more that you can’t build a genuine General Intelligence before you have solved some intractable mathematical problems, like practical algorithms for Solomonoff induction. The goal stability and human values you would like are just two of the many difficult theoretical problems that are not going to be solved by black-box models, nor by simple ones.
Practical AI researchers do a lot of good work. But remember that a neural network is a statistical classification tool.
(I am aware that neural networks aren’t the bleeding edge, but I can’t call to mind a more recent trend. My point stands: AI is fiendishly hard, and you are not going to be able to build anything concise and efficient by solving gritty engineering problems. This is visible even in ordinary programming, where the mathematically elegant functional languages are gaining on the mainstream ‘gritty engineering’ imperative languages.)
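To make the “statistical classification tool” remark above concrete, here is a minimal sketch (my own illustration, not from the original comment): a tiny feed-forward network fit by gradient descent on a toy dataset, i.e. a parameterized decision rule tuned to minimize a statistical loss.

```python
# A neural network reduced to its essence: a statistical classifier fit by
# gradient descent on a cross-entropy loss.  Toy XOR data, numpy only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)            # XOR labels

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)      # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)      # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = np.tanh(X @ W1 + b1)                        # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()                # P(class = 1 | x)
    # Backpropagate the cross-entropy loss.
    d_out = (p - y)[:, None] / len(y)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 1.0 * grad                          # plain gradient descent

print(np.round(p, 2))   # approaches [0, 1, 1, 0]: a learned statistical decision rule
```

Nothing in it looks like a step toward general intelligence; it is curve-fitting, which is the point being made.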
Why think that you need to use a Solomonoff Induction (SI) approximation to get AGI? Do you mean to take it so loosely that any ability to do wide-ranging sequence prediction counts as a practical algorithm for SI?
Well, Solomonoff induction is the general principle of Occamian priors in a mathematically simple universe. I would say that “wide-ranging sequence prediction” would mean you had already solved it with some elegant algorithm. I highly doubt something as difficult as AGI can be achieved with hacks alone.
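For concreteness, the standard textbook statement of that Occamian prior (my addition; the thread itself never writes it out) is Solomonoff’s universal prior, which weights each hypothesis by the length of the programs that produce it:

```latex
% Universal (Solomonoff) prior over finite binary strings x, with respect to a
% universal prefix machine U; the sum runs over programs p whose output begins with x.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
% Sequence prediction: \Pr(\text{next bit} = 1 \mid x) = M(x1) / M(x).
% M is only lower semicomputable, which is why "practical algorithms" are the hard part.
```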
But humans can’t reliably do that either, and we get by okay. I mean, it’ll need to be solved at some point, but we know for sure that something at least human-equivalent can exist without solving that particular problem.
Humans can’t reliably do what?
When I say information sequence prediction, I don’t mean some abstract and strange mathematics.
I mean predicting your sensory experiences with the help of your mental world model: when you see a glass get brushed off the table, you expect to see the glass fall and land on the floor.
You expect exactly that because your prior over your sensory organs includes a high correlation between your visual impressions and the state of the external world, and because your prior over the external world predicts things like gravity and the glass being affected by it.
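As a toy illustration of that two-part prior (the numbers and names below are made up for the example, not taken from the comment), here is the glass scenario as a small Bayesian prediction: a prior over world states, a sensor model tying visual impressions to the world, and the resulting prediction of the next sensory experience.

```python
# Toy Bayesian sequence prediction: predict the next visual impression
# ("glass on the floor") from a prior over world states plus a sensor model.
# All probabilities are illustrative.

# Prior over the relevant world state once the glass is brushed off the table.
prior_world = {"glass_falls": 0.98, "glass_floats": 0.02}

# Sensor model: P("I saw it slide off the table" | world state).  Vision is
# assumed to correlate strongly, but not perfectly, with the external world.
p_saw_it_given_world = {"glass_falls": 0.95, "glass_floats": 0.30}

# World model: P(next impression is "glass on the floor" | world state).
p_next_given_world = {"glass_falls": 0.99, "glass_floats": 0.01}

# Bayes' rule: posterior over world states after the first visual impression.
unnorm = {w: prior_world[w] * p_saw_it_given_world[w] for w in prior_world}
z = sum(unnorm.values())
posterior = {w: v / z for w, v in unnorm.items()}

# Posterior predictive for the next sensory experience.
p_see_glass_on_floor = sum(posterior[w] * p_next_given_world[w] for w in posterior)

print(posterior)               # overwhelmingly "glass_falls"
print(p_see_glass_on_floor)    # ~0.98: you expect to see it hit the floor
```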
From the inside it seems as if glasses fall down when brushed off the table, but that is the Mind Projection Fallacy. You only ever get information from the external world through your senses, and you only ever affect it through your motor-cortex’s interaction with your bio-kinetic system of muscle, bone and sinew.
The human brain is one hell of a powerful prediction engine.
So… you just mean that in order to build AI, we’re going to have to solve AI, and it’s hard? I’m not sure the weakened version you’re stating here is useful.
We certainly don’t have to actually, formally solve the SI problem in order to build AI.
I really doubt an AI-like hack would even look like one if you don’t arrive at it by way of maths.
I am saying it is statistically unlikely that you get AGI without maths, and a thermodynamic miracle to get FAI without maths. However, my personal intuition is that AGI isn’t as hard as, say, some of the other intractable problems we know of, like P vs. NP, the Riemann Hypothesis, and other famous problems.
Only Uploads offer a true alternative.