So if I have a simple AGI algorithm, and I can predict where it will move to and understand the final state it will reach, I am probably fine, as long as I can be sure of some high-level properties of the plan, e.g. that the plan does not involve taking over the world. That seems like a property you might be able to predict of a plan, because taking over the world would make the plan so much longer than just doing the obvious thing. This isn't easy, of course, but I don't think having a more complex system would help with it. Having a simple system makes it simpler to analyze in all regards, all else equal (assuming you don't make it short by writing a code-golf program; you still want to follow good design practices and lay out the program in the most obviously understandable way).
As a side note before I get into why I think the Q-gap is probably wrong: that I can't predict whether it will rain tomorrow even if I have a perfect model of the low-level dynamics of the universe has more to do with how much compute I have available. I might well be able to predict whether it will rain tomorrow if I knew the initial conditions of the universe and had some very large but finite amount of compute, assuming the universe is not infinite.
I am not sure the Q-gap makes sense. I can have a 2D double pendulum. This is very easy to describe and hard to predict; it is already not analytically solvable for 2 joints (according to Google). I can make a chaotic system more complex, and then it becomes a bit harder to predict, but not really by much.
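To make the double-pendulum point concrete, here is a minimal sketch in plain Python (the standard textbook equations of motion for a planar double pendulum, integrated with RK4; the masses, lengths, step size, and initial angles are just illustrative choices of mine): the whole system fits in a screenful of code, yet two runs whose initial angles differ by one part in a billion soon disagree about where the pendulum is.

```python
# Minimal sketch: a very short program that is nonetheless hard to predict.
# Two trajectories starting 1e-9 radians apart are integrated side by side.
import math

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0  # illustrative parameters

def derivs(state):
    """Standard equations of motion for a planar double pendulum."""
    t1, w1, t2, w2 = state
    d = t1 - t2
    den = 2 * M1 + M2 - M2 * math.cos(2 * d)
    a1 = (-G * (2 * M1 + M2) * math.sin(t1)
          - M2 * G * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * M2 * (w2 * w2 * L2 + w1 * w1 * L1 * math.cos(d))) / (L1 * den)
    a2 = (2 * math.sin(d) * (w1 * w1 * L1 * (M1 + M2)
                             + G * (M1 + M2) * math.cos(t1)
                             + w2 * w2 * L2 * M2 * math.cos(d))) / (L2 * den)
    return (w1, a1, w2, a2)

def rk4_step(state, dt):
    """One classical Runge-Kutta (RK4) integration step."""
    k1 = derivs(state)
    k2 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = derivs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + e)
                 for s, a, b, c, e in zip(state, k1, k2, k3, k4))

# Two initial conditions differing by 1e-9 in the first angle.
a = (math.pi / 2, 0.0, math.pi / 2, 0.0)
b = (math.pi / 2 + 1e-9, 0.0, math.pi / 2, 0.0)

dt = 0.001
for step in range(1, 20001):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if step % 5000 == 0:
        print(f"t = {step * dt:5.1f} s   |delta theta1| = {abs(a[0] - b[0]):.3e}")
```

The printout shows the gap between the two runs growing over time, which is the sense in which a very short description can still be very hard to predict.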
That describing the functioning of complex mechanisms seems harder than saying what they do might be an illusion. We humans have a lot of abstractions in our heads for thinking about the real world. A lot of the things we build mechanisms to do are expressible in these concepts, so they seem simple to us. This is true for most mechanisms we build that produce some observable output.
If we ask “What does this game program running on a computer do?”, we can say something like “It creates the world that I see on the screen.” That is a simple explanation in terms of observed effects. We care about things in the world, and for those things we normally have concepts, so machines that manipulate the world in ways we want have interpretable output.
There is also the factor that we need complex programs for things where we have not figured out a good general solution, which would otherwise be simple. If there is a complex program in the world, it might be complex because its creators have not figured out how to do the thing the right way.
So I guess I am saying that there are two properties of a program: chaoticness and Kolmogorov complexity. Increasing one always makes the program less interpretable if the other stays fixed, assuming we are only considering optimal algorithms and not a bunch of haphazard heuristics that we use because we have not figured out the best algorithm yet.
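As a toy illustration of that two-axis picture (my own example, not anything established above): the two maps below have essentially the same description length, but only one of them is chaotic, and only the chaotic one turns a 1e-12 difference in input into a macroscopic difference in output.

```python
# Toy contrast between the two axes: both maps are one-line programs with
# essentially the same description length, but the logistic map at r = 4 is
# chaotic (it amplifies a tiny perturbation) while the contracting map erases it.
def logistic(x):      # chaotic for r = 4: nearby points separate exponentially
    return 4.0 * x * (1.0 - x)

def contracting(x):   # stable: iterates converge toward a fixed point
    return 0.5 * x * (1.0 - x)

for f, name in [(logistic, "logistic (chaotic)"), (contracting, "contracting (stable)")]:
    x, y = 0.3, 0.3 + 1e-12          # two inputs differing by 1e-12
    for _ in range(60):
        x, y = f(x), f(y)
    print(f"{name:22s} |x - y| after 60 steps = {abs(x - y):.3e}")
```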
That is an interesting analogy.