I share your intuition. Turing already made a conjecture about how much computing power an AGI would need, and his estimate was modest. I think the hardest part was getting to computers at all; AGI is just a matter of making a program that is a bit more dynamic.
I can recommend all of Marvin Minsky's work. The Society of Mind is very accessible and has an online version. In short, the mind is made of smaller sub-pieces; the important aspects are the orchestration and the architecture of these resources. Minsky also has some material on how you put that into programs.
The most concrete thing I know of is Push Singh's thesis, "EM-ONE: An Architecture for Reflective Commonsense Thinking". It is very concrete, with a code implementation, and it implements some of the layers of critics that Minsky described in "The Emotion Machine" as a hypothesis for how common sense could be built.
The thesis was read by Aaron Sloman and Gerald Sussman. Isn't that super cool?
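To make the "layers of critics" idea a bit more concrete, here is a toy sketch of its general shape (my own construction for this comment, not Push Singh's actual EM-ONE code; the classes, critics, and situations are all made up): each critic watches the current situation and, when it matches, suggests a way to think about it, and higher layers criticize what the lower layers are doing.

```python
# Toy sketch of "layers of critics" in the spirit of The Emotion Machine.
# Not EM-ONE's real implementation; every name here is invented for illustration.

class Critic:
    def __init__(self, name, matches, suggest):
        self.name = name
        self.matches = matches   # situation -> bool: does this critic fire?
        self.suggest = suggest   # situation -> a suggested way to think

class Layer:
    def __init__(self, name, critics):
        self.name = name
        self.critics = critics

    def react(self, situation):
        # Collect the suggestions of every critic that matches the situation.
        return [(c.name, c.suggest(situation)) for c in self.critics if c.matches(situation)]

# Example layers in the spirit of Minsky's hierarchy: reactive, deliberative, reflective.
reactive = Layer("reactive", [
    Critic("obstacle", lambda s: "path blocked" in s, lambda s: "go around it"),
])
deliberative = Layer("deliberative", [
    Critic("plan failed", lambda s: "goal unmet" in s, lambda s: "search for another plan"),
])
reflective = Layer("reflective", [
    Critic("stuck in a loop", lambda s: "repeating myself" in s, lambda s: "switch to a different way of thinking"),
])

situation = {"path blocked", "repeating myself"}
for layer in (reactive, deliberative, reflective):
    print(layer.name, layer.react(situation))
```

The individual critics are trivially simple on purpose; what matters in this picture is how the layers are arranged on top of each other.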
It is useful to think through the concepts before programming something. We might mean slightly different things by the word "algorithm"; to me it sounds very low-level, while the important thing is the architecture of a program, not the bricks it is made out of.
I think the problem with the things you mention is that they are just super vague, to the point where you don't even know what the thing you are talking about is. What does it mean that:
Most important of all, perhaps, is making such machines learn from their own experience.
Finally, we’ll get machines that think about themselves and make up theories, good or bad, of how they, themselves might work.
Also, all of this seems to be vague imagining about how AI systems could turn out. I'm actually interested in just making the AI systems, and making them in a very specific way such that they have good alignment properties, not vaguely philosophizing about what could happen. The whole point of writing down algorithms explicitly, which is one non-dumb way to build AGI, is that you can just see what's going on in the algorithm, understand it, and design it so that it thinks in a very particular way.
So it's not "oh yes, these machines will think some stuff for themselves and it will be good or bad." It's more: I make these machines think. How do I make them think? What is the actual algorithm that makes them think? How can I make that algorithm such that it will actually be aligned? I am controlling what they are thinking, I am controlling whether it is good or bad, and I am controlling whether they build a model of themselves. Maybe that is dangerous for alignment purposes in some contexts, and then I would want to design the algorithm so that the system does not build a model of itself.
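As a toy illustration of what I mean by writing the algorithm down explicitly (a sketch I am making up for this comment, not a real AGI design; all names, including the `build_self_model` flag, are hypothetical): when every step is spelled out, whether the system builds a model of itself at all is a visible design decision that you can read, audit, or simply leave out.

```python
# Toy sketch of an explicitly written-down agent loop (illustrative only).
# The point is legibility: every step is a line you can read, and whether the
# system ever builds a self-model is a design choice visible right here,
# not an emergent surprise.

from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    build_self_model: bool = False              # alignment-relevant choice, made explicit
    world_model: dict = field(default_factory=dict)
    self_model: dict = field(default_factory=dict)

    def observe(self, observation: dict) -> None:
        self.world_model.update(observation)

    def update_self_model(self) -> None:
        # Runs only if the designer decided self-modeling is acceptable in this context.
        self.self_model = {"goal": self.goal, "knows": sorted(self.world_model)}

    def plan(self) -> list:
        # A deliberately dumb planner; the point is that you can see exactly what it does.
        return ["act to achieve: " + self.goal]

    def step(self, observation: dict) -> list:
        self.observe(observation)
        if self.build_self_model:
            self.update_self_model()
        return self.plan()

agent = Agent(goal="sort the blocks", build_self_model=False)
print(agent.step({"blocks": ["red", "blue"]}))
print(agent.self_model)  # stays empty: the algorithm was designed not to build one
```

Obviously nothing this small thinks, but it shows the property I care about: the question "will it build a model of itself?" is answered by reading one line of the design, not by speculating about what the machine will decide to do.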
For, at that point, they’ll probably object to being called machines.
I think it’s pretty accurate to say that I am a machine.
(Also, as a meta note: it would be very good, I think, if you did not break the lines the way you did in that big text block, because it makes it pretty annoying to blockquote.)