I believe what DanielFilan is mostly interested in here is the general project of understanding what neural networks “know” or “understand” or “want”.
If he used the concept of a Go-playing AI to inspire discussion along those lines, then OK, I did get that. I guess I'm not sure where the misunderstanding came from, then.
So let me step back and try to approach this in a slightly different manner.
I understand that, overall, Daniel "...is mostly interested in here...the general project of understanding what neural networks "know" or "understand" or "want"," and that he approaches this from a position of concern with existential threats from AGI (a concern of most people on this forum, and one which I share as well).
In this particular post, Daniel put forward a thought experiment about attempting to 'know' what a neural network/AI 'knows' by way of programming a Go-playing AI: the idea being that if you could program a Go-playing AI and knew what it was doing because you programmed it, might that constitute understanding what the AI 'knew'?
Since understanding everything that went into programming the Go-playing AI would be a lot to 'know', it follows that a very efficient Go-playing program would be easier to 'know', as there would be less to 'know' than with a very inefficient program.
Which brings me back to the point of mine that Daniel was responding to:
I suppose this gets back to Daniel's (OP's) desire to program a Go bot in the most efficient manner possible. I think the domain of Go would still be too large for a human to 'know' Go the way even the most efficient Go bot would/will eventually 'know' Go.
I think my point still stands: even an efficient and compact Go-playing AI would be too much for a single person to 'know'. While they may understand the whole program they wrote, that would not allow them to play Go at a professional level.
Because this part of the thread isn't directly concerned with the idea of existential threat from an out-of-control AGI, I'll leave my thoughts on how this relates for a different post.