No offense meant, Daniel; generally “OP” stands for “Original Poster”. I’m uncomfortable using names until I know people on a forum better, and “Mr. Filan” seemed too formal, so I settled on OP, as is the norm on forums.
Therefore, I set this challenge: know everything that the best go bot knows about go.
I guess I’m unsure now of what your post is asking, as I was operating under the understanding that the above quote from your post was its main thrust.
Sorry, Daniel, I really didn’t mean any offense. In fact, I was maybe a bit too eager to jump into an area I have an interest in but don’t really understand at a technical level: while I am pretty familiar with Go, I am much less so with ML or AI. I really appreciated the discussion, even though I am conflicted about AI’s impact on the Go community.
It’s funny, though: I recently had the opportunity to talk briefly with a 9-dan professional about his experience with AI, and I was a bit surprised by his response. I value his opinion very much, and so have tried to change my attitude about Go-playing AI slightly.
I believe what DanielFilan is mostly interested in here is the general project of understanding what neural networks “know” or “understand” or “want”.
(Because one day we may have AIs that are much much smarter than we are, and being much smarter than us may make them much more powerful than us in various senses, and in that case it could be tremendously important that we be able to avoid having them use that power in ways that would be disastrous for us. At present, the most impressive and most human-intelligence-like AI systems are neural networks, so getting a deep understanding of neural networks might turn out to be not just very interesting for its own sake but vital for the survival of the human race.)
This is correct, although I’m specifically interested in the case of go AI because I think it’s important to understand neural networks that ‘plan’, as well as those that merely ‘perceive’ (the latter being the main focus of most interpretability work, with some notable exceptions).
I believe what DanielFilan is mostly interested in here is the general project of understanding what neural networks “know” or “understand” or “want”.
If he used the concept of a Go-playing AI to inspire discussion along those lines, then OK, I did get that. I guess I’m just not sure where the misunderstanding came from.
So let me step back and try to approach this in a slightly different manner.
I understand that what Daniel is mostly interested in here is “the general project of understanding what neural networks ‘know’ or ‘understand’ or ‘want’”, approached from a position of concern about existential threats from AGI (a concern of most people on this forum, one which I share as well).
In this particular post, Daniel put forward a thought experiment about attempting to ‘know’ what a neural network/AI ‘knows’, using the idea of programming a Go-playing AI: if you could program a Go-playing AI and knew what the AI was doing because you programmed it, might that constitute understanding what the AI ‘knew’?
Since understanding everything that went into programming the Go-playing AI would be a lot to ‘know’, it follows that a very efficient Go-playing program would be easier to ‘know’, as there would be less to ‘know’ than with a very inefficient program.
Which brings me back to my point which Daniel was responding to:
I suppose this gets back to Daniel’s (OP’s) desire to program a Go Bot in the most efficient manner possible. I think the domain of Go would still be too large for a human to ‘know’ Go the way even the most efficient Go Bot would/will eventually ‘know’ Go.
I think my point still stands that even an efficient and compact Go-playing AI would be too much for a single person to ‘know’: while they might understand the whole program they wrote, that would not allow them to play Go at a professional level.
Because this part of the thread doesn’t deal directly with the idea of existential threat from an out-of-control AGI, I’ll leave my thoughts on how this relates for a different post.
No offense meant, Daniel; generally “OP” stands for “Original Poster”. I’m uncomfortable using names until I know people on a forum better, and “Mr. Filan” seemed too formal, so I settled on OP, as is the norm on forums.
I guess I’m unsure now of what your post is asking, as I was operating under the understanding that the above quote from your post was its main thrust.
OP is a fine way to refer to me; I was just confused, since I didn’t think my post indicated that my desire was to efficiently program a go bot.