Well, first, you are an expert in the area, someone who has probably put 1000 times more effort into figuring things out, so it's unwise for me to think I can say anything interesting to you about an area you have thought through. I have been on the other side of such a divide in my own area of expertise, and it is easy to spot a dabbler's thought processes and basic errors a mile away. But since you seem to be genuinely asking, I will try to clarify.
At some point a human is going to enter a command or press a button that causes code to start running. That human is going to know that an AI system has been created. (I’m not arguing that all humans will know that an AI system has been created,
Right, those who are informed would know. Those who are not informed may or may not figure it out on their own, and with minimal effort the AI's hand can probably be masked as a natural event. Maybe I misinterpreted your point. Mine was that, just as an E. coli would not recognize an agent, neither would humans if it weren't something we are already primed to recognize.
My other point was indeed not a nitpick; it was that a human-level AI would require a reasonable formalization of the game of human interaction, rather than any kind of new learning mechanism, since those are already good enough. Not an AGI, but a domain AI for a specific human domain that is not obviously a game. Examples might be a news source, an emotional support bot, a science teacher, a poet, an artist…
They nonetheless contain useful information, in a way that an E. coli's may not. See, for example, Inverse Reward Design.
Interesting link, thanks! Right, the information can be useful, even if not truthful, as long as the asker can evaluate the reliability of the reply.
Yup, agreed. All of the “we”s in my original statement (such as “We will know that an AI system has been created”) were meant to refer to the people who created and deployed the AI system, though I now see how that was confusing.