Jiro: Did you read my post? I discuss whether getting an AI to ‘obey verbal instructions’ is a trivial task in the first named section. I also link to section 2 of Yudkowsky’s reply to Holden, which addresses the question of whether ‘talk to it and ask it how it will make people happy’ is generally a safe way to interact with an Unfriendly Oracle.
I also specifically quote an argument you made in section 2 that I think reflects a common mistake in this whole family of misunderstandings of the problem — the conflation of the seed AI with the artificial superintelligence it produces. Do you agree this distinction helps clarify why the problem is one of coding the right values, and not of coding the right factual knowledge or intelligence-relevant capacities?