I don’t disagree with much of anything you’ve said here, by the way.
Remember that I’m writing a book that, for most of its length, will systematically explain why the proposed solutions in the literature won’t work.
The problem is that SIAI is not even engaging in that discussion. Where is the detailed explanation of why these proposed solutions won’t work? I don’t get the impression someone like Yudkowsky has even read these papers, let alone explained why the proposed solutions won’t work. SIAI is just talking a different language than the professional machine ethics community is.
Most of the literature on machine ethics is not that useful, but that’s true of almost any subject. The point of a literature hunt is to find the gems here and there that genuinely contribute to the important project of Friendly AI. Another point is to interact with the existing literature and explain to people why it’s not going to be that easy.
My sentiment about the role of engaging with the existing literature on machine ethics is analogous to what you describe in a recent post on your blog. Particularly this:
Oh God, you think. That’s where the level of discussion is, on this planet.
You either push the boundaries, or fight the good fight. And the good fight is best fought by writing textbooks and opening schools, not by public debates with distinguished shamans. But it’s not entirely fair, since some of machine ethics addresses a reasonable problem of making good-behaving robots, which just happens to have the same surface feature of considering moral valuation of decisions of artificial reasoners, but on closer inspection is mostly unrelated to the problem of FAI.
Sure. One of the hopes of my book is, as stated earlier, to bring people up to where Eliezer Yudkowsky was circa 2004.
Also, I worry that something is being overlooked by the LW / SIAI community because the response to suggestions in the literature has been so quick and dirty. I’m on the prowl for something that’s been missed because nobody has done a thorough literature search and detailed rebuttal. We’ll see what turns up.