The way I’m using these words, my describing “this latest paper as an important first step on an important sub-problem of the Friendly AI problem” is equivalent to Eliezer’s “begin tackling the conceptual challenge of describing a stably self-reproducing decision criterion by inventing a simple formalism and confronting a crisp difficulty.”
Ok. I disagree that the paper is an important first step.
Because Eliezer is making an appeal based on psychological and sociological considerations, spelling out my reasoning requires discussing what sorts of efforts are likely to impact the scientific community, and whether one can expect such research to occur by default. That, in turn, draws on psychology, sociology, and economics, partly as they bear on whether the world’s elites will navigate the creation of AI just fine.
I’ve described a little bit of my reasoning, and will be elaborating on it in detail in future posts.
I look forward to it! Our models of how the scientific community works may be substantially different. To take just one particularly relevant example, consider what the field of machine ethics looks like without the Yudkowskian line.
I agree that Eliezer has substantially altered the field of machine ethics. My view here is very much contingent on the belief that elites will navigate the creation of AI just fine, which, even if true, is highly nonobvious.