I don’t find Eliezer that impressive, for reasons laid out in the article. I argued for animal sentience extensively in the article. Though the main point of the article wasn’t to establish nonphysicalism or animal consciousness, but that Eliezer is very irrational on those subjects.
I don’t know if Eliezer is irrational about animal consciousness. There are a bunch of reasons you can still be deeply skeptical of animal consciousness even if animals have nociceptors (RL agents have nociceptors! They aren’t conscious!), or even if integrated information theory & global workspace theory probably say animals are ‘conscious’. For example, maybe you think consciousness is a verbal phenomenon, having to do with the ability to construct novel recursive grammars. Or maybe you think it’s something to do with the human capacity to self-reflect, perhaps defined as making new mental or physical tools via methods other than brute force or local search.
I don’t think you can show he’s irrational here, because he hasn’t made any arguments whose rationality or irrationality could be evaluated. You can maybe say he should be less confident in his claims, or criticize him for not providing his arguments. The former is well known; the latter is less useful to me.
I find Eliezer impressive, because he founded the rationality community, which IMO is the social movement with by far the best impact-to-community-health ratio ever & has been highly influential on other social movements with similar ratios; knew AI would be a big & dangerous deal before virtually anyone; worked on & popularized that idea; and wrote two books (one nonfiction, the other fanfiction) which changed many people’s lives & society for the better. This is impressive no matter how you slice it. His effect on the world will clearly be felt for a long time to come, if we don’t all die (possibly because we don’t all die, if alignment goes well and turns out to have been a serious worry, which is what my prior says). And that effect will be positive almost for sure.
What does this mean?