That presumes consciousness can only be understood or recognized from the inside. An AI doesn’t have to know what consciousness feels like (or more particularly, what “feels like” even means) in order to recognize it.
True, but it does need to recognize it, and if it is somehow irreversibly committed to computationalism and that is a mistake, it will fail to promote consciousness correctly.
For what it’s worth, I strongly doubt Mitchell’s argument for the “irreversibly committed” step. Even an AI lacking all human-like sensation and feeling might reject computationalism, I suspect, provided that computationalism is in fact false.