I might be mistaken, but it seems like you’re forwarding a theory of consciousness, as opposed to a theory of intelligence.
Two issues with that—first, that’s not necessarily the goal of AI research. Second, you’re evaluating consciousness, or possibly intelligence, from the inside, rather than the outside.
I think consciousness is relevant here because it may be an important component of our preferences. For instance, all else being equal, I would prefer a universe filled with conscious beings to one filled with paper clips. If an AI cannot figure out what consciousness is, then it could have a hard time enacting human preferences.
That presumes consciousness can only be understood or recognized from the inside. An AI doesn’t have to know what consciousness feels like (or more particularly, what “feels like” even means) in order to recognize it.
True, but it does need to recognize it, and if it is somehow irreversibly committed to computationalism and that commitment is a mistake, it will fail to promote consciousness correctly.
For what it’s worth, I strongly doubt Mitchell’s argument for the “irreversibly committed” step. Even an AI lacking all human-like sensation and feeling might reject computationalism, I suspect, provided that it’s false.