Consider a situation where Mary is so dexterous that she can perform fine-grained brain surgery on herself. She could then examine an example of a brain that has seen red and manually copy any relevant differences into her own brain. While she would still never have actually seen red through her eyes, it seems she would know what it is like to see red as well as anyone else.
But in order to make the experience realistic she would have to implant a false memory of having seen red, which is something that an agent (human or AI) that values epistemic rationality would not want to do.
Since you’d know it was a false memory, it doesn’t necessarily seem to be a problem, at least if you really need to know what red is like for some reason.
If you know that it is a false memory then the experience is not completely accurate, though it may still be more accurate than anything human imagination could produce.