Of course, zombie AGI may be possible but simply not reachable through Darwinian selection. Still, the fact that a search process as vast as evolution failed to find it, and instead developed profoundly sophisticated phenomenally bound subjectivity, is (possibly strong) evidence against the proposition that zombie AGI is possible, or at least likely to be stumbled on by accident.
This argument also proves that quantum computers can’t offer useful speedups, because otherwise evolution would’ve found them.
If OI is also true, then smarter-than-human AGIs will likely converge on it as well, since it is within the reach of smart humans, and this will plausibly lead AGIs to adopt sentience in general as their target for valence optimization.
This argument also proves that Genghis Khan couldn’t have happened, because intelligence and power converge on caring about positive valence for all beings.
I think the orthogonality thesis is in good shape, if these are the strongest arguments we could find against it in almost a decade.