It’s detectable because the algorithms, as laid out here, are clean and simple. Make them a bit messier, add a few almost-irrelevant cross connections, and detection becomes a lot harder.
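A rough illustration of what I mean (toy code of my own, not anything from the post): two functions with identical input-output behaviour, where the second one’s structure is obscured by cross connections that contribute nothing.

```python
def clean_agent(observation: int) -> int:
    """Clean, simple algorithm: the structure is obvious on inspection."""
    return observation * 2 + 1


def messy_agent(observation: int) -> int:
    """Same input-output behaviour, obscured by near-irrelevant cross connections."""
    noise = (observation ^ 0x5A) % 7      # irrelevant: only feeds into a no-op
    partial = observation + observation   # observation * 2, split oddly
    correction = noise - noise            # always 0, but looks load-bearing
    return partial + 1 + correction


# Behaviourally indistinguishable, structurally quite different:
assert all(clean_agent(x) == messy_agent(x) for x in range(100))
```

An analyser that only probes behaviour can’t tell these apart; one that inspects the internals has to do real work to see that `noise` and `correction` are decorative.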
In theory, of course, you could run an entire world self-contained inside an algorithm, and algorithmic equivalence would argue that it is therefore irrelevant.
And in practice, what I’m aiming for is to use “human behaviour + brain structure + fMRI outputs” to get more than just “human behaviour”. It might be that those are equivalent in the limit of a super AI that can analyse every counterfactual universe, yet different in practice for real AIs.
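To make the “more than just behaviour” point concrete, here’s a hypothetical sketch (names and setup mine): two candidate algorithms that agree on every output but leave different intermediate traces, so an analyser with access to an fMRI-like side channel can distinguish them even though a behaviour-only analyser cannot.

```python
def candidate_a(x: int) -> tuple[int, list[int]]:
    trace = [x * 2]             # intermediate "activation"
    return trace[0] + 1, trace  # output: 2x + 1


def candidate_b(x: int) -> tuple[int, list[int]]:
    trace = [x + 1]             # different internal structure
    return trace[0] + x, trace  # same output: 2x + 1


inputs = range(10)
# Behaviour is identical across all tested inputs...
assert all(candidate_a(x)[0] == candidate_b(x)[0] for x in inputs)
# ...but the internal traces differ, so "behaviour + internals"
# genuinely carries more information than "behaviour" alone.
assert any(candidate_a(x)[1] != candidate_b(x)[1] for x in inputs)
```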