Well, one practical result we’ve got is that we shouldn’t program AIs to assume (either implicitly or explicitly) that the universe must be computable. See this discussion between Eliezer and me.
Building agents around assumptions whose truth we are not confident of seems like a dubious strategy.
We are fairly confident of the Church-Turing thesis, though: “Today the thesis has near-universal acceptance” - http://en.wikipedia.org/wiki/Church–Turing_thesis