An amusing n=3 survey of mathematics undergrads at Trinity Cambridge:
1) Refused to answer.
2) It depends on how reliable Omega is / but you can't (shouldn't) really quantify ethics anyway / this situation is unreasonable.
3) Obviously two-box; one-boxing is insane.
Respondent 3 said he would program an AI to one-box. When I pointed out that his brain was built of quarks just like the AI's, he responded that in that case free will didn't exist and choice was impossible.