Doing the research necessary to have something more than a vaguely arrived-at probability — that is, a wild guess — would have more dramatic consequences.
Well, obviously so. But that sounds more like a PhD program than a LW comment. My point was that there seems to be a trend, and the trend is “self-reflection allows for self-contradiction and inconsistency”. I imagine the general thrust of a more formal argument would go: suppose there is a “correct” decision theory, and imagine a Turing machine implementing it; show that this machine is itself Turing-complete (i.e., for any arbitrary algorithm there is a decision problem that maps to it); it would then follow that making the machine reflective, able to reason about its own outputs, would require it to solve the halting problem, which is impossible.
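The shape I have in mind is the standard diagonalization, sketched very loosely in Python below. The `decides` oracle and the `contrarian` program are made-up names for the sake of the sketch; this is only meant to show the structure of the argument, not a real construction.

```python
# Illustrative diagonalization, assuming a hypothetical total oracle
# `decides(source, problem)` that returns the action the "correct"
# decision theory prescribes. All names here are invented for the sketch.

def decides(source: str, problem: str) -> str:
    """Hypothetical oracle: returns the 'correct' action for `problem`
    as faced by the program whose source code is `source`."""
    raise NotImplementedError  # no such total, correct oracle can exist

def contrarian(source: str) -> str:
    # A decision problem built out of the theory itself: do the opposite
    # of whatever the oracle predicts this very program will do.
    if decides(source, source) == "cooperate":
        return "defect"
    return "cooperate"

# Feeding `contrarian` its own source is the reflexive step: whatever
# `decides` predicts, `contrarian` does the other thing, so `decides`
# cannot be both total and correct. This is the same structure as the
# halting-problem diagonalization.
```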