A basic true/false test; reverse stupidity is not intelligence, but rationalists do tend to hold fewer false beliefs. Administering the test upon entry to the school would prevent the school from teaching to the test, and the test could be scored across multiple areas, only one of which is a cunningly disguised measure of rationality while the others are red herrings, so that irrationalists have no incentive to lie on the test.
Angela
I used to assume that the probability that heaven and hell existed was non-zero, and I lived much of my teenage years by Pascal's Wager: partly because I was scared of what my parents would say if I stopped believing in God, partly because I had heard miracle stories and had not yet worked out how they had happened, and partly because I could not bear the thought of life being meaningless. Then I realised that if there were a non-zero probability of my having eternal life, the probability of my currently being in this first finite fraction of it would be zero. Since I am currently on Earth, the probability of eternal life must therefore be zero.
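One way to spell the argument out, under the assumption that one's present moment is uniformly sampled from one's whole lifespan $L$:

\[
P(\text{currently within the first } T \text{ years} \mid \text{lifespan } L) = \frac{T}{L} \xrightarrow[\;L \to \infty\;]{} 0,
\]

so the observation of finding oneself within a finite initial segment has zero likelihood under the eternal-life hypothesis, and Bayes' theorem drives its posterior probability to zero whatever the prior.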
The hard problem of consciousness will be solved within the next decade (60%).
Figures like Pythagoras were also credited with performing miracles. Although Mark, the first of the synoptic gospels to be written, is claimed in Christian circles to be an eyewitness account, it is likely that none of the gospels were. Paul was writing earlier than that, but he never met Jesus directly; he only had a vision of Jesus. Paul also never mentions the empty tomb.
There is a paper on both IIT and causal density here:
The amount of consciousness that a neural network $S$ has is given by $\varphi = MI(A^{H_{\max}}; B) + MI(A; B^{H_{\max}})$, where $\{A, B\}$ is the bipartition of $S$ that minimises the right-hand side, $A^{H_{\max}}$ is what $A$ would be if all of its inputs were replaced with maximum-entropy noise generators, $MI(A; B) = H(A) + H(B) - H(AB)$ is the mutual information between $A$ and $B$, and $H(A)$ is the entropy of $A$. 99.9%
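For concreteness, here is a minimal Python sketch of how the two mutual-information terms combine for one candidate bipartition. The joint distributions under noise injection are assumed to be supplied by some model of the network (computing them, and minimising over all bipartitions of $S$, is the hard part and is not shown); the function names and toy probability tables are purely illustrative.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability table p."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(joint):
    """MI(A;B) = H(A) + H(B) - H(AB), where joint[i, j] is the
    probability that A is in state i and B is in state j."""
    h_a = entropy(joint.sum(axis=1))   # marginal entropy H(A)
    h_b = entropy(joint.sum(axis=0))   # marginal entropy H(B)
    h_ab = entropy(joint)              # joint entropy H(AB)
    return h_a + h_b - h_ab

def phi_term(joint_a_noised, joint_b_noised):
    """MI(A^H_max; B) + MI(A; B^H_max) for one bipartition {A, B}.
    Each argument is the joint distribution of (A, B) after that
    part's inputs are replaced by maximum-entropy noise; phi itself
    is this quantity minimised over all bipartitions of S."""
    return mutual_information(joint_a_noised) + mutual_information(joint_b_noised)

# Toy example with two binary units: noising A's inputs makes the parts
# independent (MI = 0), while noising B's inputs leaves them correlated.
joint_a_noised = np.array([[0.25, 0.25],
                           [0.25, 0.25]])
joint_b_noised = np.array([[0.45, 0.05],
                           [0.05, 0.45]])
print(phi_term(joint_a_noised, joint_b_noised))  # ~0.53 bits
```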
Following the reasoning behind the Doomsday Argument, this particular thought is likely to fall somewhere in the middle of the timeline of all thoughts that will ever be experienced. This observation reduces the probability that we will in future create AI that experiences many orders of magnitude more thoughts than those of all humans put together.
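A worked version of the update, under the Self-Sampling Assumption that this thought is drawn uniformly from all $N$ thoughts that will ever occur: the likelihood of finding it among the first $n$ is $n/N$, so between a small-future hypothesis $N_s$ and a big-AI-future hypothesis $N_b$,

\[
\frac{P(N_b \mid \text{rank} \le n)}{P(N_s \mid \text{rank} \le n)}
= \frac{n/N_b}{n/N_s} \cdot \frac{P(N_b)}{P(N_s)}
= \frac{N_s}{N_b} \cdot \frac{P(N_b)}{P(N_s)},
\]

which penalises the big-future hypothesis by the full factor $N_s / N_b$.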
If some means could be found to estimate phi for various species, a variable claimed by this paper to be a measure of “intensity of sentience”, it would allow the relative value of the lives of different animals to be estimated and would help resolve many moral dilemmas. The intensity of suffering caused by a particular action would be expected to be proportional to the intensity of sentience. Mammals and birds (the groups possessing a neocortex or its functional analogue, the parts of the brain where consciousness is believed to arise) can be assumed to experience suffering when doing activities that decrease their evolutionary fitness (responses to natural beauty and the like also shape pleasure and pain and are as yet poorly understood, but they are likely to be less significant in other species anyway, extrapolating from the differences in aesthetics between humans of high and low IQ). For AI, however, it is much harder to determine what makes it happy or whether or not it enjoys dying; for that we will need a simple, generalisable definition of suffering that can apply to all possible AIs, rather than our current concept, which is more of an unrigorous Wittgensteinian family resemblance.
Then why does it also work for sugar water, which does not taste repulsive?