At a guess, “certain organizations beyond Britain’s shores” is most likely to imply “somewhere we don’t have to worry about pesky ‘laws’”.
Doug McGuff, MD, fitness guru and author of the exercise book with the highest citation-to-page ratio of any I’ve seen.
I had a curious skim through this guy’s blog and soon happened upon this interview with Joe Mercola. I get that people sometimes do questionable things they wouldn’t otherwise do for publicity, but this is pretty out there. For those unfamiliar with the good Dr Mercola, he’s second only to Mehmet Oz in damage done to public understanding of the scientific basis of medicine. That’s no flaw in McGuff’s own work, but I’m a little dubious, from an ethical standpoint if nothing else, of a physician who’s willing to associate with antivaxxers in a public professional context. He may have good reasons for this, but it does trigger my quack-heuristic.
Imagine mapping my brain into two interpenetrating networks. For each brain cell, half of it goes to one map and half to the other. For each connection between cells, half of each connection goes to one map and half to the other.
What would happen in this case is that there would be no Manfreds, because (even assuming the physical integrity of the neuron-halves was preserved) you can’t activate a voltage-gated ion channel with half the potential you had before. You can’t reason about the implications of the physical reality of brains while ignoring the physical reality of brains.
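To make the threshold point concrete, here’s a toy sketch in Python (the numbers are illustrative, though in the right ballpark for a typical neuron):

```python
# Voltage-gated channels open when the membrane potential crosses a fixed
# threshold, so halving the input signal doesn't give you half an action
# potential; it gives you nothing at all.
RESTING_MV = -70.0     # typical neuronal resting potential
THRESHOLD_MV = -55.0   # typical threshold for voltage-gated Na+ channels

def fires(depolarization_mv: float) -> bool:
    """True if a depolarization of this size reaches threshold."""
    return RESTING_MV + depolarization_mv >= THRESHOLD_MV

print(fires(20.0))       # True:  -70 + 20 = -50 mV, channels open
print(fires(20.0 / 2))   # False: -70 + 10 = -60 mV, below threshold
```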
Or are you asserting no physical changes to the system, and just defining each neuron to be multiple entities? For the same reason I think the p-zombies argument is incoherent, I’m quite comfortable not assigning any moral weight to epiphenomenal ‘people’.
Can someone post a ROT13ed link? I’m curious.
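(For anyone unfamiliar with the convention: ROT13 rotates each letter thirteen places, so applying it twice round-trips, and Python’s standard library handles it directly. The URL below is a placeholder, not the actual link:)

```python
import codecs

link = "http://example.com/some-link"        # placeholder URL
obscured = codecs.encode(link, "rot_13")
print(obscured)                              # uggc://rknzcyr.pbz/fbzr-yvax
print(codecs.decode(obscured, "rot_13"))     # round-trips back to the original
```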
I’m not totally sure of your argument here; would you be able to clarify why satisficing is superior to a straight maximization given your hypothetical[0]?
Specifically, you argue correctly that human judgement is informed by numerous hidden variables of which we have no awareness, and thus a maximization process executed by us has the potential for error. You also argue that ‘eutopian’/‘good enough’ worlds are likely to be more common than sirens. Given that, how is a judgement with error induced by hidden variables any worse than a judgement made using deliberate randomization (or selecting the first ‘good enough’ world, assuming no unstated special properties of our worldspace-traversal)? Satisficing might be more computationally efficient, but that doesn’t seem to be the argument you’re making.
[0] The ex-nihilo siren worlds rather than the designed ones; an evil AI presumably has knowledge of our decision process and can create perfectly-misaligned worlds.
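For reference, here’s a toy version of the selection problem as I understand it (the numbers and the siren mechanism are entirely my invention, not from the post):

```python
import random

# Most worlds are "honest": our judged score tracks their true value with
# some noise. A rare few are "sirens": low true value, but a judged score
# above anything an honest world can reach.
random.seed(0)

def make_world():
    if random.random() < 0.001:               # sirens are rare...
        return {"true": 0.1, "judged": 2.0}   # ...but look better than anything honest
    true = random.random()
    return {"true": true, "judged": true + random.gauss(0, 0.05)}

worlds = [make_world() for _ in range(100_000)]

# Maximizing judged score all but guarantees picking a siren; satisficing
# at a "good enough" bar almost always lands on an honest world, because
# honest worlds clearing the bar vastly outnumber sirens.
maximized = max(worlds, key=lambda w: w["judged"])
satisficed = next(w for w in worlds if w["judged"] > 0.9)

print("maximizer's pick, true value: ", maximized["true"])   # 0.1: a siren
print("satisficer's pick, true value:", satisficed["true"])  # typically ~0.9
```

Note that the asymmetry here comes from the sirens’ judged scores sitting strictly above everything honest; with merely noisy-but-unbiased judgement the two rules differ much less, which is roughly what I’m asking about.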
Yes, indeed: the set of things we actually have conscious control over doesn’t always match the set we intuitively believe we control. That’s the foundation of (among other things) the difference between System 1 and System 2 in the biases literature. It’s also (as Kaj_Sotala noted) one reason habit is such a powerful influence on human behaviour, and the reason things like drug addiction exist. But how could it be any other way? Brains aren’t made of magical-consciousness-stuff; they’re physical, modular, evolved entities in a species descended from lizards.
I’d be interested in hearing more about the methods you’ve found effective for noticing the semiconscious decisions that you’re making and how you’ve evaluated their effectiveness.
Nitpick: the word is ‘anencephalic’; ‘cephalon’ is head, ‘encephalon’ is brain.
First off, I should note that I’m still not really sure what ‘Bayesianism’ means; I’m interpreting it here as “understanding of conditional probabilities as applied to decision-making”.
No human can apply Bayesian reasoning exactly, quantitatively and unaided in everyday life. Learning how to approximate it well enough to tell a computer how to use it for you is a (moderately large) research area. From what you’ve described, I think you have a decent working qualitative understanding of what it implies for everyday decision-making, and if everyday decision-making is your goal I suspect you’d be better served reading up on common cognitive biases (I heartily recommend /Heuristics and Biases/, eds. Kahneman and Tversky, as a starting point). Learning probability theory in depth is certainly worthwhile, but in terms of practical benefit outside the field I suspect most people would be better off reading some cognitive science, some introductory stats, and most particularly some experimental design.
Wrt your goals, learning probability theory might make you a better programmer (depends what your interests are and where you are on the skill ladder), but it’s almost certainly not the most important thing (if you would like more specific advice on this topic, let me know and I’d be happy to elaborate). I have examples similar to dhoe’s, but the important bits of the troubleshooting process for me are “base rate fallacy” and “construct falsifiable hypotheses and test them before jumping to conclusions”, not any explicit probability calculation.
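To illustrate the ‘base rate fallacy’ bit with a worked example (numbers made up for illustration): suppose a bug’s symptom strongly suggests an exotic cause, like a compiler miscompilation, but exotic causes are rare.

```python
p_rare = 0.01                  # prior: exotic cause (e.g. compiler bug)
p_common = 1 - p_rare          # prior: boring cause (my own code)
p_symptom_given_rare = 0.9     # the symptom is very likely given the exotic cause
p_symptom_given_common = 0.05  # ...and fairly unlikely otherwise

# Bayes' rule: P(rare cause | symptom)
posterior = (p_symptom_given_rare * p_rare) / (
    p_symptom_given_rare * p_rare + p_symptom_given_common * p_common
)
print(f"P(exotic cause | symptom) = {posterior:.2f}")  # ~0.15: still bet on your own code
```

Even a symptom eighteen times likelier under the exotic hypothesis leaves the boring one the better bet, because the prior dominates.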