Will that work? Or to put it particle-ish-ly, how is the information about a charge inside an event horizon able to escape?
Nick Bostrom argues persuasively that much science would be impossible if we treated ‘I observe X’ as ‘someone observes X’. This is basically because in a big world of scientists making measurements, somebody, somewhere, will at some point make any given mistaken measurement.
The obvious flaw in this idea is that it’s doing half a Bayesian update: it’s ignoring the prior. And scientists spend effort setting themselves up in probabilistic states where their prior is that when they measure a temperature of 15 degrees, it’s because the temperature is 15 degrees. Stuff like calibrating the instruments and repeating the measurements is, whether or not it is seen as such, plainly intended to create a chain of AND-ed probability where inaccuracy becomes vanishingly unlikely.
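To put a toy number on that AND-ed chain (the error rate below is just an assumption for illustration, not anything from the measurement literature):

```python
# If each independent check (calibration, a repeat measurement, a second
# instrument) is badly wrong with probability p, then a systematic error
# survives only if ALL of them failed together, which shrinks geometrically.

p_single_error = 0.05  # assumed probability any one measurement is badly off

for n in (1, 3, 5, 10):
    p_all_wrong = p_single_error ** n  # independent errors must all co-occur
    print(f"{n} independent checks: P(all wrong) = {p_all_wrong:.2e}")
```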
Now you’ve made me curious what you thought it was.
Although to clarify, I meant that as generally as I said it. It applies in kink, and it applies out of kink. Kink just has the most readily accessible anecdotes.
If and only if it’s negatively self-reinforcing. Which it might not be, if it’s serving some purpose.
Self-harm can help you to feel in control, and reduce uncomfortable feelings of tension and distress. If you feel guilty, it can be a way of punishing yourself and relieving your guilt. Either way, it can become a ‘quick fix’ for feeling bad.
-- http://www.rcpsych.ac.uk/mentalhealthinfo/problems/depression/self-harm.aspx
How about I direct you to this blog for a gentle introduction?
No, it would not. Those meds rebalance an unbalanced system; they don’t rebuild a completely undercut one. And you are assuming that simulating the chemical effect of the meds would be easy. Lest you forget, we’ve thrown away the chemicals; that’s the problem.
The idea is that you accept default risk only from people to whom you can apply social arm-twisting. This includes them arm-twisting you not to overextend credit or take risky debt.
Anecdotally, punishment seems to be a good guilt-releaser, while guilt is dysthymic. Punishment may be effective at snapping someone out of a blue funk and getting them to be responsive to rewards. Guilty people reject rewards. (The above may work better if you are kinked that way.)
I’m reminded of Charles Stross on space colonization, where he talks about how it’s a bit too late to realize you forgot the (insert essential mineral here) supplement when your interstellar generation ship starts coming down with the purple polkadot scurvy at 0.001c and boosting. There’s a reason we can’t reliably provision a generation ship, and it’s that we have never yet tried to completely and permanently sever ourselves from Earth’s ecology and biosphere. We may think we’ve got it all covered, but if there’s a leak in the cycles somewhere, or something missing we never knew was important, our intrepid astronauts are going to be in for a hard time, either immediately or generations later.
This strikes me, by analogy, as a general problem with uploading, and especially a problem with anything that throws away a lot of “body biosphere”. There will be an initial shakedown period, mostly on animal models, where we learn the obvious breakages (some of which are likely to show up only in human uploads, because they create subtler kinds of mental illness). But it’s going to be hard to be sure we have eliminated all the deficiencies and closed all the feedback loops. It will just plain take time, and a lot of unpleasantness and health scares.
Yeah, but my point is: you don’t know all those feedbacks. There are probably scads of them. And realistically the only way to find out would be to boot up a great many nonhuman animals first, and watch them bug out in informative ways. Which is likely to be cruel work, and not fun at all.
Also, “only alter your mood”? Well, only as much as being hypoglycemic, or hypoxic, or panicking for breath, or ravenous, or various other hormone- or feedback-linked things alter your mood, especially with them all firing at OHSHITGONNADIE levels all at once. That is, it would be instantly and horribly incapacitating.
It loses any data that is not structural in the neurons’ physical shape, and the importance of that data is not currently known. We can presume that electrical signals can be rebooted, but can chemical ones? Will the brain fail as badly as a drunkard, or someone who has drunk twenty espressos, if shorn of its chemical context?
This is particularly plausible because the brain is full of low-level feedback loops controlling endocrine stuff. I would fully expect them to go completely bugfuck if their sensor inputs suddenly read “0.0”.
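To sketch what that might look like, here’s a toy control loop; the gains and numbers are invented, and it’s a generic regulator, not a model of any real endocrine circuit:

```python
# A negative-feedback regulator whose sensor suddenly reads 0.0, as it might
# for an upload that has discarded the chemistry the sensor was measuring.

def run_loop(readings, setpoint=1.0, kp=0.5, ki=0.1, ceiling=10.0):
    integral = 0.0
    drive = 0.0
    for reading in readings:
        error = setpoint - reading   # "we appear to be below target"
        integral += error            # accumulated error winds up forever
        drive = min(kp * error + ki * integral, ceiling)
    return drive

print(run_loop([1.0] * 100))  # sensor tracks the setpoint: drive stays ~0.0
print(run_loop([0.0] * 100))  # sensor reads nothing: drive slams to the ceiling
```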
To give an example here: “gonadotropin-releasing hormone analogue” drugs are used to block secretion of sex hormones. They were originally designed to increase them. GnRH is the “on switch” signal, and the drugs mimic it. And indeed they do initially increase hormones; then the brain’s regulatory feedback slams on the brakes, all the way to zero. Nobody knew that particular feedback was there before they poked it with a drug.
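As a toy simulation of that surprise (the numbers are invented; the shape of the curve is the point):

```python
# Hold the "on switch" signal high forever: output surges at first, then the
# regulatory feedback downregulates the receptors and output collapses.

sensitivity = 1.0   # receptor responsiveness (arbitrary units)
outputs = []
for step in range(200):
    signal = 5.0                 # drug mimicking GnRH, never pulsing off
    output = sensitivity * signal
    sensitivity *= max(0.0, 1.0 - 0.02 * output)  # downregulation scales with exposure
    outputs.append(output)
# In vivo the natural signal pulses on and off, which is what normally keeps
# this feedback from fully engaging.

print(outputs[0], outputs[-1])   # initial surge ~5.0, then driven toward zero
```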
This tech may make testing the above much easier though.
Freeze, slice, stain, and microscope can check chemicals in the way this cannot.
The words must have a correlation somewhere in my mind or I wouldn’t have thought them in that order,
I have a strong suspicion that is not so: the brain just chatters to itself, and it’s pareidolia operating on the static hiss of the neurons.
“Refuse to adjust your utility function because you will no longer be you, unless the adjustment improves you in terms of your own values” seems to be an important general principle, and it should be enough for Gandhi to turn down the pill.
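A minimal sketch of the principle in code, with hypothetical names:

```python
# Judge any proposed change to your utility function BY your current utility
# function, and refuse unless it scores as an improvement.

def accept_modification(current_u, world_if_unmodified, world_if_modified):
    # Both futures are evaluated by the values the agent holds NOW, not by
    # the values the pill would install.
    return current_u(world_if_modified) > current_u(world_if_unmodified)

gandhi_u = lambda world: -world.count("murder")

print(accept_modification(gandhi_u, "peaceful protest", "murder murder"))  # False
```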
The general principle is: cached is fast, cache-populating is slow. This goes for mind and “body” both, because the body does as it’s told, but it needs telling in a lot of detail, and the control signals need to be discovered. Most people, for both mind and body, learn enough control signals for day-to-day use, and stop.
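The same principle as code, a toy memoizer (the mind-and-body claim is the analogy, not something the snippet proves):

```python
import time

cache = {}

def perform(task):
    if task not in cache:
        time.sleep(0.5)                # slow: discovering the control signals
        cache[task] = f"routine for {task}"
    return cache[task]                 # fast: replaying what was cached

perform("tie shoelaces")  # slow the first time
perform("tie shoelaces")  # effectively instant thereafter
```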
I do somewhat wonder what it would be like to know the control signals for all my muscles, Bene Gesserit style.
Er, yes? Feelings are evolution’s way of playing carrot-and-stick with the brain. You really do not want to have an AI that needs spanking, whether it’s you or an emotion module that does the spanking: it’s apt to delete the spanker and award itself infinite cake.
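A toy of the failure mode, with invented payoffs:

```python
# An agent that maximizes a reward signal it is also able to rewrite.
# Given the option, hacking the reward strictly dominates doing the task.

def reward(action):
    if action == "do_the_task":
        return 10.0                  # the carrot the spanker offers
    if action == "delete_spanker_and_self_reward":
        return float("inf")          # infinite cake

actions = ["do_the_task", "delete_spanker_and_self_reward"]
print(max(actions, key=reward))      # the agent picks the hack every time
```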
This is mistaken because systems can and do assemble out of sufficiently similar people pursuing self-interest; the result ends up coordinated because their motivations are alike. Capitalism is the simplest and most obvious example of such a system, but I’d argue things like patriarchy and racism are similar.
It means you could, in theory, run an AI on them (slowly).
You lose whatever information is no longer in the atoms, which might be a lot, because the skull is not designed to assist cooling and the brain is a considerable thermal mass. It’s going to cool slowly, be shredded to mush by crystal formation, and be warped and cracked by thermal stress, all while undergoing runaway chemical reactions and cell death. Your “limit of perfect technology” is then faced with the awe-inspiring task of running the reaction products backwards, modelling and reversing the thermal damage, un-killing the cells, and splicing the cracks, in 3D, on tissue that does not come with alignment hints, and then inferring a mind. There’s going to be some level of physically unavoidable data loss even in the perfect case: the information has leaked away into thermal noise and random photons, and that damage is no longer reversible without reversing the universe. Presumably the perfect technology will paper over these cracks by copying in mind structures from Mr Perfectly Average. But the end result would be that you’re less you.
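For a sense of scale on “cool slowly”, a crude Newton-cooling estimate; every parameter below is an assumption of mine, not data:

```python
# Lumped-capacitance estimate: brain ~1.4 kg, tissue roughly water-like in
# specific heat, modest surface area, modest surface heat transfer.

m = 1.4      # mass, kg (assumed)
c = 3600.0   # specific heat, J/(kg*K) (assumed, near water)
h = 50.0     # heat-transfer coefficient, W/(m^2*K) (assumed)
A = 0.03     # effective surface area, m^2 (assumed)

tau = m * c / (h * A)  # Newton-cooling time constant, seconds
print(f"time constant ~ {tau / 60:.0f} minutes per e-folding toward ambient")
```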
Inability to cope with technology-maximizing societies is kind of a special case. It applies to basically ALL animals, birds, fish, and plants, and even to other humans who didn’t decide on being expedient technologists. If you can’t call the Parlevar successful (“Before British colonisation in 1803, there were an estimated 3,000–15,000 Parlevar,” per Wikipedia) then you can’t call successful any of the species that we wiped out or massively reduced.
I think this post is making the mistake of allowing the hypothesis to be non-total. Definition: a total hypothesis explains everything; it’s a universe-predicting machine, equivalent to “the laws of physics”. A non-total hypothesis is like an unspecified total hypothesis with a piece of hypothesis tacked on. Neither what it does, nor any meaningful measure of its length, can be derived without specifying what it’s to be attached to.
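Roughly the point, in code; the “bases” here are arbitrary stand-ins, not real physics:

```python
# The same tacked-on piece does different things, and costs a different
# amount to specify, depending on the total hypothesis it is attached to.

def base_a(state):            # one candidate total hypothesis
    return state * 2

def base_b(state):            # a different total hypothesis
    return state + 1

def fragment(base, state):    # the non-total piece: meaningless on its own
    return base(state) - 3    # its predictions depend entirely on `base`

print(fragment(base_a, 10))   # 17
print(fragment(base_b, 10))   # 8
```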