I don’t even know any Haskell—I just have a vague idea that a monad is a function that accepts a “state” as part of its input, and returns the same kind of “state” as part of its output. But even so, the punchline was too good to resist making.
Random832
How do you write a function for the output of a state machine?
Monads.
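For what it’s worth, the vague idea described above is essentially the State monad in particular (a monad in general needn’t carry any state). A minimal sketch in Haskell, with illustrative names rather than anything taken from a particular library:

```haskell
-- A "stateful function" takes a state and returns a result together
-- with the next state -- which is also a reasonable way to model the
-- output function of a state machine.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= k = State $ \s ->
    let (a, s') = g s
    in runState (k a) s'

-- A toy state machine: the output depends on a counter carried as state.
step :: State Int String
step = State $ \n -> ("output " ++ show n, n + 1)

-- runState (step >> step >> step) 0  ==  ("output 2", 3)
```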
while they await the outcome of clinical trials and new approaches
http://xkcd.com/989/ seems relevant despite the slightly different subject matter. Clinical trials can’t happen if all the potential subjects are frozen.
Well, don’t forget that it will hit the ground with a force proportional to its weight. You probably wouldn’t want him to have dropped it on your head—it would be a rather more unpleasant experience than having a controller thrown at your head.
General Relativity, actually. You could also look for “gravity as a fictitious force”.
Large CRTs are made of very thick curved glass. I once did hit one hard enough to chip it, which left a hole several millimeters deep and did not appear to affect the structural integrity of the tube. I don’t know about “that durable”, though: if you dropped one from a sufficient height it would surely break. It’s more a question of how much force you (or I) can throw a controller with.
when one inevitably fractures from the force of the blows
Define “inevitably”. I don’t think I could throw a controller hard enough to damage a CRT or a rear projector. These suggest designs for protective covers (for the former, put the TV behind thick curved glass; for the latter, put it behind a durable plastic sheet held in a rigid frame).
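To put rough numbers on both comparisons above (all figures here are my own illustrative assumptions, not measurements): a thrown controller carries far less energy than a dropped CRT, which is why the “dropped on your head” case is so much worse, and why chipping the tube with a throw is already impressive.

```latex
% Illustrative figures only: a ~0.25 kg controller thrown at ~15 m/s,
% versus a ~30 kg CRT dropped from ~2 m.
E_{\text{controller}} = \tfrac{1}{2} m v^2
  = \tfrac{1}{2}\,(0.25\,\mathrm{kg})\,(15\,\mathrm{m/s})^2 \approx 28\,\mathrm{J}
\qquad
E_{\text{CRT}} = m g h
  = (30\,\mathrm{kg})\,(9.8\,\mathrm{m/s^2})\,(2\,\mathrm{m}) \approx 590\,\mathrm{J}
```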
The only phenomenon in all of physics that violates Liouville’s Theorem (has a many-to-one mapping from initial conditions to outcomes).
I don’t know what Liouville’s Theorem is, but this sounds like an objection to not being able to run time backwards.
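For reference, and only paraphrasing the standard statement rather than whatever the quoted comment had in mind: Liouville’s theorem says that Hamiltonian dynamics preserves phase-space volume, i.e. the density of states is constant along trajectories. A genuinely many-to-one map from initial conditions to outcomes would compress that volume, which is the sense in which it would “violate” the theorem; and since volume preservation means distinct initial states never merge, it is indeed closely related to being able to run the dynamics backwards.

```latex
% Liouville's theorem (standard form): the phase-space density rho(q, p, t)
% is conserved along the Hamiltonian flow.
\frac{\mathrm{d}\rho}{\mathrm{d}t}
  = \frac{\partial \rho}{\partial t}
  + \sum_i \left(
      \frac{\partial \rho}{\partial q_i}\,\dot{q}_i
    + \frac{\partial \rho}{\partial p_i}\,\dot{p}_i
    \right)
  = 0
```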
I will note that the AI box experiment’s conditions expressly forbid a secure environment [i.e. one with inspection tools that cannot be manipulated by the AI]:
the results seen by the Gatekeeper shall again be provided by the AI party, which is assumed to be sufficiently advanced to rewrite its own source code, manipulate the appearance of its own thoughts if it wishes, and so on.
“Escape the testing environment” is poorly defined. Some people read it as “deduce the exploitable vulnerabilities in the system, hack into it, run itself with higher privileges, somehow transmit itself to other machines / the internet at large / people’s brains, Snow Crash-style”, and others read it as “convince the people running the test to give it more resources (and maybe infect their brains, Snow Crash-style)”.
The former can be prevented by having a secure (air-gapped?) system; the latter can be prevented by not running tests interactively, and by ignoring the moral issues with terminating (or suspending) what may possibly be an intelligent ‘person’.
The scenario also implicitly assumes that the AI’s ability to improve its own intelligence (and therefore to gain the ability to do either of the above) is unbounded by the resources of the system and comes at no cost in increased processing time.
Quoting a physicist on their opinion about a physics question within their area of expertise would make an excellent non-fallacious argument.
“Science is the belief in the ignorance of experts.”
But since I am running on corrupted hardware, I can’t occupy the epistemic state you want me to imagine.
It occurs to me that many (maybe even most) hypotheticals require you to accept an unreasonable epistemic state. Even something as simple as trusting that Omega is telling the truth [and that his “fair coin” was a quantum random number generator rather than, say, a metal disc that he flipped with a deterministic amount of force, though that part is easier to grant as simple sloppy wording] asks for one.
Not if their probability of cooperation is so high that the expected value of cooperation remains higher than that of defecting. Or if their plays can be predicted, which satisfies your criterion (nothing to do with my previous plays) but not mine.
If someone defects every third time with no deviation, then I should defect whenever they defect. If they defect randomly one time in sixteen, I should always cooperate. (of course, always-cooperate is not more complex than always-defect.)
...I swear, this made sense when I did the numbers earlier today.
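Here is one toy model under which the first claim comes out right; the mirroring assumption and the payoff numbers (T=5, R=3, P=1, S=0) are mine, not anything from the thread. Suppose the other player’s move is correlated with yours: they match your move with probability q and play the opposite otherwise. Then cooperating has the higher expected value exactly when q > 5/7.

```haskell
-- Toy model (my assumptions, not anything stated in the thread): the
-- other player matches my move with probability q and plays the
-- opposite otherwise; conventional PD payoffs T=5, R=3, P=1, S=0.
evCooperate, evDefect :: Double -> Double
evCooperate q = q * 3 + (1 - q) * 0   -- mutual cooperation, or I get exploited
evDefect    q = q * 1 + (1 - q) * 5   -- mutual defection, or I exploit them

-- Cooperation wins exactly when 3q > 5 - 4q, i.e. q > 5/7 (about 0.714).
-- e.g. evCooperate 0.8 = 2.4  >  evDefect 0.8 = 1.8
```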
Specifically, I learned that if you believe suffering is additive in any way, choosing torture is the only answer that makes sense.
Right. The problem was that the people on that side seemed to have a tendency to ridicule the belief that it is not.
Torture v. Specks
The problem with that one is that it comes across as an attempt to define the objection out of existence: it basically demands that you assume that X negative utility spread out across a large number of people really is just as bad as X negative utility concentrated on one person. “Shut up and multiply” only works if you assume that the numbers can be multiplied in that way.
That’s also the only way an interesting discussion can be held about it—if that premise is granted, all you have to do is make the number of specks higher and higher until the numbers balance out.
(And it’s in no way equivalent to the trolley problem, because the trolley problem compares deaths with deaths.)
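To spell out the arithmetic the disagreement turns on (the symbols are just mine): write u for the disutility of a single speck and U for the disutility of the torture. The “shut up and multiply” side assumes disutility aggregates linearly across people, so some finite N always tips the balance; the objection above is exactly that aggregation needn’t work that way.

```latex
% Linear aggregation (the contested premise): N specks cost N * u in total,
% so for any u > 0 the specks eventually outweigh the torture:
N \cdot u_{\text{speck}} > U_{\text{torture}}
\quad\text{whenever}\quad
N > \frac{U_{\text{torture}}}{u_{\text{speck}}}
% If instead the aggregate disutility of specks is bounded (saturating
% below U_torture as N grows), no N ever tips the balance.
```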
liquid nitrogen is not a secure encryption method for brains.
It doesn’t have to be a secure encryption method to be a lossy compression method.
I think depicting ancient philosophers seated on a throne in heaven, alongside the large caption “thou shalt not”, sends a… somewhat mixed message about appeal to authority.
I thought people knew she was a MoR reader.
I took your original post to mean this, and looked for other information about it, and found none.
Future you “will have had more time to analyze” only if present you decides to actually spend that time analyzing.
Has your hypothesis that thought remains possible after the whole brain has been removed, in fact, been tested?
EDIT: I read your post as meaning that the “fact” that thought remains possible after a brain has been removed [to be cryo-frozen, for instance] was evidence against a soul.