Here’s another one. When reading Wikipedia on Chaitin’s constant, I came across an article by Chaitin from 1956 (EDIT: oops, it’s 2006) about the consequences of the constant (and its uncomputability) for the philosophy of math, which seems to me to be completely wrongheaded, but for reasons I can’t put my finger on. It strikes the same chords in me that a lot of inflated talk about Gödel’s Second Incompleteness Theorem strikes. (And indeed, as is obligatory, he mentions that too.) I searched on the title but didn’t find any refutations. I wonder if anyone here has any comments on it.
I may be stretching the openness of the thread a little here, but I have an interesting mechanical engineering hobbyist project, and I have no mechanical aptitude. I figure some people around here might, and this might be interesting to them.
The Avacore CoreControl is a neat little device, based on very simple mechanical principles, that lets you exercise longer and harder than you otherwise could by cooling down your blood directly. It pulls a slight vacuum on your hand and applies ice directly to the palm. The vacuum counteracts the vasoconstriction effect of cold and makes the ice effective.
I’m mainly interested in building one because I play a lot of DDR, but anyone who gets annoyed with how quickly they get hot during exercise could use one.
I called the company, and they sell the device for $3,000 (and they were very rude to me when I suggested making hobbyist plans available), but given the simplicity of the principles, it should be easy to build one using stuff from a hardware store for under $200. I have a post about it on my blog here.
As it was mocking bgrah’s assertion, and bgrah used “unrational”, and in my estimation his meaning was closer to “irrational” than “arational”, I used the former. Perhaps using “unrational” would have been better, though.
Ok, say you enter into a binding agreement forcing yourself to take a sleeping pill tomorrow.
I don’t think any such agreement could be legally binding under current law, which is relevant since we’re talking about rights.
Disliking Pollock is irrational. As is disliking Cage. Or Joyce. Or PEZ.
Hyper operators. You can represent even bigger numbers with Conway chained arrow notation. Eliezer’s 3^^^^3 is a form of hyper operator notation, where ^ is exponentiation, ^^ is tetration, ^^^ is pentation, etc.
If you’ve ever looked into really big numbers, you’ll find info about Ackermann’s function, which is trivially convertible to hyper notation. There are also the Busy Beaver numbers, which grow faster than any computable function.
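For concreteness, here’s a small sketch (mine, not from the original comment) of the up-arrow/hyperoperation recursion in Python: one arrow is exponentiation, two is tetration, three is pentation, and each level is defined by iterating the level below it. Only toy inputs are actually evaluable.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow notation: n=1 is exponentiation (a^b), n=2 is tetration,
    n=3 is pentation, and so on. Each level iterates the level below it."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # standard convention: a ^^...^ 0 = 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(2, 2, 3))  # 2^^3 = 2^(2^2) = 16
print(up_arrow(2, 3, 2))  # 2^^^2 = 2^^2 = 4
# 3^^^^3 would be up_arrow(3, 4, 3) -- don't try to evaluate it.
```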
Umm, that’s not what I meant by “faithful reproductions”, and I have a hard time understanding how you could have misunderstood me. Say you took a photograph using the exact visual input over some 70 square degrees of your visual field, and then compared the photograph to that same view, trying to control for all the relevant variables*. You seem to be saying that the photograph would show the shadows as darker, but I don’t see how that’s possible. I am familiar with the phenomenon, but I’m not sure where I go wrong in my thought experiment.
* photo correctly lit, held so that it subtends 70 square degrees of your visual field, with your head in the same place as the camera was, etc.
Along the same lines, this is why cameras often show objects in shadows as blacked out—because that’s the actual image it’s getting, and the image your own retinas get! It’s just that your brain has cleverly subtracted out the impact of the shadow before presenting it to you.
That doesn’t explain why faithful reproductions of images with shadows don’t prompt the same reinterpretation by your brain.
I am fairly sure, though I haven’t been able to find the link again, that there’s some solid evidence that autolysis isn’t nearly that quick or severe.
Hmm. I can with the Necker cube, but not at all with this one.
For people wanting different recordings of the garbled/non-garbled versions: they’re on the page just above the one Morendil linked to.
On the next sample, I only caught the last few words on the first play (of the garbled version only), and after five plays still got a word wrong. On the third, I only got two words the first time, and additional replays made no difference. On the fourth, I got half after one play, and most after two. On the fifth, I got the entire thing on the first play. (I’m not feeling as clear-headed today as I was the other day, but it didn’t feel like a learning effect.) On some of them, I don’t believe that even with a lot of practice I could ever get it all right, since some garbled words sound more like other plausible words than they do the originals.
Thinking about it more, it’s a bit surprising that I did well. I generally have trouble making out speech in situations where other people don’t have quite as much trouble. I’ll often turn on subtitles in movies, even in my first language/dialect (American English). (In fact, I hate movies where the speech is occasionally muffled and there are no subtitles—two things that tend to go hand in hand with smaller production budgets.) OTOH, I have a good ear in general. I’ve had a lot of musical training, and I’ve worked with sound editing quite a bit.
Well, that was the big controversy over the AI Box experiments, so no need to rehash all that here.
This isn’t actually a case of pareidolia, as the squiggly noises (they call it “sine wave speech”) are in fact derived from the middle recording, using an effect that sounds, to me, most like an extremely low bitrate mp3 encoding. Reading up on how they produce the effect, it is in fact a very similar process to mp3 encoding. (Perhaps inspired by it? I believe most general audio codecs work on very similar basic principles.)
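For anyone curious how the effect might be produced, here’s a rough, self-contained sketch (my own, under loose assumptions, not the actual method from the linked page): real sine wave speech resynthesizes the first few formant tracks as pure tones, whereas this crude stand-in just keeps the strongest spectral peaks in each frame. The filenames are placeholders.

```python
# Crude approximation of sine wave speech: per-frame spectral peak picking and
# sinusoidal resynthesis. (True sine wave speech tracks formants, e.g. via LPC;
# this just keeps the loudest FFT bins, which is only a rough stand-in.)
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("speech.wav")         # placeholder filename
x = x.astype(np.float64)
if x.ndim > 1:
    x = x.mean(axis=1)                       # mix to mono if needed
x /= max(np.max(np.abs(x)), 1e-9)

frame_len, hop, n_partials = 1024, 512, 4
window = np.hanning(frame_len)
freqs = np.fft.rfftfreq(frame_len, d=1.0 / rate)

out = np.zeros(len(x))
phase = np.zeros(n_partials)

for start in range(0, len(x) - frame_len, hop):
    frame = x[start:start + frame_len] * window
    spectrum = np.abs(np.fft.rfft(frame))
    peak_bins = sorted(np.argsort(spectrum)[-n_partials:])   # loudest bins
    t = np.arange(hop) / rate
    synth = np.zeros(hop)
    for k, b in enumerate(peak_bins):
        amp = spectrum[b] / frame_len
        synth += amp * np.sin(2 * np.pi * freqs[b] * t + phase[k])
        phase[k] += 2 * np.pi * freqs[b] * hop / rate         # rough phase continuity
    out[start:start + hop] = synth

out /= max(np.max(np.abs(out)), 1e-9)
wavfile.write("sine_wave_speech.wav", rate, (out * 32767).astype(np.int16))
```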
My problem with CEV is that who you would be if you were smarter and better-informed is extremely path-dependent. Intelligence isn’t a single number, so one can increase different parts of it in different orders. The order people learn things in, how fully they integrate that knowledge, and what incidental declarative/affective associations they form with it can all send the extrapolated person off in different directions. Assuming a CEV-executor took all that into account and summed over all possible orders (and assuming this could somehow be made computationally tractable), the extrapolation would get almost nowhere before fanning out uselessly.
OTOH, I suppose that there would be a few well-defined areas of agreement. At the very least, the AI could see current areas of agreement between people. And if implemented correctly, it at least wouldn’t do any harm.
Hmm. I got the meaning of the first section of the clip the first time I heard it. OTOH, that was probably because I looked at the URL first, and so I was primed to look at the content that way.
Here’s an algorithm that I’ve heard is either really hard to derandomize, or has been proven impossible to derandomize. (I couldn’t find a reference for the latter claim.) Find an arbitrary prime between two large numbers, say 10^500 and 10^501. The problem with searching sequentially is that there are arbitrarily long stretches of composites among the naturals, and if you start somewhere in one of those you’ll end up spending a lot more time before you get to the end of the stretch.
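Here’s a minimal sketch of the randomized version (my own illustration, with much smaller bounds so it runs quickly): pick candidates uniformly at random from the interval and test each with Miller-Rabin, rather than scanning upward from a fixed starting point and possibly landing in a long run of composites.

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # found a witness that n is composite
    return True

def random_prime_in_range(lo, hi):
    """Sample candidates uniformly at random until one passes the test."""
    while True:
        candidate = random.randrange(lo, hi)
        if is_probable_prime(candidate):
            return candidate

# Small bounds for a quick demo; the comment above uses 10^500 to 10^501.
print(random_prime_in_range(10**50, 10**51))
```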
I agree that this argument depends a lot on how you look at the idea of “evidence”. But it’s not just in the court-room evidence-set that the cryonics argument wouldn’t pass.
Yes, that’s very true. You persuasively argue that there is little scientific evidence that current cryonics will make revival possible.
But you are still conflating Bayesian evidence with scientific evidence. I wonder if you could provide a critique that says we shouldn’t be using Bayesian evidence to make decisions (or at least decisions about cryonics), but rather scientific evidence. The consensus around here is that Bayesian evidence is much more effective on an individual level, even though with current humans science is still very much necessary for overall progress in knowledge.
Of course, every perfect-information deterministic game is “a somewhat more complex tic-tac-toe variant” from the perspective of sufficient computing power.
Yeah, sure. And I have a program that gives constant time random access to all primes less than 3^^^^3 from the perspective of sufficient computing power.
So you know how to divide the pie? There is no interpersonal “best way” to resolve directly conflicting values. (This is further than Eliezer went.) Sure, “divide equally” makes a big dent in the problem, but I find it much more likely that any given AI will be a Zaire than a Yancy. As a simple case, say AI1 values X at 1, and AI2 values Y at 1, and X+Y must, empirically, equal 1. There are plenty of cases with more overlap and orthogonal values, but this kind of conflict is unavoidable between any two reasonably complex utility functions.
I forget who brought this up—maybe zero_call? jhrandom?--but I think a good question is “How quickly does brain information decay (e.g. due to autolysis) after the heart stops and before preservative measures are taken?” If the answer is “very quickly” then cryonics in non-terminal-illness cases becomes much less effective.