I probably have obstructive sleep apnea. I exhibit several symptoms (e.g. feeling sleepy despite getting normal or above-average amounts of sleep, dry mouth when I wake up), and a sleep specialist just told me that the geometry of my mouth and sinuses puts me at high risk. I have an appointment for a sleep study a month from now. Based on what I’ve read, this means it will probably take at least two months before I can start using a CPAP machine if I go through the standard procedure. This seems like an insane amount of time to wait for something that has a good chance of significantly improving my quality of life immediately. Is there any good reason why I can’t just buy a CPAP machine and start using it?
So if you want the other party to cooperate, should you attempt to give that party the impression it has been relatively unsuccessful, at least if that party is human?
I don’t think so. It seems more likely to me that the common factor between increased defection rate and self-perceived success is more consequentialist thinking. This leads to perceived success via actual success, and to defection via thinking “defection is the dominant strategy, so I’ll do that”.
After thinking about it a bit more, I decided that I actually do care about simulated people almost exactly as much as the mugger thought I did.
Mugger: Give me five dollars, and I’ll save 3↑↑↑3 lives using my Matrix Powers.
Me: I’m not sure about that.
Mugger: So then, you think the probability I’m telling the truth is on the order of 1/3↑↑↑3?
Me: Actually no. I’m just not sure I care about your 3↑↑↑3 simulated people as much as you think I do.
Mugger: This should be good.
Me: There are only something like n=10^10 neurons in a human brain, and the number of possible states of a human brain is exponential in n. This is stupidly tiny compared to 3↑↑↑3, so most of the lives you’re saving will be heavily duplicated. I’m not really sure that I care about duplicates that much.
Mugger: Well I didn’t say they would all be humans. Haven’t you read enough Sci-Fi to know that you should care about all possible sentient life?
Me: Of course. But the same sort of reasoning implies that, either there are a lot of duplicates, or else most of the people you are talking about are incomprehensibly large, since there aren’t that many small Turing machines to go around. And it’s not at all obvious to me that you can describe arbitrarily large minds whose existence I should care about without using up a lot of complexity. More generally, I can’t see any way to describe worlds which I care about to a degree that vastly outgrows their complexity. My values are complicated.
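For anyone who wants the rough numbers behind the duplication argument above, here’s a quick back-of-the-envelope check in Python, using the same crude assumptions as the dialogue (~10^10 neurons, ~2^n possible brain states):

```python
import math

# Crude upper bound on distinct human brain states, per the dialogue above:
# n ~ 10^10 neurons, roughly 2^n possible states (the exact base doesn't matter).
n_neurons = 10 ** 10
digits_in_brain_states = n_neurons * math.log10(2)         # ~3.0e9 digits

# 3^^3 = 3^(3^3) = 3^27 (Knuth up-arrow notation)
three_tetrated_to_3 = 3 ** 27                               # 7,625,597,484,987

# 3^^4 = 3^(3^^3) is far too big to evaluate directly, but we can count digits.
digits_in_3_tetrated_to_4 = three_tetrated_to_3 * math.log10(3)   # ~3.6e12 digits

print(f"digits in number of brain states: {digits_in_brain_states:.2e}")
print(f"digits in 3^^4:                   {digits_in_3_tetrated_to_4:.2e}")
# 3^^^3 = 3^^(3^^3) iterates the ^^ step about 7.6 trillion times past 3^^4,
# so almost all of the 3^^^3 lives would have to be duplicates.
```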
How does your proposed solution for Game 1 stack up against the brute-force metastrategy?
Well, the brute force strategy is going to do a lot better, because it’s pretty easy to come up with a number bigger than the length of the longest program anyone has ever thought to write, and plugging that into your brute force strategy automatically beats any specific program that anyone has ever thought to write. On the other hand, the meta-strategy isn’t actually computable (you need to be able to decide whether a program produces large outputs, which requires a halting oracle, or at least a way of coming up with large stopping times to test against). So it doesn’t really make sense to compare them.
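For concreteness, here’s a minimal sketch of the brute-force metastrategy. The `halts` and `run` arguments are stand-ins for the halting oracle and interpreter it would need (which is exactly why it isn’t computable in general); the toy demonstration just feeds it a finite, trivially decidable program space.

```python
def brute_force_metastrategy(programs, halts, run):
    """Return a number bigger than the output of every halting program given.

    `halts` and `run` are stand-ins for the halting oracle and interpreter
    the metastrategy needs; for arbitrary programs no computable `halts`
    exists, which is the point made above.
    """
    outputs = [run(p) for p in programs if halts(p)]
    return max(outputs, default=0) + 1


# Toy demonstration over a finite, trivially decidable "program space".
toy_programs = [lambda: 10 ** 100, lambda: 3 ** 27, lambda: 0]
print(brute_force_metastrategy(toy_programs, halts=lambda p: True, run=lambda p: p()))
```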
I think I can win Game 1 against almost anyone. In other words, I think I can describe a larger computable number than any I’ve seen anyone describe in these sorts of contests (the ones where Busy Beaver and beyond aren’t allowed), where the top entries typically use the fast-growing hierarchy for large recursive ordinals.
Okay I have to ask. Care to provide a brief description? You can assume familiarity with all the standard tricks if that helps.
In short, take it as a given that anyone, on any level, has a halting oracle for arbitrary programs, subprograms, and metaprograms, and that non-returning programs are treated as producing no output.
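For readers who haven’t seen it, here’s a rough, runnable sketch of the first few rungs of the fast-growing hierarchy mentioned above (finite levels plus ω). Actual contest entries push this through much larger recursive ordinals, so this is only meant to give a feel for the growth.

```python
def f(alpha, n):
    """Fast-growing hierarchy at finite levels plus omega (illustrative only).

    f_0(n) = n + 1, f_{k+1}(n) = f_k iterated n times starting from n,
    and f_omega(n) = f_n(n).
    """
    if alpha == "omega":
        return f(n, n)                 # diagonalize over the finite levels
    if alpha == 0:
        return n + 1
    result = n
    for _ in range(n):                 # iterate the previous level n times
        result = f(alpha - 1, result)
    return result


print(f(2, 3))        # f_2(3) = 3 * 2**3 = 24
print(f(3, 2))        # f_3(2) = 2048
# f("omega", 3) = f_3(3) is already far too large to compute in practice.
```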
In this case, I have no desire to escape from the room.
Why not stay around and try to help fix the problem?
I did read that. It either doesn’t say anything at all, or else it trivializes the problem when you unpack it.
Also, this is not worth my time. I’m out.
Your question is not stated in anything like the standard terminology of game theory and decision theory. It’s also not clear what you are asking on an informal level. What do you mean by “analogous”?
I’ll give you a second data point to consider. I am a soon-to-be-graduated pure math undergraduate. I have no idea what you are asking, beyond very vague guesses. Nothing in your post or the preceding discussion is of a “rather mathematical nature”, let alone a precise specification of a mathematical problem.
If you think that you are communicating clearly, then you are wrong. Try again.
Oh wow, this is so obvious in hindsight. Trying this ASAP, thank you.
That line always bugged me, even when I was a little kid. It seems obviously false (especially in the in-game context).
I don’t understand why this is a rationality quote at all; am I missing something, or is it just because of the superficial similarity to some of EY’s quotes about apathetic uFAIs?
One time my roommate ate shrooms, and then he spent about 2 hours repeatedly knocking over an orange juice jug, and then picking it up again. It was bizarre. He said “this is the best thing ever” and was pretty sincere. It looked pretty silly from the outside though.
This is largely a matter of keeping track of the distinction between “first order logic: the mathematical construct” and “first order logic: the form of reasoning I sometimes use when thinking about math”. The former is an idealized model of the latter, but they are distinct and belong in distinct mental buckets.
It may help to write a proof checker for first order logic. Or alternatively, if you are able to read higher math, study some mathematical logic/model theory.
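As a starting point for the proof-checker exercise, here’s a minimal sketch for a Hilbert-style system whose only inference rule is modus ponens. A real first order logic checker would add quantifier rules and axiom schemas, but the exercise of keeping “formal object” and “my reasoning about it” in separate buckets is the same.

```python
# Formulas are nested tuples, e.g. ("->", "P", "Q") for P -> Q.

def check_proof(axioms, proof):
    """Check a proof: each line is (formula, justification)."""
    proved = []
    for formula, justification in proof:
        if justification == "axiom":
            if formula not in axioms:
                return False
        else:
            _, i, j = justification            # ("mp", i, j): modus ponens
            premise, implication = proved[i], proved[j]
            if implication != ("->", premise, formula):
                return False
        proved.append(formula)
    return True


# Example: from P and P -> Q, conclude Q.
axioms = {"P", ("->", "P", "Q")}
proof = [
    ("P", "axiom"),
    (("->", "P", "Q"), "axiom"),
    ("Q", ("mp", 0, 1)),
]
print(check_proof(axioms, proof))   # True
```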
I have personally witnessed great minds acting very stupid because of it.
I’m curious. Can you give a specific example?
Note that this actually has very little to do with most of the seemingly hard parts of FAI theory. Much of it would be just as important if we wanted to create a recursively self modifying paper-clip maximizer, and be sure that it wouldn’t accidentally end up with the goal of “do the right thing”.
The actual implementation is probably far enough away that these issues aren’t even on the radar screen yet.
Sorry I didn’t answer this before; I didn’t see it. To the extent that the analogy applies, you should think of non-standard numbers and standard numbers as having the same type: specifically, the type of thing being quantified over in whatever first order logic you are using. And you’re right that you can’t prove that statement in first order logic; worse, you can’t even say it in first order logic (see the next post, on Gödel’s theorems and Compactness/Löwenheim-Skolem, for why).
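For reference (sketched from memory here, not quoted from the next post), the compactness argument behind that last claim: take any first order theory T that is true of the standard naturals, add a new constant c, and consider

```latex
T' \;=\; T \,\cup\, \{\, c > 0,\; c > 1,\; c > 2,\; \ldots \,\}
```

Every finite subset of T' has a model (just interpret c as a large enough standard number), so by compactness T' has a model, and in that model c is a non-standard element that nonetheless satisfies everything T says. So no first order axioms can say “there are no non-standard numbers”.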
I am well versed in most of this math, and a fair portion of the CS (mostly the more theoretical parts, not so much the applied bits). Should I contact you now, or should I study the rest of that stuff first?
In any case, this post has caused me to update significantly in the direction of “I should go into FAI research”. Thanks.
I am on vacation in Japan until the end of August, I might be interested in attending a meetup. Judging from the lack of comments here this never took off, but I might as well leave this here just in case.