If the hypothetical external world in question diverges from our own world by a lot, then the ancestor-simulation argument loses all force.
Wildberger’s complaints are well known and, frankly, not taken very seriously. The most positive thing one can say is that some of the ideas in his rational trigonometry have interesting math behind them, but that’s it. Pretty much no mathematician who has listened to what he has to say has taken any of it seriously.
What you are doing in many ways amounts to the 18th and early 19th century arguments over whether 1-1+1-1+1-1… converged and if so to what. First formalize what you mean, and then get an answer. And a rough intuition of what should formally work that leads to a problem is not at all the same thing as an inconsistency in either PA or ZFC.
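To make that concrete in standard notation (this is just the usual definition of convergence, nothing exotic):

$$s_N \;=\; \sum_{n=0}^{N} (-1)^n \;=\; \begin{cases} 1, & N \text{ even} \\ 0, & N \text{ odd} \end{cases}$$

The partial sums never settle down, so under the standard definition the series simply diverges; only once you fix a specific summation method (Cesàro averaging, for instance, which assigns it 1/2) does “what does it equal?” have a definite answer.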
Phrasing it as a “super-task” relies on intuitions that are not easily formalized in either PA or ZFC. Think instead in terms of a limit: take your nth distribution and let n go to infinity. This avoids the intuitive issues. Then just ask what you mean by the limit. You are taking what amounts to a pointwise limit, and what matters then is that a pointwise limit of probability distributions is not necessarily itself a probability distribution.
If you prefer a different example that obfuscates less of what is going on, we can do it just as well with the reals. Consider the situation where the nth distribution is uniform on the interval from n to n+1, and look at the limit of that (or, if you insist, move back to having it speed up over time to make it a supertask). Visually, what happens at each step is a little 1-by-1 square moving one unit to the right. Now note that the limit of these distributions is zero everywhere, and not in the nice sense of being zero at any specific point while still integrating to a finite quantity, but genuinely zero.
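Here is a minimal sketch of that example in code (plain Python; the function name is just for illustration):

```python
def density(n, x):
    """The n-th distribution: uniform density on [n, n+1) -- the unit square slid n units right."""
    return 1.0 if n <= x < n + 1 else 0.0

x0 = 3.5  # pick any fixed point on the real line
print([density(n, x0) for n in range(1, 9)])
# [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
# Once n passes x0 the value is 0 forever, so the pointwise limit at every fixed x is 0.
# Each density integrates to 1, but the limit function integrates to 0,
# so the pointwise limit is not a probability distribution.
```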
This is essentially the same situation, so the problem in your setup has nothing to do with specific aspects of countable sets.
The limit of your distributions is not a distribution, so there’s no problem.
If there’s any sort of inconsistency in ZF or PA or any other major system currently in use, it will be much harder to find than this. At a meta level, if there were this basic a problem, don’t you think it would have already been noticed?
At least two major classes of existential risk, AI and physics experiments, are areas where a lot of math can come into play. In the case of AI, the questions are whether hard take-offs are possible or likely and whether an AI can be provably Friendly. In the case of physics experiments, the issues are connected to analyses showing that the experiments are safe.
In both these cases, little attention is paid to the precise axiomatic system being used for the results. Should this be concerning? If, for example, some sort of result about Friendliness is proven rigorously, but the proof lives in ZFC set theory, then there’s the risk that ZFC may turn out to be inconsistent. Similar remarks apply to analyses suggesting that various physics experiments are unlikely to cause serious problems like a false vacuum collapse.
In this context, should more resources be spent on making sure that proofs occur in their absolute minimum axiomatic systems, such as conservative extensions of Peano Arithmetic or near-conservative extensions?
Yes, but there’s less reason for that. A big part of the problem with neutrinos is that since only a small fraction are absorbed, it becomes much harder to get good data on what is going on. For example, the typical neutrino pulse from a supernova is estimated to last 5 to 30 seconds, while the Earth is under a tenth of a light-second in diameter. Gamma rays don’t have quite as much of this problem, and we can estimate their direction somewhat better.
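Rough numbers behind that comparison (a back-of-the-envelope sketch; the figures are approximate):

```python
c = 299_792.458          # speed of light, km/s
earth_diameter = 12_742  # km (approximate)

crossing_time = earth_diameter / c          # light-crossing time of Earth
print(f"{crossing_time:.3f} s")             # ~0.043 s

pulse_low, pulse_high = 5, 30               # s, estimated supernova neutrino pulse length
print(f"{pulse_low / crossing_time:.0f}x to {pulse_high / crossing_time:.0f}x longer than the baseline")
# The pulse is roughly 120x to 700x longer than Earth's light-crossing time,
# so an Earth-sized baseline gives very little timing leverage.
```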
On the other hand, more recent work with neutrinos has been getting better and better at extracting angular data, which gives us some of the same directional information.
You do know that both sets of ideas predate HPMOR, right?
Slightly crazy idea I’ve been bouncing around for a while: put giant IceCube-style neutrino detectors on Mars and Europa. Europa would work really well because of all the water ice. This would allow one to get time-delay data from neutrino bursts during a supernova, giving very fast directional information as well as some related data.
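A very rough sketch of why the longer baseline would help (the baselines and the one-second timing uncertainty below are illustrative assumptions, not measured figures):

```python
import math

c = 299_792.458      # speed of light, km/s
AU = 1.496e8         # km

# Illustrative baselines: Earth-Mars is on the order of ~1 AU averaged over time,
# and Earth-Jupiter (Europa) is on the order of ~5 AU.
baselines_km = {"Earth-Mars": 1.0 * AU, "Earth-Europa": 5.0 * AU}

timing_error_s = 1.0  # assumed uncertainty in matching burst arrival times

for name, b in baselines_km.items():
    crossing = b / c                     # light-travel time across the baseline, s
    ang_res = timing_error_s / crossing  # crude small-angle estimate, radians
    print(f"{name}: ~{crossing:.0f} light-seconds, ~{math.degrees(ang_res):.2f} degrees")
# Earth-Mars: ~499 light-seconds -> ~0.11 degrees; Earth-Europa: ~2495 -> ~0.02 degrees.
# Compare with an Earth-only baseline of ~0.04 light-seconds.
```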
That’s a rule I’d strongly support, other than in cases of absolutely unambiguous spamming or clear sockpuppets of banned individuals.
While I’m deeply concerned about the possibility that AA has been engaging in vote-gaming, which does seem to be a bannable offense, it isn’t clear to me that, as reprehensible as that comment is, it is enough reason by itself for banning, especially because some of his comments (especially those on cryonics) have been clearly highly productive. I do agree that much of the content of that comment is pretty disgusting and unproductive, and at this point his focus on incels is borderline spamming with minimal connection to the point of LW. Maybe it would be more productive to just tell him that he can’t talk about incels as a topic here?
Not only does it not predict such large swings, it also doesn’t fit with the fact that after such a swing (which occurs rapidly) he then gets a slow downward trend. I pointed this out to the moderators a while ago, so I have a record of how rapid some of the changes were:
http://lesswrong.com/lw/ls5/if_you_can_see_the_box_you_can_open_the_box/c1kf was at −9 within 8 hours of being posted; 12 hours or so later it was at +4. Note that it has now reverted to +0.
http://lesswrong.com/lw/ln8/february_2015_media_thread/bx5u was at −5, then within 24 hours went to +6 and is now +3.
http://lesswrong.com/lw/lli/open_thread_jan_26_feb_1_2015/bw6v was at −8 at 5 PM EST. At 7:10 EST it was at +6. In the same span, http://lesswrong.com/lw/lli/open_thread_jan_26_feb_1_2015/bw6w went from −13 to +0. Over the next few days, both of those comments went back into the deep negative. Similarly, http://lesswrong.com/lw/lk7/optimal_eating_or_rather_a_step_in_the_right/bvmk was at −4, went up to 3 in the same two-hour span, and then settled at 2 (so was left alone after that).
Curiously, within the same two-hour span as that set of rapid upvoting, two highly negative comments in support of AA went through a similar swing, again with a slow reversion over the next few days: http://lesswrong.com/lw/lli/open_thread_jan_26_feb_1_2015/bw9t and http://lesswrong.com/lw/lli/open_thread_jan_26_feb_1_2015/bw7l.
These aren’t the only examples, but simply the most blatant.
Based on this evidence, I assign an extremely high credence (approximately 90%) that some form of karma abuse is going on, with someone using multiple accounts. I assign an 80% chance that this person is doing so deliberately to upvote comments seen as at odds with “liberal” politics in some form. I assign a slightly over 50% chance that AA is doing this himself. The fact that it took until now for him to address such concerns, despite others having mentioned them, is not a point in his favor. After AA himself, I assign the next most likely individual to be Eugine, for obvious reasons.
Another reason for optimism is that it seems that the level of political bias is actually lower today than it was historically. People are better at judging politically controversial issues in a detached, scientific way today than they were in, say, the 14th century. This shows that progress is possible.
Can you expand on what evidence there is for this?
This and other recent work with deep learning raises a substantial question: how much of intelligence is simply raw processing power? Many of the ideas and algorithms used in these recent advances have been around for a while, but they now take advantage of much more processing power than would have been available 10 or 20 years ago. This suggests that we may already have most of the ingredients for intelligence but can’t implement or recognize it due to processing limitations.
Considering he claims a 98% probability of Donald Trump becoming the next US President, I’ll bother paying attention to what he has to say if/when that turns out to be accurate.
I don’t know about Lumifer, but I’d certainly be willing to take that bet.
Why do so many people see Adams as being rationality-compatible? I’ve seen very little that he has to say that sounds at all rational or helpful. Cynical != rational.
That is the problem in a nutshell: how do you know it is a valid proof? All the time, people think a proof is valid and it turns out they are wrong.
Have you never seen an apparently valid mathematical proof that you later found an error in?
I’m not sure why you think that. This may depend strongly on what you mean by an infinitary method. Is induction infinitary? Is transfinite induction infinitary?