I hope you’ll treat me fairly as a person and actually read and try to understand my comments instead of jumping to conclusions based on my “score”.
911truther
Your work is wrong. To apply the diagonal lemma, the definition of phi must be a formula. Since you write |- (which is not a formula of PA), I assume you meant it as shorthand for Gödel's Bew (which is), but you can't existentially quantify over Bew the way you did in line 3 of the definition.
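For reference, here is the standard form of the lemma I'm appealing to (textbook statement in my own notation, not quoted from your write-up):

```latex
% Diagonal lemma: for every formula \psi(x) of PA with one free variable,
% there is a sentence \varphi such that
\[ \mathrm{PA} \vdash \varphi \leftrightarrow \psi(\ulcorner \varphi \urcorner). \]
% Taking \psi(x) := \neg\mathrm{Bew}(x) gives the usual Godel sentence G:
\[ \mathrm{PA} \vdash G \leftrightarrow \neg\mathrm{Bew}(\ulcorner G \urcorner). \]
% Note: \vdash is a meta-level relation, not a formula of PA, so it cannot be
% existentially quantified inside \psi; \mathrm{Bew}(x) can.
```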
To save you some time: the standard response is “I’m being censored! You’re an Eliezer-cult! All these downvotes are just because you’re scared of the Truth!”.
I never said anything like this and I never invoked Eliezer. I don’t understand why you’re telling me off for something I didn’t do. Look at my post history if you don’t trust me.
What you are doing is not fitting into the community norms of discussion, like research and linking/referring to specific sources
It only makes sense to do so when making a claim. Yet people on this site have refused to back up their own claims with citations because apparently “I’m not worth bothering with”.
but there are almost never flame wars
I never flamed anyone. The only person calling people names (“troll”, for example) is you (and now that you’ve done it, others are following your lead too, well done).
Are you enjoying wasting your time on this website?
Not really; I didn’t expect to get rejected so harshly. I’ve read all the sequences twice and have been rational for years, so I don’t know what the problem is. What’s the point of all this meta discussion? Why is everyone trying to drag me into these meta discussions and brand me as a troll after I passed 100 downvotes? We should get back to the actual topic.
consdering
spelled wrong and I’m not a troll.
I have no such delusions. If you look at my user page (http://lesswrong.com/user/911truther), it’s blatantly obvious that someone is systematically downvoting everything I post multiple times. I don’t claim to be persecuted, but clearly there is an attempt to censor me. Frankly, it just proves that I’m right: if I were wrong, people could easily disprove me.
It’s about figuring out what you really want and getting it. If you are at a game, and it’s really boring, should you walk out and waste what you paid for the tickets? If you apply for a position and don’t get it, does it help to decide that you didn’t really want it, anyway? If you are looking to buy a new car, what information should you take seriously? There are many pitfalls on the road to making a good decision; rationality is a systematic study of the ways to make better choices in life. Including figuring out what “better” really means for you.
Makes it sound great, but what are the real world benefits? I’ve been rational for years and it hasn’t done anything for me.
I have done research and seen this before.
You didn’t say anything explicitly wrong, except that vitrification can’t work 100% yet; ice crystals are still formed. Information-theoretic “death” may not have happened, but the claim that recovery may be possible in the far future is seriously dubious, and so are the evasive attempts of believers like gwern to maintain this belief without backing it up.
It’s a terrible idea to try to learn theorems by memorization. If all you want to do is pass math tests, fine, but if you want to understand mathematics it’s definitely going to do more harm than good.
It’s blatantly obvious that you only believe in cryonics to get upvotes here. I already explained why it’s not possible.
A PhD in biomedical engineering agrees with me: http://www.quora.com/Cryogenics/Is-it-technically-possible-to-undergo-cryogenics-and-wake-up-500-years-later
If “the cryonics literature” (presumably explaining why freezing does not destroy the brain) actually exists why don’t you link to it?
Remember: downvotes are censorship. If you don’t want your beliefs questioned, you’re doing the right thing.
There is practically no chance cryonics can work; there is no evidence of it ever being done successfully. Everything we know points to it being impossible: freezing makes water expand and burst the fragile parts of your brain. All the information necessary to revive you will simply be destroyed, and even with futuristic recovery devices I feel that there’s no hope of it working. I’m against convincing yourself otherwise to buy peace of mind, because this enables people to exploit you for money, and it also goes against rationality to believe in something that isn’t true.
Randomized algorithms are completely different from randomly generated program code: randomized algorithms actually fit a precise specification, while randomly generated programs just (try to) fit a range of test cases (hopefully I don’t need to explain why a range of test cases is not an adequate specification). It’s a misconception that unit tests increase the probability of correctness: a unit test can never tell you that your program is free of errors. They’re the wrong paradigm for making correct programs.
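To illustrate what I mean, here is a toy example of my own (not taken from anyone’s post): a function can pass every test in a small suite and still fail the intended specification, which is exactly why a finite set of tests is not a specification.

```python
# Toy illustration: a "sorting" function that passes a small test suite
# while being wrong on inputs the suite never exercises.

def fake_sort(xs):
    # Happens to return the right answer for lists of length <= 2,
    # but does nothing useful for longer inputs.
    if len(xs) <= 2:
        return sorted(xs)
    return list(xs)

tests = [([], []), ([1], [1]), ([2, 1], [1, 2])]

# Every test in the suite passes...
assert all(fake_sort(inp) == out for inp, out in tests)

# ...yet the function does not satisfy the specification "returns a sorted list".
assert fake_sort([3, 1, 2]) != [1, 2, 3]
```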
I get “You are trying to submit too fast. Try again in 9 minutes.” every time I post a comment.
This research (it is a single piece of research written up in four different ways) simply concerns taking a rough piece of wood (a program that is almost correct) and sanding down the edges (fixing a small number of test cases). I didn’t say “programs can’t have algorithmic insight”; I said that randomly generating programs by “evolutionary” means will not produce insight by any means other than coincidence. The research you linked doesn’t contradict that, because all it concerns is smoothing down rough edges. Degeneracy is one of the fundamental features of a genetic code required for the theory of evolution to apply, so I don’t know why you say that “doesn’t wash”; it’s a fact.
This idea is based on a whole range of confusions and misunderstandings. Program code does not have the redundancy or flexibility of genetics; as you know from syntax errors, it shatters if a single character is wrong, and for this reason it’s a mistake to use it as the carrier for the genetic paradigm. Another mistake is extrapolating from a finite data source: you can’t expect to get a correct program this way, because the code cannot contain any algorithmic insight into the data for any reason other than pure chance. As you’ll know, the software crisis is still ongoing, and Critticall is just another example of people not understanding the true nature of programs and trying to get something for nothing.
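A rough way to see the brittleness point (my own sketch, nothing to do with how Critticall actually works): randomly change one character of a working program and count how often the result even parses.

```python
# Sketch: how often does a one-character mutation of a small program still parse?
# (Illustrates the brittleness claim; not a model of any particular tool.)
import ast
import random
import string

source = "def add(a, b):\n    return a + b\n"

def mutate(src):
    # Replace one randomly chosen character with a random printable character.
    i = random.randrange(len(src))
    return src[:i] + random.choice(string.printable) + src[i + 1:]

trials = 1000
still_parses = 0
for _ in range(trials):
    try:
        ast.parse(mutate(source))
        still_parses += 1
    except SyntaxError:
        pass

print(f"{still_parses}/{trials} single-character mutants still parse")
```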
It’s only possible to demonstrate evolution in finely tuned environments specifically designed to show it off (like a computer program, for example), so I don’t think you’ll be able to demonstrate it like that. Also, Darwin’s contribution to the theory was minimal at best; he’s just used as a figurehead because he wrote down the status quo slightly sooner than anyone else.
What you need to think about is what consequence any of this has to your life. The reality is, like the moon landing, it means absolutely nothing to the decisions you’ll be making whether it’s real or not. Like holocaust denial, the only reason people make one claim rather than another is to be seen as a certain type of person.
Make sure to wear your rationalist sneakers when you go!