The variation in SIDS across the socio-economic spectrum suggests infanticide is quite common in our culture.
Good news about the Big Bang
25% of pregnancies end in miscarriage. Miscarriage is common... The hospital procedure is routine and not attended with the same kind of reverence as death usually is.
The infant death rate was around 20% (in Paris!) when they invented incubators. I wonder if their attitude to infant death was similar to ours re: miscarriage.
this link is messed up in the post
http://lesswrong.com/lw/1ws/the_importance_of_goodharts_law/
I accept this as a valid point: the first hour/day is an important heuristic indicator of goodness. But Eli wrote:
You only need to convince them that the first hour or day
Interestingly, 0 as in “free stuff” is also often mispriced (hence all the ‘free offers’ you get in the mail).
I think that’s a rational response
In the timespan under discussion
first hour or day
you just justified crack usage
The descriptive math part was very good, thanks; that’s why I resisted downvoting the post. My problem is that the conclusion omits the hugely important factor that categories are useful for specific goals, while the kinds of techniques you are suggesting (essentially unsupervised techniques) are context-free.
E.g. is a dead cow more similar to a dead (fixed from ‘live’) horse or to a live cow? It clearly depends on what you want to do with it; see the sketch below.
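To make the goal-dependence concrete, here is a minimal sketch in Python. The feature encoding and goal weights are illustrative assumptions of mine, not anything from the post: the same three animals come out closer or farther apart depending on which features the distance metric weights.

```python
# A minimal sketch (illustrative encoding, not from the original discussion)
# of why "similarity" is goal-relative: the same three animals come out
# closer or farther apart depending on which features the metric weights.

def distance(a, b, weights):
    """Weighted L1 distance between two feature dicts."""
    return sum(w * abs(a[f] - b[f]) for f, w in weights.items())

# Hypothetical feature encoding: species (0 = cow, 1 = horse), alive (0 or 1).
dead_cow   = {"species": 0, "alive": 0}
dead_horse = {"species": 1, "alive": 0}
live_cow   = {"species": 0, "alive": 1}

# A butcher mostly cares whether the animal is dead; a breeder mostly
# cares about the species. These weights encode the goal.
goals = {
    "butcher": {"species": 0.1, "alive": 1.0},
    "breeder": {"species": 1.0, "alive": 0.1},
}

for goal, w in goals.items():
    to_horse = distance(dead_cow, dead_horse, w)
    to_cow = distance(dead_cow, live_cow, w)
    nearest = "dead horse" if to_horse < to_cow else "live cow"
    print(f"For the {goal}, the dead cow is more similar to the {nearest}.")
# -> For the butcher, the dead cow is more similar to the dead horse.
# -> For the breeder, the dead cow is more similar to the live cow.
```

Any unsupervised technique has such weights baked in, explicitly or implicitly, and that is exactly the context the post’s conclusion leaves out.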
If after half an hour of poker you can’t tell who’s the patsy, it’s you. - Charles T. Munger
The Red guy is a dead ringer for Prime Intellect.
Great post, thanks.
I try to remember my heroes for the specific heroic act or trait, e.g. Darwin’s conscientious collection of disconfirming evidence.
No, I am not aware of any facts about progress in decision theory
Please take a look here: http://wiki.lesswrong.com/wiki/Decision_theory
As far as the dragon goes, I was just pointing out that some minds are not trainable, period. And even if training works well for some intelligent species like tigers, it’s quite likely that it will not be transferable (eating the trainer: not OK; eating a baby: OK).
Taking UDT Seriously
Can you post this in the discussion area?
Thanks.
The posts (at least the second one) seem to suggest that the role of symbolic reasoning is overstated and that at least some reasoning is clearly non-symbolic (e.g. visual).
In this context the question is whether the symbolic processing (there is definitely some—math, for example) gave pre-humans the boost that allowed the huge increase in computing power, so I am not seeing the contradiction.
Would being seen be an advantage for them? (Answering a question with a question, still...)
Would considering the effects of Christianity on civilization help? Something about the Dark Ages...
Another approach: find what is appealing to you in Christianity, and attempt to extract it from the silly religious context.
Freud once said that Jung was a great psychologist, until he became a prophet.
calling this “symbolic processing” assumes a particular theory of mind, and I think it is mistaken
Interesting. Can you elaborate or link to something?
It is going to be next to impossible to solve the problem of “Friendly AI” without first creating AI systems that have social cognitive capacities. Just sitting around “Thinking” about it isn’t likely to be very helpful in resolving the problem.
I am guessing that this unpacks to “to create an FAI you need some method to create AGI. For the latter we need to create AI systems with social cognitive capabilities (whatever that means; NLP?)”. Doing this gets us closer to FAI every day, while “thinking about it” doesn’t seem to.
First, are you factually aware that some progress has been made in decision theory that would give some guarantees about future AI behavior?
Second, yes, perhaps whatever you’re tinkering with is getting closer to an AGI, which is what FAI runs on. It is also getting us closer to an AGI which is not FAI, if the “Thinking” is not done first.
Third, if the big cat analogy did not work for you, try training a Komodo dragon.
So I actually read the book; while there is a little “dis” in there, the portrait is very partial: “Nate Caplan, my IQ is 160” of “OverpoweringFalsehood.com” is actually pictured as the rival of the “benign SuperIntelligence Project” (a stand-in for SIAI, I presume, which is dissed in its own right, of course). I think it’s funny and flattering and wouldn’t take it personally at all; I doubt Eliezer would in any case.
BTW the book is OK; I prefer Egan in far-future mode to near-future mode.