I’m enjoying these posts.
you do get to decide whether or not to perceive it as a complement or an insult.
compliment
dieties
believed
Harry should trick Voldemort into biting him, and then use his new freedom to bite him back.
Oops, you’re right
From that Future of Life conference: if self-driving cars take over and cut the death rate from car accidents from 32000 to 16000 per year, the makers won’t get 16000 thank-you cards—they’ll get 16000 lawsuits.
Yes, that’s the point.
(I think sphexish is Dawkins, not Hofstadter.)
I think it’s a bit of a leap to go from NASA being under-funded and unambitious in recent years to “people 50 years from now, in a permanently Earth-bound reality”.
Not sure if it’s in HPMOR but the symbol for the deadly hallows contains two right triangles.
EDIT: err, deathly, I guess. I don’t seem to be a trufan.
I’m afraid I won’t have time to give you more help. There’s a short summary of each sequence under the link at the top of the page, so it won’t take you forever to see the relevance.
EDIT: you’re wondering elsewhere in the thread why you’re not being well received. It’s because your post doesn’t make contact with what other people have thought on the topic.
It can, but it doesn’t have the time...
So how can the universe “enjoy itself” as much as possible before the big crunch (or before and during the heat death)*?
Maybe read the Fun Theory sequence?
It might be useful to look at Pareto dominance and related ideas, and the way they are used to define concrete algorithms for multi-objective optimisation, e.g. NSGA-II, which is probably the most widely used.
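For concreteness, here is a rough sketch of what Pareto dominance looks like in code, along with a filter that extracts the non-dominated set. The function names and the toy objective values are just my own illustration, not anything taken from NSGA-II itself:

```python
# A rough sketch of Pareto dominance for minimisation problems.
# Names and the toy points below are illustrative only, not from NSGA-II.

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimisation):
    no worse in every objective, strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of `points` (each a tuple of objective values)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Two objectives to minimise, say cost and error:
points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(points))  # -> [(1, 5), (2, 3), (4, 1)]
```

NSGA-II builds on exactly this relation: it sorts candidate solutions into successive non-dominated fronts and keeps the best fronts each generation.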
OP mentions “I used less water in the shower”, so is obviously not only looking for extraordinary outcomes. So “saving the world” does indeed sound silly.
Any AI that would do this is unFriendly. The vast majority of uFAIs have goals incompatible with human life but not in any way concerned with it. [...] Therefore there is little to fear in the way of being tortured by an AI.
That makes no sense. The uFAIs most likely to be created are not drawn uniformly from the space of possible uFAIs. You need to argue that none of the uFAIs which are likely to be created will be interested in humans, not that few of all possible uFAIs will.
Off-topic:
I’m not talking about a basic vocabulary, but a vocabulary beyond that of the average, white, English-as-a-first-language adult.
Why white?
Golly, that sounds to me as if the people of this age don’t go to heaven!
it’s unclear to me how the category of “evolutionary restrictions” could apply to rationality techniques. Suggestions?
Not sure if this simple example is what you had in mind, but evolution wasn’t capable of making us grow nice smooth erasable surfaces on our bodies, together with ink-secreting glands in our index fingers, so we couldn’t evolve the excellent rationality technique of writing things down to remember them. So when writing was invented, the inventor was entitled to say “my invention passes the EOC because of the ‘evolutionary restrictions’ clause”.
And more important, its creators want to be sure that it will be very reliable before they switch it on.
can read the statement on its own
I like the principle behind Markdown: if it renders, fine, but if it doesn’t, it degrades to perfectly readable plain text.
A percentage is just fine.
Also “the teacher smiled”? Damn your smugness, teacher!