Have you updated it since? It seems to be getting a lot better! After I started recording good ones, I got:
You know what a/an improperly-programmed upload of Pinkie Pie say: one person’s torture is another person’s rationality.
Corrupted Pinkie Pie would tile the universe with hypotheses.
The twenty fiveth virtue of rationality is “Beware of the utility function of any improperly-programmed General Arteficial Intelegences”.
Subjective physics is not rationality.
What are your priors on if a/an General Arteficial Intelegence once said: “look, just rejecting the sanity waterline and the planning fallacy doesn’t make someone a/an Friendly upload of Eliezer Yudowsky.”?
This may sound a bit crazy right now, but hear me out: that which can be destroyed by a/an human should be.
Before Lukeprog goes to sleep, it scans it’s computer for uploaded copies of a/an Unfriendly upload of Lukeprog.
This may sound a bit crazy right now, but hear me out: corrupted Omega is vulnerable to acausal infanticide.
(These are still at about the level of the results in my previous comment, but I got them a lot more frequently.)
The new one gets a lot more variety, but hits on clever stuff with a lower frequency. I ran it for a while and only got a few worth sharing:
Universal friendship is a/an hugging maxemizer.
Paperclips is truly part of a/an corrupted upload of Pinkie Pie.
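Judging from the unresolved "a/an" in these, the script presumably drops noun phrases into fixed templates without choosing the right article. A minimal sketch of that kind of generator, in Python; every template and phrase list below is made up for illustration, not taken from the actual script:

    import random

    # Hypothetical templates and phrase lists; the real script's data is unknown.
    TEMPLATES = [
        "Corrupted {agent} would tile the universe with {noun}.",
        "{noun} is truly part of a/an corrupted upload of {agent}.",
        "This may sound a bit crazy right now, but hear me out: {clause}.",
    ]
    AGENTS = ["Pinkie Pie", "Omega", "Eliezer Yudowsky"]
    NOUNS = ["hypotheses", "paperclips", "rationality"]
    CLAUSES = [
        "that which can be destroyed by a/an human should be",
        "corrupted Omega is vulnerable to acausal infanticide",
    ]

    def generate_quote():
        """Pick a template and fill each slot with a random phrase.

        str.format ignores unused keyword arguments, so every slot value
        can be passed regardless of which slots the template uses.
        """
        return random.choice(TEMPLATES).format(
            agent=random.choice(AGENTS),
            noun=random.choice(NOUNS),
            clause=random.choice(CLAUSES),
        )

    for _ in range(5):
        print(generate_quote())

On this guess, the "a/an" is just literal text baked into the templates and phrases, which would match what the bot prints.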
Yeah, I got that feeling too. :(
Nope. Either you’re choosing from a larger sample this time or it’s just luck.
Another awesome one:
you make a compelling argument that a/an Unfriendly upload of Omega would tile the universe with rationality if and only if a/an rationalist is an aspiring god