If I earmark my donations for “HPMOR Finale or CPA Audit, whichever comes first”, would that act as positive or negative pressure on Eliezer’s fiction-creation complex? (I only ask because bugging him for an update has previously been suggested to reduce update speed.)
Furthermore, Oracle AI and Nanny AI both seem to fail the “the other country is about to beat us in a war, should we remove the safety programming?” heuristic that I use quite often with nearly everyone outside the LW community I debate AI with. Thank you both for writing such concise yet detailed responses; they helped me understand the problem areas of Tool AI better.
I think the issue is that we need a successful SPARC and an “Open Problems in Friendly AI” sequence more urgently than we need an HPMOR finale.
A sudden, confusing vision just occurred, of the two being somehow combined. Aaagh.
Spoiler: Voldemort is a uFAI.
For the record:

Nothing in this story so far represents either FAI or UFAI. Consider it Word of God.

(And later in the thread, when asked about “so far”: “And I have no intention at this time to do it later, but don’t want to make it a blanket prohibition.”)
In the earlier chapters, it seemed to me that the Hogwarts faculty dealing with Harry was something like being faced with an AI of uncertain Friendliness.
Correction: It was more like the faculty dealing with an AI that’s trying to get itself out of its box.
I think our values are positively maximized by delaying the HPMOR finale as long as possible; my post was more out of curiosity to see what would be most helpful to Eliezer.
In general—never earmark donations. It’s a stupendous pain in the arse to deal with. If you trust an organisation enough to donate to them, trust them enough to use the money for whatever they see a need for. Contrapositive: If you don’t trust them enough to use the money for whatever they see a need for, don’t donate to them.
I never have before, but this CPA Audit seemed like a logical thing that would encourage my wealthy parents to donate :)