Paraphrasing, I believe it was said by an SIer that “if uFAI wasn’t the most significant and manipulable existential risk, then the SI would be working on something else.” If that’s true, then shouldn’t its name be more generic? Something to do with reducing existential risk...?
I think there are some significant points in favor of a generic name.
Outsiders will more likely see your current focus (FAI) as the result of pruning causes rather than leaping toward your passion—imagine if GiveWell were called GiveToMalariaCauses.
By attaching yourself directly to reducing existential risk, you gain status by association with existing high-status causes such as climate change. Moreover, it creates debate with supporters of other causes connected to existential risk, which brings you acknowledgement and visibility.
The people you wish to convince won’t be as easily mind-killed by research coming from “The Center for Reducing Existential Risk” or the like.
Is it worth switching to a generic name? I’m not sure, but I believe it’s worth discussing.
Outsiders will more likely see your current focus (FAI) as the result of pruning causes rather than leaping toward your passion—imagine if GiveWell were called GiveToMalariaCauses.
If it was pruning, that was a hell of a lot of pruning in very little time early in SI’s history. It requires that an AI with a properly grounded material goal is likely to be created (nobody knows how to do that, nor has a need to ground goals), that extreme foom is possible (hyper-singularity), and that FAI is the only solution. I haven’t seen SI working on how to use wireheading as a failsafe, or how to use the lack of symbol grounding for safety, despite examples of both: the theorem prover that wireheads, and the AIXI that doesn’t ground symbols and won’t see the shutdown of its physical hardware as resulting in a lack of reward to its logical structure.
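As an aside on the AIXI point: the standard expectimax definition (roughly as given in Hutter’s work; I’m reproducing it from memory, so treat the details as a sketch) makes it easy to see why shutdown of the hardware doesn’t register as a loss of reward. Everything the agent maximises is defined over its action/percept history and a universal mixture over environment programs; nothing in the expression refers to the physical machine doing the computing:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_k + \cdots + r_m\big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

where the $a_i$ are actions, the $o_i r_i$ are observation-reward percepts, $U$ is a universal Turing machine, $q$ ranges over environment programs of length $\ell(q)$, and $m$ is the horizon. The reward terms $r_i$ are just parts of the percept string, so “my transistors get switched off” has no privileged representation in the objective.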
I feel like you could get more general by using the “space of mind design” concept....
Like an Institute for Not Giving Immense Optimisation Power to an Arbitrarily Selected Point in Mindspace, but snappier.