I think it would be a good idea to create a sister website on the same codebase as LW specifically for discussing this topic.
Strikes me as an idea worth considering. If we had a sister website where AGI/singularity could be talked about, we could keep a separate rationalist community even after May. The AGI/singularity-allowed sister site could take OB and LW discussion as prerequisite material that commenters could be expected to have read, but not vice versa.
I endorse this proposal.
But then, on the still-censored site we wouldn’t be able to mention AGI/singularity in a response, even when it would be highly relevant.
A possible solution could be click-settable topic flags on posts and comments, for use when bringing up topics that...
Are worth discussing
Are likely to come up fairly frequently
Many people would really rather not see
...and readers could switch topics off in Options, boosting the signal-to-noise ratio for the uninterested while letting the interested discuss freely. Comments would inherit their parent’s flags by default.
Possible flaggable topics:
Friendly AI/Singularitarianism
Libertarian politics
Simulism
Meta-discussion about possible LW changes
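As a rough sketch of how the flag mechanism above might behave (all class, field, and topic names here are invented for illustration, not an actual LW codebase API):

```python
# Hypothetical sketch: topic flags with parent inheritance, plus reader-side
# filtering based on topics switched off in Options.

class Comment:
    def __init__(self, body, parent=None, flags=None):
        self.body = body
        self.parent = parent
        # Comments inherit their parent's flags by default;
        # explicitly set flags are added to the inherited set.
        inherited = set(parent.flags) if parent else set()
        self.flags = inherited | set(flags or ())

def visible_comments(comments, muted_topics):
    """Hide any comment carrying a topic the reader has switched off."""
    muted = set(muted_topics)
    return [c for c in comments if not (c.flags & muted)]

post = Comment("A post about rationality")
ai_reply = Comment("An AGI-flavoured example...", parent=post,
                   flags={"AGI/Singularity"})
nested = Comment("A follow-up reply", parent=ai_reply)  # inherits the AGI flag

thread = [post, ai_reply, nested]
```

With this default-inheritance rule, muting "AGI/Singularity" hides the flagged reply and everything nested under it, while the unflagged post stays visible.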
Another idea, more generally applicable: the ability to reroot comment threads under a different post, leaving a link to the new location.
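A minimal sketch of that rerooting idea, assuming a simple tree of comments (all names and the URL are invented for illustration):

```python
# Hypothetical sketch: move a comment subtree under a different post,
# leaving a link stub at the old location.

class Node:
    def __init__(self, body, children=None):
        self.body = body
        self.children = children if children is not None else []

def reroot(old_parent, comment, new_parent, new_url):
    """Detach `comment` (with all its replies) from `old_parent`, attach it
    under `new_parent`, and leave a link to the new location in its place."""
    idx = old_parent.children.index(comment)
    old_parent.children[idx] = Node(f"Thread moved: {new_url}")
    new_parent.children.append(comment)

rationality_post = Node("A post about rationality")
ai_thread = Node("An AGI tangent", [Node("reply 1"), Node("reply 2")])
rationality_post.children.append(ai_thread)

agi_post = Node("Open thread on the AGI sister site")
reroot(rationality_post, ai_thread, agi_post, "http://example.com/agi-thread")
```

Because the whole subtree moves as one node, existing replies stay attached, and readers of the old thread still find a pointer to where the discussion continued.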
My conception of the proposal was that the LW ban could be relaxed enough to allow use of relevant examples for rationality discussions, but not non-rationality posts about AI and the like.
I thought the same.
I thought that was what was planned already (after May). I was responding to AnnaSalamon:
If we had a sister website where AGI/singularity could be talked about, we could keep a separate rationalist community even after May. The AGI/singularity-allowed sister site could...
I took that to mean keeping LW separate from AGI/singularity discussion — or why say ‘even after May’? Someone please explain if I misunderstood, as I’m now most confused!
I think Anna wants to use the LW codebase to create a group blog to examine AGI/Singularity/FAI issues of concern to SIAI, even if they are not directly rationality-related. I think that’s a good plan for SIAI.
Does the ban apply to Newcomb-like problems with simplifying Omegas?