I already tried discussing a very similar concept I call Superrational Signalling in this post. It got almost no attention, and I have doubts that Less Wrong is receptive to such ideas.
I also tried actually programming a Game Theoretic simulation to try to test the idea, which you can find here, along with code and explanation. Haven’t gotten around to making a full post about it though (just a shortform).
Thank you for the references! I'm reading your writings; they're interesting.
I posted the super-cooperation argument expecting that LessWrong would likely not be receptive, but I'm not sure which community would engage with all this and find it pertinent at this stage.
More concrete and empirical work seems needed.