My path to LW was SL4 → Overcoming Bias → LW. I took it because I was interested in systems that can change how they handle problems, i.e. how they manipulate facets (though I didn’t have a good language for this back then).
I’m now close-ish (1–3 years out) to having something that I think can manipulate facets in a flexible way, using an internal marketplace based on user feedback. I mentioned it here. My rough strategy for what to do with it long term is here.
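To make “internal marketplace” slightly more concrete, here is a toy sketch of the kind of credit-assignment loop I have in mind (all names are made up for illustration; the real thing is messier):

```python
import random

class Strategy:
    """One competing way of handling a problem (of manipulating a facet)."""
    def __init__(self, name, credit=1.0):
        self.name = name
        self.credit = credit  # wealth accumulated from past user feedback

def market_step(strategies, get_feedback):
    """One auction round: strategies bid credit for the right to act;
    the winner pays its bid and earns whatever the user's feedback is worth."""
    bids = {s: s.credit * 0.1 * random.uniform(0.5, 1.5) for s in strategies}
    winner = max(bids, key=bids.get)
    winner.credit -= bids[winner]
    winner.credit += get_feedback(winner)
    return winner

# Example: if users consistently reward one strategy, credit (and hence
# influence over future behaviour) flows towards it.
strategies = [Strategy("rewrite-query"), Strategy("ask-clarifying-question")]
feedback = lambda s: 2.0 if s.name == "ask-clarifying-question" else 0.2
for _ in range(50):
    market_step(strategies, feedback)
print(sorted((s.credit, s.name) for s in strategies))
```

The point of the market framing is that credit assignment happens locally: strategies that users reward get richer and act more often, without any central component needing a model of why they work.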
The way I view things, I think it more likely that we will build/become demi-gods slowly, which calls for different long-term strategies. My overarching strategy is to try to figure out what problem-solving is, then figure out what to do about it. I have come across stuff like this which suggests worldviews on this forum are at odds with mine. I’ve asked for backup plans, but got no response.
Most recently I’ve been trying to argue that we as rationalists need a place to work together on experiments that may end up wrong, or may not have the highest predicted value, because exploring via experiments gives us information about the world and may surprise us. The community could provide this sort of space for me and others.
I think it likely my work will fail, simply because I am trying something complicated in a largely unexplored area.
My assessment of the community’s helpfulness is not going well. I put lots of effort into discussions; I should probably be putting more effort into posts, which get more visibility, but I think I’ve already had a taste of the likely engagement. Some people clicked through the links on my project, and one person watched my video to the end, but I got no comments and one upvote.
experiments that may end up wrong, or may not have the highest predicted value, because exploring via experiments gives us information about the world and may surprise us
I want to express support for this idea. We need to do experiments where the expected answer is “Hell if I know”—where you might discover something and not just fine-tune the precision of the answer you already know.
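To put a rough number on that intuition (a toy framing of my own, not anyone’s actual methodology): the information you expect an experiment to yield is the entropy of your prediction over its outcomes, so “Hell if I know” experiments are precisely the high-information ones.

```python
import math

def entropy_bits(probs):
    """Shannon entropy (in bits) of a predicted outcome distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Fine-tuning a known answer: ~99% sure of the outcome in advance.
print(entropy_bits([0.99, 0.01]))  # ~0.08 bits: almost nothing to learn
# "Hell if I know": four outcomes, all equally plausible.
print(entropy_bits([0.25] * 4))    # 2.0 bits: the most this setup can teach you
```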
That’s fair. If a lot of your goals seem orthogonal to most of ours, this might not be worth your time.
However, I suspect that if you verbalize your goal somewhere (assuming you haven’t already), you can probably see if anyone else has similar aspirations. Just a thought.
Why?
Because participating in a community that isn’t helpful for what I am trying to achieve is a significant drain on my mental and temporal resources :)
I’m currently seeing how helpful it is.
That seems like a setup for some interesting questions!
What are you trying to achieve? How do you think the community could be helpful towards this goal? Also, how is the assessment of helpfulness going?