My immediate mental response was that I value this post, but it doesn't fit the usual mood of LessWrong, which is kind of sad because it seems practical. That impression is heavily biased by how upvotes are divvied out, though, since I typically read highly-upvoted posts.
It seems less likely to maximize my happiness or my contribution to society, but that doesn't make me not want it.
I thought this was clear to me, but after thinking some more I no longer find it straightforward. It pattern-matched against:
high value vs low probability
personalities are inbuilt biases in human strategy
But deductions drawn from those patterns seem of dubious use.
I agree that it's a good idea to give things a try and collect data before making longer-term plans. Since you're explicitly exploring rather than exploiting, I suggest trying low-effort, wacky ideas in many different directions (e.g. not on LessWrong).