Maybe I will, after I get a better idea why more people aren’t already working on these problems. One reason not to write it is that the feeling of being drowned in big problems is not a particularly good one, and possibly de-motivating. Sometimes I wish I could go back to 1998, when I thought Bayesianism was a pretty much complete solution to the problem of epistemology, except for this little issue of what to expect when you’re about to get copied twice in succession...
By the way, have you seen how I’ve been using MathOverflow recently? It seems that if you can reduce some problem to a short math question in standard terms, the default next action (after giving it your own best shot) should be posting it on MO. So far I’ve posted two problems that interested me, and both got solved within an hour.
It’s all magic to me, but it looks like a very effective human resource. Have you considered pushing MathOverflow to its limits to see whether the people there might actually be able to make valuable contributions to the open problems faced by Less Wrong or the SIAI?
I assume that the main obstacle to effectively exploiting resources like MathOverflow is formalizing the problems faced by people working to refine rationality or create FAI. Once you know how to ask the right questions, you can spread them everywhere and see whether someone is able to answer them, or whether a solution is already known.
Currently it appears to me that most of the important problems are not widely known; many of them are discussed mainly here on Less Wrong or on obscure mailing lists. By formalizing those problems and spreading the gist of them, one could make people aware of Less Wrong and of risks from AI, and tap various outside resources.
What I am thinking of is analogous to a huge roadside billboard with a succinct description of an important problem. Someone really smart or knowledgeable might drive by and solve it. Not only would the solution be valuable, but you would gain a potential new human resource.
I’m all for exploiting resources to the limit! The bottleneck is formalizing the problems. It’s very slow and difficult work for me, and the SIAI people aren’t significantly faster at this task, as far as I can see.
One reason not to write it is that the feeling of being drowned in big problems is not a particularly good one, and possibly de-motivating.
No! What is particularly demotivating for me is that I don’t know what heuristics I can trust and when I am better off trusting my intuitions (e.g. Pascal’s Mugging).
If someone were to survey the rationality landscape and outline what we know and where we run into problems, it would help a lot by making people aware of the big and important problems.
I’d like you to write that post.