But you haven’t given even the roughest outline of SI’s progress on the thing that matters most: actual FAI research.
From what I understand, they can’t do that yet. They don’t have enough people to make real progress on the important problems, and they don’t have enough money to hire more. So they are concentrating on raising awareness of the issue and on persuading people either to work on it or to contribute money to SI.
The real problem I see is the lack of formalized problems. I think it is very important to formalize some actual problems: doing so would help raise money and would allow others to work on them. More specifically, I don’t think writing a book on rationality is worth the time when the author is one of only a few people who might be capable of formalizing those problems, especially since there are already many books on rationality. Even if Eliezer Yudkowsky can put everything the world knows about rationality together in a concise manner, that is not something that will impress important academics enough to take him seriously on AI issues. He would have been better off writing a book on decision theory, where he seems to have some genuine ideas.
Rationality is probably a moderately important factor in planetary collective intelligence. Pinker claims that rational thinking and game theory have also contributed to recent positive moral shifts. Though there are already some books on the topic, it could well be an area where a relatively small effort produces a big positive result.
However, I’m not entirely convinced that hpmor.com is the best way to go about it...
Regarding the lack of formalized problems, a list of open problems was posted recently:
It turns out that HPMOR has been great for SI recruiting and networking. IMO (International Mathematical Olympiad) winners apparently read HPMOR, and so do an absurd number of Googlers.