This is a draft of a post I’m planning to send to my everything-list, partly to invite them to join Less Wrong. I’d appreciate comments and feedback on it.
This is a draft of a post I’m planning to send to my everything-list, partly to invite them to join Less Wrong. I’d appreciate comments and feedback on it.
Recently I heard the news that Max Tegmark has joined the Advisory Board of SIAI (the Singularity Institute for Artificial Intelligence, see http://www.singinst.org/blog/2010/03/03/mit-professor-and-cosmologist-max-tegmark-joins-siai-advisory-board/). This news was surprising to me, but in retrospect perhaps shouldn’t have been. Of the three authors of papers I cited in the original everything-list charter/invitation, the other two had already effectively declared themselves to be Singularitarians (see http://en.wikipedia.org/wiki/Singularitarianism): Nick Bostrom has been on SIAI’s Advisory Board for a while, and Juergen Schmidhuber spoke at the Singularity Summit 2009. I was also recently invited to visit SIAI for a decision theory mini-workshop, where I found the ultimate ensemble idea to be very well received. It turns out that many SIAI people have been following the everything-list for years.
There seems to be a very strong correlation between interest in the kind of ideas we discuss here and interest in the technological Singularity. (I myself have been interested in the Singularity since before starting this mailing list.) So the main point of this post is to let list members who are not already familiar with the Singularity know that there is another set of ideas out there that they are likely to find fascinating.
Another reason for this post is to let you know that I’ve been spending most of my online discussion time at Less Wrong (http://lesswrong.com/lw/1/about_less_wrong/, “a community blog devoted to refining the art of human rationality”, which is sponsored by the Future of Humanity Institute, founded by Nick Bostrom, and effectively “owned” by Eliezer Yudkowsky, co-founder of SIAI). There I wrote a sequence of posts summarizing my current thoughts on decision theory, interpretations of probability, anthropic reasoning, and the ultimate ensemble theory:
http://lesswrong.com/lw/15m/towards_a_new_decision_theory/
http://lesswrong.com/lw/175/torture_vs_dust_vs_the_presumptuous_philosopher/
http://lesswrong.com/lw/182/the_absentminded_driver/
http://lesswrong.com/lw/1a5/scott_aaronson_on_born_probabilities/
http://lesswrong.com/lw/1b8/anticipation_vs_faith_at_what_cost_rationality/
http://lesswrong.com/lw/1cd/why_the_beliefsvalues_dichotomy/
http://lesswrong.com/lw/1fu/why_and_why_not_bayesian_updating/
http://lesswrong.com/lw/1hg/the_moral_status_of_independent_identical_copies/
http://lesswrong.com/lw/1iy/what_are_probabilities_anyway/
I initially wanted to reach a different audience with these ideas, but found that the Less Wrong format has several advantages: both posts and comments can be voted on, the site’s members uphold fairly strict standards of clarity and logic, and the threaded presentation of comments makes discussions much easier to follow. So I plan to continue to spend most of my time there, and I invite other everything-list members to join me. But please note that the site has a different set of customs and a different emphasis in topics. New members are also expected to have a good grasp of the current state of the art in human rationality (Bayesianism, heuristics and biases, Aumann agreement, etc., see http://wiki.lesswrong.com/wiki/Sequences) before posting, and especially before getting into disagreements and arguments with others.
I’m still with Jack that pointing new readers to the entirety of the Sequences is suboptimal. I’m waiting for the day when we can at least say “Start here (link) and keep clicking Next, skimming as much as you like”, but you probably don’t want to wait that long to send the post, so I don’t know.
It doesn’t look bad to me—if you believe it would be well-received, I see no problem with sending it.
Looks fine to me.