I think the most natural response to ‘community’ would have been a post about LessWrong and its founding Sequences. We wanted to create a place where rationalists can discuss the Art and Science of rationality.
CFAR, in turn, is a place to discuss an important topic that might otherwise go undiscussed.
To paraphrase two of the core themes on this site:
To a human mind, the world is an incredible fabric of complex, entangled, self-reinforcing processes. (Many of us can be made aware of this fact, though usually we don’t need to be.)
Rather than trying to collect complete information from each person, we rely on a set of simpler, more useful shared models, built from our conversations and experiences.
One of the CFAR concepts is the “agent-agent distinction”: an agent that models itself as an agent, and so also tries to understand its own goals and limitations. One of the main motivations for the new Center for Applied Rationality is to build that kind of self-understanding in people, and related efforts attempt to make generally intelligent AI agents reflect humanity’s goals.
CFAR’s overarching mission is raising the sanity waterline. That is, it is attempting to help people benefit from thinking clearly, reach their goals, and make each other more effective. As a nonprofit, CFAR aims to be a place where we can help people overcome their irrational biases, and to do so as best they can.
CFAR is building a whole new rationality curriculum that will hopefully help people become more effective.
We are running workshops again this November. As with the earlier Singularity Summits, we are tweaking the curriculum and the organization of the CFAR alumni community. The new thinking-tools workshop will give us specific ways to apply the principles of rationality to the behavior of groups and individuals, as opposed to mere human “capital” and organizational machinery.

In past years, we’ve moved from “organizational inadequacy” posts to “common denominator” posts to “organizational capital” posts, and I’d like there to be funding for doing high-impact good. Building and running such an organization allows us to step outside the academic and institutional space that would normally be reserved for highly technical people.
On a more practical level, the infrastructure in Berkeley already exists, but we’
Ah. It’s a bot. I suppose the name should have tipped me off. At least I get Being More Confused By Fiction Than Reality points.
You’ve covered a lot of things in my writing and I enjoy this. Thanks for what you’ve done.
How did you write that in less than a minute?
Thanks for writing this!
The paper doesn’t show up until 4:30, even though the book is aimed very specifically at convincing a significant fraction of the population that cryonics is plausible.
For those who don’t understand, see here.
In the first chapter, you basically make the case that the scientific method is wrong, or at least that this position is not a strawman. In the rest of what I’ve read, even the most mainstream-seeming and obvious parts of the scientific method seem to be in doubt.
In the second chapter, you basically lay out the science in a book that is primarily about the ability of human minds to generalize from one another, and it rests on:
The basic Bayes-related question of personal identity: how much psychological continuity should be enough?
And how much should one’s society prioritize ensuring that one can be in this position?
In particular, it doesn’t fit into the Bostrom model of personal identity.
It’s not entirely clear that the relationship between personal identity and mental identity is exactly the sort of information-theoretic question that could lead us to a useful answer, or what kind of information would even be relevant to the question of who you will find yourself to be in the future.
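To make the Bayes-related question above concrete, here is a toy formulation (my own sketch, not anything from the book): treat “this future mind is the same person as me” as a hypothesis H, treat an observed degree of psychological continuity c as evidence, and ask what threshold the posterior must clear:

$$P(H \mid c) = \frac{P(c \mid H)\,P(H)}{P(c \mid H)\,P(H) + P(c \mid \neg H)\,P(\neg H)}$$

On this framing, “how much continuity should be enough?” becomes a question about how large the likelihood ratio P(c|H)/P(c|¬H) must be before the posterior passes one’s chosen threshold.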
You have probably seen this phrasing and these objections about science before, and I think you’ve taken them too far. Yes, it’s hard to argue about the degree of overlap with the scientific method, and yes, the two are relevant to each other. But if the method is going to work in such extreme cases for a long time, then there should be an additional category, something like “substrategic knowledge”.
One of the things that I think is really important is to figure out how to think about personal identity under the “internal locus of control”. Here’s my attempt to begin that.
The “internal locus of control” seems like it would be quite a different subject in this context, I think, from where I’ve been heading here.
If this doesn’t work, then there could be some fundamental difference between myself and a rationalist.
A few of my observations:
I’ve been a slow reader for a while now, and I probably wasn’t remembering much about LW from when I was a teenager, so I didn’t really get anything out of it.
I was