Yeah, don’t be discouraged. LW is just like that sometimes. If you link to something with little or no commentary, it really needs to be directly about rationality itself or be using lots of LW-style rationality in the piece. This was a bit too mainstream to be appreciated widely here (even in discussion).
Glad to see you’re posting though! You still in ATL and learning about FAI? I made a post you might like. :)
LW needs just a generic link-submission area to give it some of that reddit functionality it’s missing. Maybe have a third area besides main and discussion which is just for reddit-style img/link submission...
I guess I should add, I assumed life extension and such were entirely on-topic here. It’s kind of an obvious, major interest to any rationalist. To those who have no idea what I’m talking about, go read HPMOR a couple more times or something, jesus.
Also, good list, thanks. I actually don’t know anything about functional programming, I’m going to look that up today.
The biweekly Open Thread works well for that sort of thing.
Is making LW more like Reddit a good idea? What if it actually becomes more like Reddit?
It’s kind of an obvious, major interest to any rationalist.

I’m wary of this conflation of rationality and a specific set of values. Rationality is a tool for achieving your values; it doesn’t specify what your values should be.
You’re just going to blast away the entire epistemic half of rationality? :)
nope
Immortality is good as an instrument for realising many values: you will have more time to reach them. Also, excluding values from rationality means we treat them as a set of arbitrary, irrational axioms.
It may seem that way only if we think of rationality as pure mathematical logic. But “ratio” is Latin for reason, i.e. intelligence, if we check the meaning of the word. So in order to define rationality we would need to define intelligence, which is almost equivalent to having a recipe for AI. And since we don’t know how to create AI, we also can’t claim to know exactly what it means to be rational. Here begins a kind of circular logic that could undermine many LW goals: we can’t take an absolutely rational approach to creating AI and FAI, because we don’t know what rationality is before we know how AGI works.
So if someone’s claimed understanding of rationality leads to an irrational conclusion like that one (“fighting death is not important”), it could mean that his understanding of rationality is wrong; it doesn’t add up to normality.