I’ll post this here and put a copy on my desktop so I remember to check it in a few months. My beliefs tend to change very often and very quickly.
Beliefs that I think are most likely to change:
Existence is very tied up with relative causal significance. Classical subjective anticipation makes little sense. Quantum immortality should be replaced with causal immortality.
Reflective consistency is slippery due to impermanence of agency and may be significantly less compelling than I had previously thought.
Something like human ‘morality’ (possibly more relevant to actions pre-Singularity than Eliezer’s conception of ‘good’ as humanity-CEV) might be important for reasons having to do with acausal control and, to a lesser extent, smarter-than-human intelligences in the multiverse looking at humanity and alien civilizations as evidence of what patterns of action could be recognized as morally justified.
Building a seed AI that doesn’t converge on a design something like the one outlined in Creating Friendly AI may be impossible, due to convergent decision-theoretic reflective self-re-engineering (and the grounding problem). Of course, for all practical purposes this intuition doesn’t matter, since we still have to prove something like Friendliness.
Solving Friendliness (minus the AGI part, which would be fairly integrated, so this is somewhat vague) is somewhat easier than building a cleanly engineered seed AI, independent of the previous point.
Death the way it is normally conceptualized is a confusion. The Buddhist conception of rebirth is more accurate. (And it doesn’t mean the transparently stupid thing that most Westerners imagine.)
Most of Less Wrong’s intuitions about how the world works are based on an advanced form of naive realism that just doesn’t work. “It all adds up to normality” is either tautologous or just plain wrong. What you thought of as normality normally isn’t.
Ensemble universe theories are almost certainly correct.
Suicide with the intent of ending subjective experience is downright impossible. (The idea of a ‘self’ that is suffering is also a confusion, but anyway...) The only form of liberation from suffering is Enlightenment in the Buddhist sense.
I am the main character to the extent that ‘I’ ‘am’. At the very least I should act as if I am for instrumentally rational reasons.
People who are good at ‘doing things’ will never have the epistemic rationality necessary to build FAI or seed AI, due to limits of human psychology. Whoever does build AI will probably have some form of schizoid personality disorder or, somewhat ironically at the opposite extreme, autism spectrum disorder.
My intuition is totally awesome and can be used reliably to see important themes in scientific, philosophical, and spiritual fields.
I am the main character to the extent that ‘I’ ‘am’. At the very least I should act as if I am for instrumentally rational reasons.
I’d be interested to hear about this one in more detail. There are a lot of possible interpretations of it, but most of them seem egoist in a way that doesn’t seem to mesh well with the spirit of your other comments.
Death the way it is normally conceptualized is a confusion. The Buddhist conception of rebirth is more accurate. (And it doesn’t mean the transparently stupid thing that most Westerners imagine.)
Seconded for Buddhist clarifications, particularly regarding the belief quoted above.
Causal immortality seems more and more true to me over time (I would be surprised if any of the major SIAI donors, including the older ones, died before the Singularity), but it could definitely use some explanation. Though I’m not sure about the consequences of encouraging people to maximize their causal significance. Almost certainly not good.
Could you explain/link to an explanation of the Buddhist bits?
Nice.