You say you are software, which could be implemented on other computational substrates. You deny the preferability of having a more knowledgeable, less error-prone substrate be used to compute your preferences. This is a contradiction. Why are you currently endorsing stupid “terminal” values?
You say you are software, which could be implemented on other computational substrates. You deny the preferability of having a more knowledgeable, less error-prone substrate be used to compute your preferences.
Wait, are you suggesting that I be uploaded into something with really excellent computational power so I myself would become a superintelligence? As opposed to an external agent that happened to be superintelligent? That might actually work. I will have to think about that. You could have been less rude in proposing it, though.
No. I am suggesting that the situation I described is what you would find in an FAI. You really should be deferring to Eliezer’s expertise in this case.
What about my statements was rude? How can I present these arguments without making you feel uncomfortable?
No. I am suggesting that the situation I described is what you would find in an FAI.
Then I don’t understand what you said.
You really should be deferring to Eliezer’s expertise in this case.
I will not do that as long as he seems confused about the psychology he’s trying to predict things for.
What about my statements was rude? How can I present these arguments without making you feel uncomfortable?
I think calling my terminal values “stupid” was probably the most egregious bit. It is wise to avoid that word as applied to people and things they care about. I would appreciate it if people who want to help me would react with curiosity, not screeching incredulity and metaphorical tearing out of hair, when they find my statements about myself or other things puzzling or apparently inconsistent.
If he and I are confused, you are seriously failing to describe your situation. You are a human brain. Brains work by physical laws. Bayesian superintelligences can figure out how to fix the issues you have, even with the handicap of making sure their intervention is acceptable to you.
I understand your antipathy for the word “stupid.” I shall try to avoid it in the future.
If he and I are confused, you are seriously failing to describe your situation.
Yes, this is very likely. I don’t think I ever claimed that the problem wasn’t in how I was explaining myself; but a fact about my explanation isn’t a fact about the (poorly) explained phenomenon.
Bayesian superintelligences can figure out how to fix the issues you have, even with the handicap of making sure their intervention is acceptable to you.
I can figure out how to fix the issues I have too: I’m in the process of befriending some more cryonics-friendly people. Why do people think this isn’t going to work? Or does it just seem like a bad way to approach the problem for some reason? Or do people think I won’t follow through on signing up should I acquire a suitable friend, even though I’ve offered to bet money on my being signed up within two years barring immense financial disaster?
Your second paragraph clears up my lingering misunderstandings; that was the missing piece of information for me. We were (or at least I was) arguing about a hypothetical situation instead of the actual situation. What you’re doing sounds perfectly reasonable to me.
If you are willing to take the 1 in 500 chance, my best wishes.
Where did that number come from and what does it refer to?
Actuarial tables, odds of death for a two-year period for someone in their twenties (unless I misread the table, which is not at all impossible).
It’s really that likely? Can I see the tables? The number sounds too pessimistic to me.
http://www.socialsecurity.gov/OACT/STATS/table4c6.html
Looks like it should be 1/1000 for two years to me.
It should be around 1 in 400 for males in their 20s and 1 in 1000 for females in their 20s.
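For concreteness, here is a minimal sketch of the arithmetic behind those figures. The annual death probabilities used (about 0.0013 for a male and 0.0005 for a female in their mid-twenties) are approximate readings of the linked SSA period life table, so treat them as assumptions rather than exact table values:

```python
# Sketch: converting an annual death probability q from a period life table
# into a two-year death risk. The q values below are approximate readings
# from the linked SSA table for someone in their mid-twenties (assumption).

annual_q = {
    "male, mid-20s": 0.0013,    # approx. annual death probability
    "female, mid-20s": 0.0005,  # approx. annual death probability
}

for group, q in annual_q.items():
    # P(die within 2 years) = 1 - P(survive year 1) * P(survive year 2)
    two_year = 1 - (1 - q) ** 2
    print(f"{group}: two-year risk ≈ {two_year:.4f} (about 1 in {round(1 / two_year)})")
```

With those inputs the two-year risk comes out near 1 in 385 for the male and 1 in 1000 for the female, consistent with the “around 1 in 400 / 1 in 1000” figures above.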