>That’s much more like the sort of thing you can give to an optimizer. And it results in the world frozen solid.
That’s why I made sure to specify gradual improvement. Also, development and improvement are the natural state of humanity and of people, so taking that away from them means breaking the status quo too.
>I notice that the word “reasonably” is doing most of the work there. (much like in English Common Law, where it works reasonably well, because it’s interpreted by reasonably human beings.)
Mathematically speaking, polynomials are reasonable functions. Step functions or factorials are not. Exponentials are reasonable if they have been growing at a roughly constant rate since somewhere before the year 2000. Metrics of a reasonable world should be described by reasonable functions.
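A toy way to cash that out (my own sketch with hypothetical helper names, not anything from the thread): treat a positive metric as “reasonable” if its year-on-year growth factor settles to a roughly constant value. That admits polynomials and steady exponentials but rejects factorial or step-function blow-ups.

```python
# Toy formalisation: a positive metric is "reasonable" if its
# year-on-year growth factor settles to a roughly constant value.
# Polynomials (factor -> 1) and steady exponentials (factor -> const)
# pass; factorial-style explosions fail.

def growth_factors(series):
    """Year-on-year ratios of a positive metric."""
    return [b / a for a, b in zip(series, series[1:])]

def looks_reasonable(series, tolerance=0.1):
    """True if consecutive growth factors never jump by more than tolerance."""
    f = growth_factors(series)
    return all(abs(b - a) <= tolerance for a, b in zip(f, f[1:]))

# Start the polynomial at t=10 so its early transient has settled.
polynomial  = [t**2 + 1 for t in range(10, 30)]   # factors drift toward 1
exponential = [1.05**t for t in range(20)]        # constant factor 1.05
factorial   = [1, 1, 2, 6, 24, 120, 720, 5040]    # factors 1, 2, 3, ... explode

print(looks_reasonable(polynomial))   # True
print(looks_reasonable(exponential))  # True
print(looks_reasonable(factorial))    # False
```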
>There are three kinds of genies: Genies to whom you can safely say “I wish for you to do what I should wish for”; genies for which no wish is safe; and genies that aren’t very powerful or intelligent.
I’ll take the third, please. It just needs to be powerful enough to prevent the other two types from being created in the foreseeable future.
Also, it seems that you imagine AI not just as the second type of genie, but as a genie that is explicitly hostile and would misinterpret your wish on purpose. Of course, making any wish to such a genie would end badly.
Genies that are not very powerful or intelligent are not powerful enough to prevent the other two types from being created. They need to be more capable than you are, or you could just do the stuff yourself.
>not just as the second type of genie, but as a genie that is explicitly hostile and would misinterpret your wish on purpose
The second type of genie is hostile and would misinterpret your wish! Not deliberately, not malevolently. Just because that’s what optimisers are like unless they’re optimising for the right thing.
Creating something malevolent, something that would deliberately misinterpret an otherwise sound wish, also requires solving the alignment problem. You’d need to somehow get human values into it so that it can deliberately pervert them.
Honestly, Eliezer’s original essay from aeons ago explains all this. You should read it.
AI can be useful without being ASI, including in things such as identifying and preventing situations that could lead to the creation of an unaligned ASI.
Of course, a conservative, human-friendly AI would probably lose to an AI of comparable power that is not limited by those “handicaps”. That’s why it’s important to prevent the possibility of their creation instead of fighting them “fairly”.
And yes, computronium maximising is a likely behaviour, but there are ideas for how to avoid it, such as https://www.lesswrong.com/posts/ngEvKav9w57XrGQnb/cognitive-emulation-a-naive-ai-safety-proposal or https://www.lesswrong.com/posts/5gQLrJr2yhPzMCcni/the-optimizer-s-curse-and-how-to-beat-it
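For what it’s worth, the optimizer’s curse from the second link is easy to demonstrate numerically. Below is a minimal sketch (mine, illustrating only the statistical effect, not the post’s remedy): if you pick whichever of many options has the highest noisy estimate, that estimate systematically overstates the option’s true value.

```python
# Minimal demo of the optimizer's curse: among N options with identical
# true value, choosing the one with the highest *noisy estimate*
# systematically overestimates what you actually get.
import random

random.seed(0)

N_OPTIONS = 20     # candidate actions, all equally good in reality
TRUE_VALUE = 1.0
NOISE = 0.5        # std. dev. of the estimation error
TRIALS = 10_000

overestimate = 0.0
for _ in range(TRIALS):
    estimates = [TRUE_VALUE + random.gauss(0, NOISE) for _ in range(N_OPTIONS)]
    # The chosen option's true value is still TRUE_VALUE, so the
    # post-decision disappointment is (best estimate - truth).
    overestimate += max(estimates) - TRUE_VALUE

print(f"average overestimate of the chosen option: {overestimate / TRIALS:.2f}")
# ~0.93 with these numbers: the winner looks almost two sigmas better
# than it really is, purely because selection amplifies the noise.
```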
Of course, all those ideas and possibilities may actually be duds, and we are doomed no matter what. But then what’s the point of seeking a solution that does not exist?
I’m not proposing solutions here. I think we face an insurmountable opportunity.
But for some reason I don’t understand, I am driven to stare the problem in the face in its full difficulty.
I think it may be caused by https://en.wikipedia.org/wiki/Anxiety_disorder
I suffer from that too.
That’s a very counterproductive state of mind if the task is unsolvable in its full difficulty. It makes you lose hope and stop trying solutions that would work if the situation is not as bad as you imagined.
A good guess, and thank you for the reference, but (although I admit that the prospect of imminent global doom is somewhat anxious-making) anxiety isn’t a state of mind I’m terribly familiar with personally. I’m usually very emotionally stable, and I lost all hope years ago. It doesn’t bother me much.
It’s more that I have the ‘taking ideas seriously’ thing in full measure: once I get an *idée fixe* I can’t let it go until I’ve solved it. AI Doom is currently third on the list after chess and the seed oil nonsense, but the whole Bing/Sydney thing started me thinking about it again, and someone emailed me Eliezer’s podcast; you know how it goes.
I do have a couple of friends who suffer greatly from Anxiety Disorder, though, and you have my sympathies, especially if you’re interested in all this stuff! Honestly, run away; there’s nothing to be done and you have a life to live.
Totally off topic, but have you tried lavender pills? I started recommending them to friends after Scott Alexander said they might work, and out of three people I’ve had one total failure, one refusal to take them for good reasons, and one complete fix! Obviously do your own research as to side effects; just because it’s ‘natural’ doesn’t mean it’s safe. The main one is that if you’re a girl it will interfere with your hormones and might cause miscarriages.
Thanks for the advice. It looks like my mind works similarly to yours, i.e. it can’t give up a task it has latched onto. But my brain draws far more from the rest of my body than is healthy.
It’s not as bad now as it was in the first couple of weeks, but I still have problems sleeping regularly, because my mind can’t switch off overdrive mode. So I become sleepy AND agitated at the same time, which is a quite unpleasant and unproductive state.
There are no Lavender Pills around here, but I take other anxiety medications, and they help, to an extent.
These seemed good (they taste of lavender), but the person trying them got no effect:
https://www.amazon.co.uk/gp/product/B06XPLTLLN/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
Lindens Lavender Essential Oil 80mg Capsules
The person it worked for tried something purchased from a shop (Herbal Calms, maybe?): lavender oil in vegetable oil in little capsules. She reports that she can get to sleep now, and can face doing things that she couldn’t previously do due to anxiety if she pops a capsule first.