May I suggest a test for any such future model? It should take into account that I have unconscious sub-personalities which affect my behaviour without my knowing about them.
I think you proved that values can't exist outside a human mind, and that is a big problem for the idea of value alignment.
The only solution I see is: don't try to extract values from the human mind, but try to upload a human mind into a computer. In that case, we kill two birds with one stone: we have some form of AI which has human values (no matter what they are), and it also has common sense.
An upload used as an AI safety solution may also have difficulty with foom-style self-improvement, as its internal structure is messy and incomprehensible to a normal human mind. So it is intrinsically safe, and it is the only known workable solution to AI safety.
However, there are (at least) two main problems with such a solution: it may give rise to neuromorphic non-human AIs, and it does not prevent the later appearance of a pure AI, which could foom and kill everybody.
The solution I see is to use the first human upload as an AI Nanny or AI police, which would prevent the appearance of any other, more sophisticated AIs elsewhere.
I expected it would jump out and start to replicate all over the world.
You could start a local chapter of the Transhumanist Party, or of anything you want, and just organize gatherings of people to discuss futuristic topics like life extension, AI safety, whatever. Official registration of such activity is probably a loss of time and money, unless you know what you are going to do with it, like collecting donations or renting an office.
There is no need to start any institute if you don't have a dedicated group of people around. An institute consisting of one person is something strange.
I read in one Russian blog that they calculated the form of objects able to produce such dips. They turned out to be strips about 10 million kilometres long orbiting the star. I think they are very similar to very large comet tails.
Any attempts at posthumous digital immortality? That is, collecting all the data about a person in the hope that a future AI will create their exact model.
Two of my comments got −3 each, so it was probably a single person with high karma who was able to do so.
Thanks for the explanation. Typically around 70 percent of my comments got upvoted on LW1, so getting −3 was a signal that I am in a much more aggressive environment than LW1 was.
Anyway, the best downvoting system is on the Longecity forum, where many types of downvotes exist, like "non-informative", "biased", "bad grammar", but all of them are signed, that is, non-anonymous. If you know who downvoted you and why, you know how to improve the next post. If you are downvoted without explanation, it feels like a blow in the dark.
I re-registered as avturchin because, after the password for turchin was reset, it was not clear what I should do next. However, after I re-registered as avturchin, I was not able to return to my original username, probably because LW2 prevents one person from having several accounts. I would prefer to reconnect to my original name, but I don't know how to do it and don't have much time to search for the correct way.
Agree. The real point of a simulation is to use fewer computational resources to get approximately the same result as reality, depending on the goal of the simulation. So it may simulate only the surfaces of things, as in computer games.
I posted 3 comments there and got 6 downvotes, which resulted in extreme negative emotions for the whole evening that day. While I understand why they were downvoted, my emotional reaction is still a surprise to me.
Because of this, I am not interested in participating in the new site, but I like the current LW, where downvoting is turned off.
In fact, I will probably do a reality check to see whether I am in a dream if I see something like "all the mountains start to move". I am referring here to techniques for reaching lucid dreams, which I know and often practice. Humans are unique in that they are able to have completely immersive illusions while dreaming and yet recognise them as dreams without waking up.
But I got your point: the definition of reality depends on the type of reality in which one is living.
If I see that a mountain starts to move, there will be a conflict between what I think mountains are (geological formations) and my observations, and I will have to update my world model. One way to do so is to conclude that it is not a real geological mountain, but something that pretended to be one (or was mistakenly observed as one); after it starts to move, it becomes clear that it was just an illusion. Maybe it was a large tree, or a video projection on a wall.
I think there is one observable property of illusions, which becomes possible exactly because they are computationally cheap: miracles. We constantly see flying mountains in movies, in dreams, in pictures, but not in reality. If I have a lucid dream, I can recognise the difference between my idea of what a mountain is (a product of long-term geological history) and the fact that it has one peak in one second and two peaks in the next. This can raise doubts about its consistency and often helps me attain lucidity in the dream.
So it is possible to learn that something is an illusion before encountering the real thing, if there are unexpected (and computationally cheap) glitches.
So, are night dreams illusions or real objects? I think they are illusions: when I see a mountain in my dream, it is an illusion, and my "wet neural net" generates only an image of its surface. However, in the dream, I think it is real. So dreams are a form of immersive simulation. And because they are computationally cheaper, I see strange things like tsunamis more often in dreams than in reality.
Happy Petrov Day! 34 years ago a nuclear war was prevented by a single hero. He died this year. But many people now strive to prevent global catastrophic risks and will remember him forever.
It looks like the word "fake" is not quite right here. Let's say "illusion". If one creates a movie about a volcanic eruption, one has to model only the ways it will appear to the expected observer. This is often done in cinema, where pure CGI is used to make a clip, as it is cheaper than actually filming the real event.
Illusions in most cases are computationally cheaper than real processes and even than detailed models. Even when they film a real actress because it is cheaper than animation, the copying of her image creates many illusory observations of a human, when in fact it is only a TV screen.
Personally, I have lost track of the point you would like to prove. What is the main disagreement?
I meant that in a simulation most of the effort goes into calculating only the visible surfaces of things. Internal details which do not affect the visible surface may be ignored; thus the computation will be much cheaper than an atom-precise simulation. For example, all of Earth's internal structure deeper than 100 km (and probably much shallower than that) may be ignored while still getting a very realistic simulation of observing a volcanic eruption.
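A rough back-of-the-envelope sketch of that saving (a minimal illustration only; the assumption that cost scales with simulated volume, and the exact cutoff, are mine):

```python
import math

# Rough estimate: if everything deeper than 100 km can be ignored,
# what fraction of Earth's volume must the simulation actually model?
# Assumes, purely for illustration, that cost scales with simulated volume.

EARTH_RADIUS_KM = 6371.0
SHELL_DEPTH_KM = 100.0  # illustrative cutoff from the comment above

def sphere_volume(r: float) -> float:
    return 4.0 / 3.0 * math.pi * r ** 3

total = sphere_volume(EARTH_RADIUS_KM)
shell = total - sphere_volume(EARTH_RADIUS_KM - SHELL_DEPTH_KM)

print(f"Outer-shell fraction: {shell / total:.3f}")  # ~0.046, about a 20x saving
```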
In that case, I use the same logic as Bostrom: each real civilization creates zillions of copies of certain experiences. This already happens in the form of dreams, movies and pictures.
Thus I normalize by the number of existing civilizations and avoid obscure questions about the nature of the universe or the price of the Big Bang. I just assume that within a civilization rare experiences are often faked. They are rare because they are in some way expensive to create, like diamonds or volcano observations, but their copies are cheap, like glass or pictures.
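A toy version of that counting argument (the counts below are invented purely for illustration):

```python
# If cheap copies of an experience vastly outnumber real instances,
# a randomly sampled experience of it is almost certainly a copy.
# Both counts are made up for this sketch.

real_observations = 1        # e.g. actually witnessing a volcanic eruption
copied_observations = 10**6  # dreams, movies, pictures of eruptions

p_real = real_observations / (real_observations + copied_observations)
print(f"P(a given experience is the real one) = {p_real:.2e}")  # ~1.0e-06
```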
We could explain it in terms of observations. A fake observation is a situation where you experience something that does not actually exist. For example, you watch a video of a volcanic eruption on YouTube. It is computationally cheaper to create a copy of a video of a volcanic eruption than to actually create a volcano, and because of this, we see pictures of volcanic eruptions more often than actual ones.
It is not meaningless to say that the world is fake if only the observable surfaces of things are calculated, as in a computer game; this is computationally cheaper.
Also, the question was not whether I could judge others' values, but whether it is possible to prove that an AI has the same values as a human being.
Or are you going to prove the equality of two value systems while at least one of them remains unknowable?