https://www.researchgate.net/publication/364811408_How_to_Hack_the_Simulation
Some things get destroyed. Other things survive. Ultimately, the question in this scenario is how much do I value what we’ve lost, and how much do I value what we’ve gained?
I agree with your overall assessment. However, to me, if any part of humanity is lost, that is already an unacceptable loss.
So uploads are typically not mortal, not hungry for food, etc. You are asking: if we create simulations of humans so exact that they have all the typical limitations, would they have the same wants as real humans? Probably yes. The original question Wei Dai was asking me was about my statement that if we become uploads, “At that point you already lost humanity by definition”. Allow me to propose a simple thought experiment. We make simulated versions of all humans and put them in cyberspace. At that point we proceed to kill all people. Does the fact that somewhere in cyberspace there is still a piece of source code which wants the same things as I do make a difference in this scenario? I still feel like humanity gets destroyed in this scenario, but you are free to disagree with my interpretation.
Just because you can experience something someone else can does not mean that you are of the same type. Belonging to a class of objects (e.g., humans) requires you to be one. A simulation of a piece of wood (visual texture, graphics, molecular structure, etc.) is not a piece of wood and so does not belong to the class of pieces of wood. A simulated piece of wood can undergo a simulated burning process or any other wood-suitable experience, but it is still not a piece of wood. Likewise, a piece of software is by definition not a human being; it is at best a simulation of one.
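This class-membership argument can be restated in programming terms: behavioral equivalence (duck typing) is not the same thing as type membership. Here is a minimal Python sketch of the analogy; the Wood/SimulatedWood classes are my own hypothetical illustration, not anything from the original comment:

```python
# Toy analogy: two objects can support the same "experiences"
# while belonging to different types.

class Wood:
    """A physical piece of wood."""
    def burn(self):
        return "actual combustion"

class SimulatedWood:
    """Reproduces wood's observable behavior, but is a different kind of object."""
    def burn(self):
        return "rendered combustion"

plank, sim_plank = Wood(), SimulatedWood()

# Both afford the same experience (duck typing)...
assert plank.burn() and sim_plank.burn()

# ...but shared behavior does not confer class membership.
assert isinstance(plank, Wood)
assert not isinstance(sim_plank, Wood)
```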
Great question. To me, a system is domain-specific if it can’t be switched to a different domain without re-designing it. I can’t take Deep Blue and use it to sort mail instead. I can’t take Watson and use it to drive cars. An AGI (of which I have no examples) would be capable of switching domains. If we take humans as an example of general intelligence, you can take an average person and make them work as a cook, driver, babysitter, etc., without any need to re-design them. You might need to spend some time teaching that person a new skill, but they can learn efficiently, perhaps just by watching how it should be done. I can’t do this with a domain-expert AI: Deep Blue will not learn to sort mail regardless of how many times I demonstrate that process.
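One way to picture the distinction is as a difference in interfaces: a narrow system exposes exactly one task and has no entry point for acquiring another, while a general agent learns new tasks through the same interface. A minimal Python sketch under those assumptions (ChessEngine and GeneralAgent are hypothetical stand-ins, not real systems):

```python
class ChessEngine:
    """Domain-specific: chess is the only operation it supports.
    There is no 'learn a new task' entry point; repurposing it for
    mail sorting would mean redesigning the system itself."""
    def best_move(self, board_state: str) -> str:
        return "e2e4"  # placeholder policy

class GeneralAgent:
    """General: new skills arrive through the same interface,
    e.g. by observing demonstrations, with no redesign."""
    def __init__(self):
        self.skills = {}

    def learn(self, task: str, demonstrations: list) -> None:
        self.skills[task] = demonstrations  # stand-in for learning from examples

    def perform(self, task: str, situation: str) -> str:
        if task not in self.skills:
            raise NotImplementedError(f"never learned {task!r}")
        return f"handling {situation!r} with learned skill {task!r}"

agent = GeneralAgent()
agent.learn("sort_mail", demonstrations=["letter -> bin A", "parcel -> bin B"])
print(agent.perform("sort_mail", "letter"))
# A ChessEngine has no analogous path: demonstrating mail sorting to it
# changes nothing, which is the Deep Blue point above.
```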
I don’t know about you, but for me only a few hours a day are devoted to thinking or other non-physiological pursuits; the rest goes to sleeping, eating, drinking, sex, physical exercise, etc. My goals are dominated by the need to acquire resources to support the physiological needs of me and my family. You can extend any courtesy you want to anyone you want, but you (a human body) and a computer program (software) don’t have much in common as far as being from the same group is concerned. Software is not humanity; at best it is a partial simulation of one aspect of one person.
Just because you can’t imagine AGI in the next 5 years doesn’t mean that in four years someone will not propose a perfectly workable algorithm for achieving it. So yes, it is necessary. Once everyone sees how obvious AGI design is, it will be too late. Random countries don’t develop cutting-edge technology; it is always done by the same superpowers (USA, Russia, etc.). I didn’t read your blog post, so I can’t comment on “global cooperation”. As to the general question you are asking: you can get most conceivable benefits from domain-expert AI without any need for AGI. Finally, I do think that relinquishment/delaying is a desirable thing, but I don’t think it is implementable in practice.
We can talk about what high-fidelity emulation includes. Will it be just your mind? Or will it be mind + body + environment? In the most common case (with an absent body), the most typically human feelings (hunger, thirst, tiredness, etc.) will not be preserved, creating a new type of agent. People are mostly defined by their physiological needs (think of Maslow’s pyramid). An entity with no such needs (or with such needs satisfied by virtual/simulated abundant resources) will not be human and will not want the same things as a human. Someone who is no longer subject to human weaknesses or relatively limited intelligence may lose all allegiance to humanity, since they would no longer be a part of it. So I guess I define “humanity” as comprised of standard/unaltered humans. Anything superior is no longer a human to me, just as we are not first and foremost Neanderthals and only secondarily Homo sapiens.
Hey Wei, great question! Agents (augmented humans) with an IQ of 250 would be superintelligent with respect to our current position on the intelligence curve and would be just as dangerous to us, unaugmented humans, as any sort of artificial superintelligence. They would not be guaranteed to be Friendly by design and would be as foreign to us in their desires as most of us are from severely mentally retarded persons. For most of us (sadly?) such people are something to try to fix via science, not someone whose wishes we want to fulfill. In other words, I don’t think you can rely on an unverified (for safety) agent (even with higher intelligence) to make sure that other agents with higher intelligence are designed to be human-safe. All the examples you give start by replacing humanity with something non-human (uploads, augments) and proceed to ask the question of how to save humanity. At that point you already lost humanity by definition. I am not saying that is not going to happen; it probably will. Most likely we will see something predicted by Kurzweil (a merger of machines and people).
Hey, my name is Roman. You can read my detailed bio here, as well as some research papers I published on the topics of AI and security. I decided to attend a local LW meetup, and it made sense to at least register on the site. My short-term goal is to find some people in my geographic area (Louisville, KY, USA) to befriend.
Just registered for this Meetup. Curious if anyone else will be coming from Louisville, KY?
https://www.researchgate.net/profile/Roman-Yampolskiy/publication/329012008_Minimum_Viable_Human_Population_with_Intelligent_Interventions