1: Is it better to emulate 1 human faithfully or 10 humans with occasional glitches (for example, emulations that could no longer appreciate music in the same way)?
This really depends on whether you are emulating the humans for personal reasons or for industrial reasons. If I want more time to spend with my soon-to-be-dead wife, I will probably want a faithful reproduction. If I want mass-produced servants, it’s probably simpler to go for the 10 with occasional glitches, as long as those glitches meet certain standards of industrial safety. If the glitches are only minor aesthetic differences, they wouldn’t really bother me. It would be entirely different if the glitch was “Violent uprising.”
2: How glitch-free would you want the emulation to be before you gave up your body?
This would substantially depend on how many years I likely had left to live, and also seems heavily dependent on the proportion and types of the glitches. For instance, “Accidentally goes corrupt and violently insane, attacking loved ones in an irreparable manner” can be considered a glitch. So can “A memory leak requires a shutdown and clean boot every 2 hours. In a few years we think we can upgrade this to 4 hours.” But my tolerance for the glitches of incurable violent psychosis and curable narcolepsy is substantially different.
It’s hard to say what kinds of glitches we might theoretically run into. I would imagine narcolepsy-like glitches would be common, because computers have those types of problems right now: after they have been run too long, they need to be rebooted. Brains themselves have a very similar process (needing to sleep periodically). But that doesn’t necessarily mean this will be the biggest problem with brain emulations.
I suppose it comes down to a utility function calculation depending on my disvalue of various types of glitches, how long brain emulations can run glitch-free, what kinds of glitches they have, and my projected lifespan (a toy sketch of this calculation follows below). I do want to note that the first people who try emulation are likely to include a substantial proportion of old and sick people. If you are going to die next week of terminal cancer, you have much less to lose from a failed emulation. If this guess is correct, people may gradually be permitted to try brain uploading in situations that are incrementally further away from death.
If brain uploading happens in the near future, it is likely that at some point my health and age will be comparable to the health and age of other people trying brain uploading, in which case I may consider it at that point based on the glitch history.
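To make that concrete, here is a minimal sketch of the calculation in Python. Every number, probability, and glitch category below is a made-up placeholder rather than a real estimate; the point is only the shape of the comparison.

```python
# Hypothetical expected-utility comparison for choosing emulation.
# All figures are illustrative placeholders, not real estimates.

VALUE_PER_YEAR = 1.0  # value of one glitch-free year of experience

def expected_value_biological(years_left):
    """Expected value of staying in a biological body."""
    return years_left * VALUE_PER_YEAR

def expected_value_emulation(years_run, glitches):
    """Expected value of running as an emulation for years_run years.

    glitches is a list of (annual_probability, disvalue) pairs;
    curable narcolepsy would carry a small disvalue, incurable
    violent psychosis a very large one.
    """
    value = years_run * VALUE_PER_YEAR
    for annual_probability, disvalue in glitches:
        value -= years_run * annual_probability * disvalue
    return value

# A terminal patient with one year left has much less to lose:
print(expected_value_biological(1))                     # 1.0
print(expected_value_emulation(50, [(0.01, 0.5),        # mild glitch
                                    (0.001, 100.0)]))   # severe glitch
```

On these made-up numbers the emulation easily wins for the terminal patient; a healthy person would plug in a much larger years_left and might weigh the severe glitch far more heavily.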
I could go into more detail, but I don’t know if it would be relevant to anyone other than me, since most people do not have identical utility functions.
3: How glitch-free would you want the emulation to be before letting it use heavy machinery?
There are already tests for qualifying for heavy machinery that we give to humans. As an example: http://www.dot.state.tx.us/hrd/tdp/skills/skills.htm
If human brain emulation software operating a robotic body passes all of these tests, then at first glance it appears to be at least as competent as a human, and we can allow it to pilot vehicles on a provisional basis while we begin collecting accident statistics, such as serious accidents per unit of operator time.
If human-operated asphalt spreaders have 1 serious accident per 10,000 operator hours, and brain-emulation-operated asphalt spreaders have 1 serious accident every 100 operator hours, then clearly we need to design more stringent tests for the brain-emulation-controlled robots.
On the other hand, if human-operated asphalt spreaders have 1 serious accident per 10,000 operator hours, and brain-emulation-operated asphalt spreaders have 1 serious accident every 1,000,000 operator hours, then trying to increase brain emulation robot safety has likely hit the point of diminishing returns.
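A minimal sketch of that comparison, using the hypothetical figures above (these are not real accident statistics):

```python
# Comparing accident rates for provisional licensing. The rates
# are the hypothetical figures from the text, not real data.

def serious_accident_rate(accidents, operator_hours):
    """Serious accidents per operator hour."""
    return accidents / operator_hours

HUMAN_BASELINE = serious_accident_rate(1, 10_000)

def licensing_verdict(accidents, operator_hours):
    """Compare an emulation's accident rate to the human baseline."""
    rate = serious_accident_rate(accidents, operator_hours)
    if rate > HUMAN_BASELINE:
        return "design more stringent tests"
    return "likely at the point of diminishing returns"

print(licensing_verdict(1, 100))        # far worse than humans
print(licensing_verdict(1, 1_000_000))  # far safer than humans
```

In practice a single accident is far too small a sample to estimate a rate from, so a real version would compare confidence intervals rather than point estimates.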
4: How glitch-free would you want the emulation to be before you had it working on FAI?
This is going to be similar to #3, but the idea is “Less glitchy than humans.” Based on Wikipedia (http://en.wikipedia.org/wiki/Mental_disorder), mental disorders can be surprisingly common: about a third of people in most countries report meeting criteria for the major categories at some point in their lives. If you assume three people’s lifetimes add up to about 100,000 days (dying in your early 90s), then somewhere in those 100,000 days there should be about 1 day on which one of them first comes down with a mental glitch. This seems to mean that a human working on FAI has roughly a 1 in 100,000 chance of coming down with some form of glitch on any given day. (This is a rough estimate; I am aware that there are confounding factors and am not going to be perfectly accurate.)
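The arithmetic behind that estimate, spelled out (the lifespan and prevalence figures are the rough assumptions just stated):

```python
# Reconstructing the rough 1-in-100,000 estimate above.
# Assumptions: ~1/3 lifetime prevalence, ~91-year lifespan.

DAYS_PER_LIFETIME = 91 * 365        # about 33,000 days
LIFETIME_PREVALENCE = 1 / 3         # one person in three affected

# Expected person-days per onset of a mental disorder:
person_days_per_onset = DAYS_PER_LIFETIME / LIFETIME_PREVALENCE
print(round(person_days_per_onset))  # 99645, roughly 100,000

# Rough daily probability that a given human "glitches":
print(1 / person_days_per_onset)     # about 1e-05
```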
If human brain emulations are more resistant to glitches than this, and if the glitches do not seem qualitatively worse, then it doesn’t seem like it should hurt our chances statistically to hand over development to them. Obviously, I would want to run this by other mathematicians for the finer details, because the cost of failure might be very high. As I mentioned, there are a number of confounding factors, such as: can we diagnose glitches before they strike? How frequently? I’m well aware I’m not going to list every possible confounding factor, which is why I’d want expert advice.
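To make the decision rule explicit, a toy sketch (the threshold is the rough estimate above, and the qualitative judgment is exactly the part that would need expert review):

```python
# Toy decision rule for question 4. The threshold is the rough
# 1-in-100,000 daily figure; 'qualitatively_worse' stands in for a
# judgment call that no simple rate comparison can make.

HUMAN_DAILY_GLITCH_RATE = 1e-5

def may_work_on_fai(emulation_daily_glitch_rate, qualitatively_worse):
    """True if emulations look statistically no riskier than humans."""
    return (emulation_daily_glitch_rate <= HUMAN_DAILY_GLITCH_RATE
            and not qualitatively_worse)

print(may_work_on_fai(1e-6, False))  # True: rarer, comparable glitches
print(may_work_on_fai(1e-6, True))   # False: rarer but nastier glitches
```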
I agree that these answers aren’t easy, since they require a lot of details about circumstances and are context-dependent. Not only that, they may require a substantial amount of in-field knowledge. While I don’t see any immediate flaws in my answers, I would not be surprised if I was completely wrong on multiple answers. But I hope that trying to break things down like this helps as a starting point.