Maybe not—but even with a straight brain-to-emulated-brain conversion, living in VR means that if I want, I could spend a decade in a cabin in an enormous forest without a single other emulated mind entering that VR, while still keeping up with email. So it would still be possible for me to enjoy my preferred hermiting lifestyle even in a starship containing a great many emulated minds. :)
As to the matter of finding yourself a crew, you could try answering my PM on that other website! There’s a whole forum full of interesting freaks and geeks over there who would hop on your em-ship in an instant.
The point I was trying to focus on—who /tells/ the software that runs the ship that it should change course? Who has that authority? If the ship's emulations get into a conflict and start throwing viruses at each other, who has the power to limit the violent minds' access to dangerous software, or even to the processing power needed to run at full speed, so that those software weapons don't pose a risk to the ship's low-level software?
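To make the question concrete, here is a toy sketch (every name in it is hypothetical—this is not any real ship design): treat the ship's substrate as a hypervisor that holds per-em resource grants, so that "who has the authority?" reduces to "who holds the key that can revoke a grant?"

```python
# Toy model of the authority question: the substrate tracks, for each em,
# a clock-speed quota and a set of software capabilities, and only the
# holder of the admin key may revoke them. All names are hypothetical.

class ShipSubstrate:
    def __init__(self, admin_key):
        self._admin_key = admin_key
        self._ems = {}  # name -> {"clock": float, "caps": set}

    def register(self, name, clock=1.0, caps=()):
        self._ems[name] = {"clock": clock, "caps": set(caps)}

    def throttle(self, key, name, clock):
        # Only the admin-key holder may slow a violent mind down.
        if key != self._admin_key:
            raise PermissionError("not the captain")
        self._ems[name]["clock"] = clock

    def revoke_cap(self, key, name, cap):
        # Likewise for stripping access to dangerous software.
        if key != self._admin_key:
            raise PermissionError("not the captain")
        self._ems[name]["caps"].discard(cap)


ship = ShipSubstrate(admin_key="owner-secret")
ship.register("alice", caps={"compiler", "network"})
ship.throttle("owner-secret", "alice", 0.1)       # slowed to 10% speed
ship.revoke_cap("owner-secret", "alice", "compiler")
```

The political question is then simply: who gets to hold `admin_key` in the first place, and what stops them from abusing it?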
This problem is called politics. It remains unsolved. The general problem is that, in order to keep peace among intelligent agents who have a rather utility-idiotic tendency to start fighting over petty spats… you generally need an intelligent agent who wants to keep the peace, or some very unbreakable fences to separate the humans from each other.
Our current nation-state system is one of very-difficult-to-break fences, with layers of mutually-beneficial-and-necessary cooperation built on top, all designed to prevent major wars from occurring. And it is currently failing, due largely to some borders lying too close together, to trade and financial policies that ruin economies and thereby remove the incentives against war, and so on.
So if you want to do better than that, you need to either think of something better, or get someone to do your thinking for you. Since we’re talking software, having the ship’s core systems run by an AI sounds convenient, but of course we’re on LessWrong so we all know how easy it is to get that just plain wrong. Of course, if you’re at this level of technology already, perhaps we already have Friendly AIs that can easily manufacture Highly Intelligent Utility AIs with Narrow Domains to do this sort of thing.
Or maybe people will just have to get along for once, which is really the simplest but most difficult solution, knowing people.
I didn’t realize that was you—the usernames are rather different.
This problem is called politics. It remains unsolved.
In the general case, yes; but certain extremely limited subsets do seem amenable to game theory. For example—say that future-me builds and/or buys a bunch of spacefaring spores, capable of carrying my emmed consciousness to other stars (and, upon arrival, of building the infrastructure to construct more of themselves, and so on). I’d suggest that ‘personal property’ is a reasonably solved problem, in that few people would dispute that the spores’ computers are mine, or that I have the right to choose what software runs on them. (There may be some quibbling about what ‘I’ actually means once I start splitting into multiple copies, but I’m already working on how to handle that issue. :) )
If any other ems want to come along, then would it really be such a big issue if I make it clear ahead of time that the spores will remain under my control, and that if the passengers behave in such a way that I deem them a threat to the voyage, I reserve the right to limit their various privileges and accesses to each other, up to and including putting them on ‘pause’ for the remainder of the trip? Or, put another way—that I’m claiming the traditional rights of both owner-aboard and captain of a vessel?
(… And might we gain a few more people contributing to this conversational thread if we started a new topic?)
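For concreteness, that owner-aboard arrangement can be sketched as an explicit terms-of-passage check (purely illustrative—every name here is made up): passengers must accept the terms before boarding, and only the captain may exercise the pause right.

```python
# Sketch of the owner-aboard arrangement: passengers explicitly accept
# the terms of passage before boarding, and the captain may pause any
# passenger who did. Illustrative only; all names are invented.

TERMS = ("Owner retains control of the spores and may limit or pause "
         "any passenger deemed a threat to the voyage.")

class Voyage:
    def __init__(self, captain):
        self.captain = captain
        self.passengers = {}  # name -> "running" | "paused"

    def board(self, name, accepted_terms):
        # No consent to the terms, no passage.
        if not accepted_terms:
            raise ValueError(f"{name} did not accept the terms of passage")
        self.passengers[name] = "running"

    def pause(self, who, name):
        # The pause right belongs to the captain alone.
        if who != self.captain:
            raise PermissionError("only the captain may pause a passenger")
        self.passengers[name] = "paused"


v = Voyage(captain="owner")
v.board("restless_em", accepted_terms=True)
v.pause("owner", "restless_em")   # restless_em sits out the rest of the trip
```

The design choice doing the work here is that the authority is agreed to up front, at boarding time, rather than asserted mid-voyage.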
I’m sure you would still find people who agree to that.
You really take this that seriously? I dunno.
Hey, I’ve taken ems copying themselves seriously enough to try to figure out a workable system for them to divvy up their property and debts—and I’ve dropped a few details of that system into the Orion’s Arm SF setting.
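The simplest divvying rule along those lines can be written down in a few lines. This is a toy version of one possible rule—an equal split among the copies—not the actual system used in Orion's Arm:

```python
# Toy rule for property and debt when an em copies itself: split every
# asset and every debt equally among the n resulting copies. An
# illustrative simplification, not the actual Orion's Arm system.

def split_estate(assets, debts, n_copies):
    """Return a list of (assets, debts) dict pairs, one per copy."""
    share_assets = {k: v / n_copies for k, v in assets.items()}
    share_debts = {k: v / n_copies for k, v in debts.items()}
    return [(dict(share_assets), dict(share_debts)) for _ in range(n_copies)]


copies = split_estate(
    assets={"spore_fleet": 12.0, "credits": 900.0},
    debts={"launch_loan": 300.0},
    n_copies=3,
)
# each of the 3 copies gets {'spore_fleet': 4.0, 'credits': 300.0} in assets
# and {'launch_loan': 100.0} in debts
```

Of course, the quibbling starts as soon as the assets aren't divisible, or the copies diverge before the debts come due—which is exactly where a workable system needs more than an equal split.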