Also, since I notice you are apparently motivated to improve your health by wanting to gain eternal life, I would have figured that eternal life would be motivation enough. Are you really failing to find immortality emotionally moving? I can see why one might, but you claimed that was your conscious motivation. Need something else, perhaps?
There’s a somewhat involved chain of causality between ‘increasing the odds of immortality’ and ‘doing another round of this painful thing’, a chain involving all sorts of probabilities, estimations, and outright guesses. Sometimes, that chain alone is enough to keep me keeping on. Sometimes, the odds that the effort will result in the reward seem low enough that it doesn’t seem worth the cost. Thus, I’m quite open to whatever other motivations could help get over the hump.
Yes, you could get killed by a mutated pandemic, an asteroid strike, a car crash, a mugger, or a UFAI. But really, probabilistically speaking, what will probably kill you is either cancer or heart disease. Exercise prevents and fights heart disease, thus decreasing the largest single preventable mortality factor.
Eternal life is gained one year at a time! You want to make it as far as trying to tile the Solar System in copies of yourself and end up in an epic war with my army of mecha? Survive long enough for the technology to exist!
It may please you to hear that I read your post shortly before arriving at the stores that were the theoretical destination for my walk today… and after some consideration, I bought a set of adjustable wrist/ankle weights to help improve the effectiveness of my exercising.
(Even if I were to entirely drop the actually-painful parts of my routine—the push-ups and planks—I seem to be doing well with the gradual increase of the other exercises, and such weights seem reasonably likely to improve those exercises’ effectiveness.)
PS: I only want to tile the Solar System with myself to the degree necessary to mine He-3, or whatever other fuel source is most viable to start launching myself elsewhere. As long as you don’t want to hoard so many resources for your mecha that you keep me from sporing, I’m sure we can get along, and that our epic wars will involve boffer nukes. ;)
For just a few minutes, I’ve earned the privilege of wearing Kamina-sama’s flag as a cape. Thank you.
So, you’re just aiming for a bit of lonely star sailing, eh? No need to tile, then. We’ll have plenty of infrastructure to give you a lift. Seems a pity, going out into the black all alone, though. You should take a friend.
Seems a pity, going out into the black all alone, though. You should take a friend.
If anyone else wants to come with, I don’t particularly mind. I also don’t mind going it alone. (I’ve seriously looked into whether I’d qualify as a lighthouse keeper or fire-tower lookout.) But at least at the moment, the number of people who are willing to even agree that “yeah, that sounds like it’d be fun”, let alone seriously consider it as a future possibility… is quite small, and they tend to be eclectic, downright weirdos.
I wonder how many folk would be willing to accept handing over root access for the duration of the voyage, to ensure any interpersonal conflicts don’t get so out of hand that survival starts getting threatened? What sort of trust would that require? … What sort of person should I have already started being by now, in order to help gain that level of trust?
But at least at the moment, the number of people who are willing to even agree that “yeah, that sounds like it’d be fun”, let alone seriously consider it as a future possibility… is quite small, and they tend to be eclectic, downright weirdos.
Excuse me, sorry, which website are we on? Oh, right, the quite small website full of eclectic downright weirdos, many of whom actually think emming is a good idea (mind, admittedly, if you’re already living in a world where that sort of thing is safe, secure, and commonly available, it’s certainly one valid way to live among many).
I don’t have anything planned for after turning 120 other than “being dead”. Why not? I should talk to my fiancee about moving out of the Solar System one of these days.
On the other hand, even the nearest star at very high speed is generation-ship territory (for non-ems). Are you just planning to sleep the whole way there, or will there be party games?
I wonder how many folk would be willing to accept handing over root access for the duration of the voyage, to ensure any interpersonal conflicts don’t get so out of hand that survival starts getting threatened?
Why are you having the piloting done by a person rather than by software? I figure once you’ve got the hardware for a spaceship, it’s really mostly the same sort of software task done by self-driving cars. Some well-made narrow AI ought to take care of it.
Also, root access? Unix? Bro, do you even cryptographically-signed capability security?
What sort of trust would that require?
Quite a lot, if you’re really proposing Unix-level root access.
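If we’re going to argue root versus capabilities, here’s roughly what I have in mind, as a toy Python sketch: one narrowly scoped, expiring, signed grant per holder, instead of handing over the whole machine. Every name, scope, and key below is invented for the example; this is nowhere near a real ship API.

```python
# A toy sketch of a signed, narrowly-scoped capability token, as opposed
# to all-or-nothing root. All names, scopes, and keys are made up for
# illustration; real capability systems are far more involved than this.
import hmac, hashlib, json, time

SHIP_SECRET = b"owner-only signing key"  # held by the owner/captain alone

def issue_capability(holder, actions, ttl_seconds):
    """Grant a specific holder a specific list of actions, for a limited time."""
    claim = {
        "holder": holder,
        "actions": sorted(actions),
        "expires": time.time() + ttl_seconds,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SHIP_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_capability(token, requested_action):
    """Check the signature, the expiry, and that the action is in scope."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(SHIP_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False  # forged or tampered-with token
    if time.time() > token["claim"]["expires"]:
        return False  # capability has lapsed
    return requested_action in token["claim"]["actions"]

# A passenger-em gets exactly the rights they were granted, and nothing more.
cap = issue_capability("passenger_em_42", ["read_nav_log", "send_mail"], ttl_seconds=3600)
print(verify_capability(cap, "read_nav_log"))   # True
print(verify_capability(cap, "change_course"))  # False -- not in scope
```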
… What sort of person should I have already started being by now, in order to help gain that level of trust?
Well, obviously: a trustworthy one, who makes good on his obligations and his words almost all the time. Also, a likeable one, since you’re proposing to spend quite possibly hundreds of years at a time with someone. Are you married, or seeing someone, perhaps?
Are you just planning to sleep the whole way there, or will there be party games?
Depends on the power generation compared to power requirements per em. A really low-cost trip, such as to spam spores to every star in reach, is one option; something more along the lines of an STL version of a Culture GSV is another.
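For a rough feel of that power tradeoff, a toy back-of-envelope in Python; every figure in it is a made-up placeholder rather than an actual estimate.

```python
# Toy tradeoff between the ship's power budget and how many ems it can run,
# and at what subjective speed. All figures are arbitrary placeholders.

power_budget_watts = 1.0e6        # whatever the reactor/collectors actually supply
watts_per_em_realtime = 2.0e3     # assumed cost to run one em at 1x subjective speed
passengers = 1000                 # how many minds came along

ems_at_full_speed = power_budget_watts / watts_per_em_realtime
print(f"Ems runnable at 1x speed: {ems_at_full_speed:.0f}")

# If everyone insists on coming, everyone runs slower (or takes turns paused).
speed_factor = ems_at_full_speed / passengers
print(f"With {passengers} passengers, each runs at about {speed_factor:.2f}x subjective speed")
```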
Why are you having the piloting done by a person rather than by software?
Because plans can change mid-voyage.
Also, root access? Unix? Bro, do you even cryptographically-signed capability security?
… Not yet? (I haven’t even started lifting, yet...)
Who said it would be just some/one/?
Because you have noticeable loner tendencies in this thread. You don’t seem like the “many” type.
Because plans can change mid-voyage.
Ah. But if you have to run a person, you should let the software run the ship, and let the person enjoy themselves.
I mean, it could be more complicated than a self-driving car in a way that requires real cognitive work to handle, but I can’t imagine you swerve to avoid debris on a minute-by-minute basis in interstellar space.
Because you have noticeable loner tendencies in this thread. You don’t seem like the “many” type.
Maybe not—but even with a straight brain-to-emulated-brain conversion, living in VR means that if I want, I could spend a decade in a cabin in an enormous forest without a single other emulated-brain entering that VR, while still keeping up with email. So it would still be possible for me to enjoy my preferred hermiting lifestyle even in a starship containing a great many emulated minds. :)
you should let the software run the ship
The point I was trying to focus on—who /tells/ the software that runs the ship that the ship should change course? Who has the authority? If the ship’s emulations get into a conflict and start trying to throw viruses at each other, who has the power to limit the violent minds’ access to dangerous software, or even to the processing power needed to run at full speed, in order to prevent those software weapons from posing a risk to the ship’s low-level software?
Even if the OS is formally mathematically verified, there are still going to be opportunities for unexpected side-channel attacks...
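To make the authority question concrete, here’s one purely hypothetical arrangement, sketched in Python: course changes (or throttling a misbehaving em) only go through when a quorum of designated keyholders signs off. The keyholder names, quorum size, and “API” are all invented for illustration, not a claim about how a real ship would do it.

```python
# Toy sketch of "who tells the ship's software what to do": orders need
# approval from a quorum of designated keyholders, and the same authority can
# throttle a misbehaving em's processor allotment. All of it is illustrative only.
import hmac, hashlib

KEYHOLDERS = {        # each keyholder has their own signing key
    "owner":      b"key-owner",
    "navigator":  b"key-navigator",
    "arbiter_ai": b"key-arbiter",
}
QUORUM = 2            # how many distinct keyholders must sign off

def sign(name, order):
    return hmac.new(KEYHOLDERS[name], order.encode(), hashlib.sha256).hexdigest()

def quorum_approves(order, signatures):
    """Accept the order only if a quorum of valid, distinct signatures back it."""
    valid = {
        name for name, sig in signatures.items()
        if name in KEYHOLDERS and hmac.compare_digest(sig, sign(name, order))
    }
    return len(valid) >= QUORUM

cpu_allotment = {"em_alice": 1.0, "em_bob": 1.0}   # 1.0 = full subjective speed

def throttle(em_id, factor, order, signatures):
    """Slow a hostile em down (or pause it at factor=0) -- same quorum rule applies."""
    if quorum_approves(order, signatures):
        cpu_allotment[em_id] = factor
        return True
    return False

order = "throttle em_bob to 0.1 for flinging viruses"
sigs = {"owner": sign("owner", order), "navigator": sign("navigator", order)}
print(throttle("em_bob", 0.1, order, sigs))   # True: two valid signatures
print(cpu_allotment["em_bob"])                # 0.1
```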
Maybe not—but even with a straight brain-to-emulated-brain conversion, living in VR means that if I want, I could spend a decade in a cabin in an enormous forest without a single other emulated-brain entering that VR, while still keeping up with email. So it would still be possible for me to enjoy my preferred hermiting lifestyle even in a starship containing a great many emulated minds. :)
As to the matter of finding yourself a crew, you could try answering my PM on that other website! There’s a whole forum full of interesting freaks and geeks over there who would hop on your em-ship in an instant.
The point I was trying to focus on—who /tells/ the software that runs the ship that the ship should change course? Who has the authority? If the ship’s emulations get into a conflict and start trying to throw viruses at each other, who has the power to limit the violent minds’ access to dangerous software, or even to the processing power needed to run at full speed, in order to prevent those software weapons from posing a risk to the ship’s low-level software?
This problem is called politics. It remains unsolved. The general problem is that in order to keep peace among intelligent agents who have a rather utility-idiotic tendency to start fighting over petty spats… you would generally need an intelligent agent who wants to keep the peace, or some very unbreakable fences to separate the humans from each other.
Our current nation-state system is one of very-difficult-to-break fences, with layers of mutually-beneficial-and-necessary cooperation built on top, all designed to prevent major wars from occurring. And it is currently failing, due largely to some borders being drawn too close together, and to trade and financial policies that ruin economies and thereby remove the incentives against war.
So if you want to do better than that, you need to either think of something better, or get someone to do your thinking for you. Since we’re talking software, having the ship’s core systems run by an AI sounds convenient, but of course we’re on LessWrong so we all know how easy it is to get that just plain wrong. Of course, if you’re at this level of technology already, perhaps we already have Friendly AIs that can easily manufacture Highly Intelligent Utility AIs with Narrow Domains to do this sort of thing.
Or maybe people will just have to get along for once, which is really the simplest but most difficult solution, knowing people.
I didn’t realize that was you—the user-names are rather different.
This problem is called politics. It remains unsolved.
In the general case, yes; but certain extremely limited subsets do seem amenable to game theory. For example—say that future-me builds and/or buys a bunch of spacefaring spores, capable of carrying my emmed consciousness to other stars (and, upon arrival, of building the infrastructure to build more of them, etc., etc.). I’d suggest that ‘personal property’ is a reasonably solved problem, in that few people would dispute that the spores’ computers are mine, and that I have the right to choose what software runs on them. (There may be some quibbling about what ‘I’ actually means once I start splitting into multiple copies, but I’m already working on how to handle that issue. :) )
If any other ems want to come along, then would it really be such a big issue if I make it clear ahead of time that the spores will remain under my control, and that if the passengers behave in such a way that I deem them a threat to the voyage, I reserve the right to limit their various privileges and accesses to each other, up to and including putting them on ‘pause’ for the remainder of the trip? Or, put another way—that I’m claiming the traditional rights of both owner-aboard and captain of a vessel?
(… And might we gain a few more people contributing to this conversational thread if we started a new topic?)
If any other ems want to come along, then would it really be such a big issue if I make it clear ahead of time that the spores will remain under my control, and that if the passengers behave in such a way that I deem them a threat to the voyage, I reserve the right to limit their various privileges and accesses to each other, up to and including putting them on ‘pause’ for the remainder of the trip? Or, put another way—that I’m claiming the traditional rights of both owner-aboard and captain of a vessel?
I’m sure you would still find people who agree to that.
(… And might we gain a few more people contributing to this conversational thread if we started a new topic?)
You really take this that seriously? I dunno.
Hey, I’ve taken ems copying themselves seriously enough to try to figure out a workable system for them to divvy up their property and debts—and dropped a few details of those into the Orion’s Arm SF setting.