The safest job, the only safe job, is to be the owner. You can’t milk a cow better than a machine can, and in the long run you can’t do anything better than a machine can. But you can own the farm and collect the dividends.
I don’t know, taking dividends sounds like something very easy to automate. My piggy bank could do that.
More seriously, is being an owner even a job? If by “job” we mean “source of income”, then the first big question is: is an economy sustainable if the only source of income for humans is ownership? (And probably redistribution, like a basic income guarantee.) My guess is yes, but I am not sure. (And the transition to this economy will surely be painful.) The second big question is: will humans be happy with all their free time? My guess is yes, but I am even less sure.
Even if it isn’t, it should bring you money, which is the most important part of the “job business”. (A job you do purely for fun is play.)
I think that may be the only job that could be safe for a given length of time. Eventually what you own will be superseded by something owned by someone else, and what you own will be worthless. If you are constantly investing in different products, markets, and technologies you might stay ahead of it for a long time, but that isn’t what most people think of as a “safe job”. I think what people are asking for in a “safe job” does not and will not exist.
Depends on what kind of society you expect. In a communist revolution, being an owner gets you a bullet in the head.
I don’t know what kind of society will emerge when most people no longer have jobs because of automation… a good long-term strategy should investigate this too.
Another reason we don’t like communism.
Essentially we have two ways:
A: individual ownership of the machines
B: state ownership of the machines
Capitalism or communism.
Either way it is not sustainable. Humans will transcend. No “Golden age utopia”.
But working toward the capitalist version of this utopia is the way to go for now.
C: machine ownership of the atoms (including ill-defined concepts such as individuals and states)
There are some interesting details in these essential options.
A: How many individuals will own those machines? How will their conflicts be settled?
B: What algorithm will the “state” use to decide how to command those machines? Voting?
Somehow, all three options, A, B, and C, seem scary to me.
Here is a million acres of land, 1000 miles deep, and here are some basic tools like a nanofactory; now you are on your own. Farm, and be on good terms with your neighbors, or just ignore them! The “do no evil” rule still applies.
I suggest this one, for now.
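As a rough scale check of that allotment (my own arithmetic; the rock density is an assumed typical crustal value, and none of these figures come from the thread):

```python
# Rough scale check: how much raw material is in "a million acres, 1000 miles deep"?
# The density is an assumption (typical crustal rock), and 1000 miles is of course
# far deeper than the real crust; this is only about orders of magnitude.

ACRE_M2 = 4046.86        # square metres per acre
MILE_M = 1609.34         # metres per mile
ROCK_DENSITY = 2700.0    # kg per cubic metre (assumed)
EARTH_MASS = 5.97e24     # kg

area_m2 = 1_000_000 * ACRE_M2
volume_m3 = area_m2 * 1000 * MILE_M
mass_kg = volume_m3 * ROCK_DENSITY

print(f"Raw material per allotment: about {mass_kg:.1e} kg")
print(f"Earth's mass covers roughly {EARTH_MASS / mass_kg:,.0f} such allotments")
```

So each allotment is an enormous amount of matter, but a single planet only supplies a few hundred thousand of them.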
And what exactly happens if someone breaks the rule?
Will someone stronger punish them? And what exactly happens if someone stronger breaks the rule? Is the majority together stronger than them, so the majority will punish them? And what happens if the majority breaks the rule by deciding that some minority does not deserve any rights? It could be any kind of minority, even one that does not exist today. What if someone reproduces wildly? Exponential growth means that after a few generations even 1000 miles with nanobots won’t be enough for them; and by the way, at that point they will be the majority. Also, if someone uses nanobots to strategically prepare for war, they can be stronger than the majority that has other preferences, and they will also have the first-strike advantage. I’m not saying this situation is impossible, just that a mysterious answer is not enough. Maybe some solution will develop naturally; but maybe we really wouldn’t like that solution.
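To make the exponential-growth worry concrete, here is a back-of-the-envelope sketch; the million acres per person, the doubling-per-generation rate, and the Earth-sized land budget are my own assumptions, not figures from the thread:

```python
# Back-of-the-envelope: exponential population growth vs. a fixed land budget.
ACRES_PER_PERSON = 1_000_000       # the "million acres" grant per settler
TOTAL_ACRES = 37_000_000_000       # roughly Earth's entire land area in acres

population = 2                     # one founding couple
generations = 0
while population * ACRES_PER_PERSON <= TOTAL_ACRES:
    population *= 2                # doubling every generation
    generations += 1

print(f"After {generations} generations ({population:,} people), "
      f"{TOTAL_ACRES:,} acres is no longer enough.")
```

Under those assumptions the budget runs out after about 15 generations, so “plenty of land” is only a temporary answer to unchecked doubling.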
I have a lot of ideas, but I am not very keen to share them here, where “FAI” and “CEV” are the answers to your questions.
I think they aren’t good answers, but let’s put that aside.
Also, not to mention that the whole AI business at SIAI and on LW is at about the ZERO level. I don’t know of ONE member of this community who can say: I did THIS in the AI field. (Myself excluded, but I am hardly a member here.)
What do you mean by the ‘zero’ level? As in “you’re such n00bs you haven’t done anything”? (Because even I can say I did THIS in the AI field, even though I don’t consider that fact especially significant.)
What did you do? Tell us, because that IS significant.
(Will reply tomorrow if I get time. I will, of course, be in the position of arguing how utterly insignificant something I spent several years of my life doing actually is. I’m sure something is backwards here. This explains why I was never cut out to be an academic.)
Let me guess: the first person or corporation that develops a super-intelligent AI becomes the master of the universe. At least until the moment a bug in the program removes them from control.
Your guess is a bit naive.
Where are you going to get all that land from?
The solar system and beyond.
Where are you going to get the energy and time to reach those places?
What is the mass of the smallest self-replicating unit that can be reconfigured after receiving instructions from another planet or solar system? I would venture to speculate that it doesn’t take much.
In any case, after it arrives, wait a few years for the stuff to cook (self-replicate) and build infrastructure, and then just build the human bodies (or computers running ems) of the people you want to settle there.
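A minimal sketch of why “a few years” of cooking is plausible, assuming a roughly 1 kg seed, a 30-day doubling time, and about a million tonnes of target infrastructure; all three numbers are my assumptions, not from the thread:

```python
# Sketch: time for a self-replicating seed to grow into usable infrastructure,
# assuming unconstrained exponential doubling and enough local raw material.
import math

seed_mass_kg = 1.0            # assumed ~1 kg seed unit
doubling_time_days = 30.0     # assumed replication doubling time
target_mass_kg = 1e9          # ~a million tonnes of infrastructure (assumed)

doublings = math.log2(target_mass_kg / seed_mass_kg)
years = doublings * doubling_time_days / 365.25

print(f"{doublings:.1f} doublings, about {years:.1f} years of self-replication")
```

With those numbers it comes out to roughly 30 doublings and two and a half years; the answer is dominated by the assumed doubling time, not by the seed mass.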