This is a particular instance of the general approach, “I have to assign a number to each of these items, but it’s hard and contentious to do, so instead I will give them all zeroes (objects) or ones (agents).” It always increases the total error. The world is not divided cleanly into agents and objects, and even if it were, this approach would still increase total error, or at best leave it unchanged, since classification errors contribute more to the total when estimates are thresholded to 0 or 1 than when they are left as, say, probabilities.
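To make that concrete, here is a minimal sketch (my own toy numbers, assuming a graded “agency” score on [0, 1] as ground truth, which is an assumption, not something anyone here has measured) comparing the total squared error of estimates kept as probabilities against the same estimates forced to 0 or 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical graded ground truth: how "agent-like" each item is, on [0, 1].
true_scores = rng.uniform(0.0, 1.0, size=10_000)

# An imperfect estimator that reports probabilities...
noisy_probs = np.clip(true_scores + rng.normal(0.0, 0.1, size=true_scores.size), 0.0, 1.0)

# ...versus the same estimator forced to call everything an object (0) or an agent (1).
thresholded = (noisy_probs >= 0.5).astype(float)

print("mean squared error, probabilities kept:   ", np.mean((noisy_probs - true_scores) ** 2))
print("mean squared error, thresholded to 0 or 1:", np.mean((thresholded - true_scores) ** 2))
```

Unless the underlying scores already sit at exactly 0 or 1, the thresholded column comes out worse.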
You should also consider that, when AI is developed, you will become an “object”.
This approach doesn’t work well even for humans. Very intelligent humans, armed with and experienced in mathematics, large computerized databases, regression analysis, probability and statistics, information theory, dimension reduction, data mining, machine learning, stability analysis, optimization techniques, and a good background in cog sci, biology, and physics, think more differently from average humans around 1600 AD than average humans in 1600 AD did from dogs. So where do you draw the line?
Very intelligent humans, … think more differently from average humans around 1600 AD than average humans in 1600 AD did from dogs.
Reference? The prior that humans in 1600 thought more like dogs than like modern humans (despite Euarchontoglires and Laurasiatheria diverging about 90 MYA, while humans in 1600 diverged from modern humans only 400 years ago) I might estimate at 90M:400 against, if I had to do so very quickly.
Why do you think that more than half of the change in thinking in the last 90 million years has occurred in the last 400?
Obviously I cannot cite a reference; this is an opinion. I take it you think less than half of the sum total of what has been discovered or learned was learned in the past 400 years? Your prior suggests you assume a linear advance in thinking over time, but hominid cranial enlargement began only 1-2 million years ago. So, by that same prior, you must also expect the difference between humans and chimps to be 1/90th to 1/45th of the difference between chimps and dogs. In that case, why exclude chimps from our society?
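As a quick sanity check on that arithmetic, here is a small sketch (my own illustration, reusing only the figures already quoted in this thread) of what a linear-in-time prior implies:

```python
# What a linear-in-time prior on cognitive change implies, using the
# figures quoted above (the thread's numbers, not new data).
human_dog_split_years = 90e6        # Euarchontoglires / Laurasiatheria split, ~90 MYA
cranial_growth_years = (1e6, 2e6)   # hominid cranial enlargement began ~1-2 MYA
modern_vs_1600_years = 400

for t in cranial_growth_years:
    print(f"human-chimp gap as a fraction of the chimp-dog gap: 1/{human_dog_split_years / t:.0f}")

print(f"1600-vs-modern gap as a fraction of the human-dog gap: 1/{human_dog_split_years / modern_vs_1600_years:.0f}")
```

The same prior that puts the 1600-vs-modern gap at roughly 1/225,000th of the human-dog gap puts the human-chimp gap at 1/90th to 1/45th of it.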
The maximum speed at which humans have ever travelled is about 7 miles per second. Assuming a travel speed of 0 miles per second 4 billion years ago, we do not conclude that bacteria were able to propel themselves at 3.5 miles per second 2 billion years ago.
I don’t really think there’s been a change in humans. I think there are new tools available that help us think better, much like the new machines that let us move faster.
You don’t believe that hominid cranial enlargement is responsible for more than half of the difference between modern humans and dogs, so why does it matter when it happened?
Suppose that dogs are 50-100 times further away from humans than chimps are. Further suppose that bacteria are more than 100 times further away from humans than dogs are. Why is one of those a reason to include chimps, and the other not a reason to include dogs? (Rocks are more than 100 times as different from humans as fungi are, right?) Rather than use relative closeness, I’m going to assert that absolute distance is important. (If that means that a typical human 400 years ago would not qualify now, I think it says more about them than it does about me; but I don’t think that is the case).
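To spell out the relative-versus-absolute distinction, a toy sketch (the distances are made-up placeholders in arbitrary units, not measurements of anything):

```python
# Contrast a relative criterion ("no more than K times farther than the nearest
# candidate") with an absolute cutoff. Distances are invented placeholders.
distances = {"chimp": 1.0, "dog": 75.0, "bacterium": 10_000.0, "rock": 1e9}

K = 100                                   # relative criterion
nearest = min(distances.values())
relative_in = {name for name, d in distances.items() if d <= K * nearest}

D = 10.0                                  # absolute criterion
absolute_in = {name for name, d in distances.items() if d <= D}

print("relative criterion admits:", relative_in)   # chimp and dog
print("absolute criterion admits:", absolute_in)   # chimp only
```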
I also danced around and didn’t actually say that 90M:400 was the best prior; I said that if I needed one quickly, it’s the one I would use. Refining that number would first require refining the question.
That it is an implementation of equality among unequal agents. Why is an average day of Agent Alpha the same value as an average day of Agent Beta, and how does Agent Beta determine how much utility Agent Alpha gains from something other than the reference economy?
If we allow the agents to determine their own utility derived from, say, fiat currency, then instead of a utility economy we have a financial economy. Everyone gains instrumental value from participating (or they stop participating). Allow precommitment and assume rational, well-informed agents, and the economic system maximizes each individual’s utility within the possible parameters.
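A minimal sketch of the kind of voluntary exchange I have in mind (the agents, prices, and utility functions are invented for illustration; the only point is that a trade executes only when every participant gains by it):

```python
# A toy exchange: each agent has a private utility function, and a trade
# happens only if both sides end up with higher utility than before.
class Agent:
    def __init__(self, name, money, goods, goods_value):
        self.name = name
        self.money = money
        self.goods = goods
        self.goods_value = goods_value   # private exchange rate: utility per unit of goods

    def utility(self, money=None, goods=None):
        money = self.money if money is None else money
        goods = self.goods if goods is None else goods
        return money + self.goods_value * goods

def trade(buyer, seller, price, qty):
    """Execute the trade only if both participants gain utility from it."""
    buyer_gain = buyer.utility(buyer.money - price * qty, buyer.goods + qty) - buyer.utility()
    seller_gain = seller.utility(seller.money + price * qty, seller.goods - qty) - seller.utility()
    if buyer_gain > 0 and seller_gain > 0:
        buyer.money -= price * qty; buyer.goods += qty
        seller.money += price * qty; seller.goods -= qty
        return True
    return False   # one side would lose, so it simply stops participating in this trade

alpha = Agent("Alpha", money=10.0, goods=0.0, goods_value=3.0)  # Alpha values a good at 3
beta = Agent("Beta", money=0.0, goods=5.0, goods_value=1.0)     # Beta values a good at 1

print(trade(alpha, beta, price=2.0, qty=1.0))   # True: both gain at any price between 1 and 3
print(trade(alpha, beta, price=5.0, qty=1.0))   # False: Alpha would lose, so no trade happens
```

No central valuation of anyone’s day is needed; each agent only ever compares its own position before and after a trade.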
That looks like a great foundation for a set of laws, but a poor foundation for a set of ethics.
How so? I view this as an implementation of equality among agents. What makes it ethically repugnant?