It’s a thought experiment. It’s not meant to be a practical path to artificial consciousness or even brain emulation. It’s a conceptually possible scenario that raises interesting questions.
That is probably the best answer. It has the weird aspect of putting consciousness on a continuum, and one that isn’t easy to quantify. If someone with 50% cyber brain cells is 50% conscious, but their behavior is the same as that of a 100% biological, 100% conscious brain, it’s a little strange.
Also, it means that consciousness isn’t a binary variable. For this to make sense, consciousness must be a continuum. That is an important point to make regardless of the definition we use.
Very sure. The biological view just seems to be a tacked-on requirement to reject emulations by definition. Anyone who holds the biological view should answer the questions in this thought experiment.
A new technology is created to extend the life of the human brain. If any brain cell dies, it is immediately replaced with a cybernetic replacement. This cybernetic replacement fully emulates all interactions it can have with any neighboring cells, including any changes in those interactions based on inputs received and time passed, but is not biological. Over time the subject’s whole brain is replaced, cell by cell. Consider the resulting brain. Either it perfectly emulates a human mind or it doesn’t. If it doesn’t, then what is there to the human mind besides the interactions of brain cells? Either it is conscious or it isn’t. If it isn’t, then how was consciousness lost, and at what point in the process?
Why are we talking about jobs rather than man-hours worked? Automation reduced man-hours worked. We went from much longer work weeks to 40-hour work weeks while also raising standards of living.
AI will reduce work time further. If someone can use AI to produce as much in 30 hours as they did in 40, they could choose to work anywhere from 30 to 40 hours and be better off. Many people would choose to work less as they compare the marginal values of free time and extra pay.
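To make that tradeoff concrete, here’s a minimal sketch; the one-third productivity gain is just the figure from the example above, and the rest is illustrative:

```python
# A worker's AI-assisted productivity rises by 1/3 (40 hours of output in 30).
old_hours = 40
productivity_gain = 40 / 30  # output per hour, relative to before

for hours in (30, 35, 40):
    output = hours * productivity_gain      # in units of "old hours of output"
    extra_free_time = old_hours - hours
    print(f"{hours}h worked -> {output:.1f}h-equivalent output, "
          f"{extra_free_time}h extra free time")
# 30h: same output as before, plus 10 hours of free time.
# 40h: a third more output (and pay), no extra free time.
# Every point in between beats the old 40-hour week on at least one margin.
```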
Why are we seeing long-term unemployment instead of shorter work weeks now? Is this inevitable, or is there some structural or institutional problem causing it?
I don’t think that’s the relevant difference between forestry and fishing. Forestry can be easily parceled out by plot in a way that fishing can’t. Forests can be managed by giving one logging concern responsibility for a specific plot and holding them responsible for any overlogging in that area and for any mandated replanting.
Fishing has to be managed by enforcing quotas, which is a much more difficult problem even for a single government. I haven’t researched fishing, but do we see fishing being managed well in areas under the jurisdiction of one government, or of governments with good cooperation (like the Great Lakes)? Or for species whose habitat lies within the coastal waters of one government?
Why is it legitimate to assume that a singleton would be effective at solving existential risks? A one world government would have all the same internal problems as current governments. The only problems that scaling up would automatically eliminate are those of conflicts between different states, and these would likely be transformed into conflicts between interest groups in one state. This is not a reduction to a solved problem.
There are wars of secession and revolution now. There are also violent conflicts among ethnic and religious groups within one state. There is terrorism. Why would a one world government ruling over a more diverse populace than any current government not have these problems? People won’t automatically accept the singleton any more than they accept the current governments.
Even with unified powers, governments regularly mismanage crises. Current governments (even democratic first-world governments) have problems dealing with such things as predictable weather events and earthquakes along known fault lines. Why would a one world government be better able to handle much less predictable crises, like a pandemic?
I’m just pointing out the way such a bias comes into being. I know I don’t listen to classical, and although I’d expect a slightly higher proportion here than in the general population, I wouldn’t guess it would be a majority or a significant plurality.
If I had to guess, I’d guess varied musical tastes, trending more toward niche genres and less toward broad-spectrum pop than the general population.
Because of the images of different musical genres in our culture. There is an association between classical music and being academic or upper class. In popular media, liking classical music is a cheap signal for these character types. This naturally triggers confirmation bias, as we view the rationalist listening to Bach as typical and the rationalist listening to The Rolling Stones as atypical. People also use musical preference to signal what type of person they are. If someone wants to be seen as a rationalist, they often mention their love of Bach and don’t mention genres with a different image, except to disparage them.
Out of the price of a new car, how much goes to buying raw materials? How much to capital owners? How much to labor?
Different method. Assume all 300 million US citizens are served by a Wal-Mart; any population that doesn’t live near one is small enough to ignore. Each Wal-Mart probably has between 10,000 and 1 million potential customers. Both extremes seem unlikely, so we can get within a factor of 10 by guessing 100,000 people per Wal-Mart. This also leads to 3,000 Wal-Marts in the US.
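A quick sketch of that arithmetic (the population figure and the customers-per-store range are the rough assumptions stated above):

```python
# Fermi estimate: number of Wal-Marts in the US.
us_population = 300_000_000        # assume everyone is served by some Wal-Mart

low, high = 10_000, 1_000_000      # plausible customers per store
mid = (low * high) ** 0.5          # geometric mean of the bounds: 100,000

for customers_per_store in (low, mid, high):
    stores = us_population / customers_per_store
    print(f"{customers_per_store:,.0f} customers/store -> {stores:,.0f} stores")
# The midpoint gives 3,000 stores; even the extreme bounds (300 and 30,000)
# stay within a factor of 10 of that guess.
```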
The difference between instrumental and terminal values is in the perception of the evaluator. If they believe that something is useful for achieving other values, then it is an instrumental value. If they are wrong about its usefulness, that makes it an error in evaluation, not a terminal value. The difference between instrumental and terminal values is in the map, not in the territory. For someone who believes in astrology, getting their horoscope done is an instrumental value.
Nyan, I think your freedom example is a little off. The opposite of freedom is not bowing down to a leader; it’s being made to bow. People choosing to bow can be beautiful and rational, but I fail to see any beauty in someone bowing when their values dictate they should stand.
I think your definition of terminal value is a little vague. The definition I prefer is as follows. A value is instrumental if it derives its value from its ability to make other values possible. To the degree that a value is not instrumental, it is a terminal value. Values may be fully instrumental (money), partially instrumental (health: we like being healthy, but it also lets us do other things we like), or fully terminal (beauty).
Terminal values do not have the warm fuzzy glow of high concepts. Beauty, truth, justice, and freedom may be terminal values, but they aren’t the only ones. They aren’t even the most clear-cut examples. One of the clearest examples of a terminal value is sexual pleasure. It is harder to argue that it is instrumental to a higher value, or that it depends on other facts and circumstances, than for any of the above examples.
Also, how does identifying terminal values help us make choices? We must still choose between our values. If we split our values into terminal and instrumental, it will still be rational to choose instrumental values over terminal values sometimes. I’d rather make a million dollars (instrumental value) than a painting that falls short of a masterpiece (terminal value). Identifying values as terminal does not prevent us from having to choose between them either.
A few assumptions that you did not state. You assume that your favored candidate’s budget contains truly optimal uses of charitable dollars. You need a step-down function unless your preferred charity is funding government programs.
You assume that the opposition candidate’s spending is valueless. Otherwise you need to consider the relative merits.
You assume that there is no portion of the opposition budget that is preferable. If you believe that each candidate has some portions right, you need to subtract this spending from the value of your contribution.
You assume that the proposed budget will be implemented. Given the track record of campaign promises, this is an iffy assumption. As this probability is necessarily less than 100%, it should reduce the value of your contribution.
These assumptions are the mind-killing biases of politics.
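A minimal sketch of how these corrections compound; every number below is a made-up placeholder, not an estimate of any real campaign:

```python
# Expected value of a contributed dollar, after the four unstated assumptions.
favored_value = 0.8       # favored budget vs. optimal charity (step-down)
opposition_value = 0.3    # opposition spending is not valueless
opposition_better = 0.1   # fraction of the budget where the opposition is right
p_implemented = 0.5       # chance the proposed budget is actually enacted

net = (favored_value - opposition_value) * (1 - opposition_better) * p_implemented
print(f"Adjusted value per contributed dollar: {net:.2f}")  # 0.23, not 0.80
```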
Not quite. They don’t go all the way to completing an ought statement, as this doesn’t solve the Is/Ought dichotomy. They are logical transformations that make applying our values to the universe much easier.
“X is unjust” doesn’t quite create an ought statement of “Don’t do X”. If I place value on justice, that statement helps me evaluate X. I may decide that some other consideration trumps justice. I may decide to steal bread to feed my starving family, even if I view the theft as unjust.
Justice, mercy, duty, etc. are found by comparison to logical models pinned down by axioms. Getting the axioms right is damn tough, but if we have a decent set we should be able to say “If Alex kills Bob under circumstances X, this is unjust.” We can say this the same way that we can say “Two apples plus two apples is four apples.” I can’t find an atom of addition in the universe, and this doesn’t make me reject addition.
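For comparison, here is the apples claim pinned down by axioms, as a one-line Lean proof (arithmetic on natural numbers, no atoms of addition required):

```lean
-- 2 + 2 = 4 follows from the axioms defining natural-number addition;
-- `rfl` just asks Lean to compute both sides and check that they match.
example : 2 + 2 = 4 := rfl
```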
Also, the widespread convergence of theories of justice on some issues (e.g., rape is unjust) suggests that theories of justice are attempting to use their axioms to pin down something that is already there. Moral philosophers are more likely to say “My axioms are leading me to conclude rape is a moral duty, where did I mess up?” than “My axioms are leading me to conclude rape is a moral duty, therefore it is.” This also suggests they are pinning down something real with axioms. If it were otherwise, we would expect the second conclusion.
Counter-example: “There exists at least one entity capable of sensory experience.” Does this statement impose any constraints on sensory experience? If not, do you reject it as meaningless?
Internal consistency. Propositions must be non-self-contradictory. If a proposition is a conjunction of multiple propositions, then those propositions must not contradict each other.
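As a toy illustration of that criterion, here is a brute-force consistency check; the propositions-as-boolean-functions encoding is my own simplification, not anything from the original discussion:

```python
from itertools import product

# A conjunction is internally consistent iff some assignment of truth values
# to its variables makes every conjunct true at once.
def consistent(props, n_vars):
    return any(all(p(*vals) for p in props)
               for vals in product([False, True], repeat=n_vars))

# "X and not-X" contradict; "X and Y" do not.
print(consistent([lambda x, y: x, lambda x, y: not x], 2))  # False
print(consistent([lambda x, y: x, lambda x, y: y], 2))      # True
```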
Absolutely. I do too. I just realized that the continuum provides another interesting question.
Is the following scale of consciousness correct?
Human > Chimp > Dog > Toad > Any possible AI with no biological components
The biological requirement seems to imply this. It seems wrong to me.