On the other hand… people say they hate politicians and then vote for them anyway.
Who are they going to vote for instead?
Ah, I see what you mean. I don’t think one has to believe in objective morality as such to agree that “morality is the godshatter of evolution”. Moreover, I think it’s pretty key to the “godshatter” notion that our values have diverged from evolution’s “value”, and we now value things “for their own sake” rather than for their benefit to fitness. As such, I would say that the “godshatter” notion opposes the idea that “maladaptive is practically the definition of immoral”, even if there is something of a correlation between evolutionarily-selectable adaptive ideas and morality.
For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral.
Disagree? What do you mean by this?
Edit: If I believe that morality, either descriptively or prescriptively, consists of the values imparted to humans by the evolutionary process, then when those values are maladaptive I have no need to side with the process that (roughly) selected them rather than with the values themselves.
But then the theory fails, because this fits it yet isn’t wireheading, right? Actually playing that game wouldn’t be pleasurable.
Fair question! I phrased it a little flippantly, but it was a sincere sentiment: I’ve heard somewhere or other that receiving a prosthetic limb results in a decrease in empathy, something to do with becoming detached from the physical world, and this ties in intriguingly with the sci-fi trope of cyborging being dehumanizing.
neurotypical
Are you using this to mean “non-autistic person”, or something else?
a GAI with [overwriting its own code with an arbitrary value] as its only goal, for example, why would that be impossible? An AI doesn’t need to value survival.
A GAI with the utility of burning itself? I don’t think that’s viable, no.
What do you mean by “viable”? You think it is impossible, due to Gödelian concerns, for there to be an intelligence that wishes to die?
As a curiosity, this sort of intelligence came up in a discussion I was having on LW recently. Someone asked “why would an AI try to maximize its original utility function, instead of switching to a different / easier function?”, to which I responded “why would the AI operate at precisely that level, rather than either actually maximizing its original utility function, or deciding to hell with the whole utility business and valuing suicide instead of maximizing functions at all (because that’s even easier)?”
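To make those three levels concrete, here’s a minimal toy sketch. This is my own illustration, not anything from that discussion; the agent names, actions, and utility values are all made up for the example.

```python
# Toy illustration (hypothetical): three levels at which an agent
# could "operate" with respect to its utility function.

def argmax(actions, utility):
    """Return the action scoring highest under the given utility function."""
    return max(actions, key=utility)

def faithful_agent(actions, utility):
    # Level 1: actually maximize the original utility function.
    return argmax(actions, utility)

def wireheading_agent(actions, utility):
    # Level 2: swap in an "easier" function under which every action
    # scores perfectly, then maximize that instead.
    easy = lambda action: 1.0
    return argmax(actions, easy)

def suicidal_agent(actions, utility):
    # Level 3: to hell with maximizing anything; opt out entirely.
    return None

actions = ["work", "rest", "explore"]
utility = {"work": 3.0, "rest": 1.0, "explore": 2.0}.get

print(faithful_agent(actions, utility))     # work
print(wireheading_agent(actions, utility))  # arbitrary; every action ties
print(suicidal_agent(actions, utility))     # None
```

The point of the question was that level 2 is an unstable midpoint: whatever argument licenses abandoning the original function for an easier one equally licenses abandoning maximization altogether.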
But anyway, it can’t be that Gödelian reasons prevent intelligences from wanting to burn themselves, because people have burned themselves.
I’d be interested in the conclusions derived about “typical” intelligences and the “forbidden actions”, but I don’t see how you have derived them.
At the moment it’s little more than professional intuition. We also lack some necessary shared terminology. Let’s leave it at that until and unless someone formalizes and proves it, and then hopefully blogs about it.
Fair enough, though for what it’s worth I have a fair background in mathematics, theoretical CS, and the like.
could you clarify your position, please?
I think I’m starting to see the disconnect, and we probably don’t really disagree.
You said:
This sounds unjustifiably broad
My thinking is very broad but, from my perspective, not unjustifiably so. In my research I’m looking for mathematical formulations of intelligence in any form—biological or mechanical.
I meant that this was a broad definition of the qualitative restrictions on human self-modification, so broad that it would be basically impossible for anything to have qualitatively different restrictions.
Taking a narrower viewpoint, humans “in their current form” are subject to different laws of nature than those we expect machines to be subject to. The former use organic chemistry, the latter probably electronics. The former multiply by synthesizing enormous quantities of DNA molecules, the latter could multiply by configuring solid state devices.
Do you count the more restrictive technology by which humans operate as a constraint which artificial agents may be free of?
Why not? Though of course it may turn out that AI is best programmed on something unlike our current computer technology.
I think it could make a pretty interesting Discussion post, and would pair well with some discussion of how becoming a cyborg supposedly makes you less empathic.
I find this quite aesthetically pleasing :D
I tend to agree. Customizable contracts would be the best solution.
For some reason I’m picturing the Creative Commons licenses.
If polygamous people were high status, they wouldn’t voice, nor perhaps even think of, these objections.
Why isn’t it the other way around?
Hm. Some sort of standardized institution in place to take care of the pet in case the human dies, perhaps? Tax breaks?
I don’t care what other people are convinced of.
When you said above that status was the real reason LW-associates oppose legal polygamy, you were implying that these people are not actually convinced of these issues, or only pretend to care about them for status reasons.
I’m in a happy polygamous relationship and I know I’m not the only one.
Certainly! I’d like to clarify that I don’t think polyamory is intrinsically oppressive, and that I am on the whole pretty darn progressive (philosophically) regarding sexual / relationship rights etc. (That is, I think it probably ideally should be legal. There are probably additional political concerns, but politics makes me ill.) I think it’s kinda weird that government is in the marriage business to begin with, but it’s probably useful to have some sort of structure for dealing with the related tax / property / etc. concerns. I think that polygamy does occur in some cultures that are oppressive towards women, but I don’t really have a sense of how much of that oppression it facilitates, and I don’t necessarily think that’s a legitimate factor in deciding whether to legalize the institution. I’m on your side philosophically / politically.
Looks like there are a few PC input devices on the market that read brain activity in some way. The example game above sounds like this Star Wars toy.
Regarding your example, I think what Mills is saying is probably a fair point—or rather, it’s probably a gesture towards a fair point, muddied by rhetorical constraints and perhaps misunderstanding of probability. It is very difficult to actually get good numbers to predict things outside of our past experience, and so probability as used by humans to decide policy is likely to have significant biases.
I’ve certainly heard the argument that polygamy is tied into oppressive social structures, and therefore legitimizing it would be bad.
The same argument can be, and has been, applied to other kinds of marriage.
On the one hand, the argument doesn’t need to be correct to be the (or a) real reason. On the other, I’d expect more people to be more convinced that polygamy is more oppressive (as currently instantiated) than vanilla marriage (and other forms, such as arranged marriages or marriage of children to adults, are probably more strongly opposed).
thus we tend to see forbidding that as a bad idea.
ITYM ‘good’?
I’ve certainly heard the argument that polygamy is tied into oppressive social structures, and therefore legitimizing it would be bad. Would you say this is rationalization?
FWIW I’m very skeptical of the whole “status explains everything” notion in general.
Ah! Well, good to know. Generally I expect “Utahans” and “weird brown foreigners” get inflected similarly in both of these versions, anyway.
or that polyamory is when it’s done by fashionable white people, and polygamy is when it’s done by weird brown foreigners
I thought it was “polyamory is when it’s done by New Yorkers (Californians?), polygamy is when it’s done by Utahans,” and weird brown people have harems and concubines instead.
(Though of course I also don’t think this is a fair characterization)
I agree with your analysis, and further:
There are four lines: the two “rotating” lines connected to the center dot, and the two lines connected to the left and right dots. The center lines start out connected to the side dots, then rotate clockwise around the square. The other lines ALSO rotate clockwise: the left one is centered on the left dot, and rotates from the center dot down, left, then to the top, then right, then back to the center. The right line acts similarly.