I wish I could vote you up and down at the same time.
As long as we’re using sci-fi to inform our thinking on criminality and corrections, The Demolished Man is an interesting read.
Thank you. So, not quite consensus but similarly biased in favor of inaction.
Assuming we have no other checks on behavior, yes. I’m not sure, pending more reflection, whether that’s a fair assumption or not...
It’s a potential outcome, I suppose, in that
[T]here’s nothing I prefer/antiprefer exist, but merely things that I prefer/antiprefer to be aware of.
is a conceivable extrapolation from a starting point where you antiprefer something’s existence (in the extreme, with MWI you may not have much say in what does/doesn’t exist, just how much of it in which branches).
It’s also possible that you hold both preferences (prefer X not exist, prefer not to be aware of X) and the existence preference gets dropped for being incompatible with other values held by other people while the awareness preference does not.
My understanding is that CEV is based on consensus, in which case the majority is meaningless.
Um, if you would object to your friends being killed (even if you knew more, thought faster, and grew up further with others), then it wouldn’t be coherent to value killing them.
I am not the one who is making positive claims here.
You did in the original post I responded to.
All I’m saying is that what has happened before is likely to happen again.
Strictly speaking, that is a positive claim. It is not one I disagree with, for a proper translation of “likely” into probability, but it is also not what you said.
“It can’t deduce how to create nanorobots” is a concrete, specific, positive claim about the (in)abilities of an AI. Don’t misinterpret this as me expecting certainty—of course certainty doesn’t exist, and doubly so for this kind of thing. What I am saying, though, is that a qualified sentence such as “X will likely happen” asserts a much weaker belief than an unqualified sentence like “X will happen.” “It likely can’t deduce how to create nanorobots” is a statement I think I agree with, although one must be careful not to use it as if it were stronger than it is.
A positive claim is that an AI will have a magical-like power to somehow avoid this.
That is not a claim I made. “X will happen” implies a high confidence—saying this when you expect it is, say, 55% likely seems strange. Saying this when you expect it to be something less than 10% likely (as I do in this case) seems outright wrong. I still buckle my seatbelt, though, even though I get in a wreck well less than 10% of the time.
This is not to say I made no claims. The claim I made, implicitly, was that you made a statement about the (in)capabilities of an AI that seemed overconfident and which lacked justification. You have given some justification since (and I’ve adjusted my estimate down, although I still don’t discount it entirely), in amongst your argument with straw-dlthomas.
No, my criticism is “you haven’t argued that it’s sufficiently unlikely, you’ve simply stated that it is.” You made a positive claim; I asked that you back it up.
With regard to the claim itself, it may very well be that AI-making-nanostuff isn’t a big worry. For any inference, the stacking of error in integration that you refer to is certainly a limiting factor—I don’t know how limiting. I also don’t know how incomplete our data is, with regard to producing nanomagic stuff. We’ve already built some nanoscale machines, albeit very simple ones. To what degree is scaling it up reliant on experimentation that couldn’t be done in simulation? I just don’t know. I am not comfortable assigning it vanishingly small probability without explicit reasoning.
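Purely to illustrate how quickly chained inference can degrade (the numbers here are made up for illustration, not an estimate of anything): if each step in a long deductive chain is independently 90% reliable, then after twenty steps

$$0.9^{20} \approx 0.12,$$

so even modest per-step error can leave very little confidence in the end-to-end conclusion. Whether real inference chains of the relevant sort look anything like this is exactly the part I don't know.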
It can’t deduce how to create nanorobots[.]
How do you know that?
But in the end, it simply would not have enough information to design a system that would allow it to reach its objective.
I don’t think you know that.
You’ll have to forgive Eliezer for not responding; he’s busy dispatching death squads.
The answer from the sequences is that yes, there is a limit to how much an AI can infer based on limited sensory data, but you should be careful not to assume that just because it is limited, it’s limited to something near our expectations. Until you’ve demonstrated that FOOM cannot lie below that limit, you have to assume that it might (if you’re trying to carefully avoid FOOMing).
Of those who attempted, fewer thought they were close, but fifty still seems very generous.
Why isn’t it a minor nitpick? I mean, we use dimensioned constants in other areas; why, in principle, couldn’t the equation be E = mc · (1 m/s)? If that was the only objection, and the theory made better predictions (which, obviously, it didn’t, but bear with me), then I don’t see any reason not to adopt it. Given that, I’m not sure why it should be a significant objection.
Edited to add: Although I suppose that would privilege the meter and second (actually, the ratio between them) in a universal law, which would be very surprising. Just saying that there are trivial ways you can make the units check out, without tossing out the theory. Likewise, of course, the fact that the units do check out shouldn’t be taken too strongly in a theory’s favor. Not that anyone here hasn’t seen the XKCD, but I still need to link it, lest I lose my nerd license.
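To spell out the dimensional bookkeeping on the E = mc · (1 m/s) suggestion above (a quick sketch in SI units; the 1 m/s is just the arbitrary dimensioned constant, nothing more):

$$[mc] = \mathrm{kg}\cdot\tfrac{\mathrm{m}}{\mathrm{s}} \quad \text{(units of momentum, not energy)}$$

$$[mc\cdot(1\,\mathrm{m/s})] = \mathrm{kg}\cdot\tfrac{\mathrm{m}^2}{\mathrm{s}^2} = \mathrm{J} \quad \text{(units of energy)}$$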
I don’t think “certainty minus epsilon” improves much. It moves it from theoretical impossibility to practical—but looking that far out, I expect “likelihood” might be best.
Things that are true “by definition” are generally not very interesting.
If consciousness is defined by referring solely to behavior (which may well be reasonable, but is itself an assumption) then yes, it is true that something that behaves exactly like a human will be conscious IFF humans are conscious.
But what we are trying to ask, at the high level, is whether there is something coherent in conceptspace that partitions objects into “conscious” and “unconscious” in something that resembles what we understand when we talk about “consciousness,” and then whether it applies to the GLUT. Demonstrating that it holds for a particular set of definitions only matters if we are convinced that one of the definitions in that set accurately captures what we are actually discussing.
Ah, fair. So in this case, we are imagining a sequence of additional observations (from a privileged position we cannot occupy) to explain.
On the macro scale, spin (i.e., rotation) is definitely quantitative—any object is rotating at a particular rate about a particular axis. This can be measured, integrated to yield (change in) orientation, etc.
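For rotation about a single fixed axis, the integration I have in mind is just (a sketch of the simplest case; general 3D rotation needs more machinery than a single angle):

$$\Delta\theta = \int_{t_0}^{t_1} \omega(t)\,\mathrm{d}t$$

where $\omega$ is the measured rotation rate and $\Delta\theta$ the resulting change in orientation.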
In QM, my understanding is that (much like “flavor” and “color”) the term is just re-purposed for something else.
Azathoth should probably link here. I think using our jargon is fine, but links to the source help keep it discoverable for newcomers.