I think the more relevant case is when the random noise is imperceptibly small. Of course you two-box if it’s basically random.
… you don’t think that pissing away credibility could weaken the arguments? I think presenting those particular arguments is more likely to do that than it is to work.
I suspect that an AI will have a bullshit detector. We want to avoid setting it off.
I read up to 3.1. The arguments in 3.1 are weak. It seems dubious that any AI would not be aware of the risks pertaining to disobedience. Persuasion to be corrigible seems to come too late: either it would already work, because the AI's goals were made sufficiently indirect that this question is obvious and pressing, or it doesn't care about having ‘correct’ goals in the first place; I really don't see how persuasion helps either way. The arguments for allowing itself to be turned off are especially weak, the MWI one most of all.
See: my first post on this site.
What do you mean by natural experiment, here? And what was the moral, anyway?
I remember poking at that demo to try to actually get it to behave deceptively—with the rules as he laid them out, the optimal move was to do exactly what the humans wanted it to do!
I understand EY thinks that if you simulate enough neurons sufficiently well you get something that’s conscious.
Without specifying the arrangements of those neurons? Of course it should if you copy the arrangement of neurons out of a real person, say, but that doesn’t sound like what you meant.
I would really want a cite on that claim. It doesn’t sound right.
As in many cases of motte-and-bailey, the motte is mainly held by people who dislike the bailey. I suspect that an average scientist in a relevant field at or below neurophysics in the generality hierarchy (e.g. a chemist or physicist, but not a sociologist) would consider that bailey to be… unlikely at best, while holding the motte very firmly.
This looks promising.
Also, the link to the Reality of Emergence is broken.
1) You could define the shape criteria required to open lock L, and then the object reference would fall away. And, indeed, this is how keys usually work. Suppose I have a key with tumbler heights 0, 8, 7, 1, 4, 9, 2, 4. This is an intrinsic property of the key. That is what it is.
A lock can have a matching set of tumbler heights, and there is then a relationship between the two. I wouldn't even consider that so much an extrinsic property of the key itself as a relationship between the intrinsic properties of the key and the lock. (There's a small sketch of this after point 2.)
2) Metaethics is a function from cultural situations and moral intuitions into a space of ethical systems. This function is not onto (i.e. not every coherent ethical system is the result of metaethical analysis of some cultural situation and set of moral intuitions), and it is not at all guaranteed to yield the ethical system in use in that cultural situation. This is a very significant difference from moral relativism, not a mere slight increase in temperature.
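To put 2) in symbols (my own notation, only a sketch of the claim as I read it): write the metaethical map as
$$M : S \times I \to E,$$
where $S$ is the space of cultural situations, $I$ the space of moral intuitions, and $E$ the space of coherent ethical systems. The two claims are that $M$ is not onto, i.e.
$$\exists\, e \in E \;\; \forall (s, i) \in S \times I : \; M(s, i) \neq e,$$
and that $M(s, i_s)$ need not equal the ethical system actually in use in situation $s$.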
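And a minimal sketch of the point in 1), with hypothetical Key/Lock types of my own (an illustration, not anyone's actual code):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Key:
    heights: Tuple[int, ...]  # intrinsic property: the key's cut

@dataclass(frozen=True)
class Lock:
    heights: Tuple[int, ...]  # intrinsic property: the lock's pin stack

def opens(key: Key, lock: Lock) -> bool:
    # "Opens lock L" is not stored on either object; it falls out of
    # comparing the two sets of intrinsic properties.
    return key.heights == lock.heights

key = Key((0, 8, 7, 1, 4, 9, 2, 4))
lock = Lock((0, 8, 7, 1, 4, 9, 2, 4))
print(opens(key, lock))  # True, without ever naming "the key to lock L"
```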
Yes, but that’s not the way the problem goes. You don’t fix your prior in response to the evidence in order to force the conclusion (if you’re doing it anything like right). So different people with different priors will require different amounts of evidence: one bit of evidence for every bit of prior odds against, just to bring the hypothesis up to even odds, and then a few more to reach it as a (tentative, as always) conclusion.
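Spelled out, that's just the odds form of Bayes' theorem (a standard identity, nothing specific to this exchange):
$$\frac{P(H \mid E)}{P(\lnot H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \lnot H)} \times \frac{P(H)}{P(\lnot H)},$$
i.e. posterior odds = likelihood ratio × prior odds. If your prior odds are $1 : 2^{n}$ against $H$, then evidence worth $n$ bits (a cumulative likelihood ratio of $2^{n}$) brings you exactly to even odds, and each further bit doubles the odds in favor.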
Where’s that from?
This is totally backwards. I would phrase it, “Priors get out of the way once you have enough data.” That’s a good thing; it makes them useful, not useless. A prior’s purpose is right there in the name: it’s your starting point. The evidence takes you on a journey, and you asymptotically approach your goal.
If priors were capable of skewing the conclusion after an unlimited amount of evidence, that would make them permanent, not simply a starting-point. That would be writing the bottom line first. That would be broken reasoning.
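For concreteness, a toy version of that washing-out (my own illustration, assuming simple Beta–Bernoulli updating; none of this is from the original exchange): two agents with very different priors about a coin's bias watch the same flips, and their posteriors converge as the data piles up.

```python
import random

random.seed(0)
true_p = 0.7  # the coin's actual bias

# Beta(alpha, beta) pseudo-count priors: one near-uniform, one strongly
# convinced the coin is biased toward tails.
priors = {"near-uniform": (1, 1), "strong tails prior": (5, 95)}

flips = [random.random() < true_p for _ in range(10_000)]

for name, (alpha, beta) in priors.items():
    heads = 0
    for n, flip in enumerate(flips, start=1):
        heads += flip
        if n in (10, 100, 1_000, 10_000):
            # Posterior mean of Beta(alpha + heads, beta + tails).
            post_mean = (alpha + heads) / (alpha + beta + n)
            print(f"{name:20s} n={n:6d}  posterior mean = {post_mean:.3f}")
```

The skewed prior starts far from the truth, but by ten thousand flips the two posterior means nearly agree; the prior was a starting point, not a permanent thumb on the scale.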
Like, “Please, create a new higher bar that we can expect a truly super-intelligent being to be able to exceed.”?
The story misses that a superintelligence will probably be able to resurrect people even if they were not cryopreserved, by creating copies of them based on digital immortality.
Enough of what makes me me hasn’t made it into digital expression by accident, and won’t short of post-singularity means, that I wouldn’t identify such a poor individual as being me. It would be a neuro-sculpture on the theme of me.
That’s more about the land moving in response to the changes in ice, and a tiny correction for changing the gravitational force previously applied by the ice.
This is (probably?) about the way the water settles around a spinning oblate spheroid.
Good point; how about someone who is stupider than the average dog?
If you find an Omega, then you are in an environment where Omega is possible. Perhaps we are all simulated and QM is optional. Maybe we have easily enough determinism in our brains that Omega can make predictions, much as quantum mechanics ought to in some sense prevent predicting where a cannonball will fly but in practice does not. Perhaps it’s a hypothetical where we’re AI to begin with so deterministic behavior is just to be expected.