I do not agree that accuracy has no meaning outside of resolution. At least this is not the sense in which I was employing the word. By accurate I simply mean numerically correct within the context of conventional probability theory. For instance, if I ask the question “A die is rolled—what is the probability that the result will be either three or four?” the accurate answer is 1⁄3. If I ask “A fair coin is tossed three times—what is the probability that it lands heads each time?” the accurate answer is 1⁄8, etc. This makes the accuracy of a proposed probability value wholly independent of pay-offs.
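As a quick sanity check, both of those values can be verified by brute-force enumeration over the equally likely outcomes (a minimal sketch, not part of the original comment):

```python
from itertools import product
from fractions import Fraction

# A fair die is rolled: probability the result is three or four.
die_outcomes = range(1, 7)
p_three_or_four = Fraction(sum(1 for r in die_outcomes if r in (3, 4)), 6)

# A fair coin is tossed three times: probability of heads every time.
sequences = list(product("HT", repeat=3))
p_all_heads = Fraction(sum(1 for s in sequences if s == ("H", "H", "H")),
                       len(sequences))

print(p_three_or_four, p_all_heads)  # 1/3 1/8
```

The enumeration makes the point concrete: the answers fall out of counting outcomes, with no reference to any pay-off.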
Guillaume Charrier
I don’t think so. Even in the heads case, it could still be Monday—and say the experimenter told her: “Regardless of the ultimate sequence of events, if you predict correctly when you are woken up, a million dollars will go to your children.”
To me, “as a rational individual” is simply a way of saying “as an individual who is seeking to maximize the accuracy of the probability value she proposes—whenever she is in a position to make such a proposal (which implies, among other things, that she must be alive to make the proposal).”
Sleeping Beauty – the Death Hypothesis
AND THERE YOU FUCKING HAVE IT. Sure—ban please. Consider this my farewell—dear dear fucking friends.
I laughed. However, you must admit that your comical exaggeration does not necessarily carry a lot of ad rem value.
But then would a less intelligent being (i.e. the collectivity of human alignment researchers and the less powerful AI systems that they use as tools in their research) be capable of validly examining a more intelligent being, without being deceived by the more intelligent being?
Exactly—and then we can have an interesting conversation etc. (e.g. are all ASIs necessarily paperclip maximizers?), which the silent downvote does not allow for.
I see. But how can the poster learn if he doesn’t know where he has gone wrong? To give one concrete example: in a recent comment, I simply stated that some people hold that AI could be a solution to the Fermi paradox (past a certain level of collective smartness, an AI is created that destroys its creators). I got a few downvotes on that—and frankly I am puzzled as to why, and I would really be curious to understand the reasoning behind the downvotes. Did the downvoters hold that the Fermi paradox is not really a thing? Did they think that it is a thing, but that AI can’t be a solution to it for some obvious reason? Was it something else—I simply don’t know; and so I can’t learn.
Heh… what do they call it again? Ah: cosmic justice. However, on net, you’re still doing pretty well. So.
Hmm, I see… not sure it totally serves the purpose, though. For instance, when I see a comment with a large number of downvotes, I’m much more likely to read it than a comment with a relatively low number of upvotes. So: within certain bounds, I guess.
For any confidence that an AI system A will do a good job of its assigned duty of maximizing alignment in AI system B, wouldn’t you need to be convinced that AI system A is well aligned with its given assignment of maximizing alignment in AI system B? In other words, doesn’t that suppose you have actually already solved the problem you are trying to solve?
And if you have not—aren’t you just priming yourself for manipulation by smarter beings?
There might be good reasons why we don’t ask the fox about the best ways to keep the fox out of the henhouse, even though the fox is very smart, and might well actually know what those would be, if it cared to tell us.
The whole trial of Socrates, the attitude of its main protagonist throughout, etc., should make us see one thing particularly clearly, which is banal but bears repeating: there is an extremely wide difference between being smart (or maybe: bright) and being wise. Something that the proceedings on this site can also, at times, help remind us of.
I personally think that the fact that you are allowed to downvote without providing a summary explanation as to why is also a huge issue for the quality of debate on this site, and frankly: deeply antithetical to its professed ethics. Either you don’t know exactly why you are downvoting, or you’re doing it for reasons that you would rather not expand on, or you’re doing it but are too lazy to explain why: in either case—you’re doing it wrong.
So for instance: if anybody wants to downvote this (I sort of have a feeling that this could well be the case—somehow), please go ahead and do; AND take the minimal pain (not to mention courtesy) of leaving a brief note as to the reason why.
Interesting. It seems to imply, however, that a rationalist would always consider, a priori, his own individual survival as the highest ultimate goal, and modulate—rationally—from there. This is highly debatable: you could have a rationalist father who considers, a priori, the survival of his children to be more important than his own, a rationalist patriot who considers, a priori, the survival of his political community to be more important than his own, etc.
From somebody equally as technically clueless: I had the same intuition.
Philosophically: no. When you look at the planet Jupiter you don’t say: “Hmm, oh—there’s nothing to understand about this physical object beyond math, because my model of it, which is sufficient for a full understanding of its reality, is mathematical.” Or maybe you do—but then I think our differences might be too deep to bridge. If you don’t—why wouldn’t you say it of Jupiter, but would of an electron or a photon?
Bizarrely, for people whose tendencies were to the schizoid anyway, and regardless of sociological changes—this might be mildly comforting. Your plight will always seem somewhat more bearable when it is shared by many.
Also: the fact that people now move out later might be a kind of disguised compliment, or at least nod, to better-quality parent-child relationships. While I was never particularly resourceful or independent, I couldn’t wait to move out—but that was not necessarily for the right reasons.
Finally—one potentially interesting way of looking at the increasingly exacerbated partisanship and level of political division across the country might be as a sort of last-ditch attempt to fight desocialization. When no community remains, the “I really don’t like liberals” and “I really don’t like conservatives” groups of kindred spirits offer what might be the last credible alternative to it.
I mean: I just look at the world as it is, right, without preconceived notions, and it seems relatively evident to me that no: it cannot be fully explained and understood through math. Please describe to me, in mathematical terms, the differences between Spanish and Italian culture. Please explain to me, in mathematical terms, the role and function of sheriffs in medieval England. I could go on and on and on…
Yeah… as they say: there’s often a big gap between smart and wise.
Smart people are usually good at math. Which means they have a strong emotional incentive to believe that math can explain everything.
Wise people are aware of the emotional incentives that fashion their beliefs, and they know to distrust them.
Ideally—one would be both: smart and wise.
All right—but here the evidence predicted would simply be “the coin landed on heads”, no? I don’t really see the contradiction between what you’re saying and conventional probability theory (more or less all of which was developed with the specific idea of making predictions, winning games, etc.). Yes, I agree that saying “the coin landed on heads with probability 1/3” is a somewhat strange way of putting things (the coin either did or did not land on heads), but it’s a shorthand for a conceptual framework that has fairly simple and sound foundations.
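For what it’s worth, the 1/3 shorthand can be illustrated with a quick per-awakening simulation of the standard Sleeping Beauty setup (a hypothetical sketch, not something from the original thread): heads yields one awakening, tails yields two, and we count how often an awakening coincides with heads.

```python
import random

# Sleeping Beauty, per-awakening tally (illustrative sketch):
# heads -> one awakening (Monday); tails -> two awakenings (Monday, Tuesday).
random.seed(0)
heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    coin = random.choice(["H", "T"])
    awakenings = 1 if coin == "H" else 2
    total_awakenings += awakenings
    if coin == "H":
        heads_awakenings += awakenings

freq = heads_awakenings / total_awakenings
print(round(freq, 3))  # close to 1/3
```

This is exactly the “prediction frequency” reading: among the occasions on which she is woken and asked, roughly one in three coincides with heads.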