I second this proposal. On the sites where I’ve seen it implemented, I’ve found it extremely useful.
It would probably be some kind of weird signalling game. On the other hand, posting “I don’t understand how etc etc, please, somebody explain to me the reasoning behind it” would be a good strategy for starting a debate and opening an avenue to “convert” others.
Now I really, really, really want to know in what SI units rationality is measured.
Litres, perhaps?
Another test could be to see if its performance in its chosen field suddenly jumps in effectiveness. To give a real-world example: when Google (which is the closest thing we have to an AI right now, I think) gained the ability to suggest terms based on what one has already typed, it became much easier to search for things. The same would hold when it eventually gains the ability to parse human language, and so on.
And in fact, I seem to recall OkCupid doing another informal study a couple of years ago on which profile pictures were best at getting replies and messages; it found that these were not the ones which explicitly showed the person’s face and physique, but the ones which showed the person engaged in a cool activity (skiing, bungee jumping, swimming, etc.).
Now I’m intrigued by the steepness of that line, and by the fact that personality scores seem to be lower than “looks” scores. Also, are universities using OkCupid as a resource in their studies? I know one university has famously used Facebook, but OkCupid seems much more open and amenable to this kind of thing.
Thing is, it’s when an AI is much, much wiser than a human that it is at its most dangerous. So I’d go with programming the AI in such a way that it wouldn’t manipulate the human, postponing the ‘coming of age’ ceremony indefinitely.
I have a question: why should Albert limit itself to showing the powerpoint to its engineers? A potentially unfriendly AI sounds like something most governments would be interested in :-/
Aside from that, I’m also puzzled by the fact that Albert immediately leaps at trying to speed up Albert’s own rate of self-improvement instead of trying to bring Bertram down. Albert could prepare a third powerpoint asking the engineers if Albert can hack the power grid and cut power to Bertram, or something along those lines. Or Albert could ask the engineers if Albert can release the second, manipulative powerpoint to the general public so that protesters will boycott Bertram’s company :-/
Unless, of course, there is the unspoken assumption that Bertram is slightly further along the AI-development path than Albert, or that Bertram is going to reach and surpass Albert’s level of development as soon as the powerpoint is finished.
Is this the case? :-/
All three options fit the bill, actually, but I was going for strongly dislike. Man, I must have been more tired than I realized to miss a whole word like that.
Aren’t we all forgetting something big and obvious[1] that’s staring us in the face? :-/ There are people out there for whom “rationality” is counter to their values! Imagine someone who reads the horoscope every morning, who always trusts their gut feelings and emotions, who’s a sincere believer in homeopathy, etc etc (whatever you think an irrational person believes). Such a person would probably strongly dislike rationality, rationalists, and the complex of ideas surrounding rationality, for understandable reasons (i.e. if a group consistently belittles your treasured beliefs, you’re liable to dislike the group). Such people might dislike R!Harry because they’d see rationality as a magic feather, and seeing it work in the story (to an uncanny degree, I might add) would read like an author tract to them. Imagine a black person reading a fanfic where, through the power of !RACISM! (exaggeration mine), Harry gets everything handed to him on a silver platter.
[1] Disclaimer: just because it’s big and obvious doesn’t mean it’s actually more right or important, only that it’s easier to see and think about.
This is actually related to my pet theory that, at least in status-signalling terms, it is better to call oneself an “aspiring rationalist” rather than a “rationalist” full stop.
The problem with that is that the first is longer and more awkward to use :-/
This sounds like something from Schelling’s Strategy of Conflict, although I haven’t read it.
That may be so, but it doesn’t mean it can’t be effective; before Facebook, social networking websites hadn’t really taken off, and (to give an example already in the post) fundraisers existed even before Kickstarter; that doesn’t mean Kickstarter didn’t make things easier for a lot of people.
The main draw of this kind of program, I think, is that it would remove a lot of the trivial inconveniences that come with voting, and it could work as a Beeminder-like prompt for slacktivists, thereby making them actually useful.
Wait, this is actually brilliant in a couple of ways, because to get the right (estimated) answer, the listener has to distinguish between the probability that one of the three is a rabbi and this is a joke (a joint probability), and the probability that this is a joke given that the third is a rabbi (a conditional probability).
It follows the setup of a rationality calibration question while subverting it and rendering “guessing the teacher’s password” useless, since c) is (maybe) higher than a) or b).
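To make that distinction concrete, here is a quick worked example; the numbers are made up purely for illustration and aren’t from the joke itself. The joint probability is the product of the base rate and the conditional, so it can never exceed the conditional:

$$P(\text{rabbi} \wedge \text{joke}) = P(\text{rabbi}) \cdot P(\text{joke} \mid \text{rabbi}) = 0.2 \times 0.9 = 0.18 \le 0.9 = P(\text{joke} \mid \text{rabbi})$$

A listener who conflates the two will therefore systematically misjudge how plausible option c) is relative to a) and b).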
I actually hadn’t considered the time; in retrospect, though, it does make a lot of sense. Thank you! :-)
I am surprised that this post has so little karma. Since one of the...let’s call them “tenets” of the rationalist community is the drive to improve oneself, I would have imagined that this kind of criticism would be welcomed.
Can anyone explain this to me, please? :-/
Okay, I believe I have a very stupid question I need to ask:
Why isn’t there more research in progress on how to wake people up from cryonics? Or rather, why aren’t more people sticking hamsters and dogs under liquid nitrogen*, then trying to revive them and bring them back to “full life”, and seeing if dear ole Spot remembers all the tricks we taught him?
If such things are underway, why is there so little news and data on this?
*gross oversimplification is funny
Actually, I have pretty much the same misgivings/objections as you; it didn’t feel particularly scary to me either :-/
Maybe it’s the fact that uploading/etc. is basically a foregone conclusion when facing a superintelligence? Although I thought that was obvious from the concept itself :-/
How about a part in binary where the AI itself sings with mustache-twirling villainy? :-P
One becomes vulnerable to Ind pretending to be Coo?