For example, someone who was completely colorblind from birth could never understand what it feels like to see the color green, no matter how much neuroscience they knew; that is, you could never convey the sensation of “green” by laying out a connectome or listing wavelengths of light.
I didn’t have a problem with 1 or 2, but 3 and 4 were the big problems. (Though I didn’t downvote, because the post was already well into the negatives at that point.) Saying AI is software is an assertion, but it’s not a meaningful one. Are you saying software that prints ‘hello world’ is intelligent? From some of your previous comments I gather you are interested in how software, the user, the designer, and other software interact in some way, but there was none of that in the post. It’s as if Eliezer had said ‘rationality IS winning IS rationality’ as the entirety of the Sequences.
“Again and again, I’ve undergone the humbling experience of first lamenting how badly something sucks, then only much later having the crucial insight that its not sucking wouldn’t have been a Nash equilibrium.”—Scott Aaronson
A Christian proverb says: “The Church is not a country club for saints, but a hospital for sinners”. Likewise, the rationalist community is not an ivory tower for people with no biases or strong emotional reactions, it’s a dojo for people learning to resist them.
I find it takes a great deal of luminosity to be honest with someone. If I am in a bad mood, I might feel that it’s my honest opinion that they are annoying, when in fact what is going on in my brain has nothing to do with their actions. I might have been able to like the play in other circumstances, but I was having a bad day, so flaws I might otherwise have overlooked were magnified in my mind. And so on.
This is my main fear with radical honesty: it seems to promote thinking that negative thoughts are true just because they are negative. The reasoning goes, ‘I would not say this if I were being polite, but I am thinking it, therefore it is true,’ without recognizing that your brain can make your thoughts more negative than the truth just as easily as it can make them more positive than the truth.
In fact, saying you enjoyed something you didn’t enjoy, and signalling enjoyment with the appropriate facial muscles (smiling, etc.), can improve your mood by itself, especially if it makes the other person smile.
Many intelligent people get lots of practice pointing out flaws, and it is possible that this trains the brain into a mode where one’s first thoughts on a topic are critical regardless of the ‘true’ reaction. If your brain automatically looks for flaws in something, then when a friend asks for your honest opinion you will tell them the flaws; but if you habitually looked for things to compliment, your ‘honest’ opinion might be quite different.
tl;dr: honesty is harder than many naively think, because our brains are not perfect reporters of their own state, and even if they were, good luck explaining your inner feelings about something across the inferential distance. Better to just adjust all your reactions slightly in the positive direction and reap the benefits of happier interactions (but only slightly; don’t say you liked activities you loathed, or you’ll be asked back, say they were OK but not your cup of tea, etc.)
he puts 2.72 on the table, and you put 13.28 on the table.
I’m confused... if the prediction does not come true (which you estimated as being 33 percent likely), you only gain $2.72? And if the most probable outcome does come true, you lose $13.28?
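To make the confusion concrete, here is the expected value under the simplest reading, a single winner-takes-the-pot bet evaluated at the 33/67 split above (an assumption; the quoted scheme may derive its stakes from a scoring rule rather than from simple odds):

E[gain] = 0.33 × $2.72 − 0.67 × $13.28 = −$8.00

Those stakes would only be fair if the outcome that pays you had probability 13.28 / (2.72 + 13.28) ≈ 0.83.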
An always-open mind never closes on anything. There is a time to confess your ignorance and a time to relinquish your ignorance, and all that...
Well, yes, obviously the classical paperclipper doesn’t have any qualia, but I was replying to a comment which argued that any agent, on discovering the pain-of-torture qualia in another agent, would revise its own utility function to prevent torture from happening. It seems to me that this argument proves too much: if it were true, then upon discovering an agent with paperclips-are-wonderful qualia and “fully understanding” those experiences, I would likewise be compelled to create paperclips.
By signing up for cryonics you help make cryonics more normal and less expensive, encouraging others to save their own lives. I believe there was a post where someone said they signed up for cryonics so that they wouldn’t have to answer the “why aren’t you signed up then?” crowd when trying to convince other people to do so.
Anyone who isn’t profoundly disturbed by torture, for instance, or by agony so bad one would end the world to stop the horror, simply hasn’t understood it.
Similarly, anyone who doesn’t want to maximize paperclips simply hasn’t understood the ineffable appeal of paperclipping.
Taking this post in the way it was intended, i.e., ‘are there any reasons why such a policy would make people more likely to attribute violent intent to LW?’, I can think of one:
The fact that this policy is seen as necessary could imply that LW has a particular problem with members advocating violence. Basically, I could envision someone saying: ‘LW members advocate violence so often that they had to institute a specific policy just to avoid looking bad to the outside world.’
And, of course, statements like ‘if a proposed conspiratorial crime were in fact good you shouldn’t talk about it on the internet’ make for good out-of-context excerpts.
I feel this should not be in the featured posts, as amusing as it was at the time.