Semantics.
Good PR requires you to put a filter between what you think is true and what you say.
The level of PR you aim for puts an upper limit on how much “radical” honesty you can have.
If you aim for perfect PR, you can have 0 honesty.
If you aim for perfect honesty, you can have no PR. LessWrong doesn’t go that far, by a long shot—even without a PR team present.
Most organizations do not aim for honesty at all.
The question is where we draw the line.
Which brings us to “Disliking racism isn’t some weird idiosyncratic thing that only Gerard has.”
From what I understand, Gerard left because he doesn’t like discussions about race/IQ.
Which is not the same thing as racism.
I, personally, don’t want LessWrong to cater to people who cannot tolerate a discussion.
“It’s sad that our Earth couldn’t be one of the more dignified planets that makes a real effort, correctly pinpointing the actual real difficult problems and then allocating thousands of the sort of brilliant kids that our Earth steers into wasting their lives on theoretical physics. But better MIRI’s effort than nothing.”
To be fair, a lot of philosophers and ethicists have been trying to discover what “good” means and how humans should go about aligning with it.
Furthermore, a lot of effort has gone into trying to align goals and incentives on all levels—from the planetary to the personal scale.
People have actually tried to create bonus systems that cannot be gamed.
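A concrete illustration (mine, not from the original thread): proper scoring rules are one classic bonus design that cannot be gamed at the level of a single forecast, because the expected payout is maximized only by reporting one’s honest probability. A minimal sketch, with illustrative numbers:

```python
def brier_bonus(reported_p, outcome, scale=100.0):
    """Bonus based on the (negated) Brier score: 1 minus squared error.

    A forecaster maximizes their *expected* bonus only by reporting
    their true probability, so the scheme cannot be gamed."""
    return scale * (1.0 - (outcome - reported_p) ** 2)

# A forecaster who privately believes the event is 70% likely:
true_belief = 0.7
for report in (0.5, 0.7, 0.9):
    expected = (true_belief * brier_bonus(report, 1)
                + (1 - true_belief) * brier_bonus(report, 0))
    print(f"report {report:.1f}: expected bonus {expected:.1f}")
# Prints 60.0, 79.0, 75.0: sandbagging and exaggerating both lose
# to the honest report of 0.7.
```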
Maybe all of this does not rise to the standard of actually achieving the desired result—but then again, neither has MIRI, so far.
So, for anyone depressed at how little dignity we get to die with, the good news is that people have been working (at least tangentially) on the alignment problem for a long time.
The bad news is, of course, that people have been working on the alignment problem for a long time.
For any statement one can make, there will be people “alienated” (=offended?) by it.
David Gerard was alienated by a race/IQ discussion and you think that should’ve been avoided.
But someone was surely equally alienated by discussions of religion, evolution, economics, education and our ability to usefully define words.
Do we value David Gerard so far above any given creationist, that we should hire a PR department to cater to him and people like him specifically?
There is an ongoing effort to avoid overtly political topics (Politics is the mind-killer!), but this effort is doomed beyond a certain threshold, since everything is political to some extent. Or to some people.
To me, a concerted PR effort on part of all prominent representatives to never say anything “nasty” would be alienating. I don’t think a community even somewhat dedicated to “radical” honesty could abide a PR department—or vice versa.
TL;DR—LessWrong has no PR department, LessWrong needs no PR department!
Tolstoy sounds ignorant of game theory—probably because he was dead when it was formulated.
Long story short, non-cooperating organisms regularly got throttled by cooperating ones, which is how we evolved to cooperate.
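To make that concrete (a toy simulation of mine, not anything Tolstoy or the parent post offered): in the textbook iterated prisoner’s dilemma, a reciprocating strategy takes over a mixed population under even crude selection. All parameters below are illustrative:

```python
import random

# Textbook prisoner's dilemma payoffs, keyed by (my_move, their_move).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strategy_a, strategy_b, rounds=50):
    """Total payoff for strategy_a over an iterated game against strategy_b."""
    score = 0
    last_a, last_b = 'C', 'C'  # both start with a cooperative "first impression"
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        score += PAYOFF[(move_a, move_b)]
        last_a, last_b = move_a, move_b
    return score

tit_for_tat = lambda their_last: their_last  # cooperate first, then mirror
always_defect = lambda their_last: 'D'       # never cooperate

# Crude selection: everyone plays a random opponent each generation,
# and the top-scoring half of the population reproduces.
population = [tit_for_tat] * 50 + [always_defect] * 50
for _ in range(30):
    scored = [(play(agent, random.choice(population)), agent) for agent in population]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    population = [agent for _, agent in scored[:50]] * 2

print(sum(agent is tit_for_tat for agent in population), "cooperators out of", len(population))
```

Defectors win individual encounters (5 against 0 in the first round) but get throttled in aggregate, because cooperator pairs rack up 3 points per round while defector pairs scrape by on 1.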
14 years too late, but I can never pass up an opportunity to recommend “Essence of Calculus” by 3blue1brown on YouTube.
It is a series of short videos explaining calculus concepts and core ideas without too much formalism and with plenty of geometric examples.
“Dear God” by XTC is my favourite atheist hymn. On the other hand, “Transcendence” with Johnny Depp made me feel empathy for Christians watching Bible flicks—I so wanted to like the damn thing.
As to OP’s main point, “politics is the art-killer” has recently entered the discourse of almost every fandom (if the franchise is still ongoing). Congratulations on pointing out yet another problem years before it became so exacerbated that people can no longer ignore it.
Reverse stupidity is not wisdom. Here we have reversed ad populum (aka the Hipster’s Fallacy). Pepsi and Macs are not strictly superior to their more popular counterparts by dint of existing. Rather, their existence is explained by comparative advantage in some cases for some users.
I’ve heard Peterson accuse feminists of disregarding what is true in the name of ideology on many occasions.
Sam Harris initially spent an hour arguing against Peterson’s redefinition of “truth” to include a “moral dimension”. They’ve clashed about it since, with no effect. AFAIK, “the Bible is true because it is useful” is a central component of Peterson’s worldview.
To be fair, I believe Peterson has managed to honestly delude himself on this point and is not outright lying about his beliefs.
Nevertheless, when prompted to think of a “General Defense of Fail”, attempting to redefine the word “truth” in order to protect one’s ideology came to mind very quickly.
If we accept MWI, cryonics is a backdoor to Quantum Immortality, one which waiting and hoping may not offer.
Parents getting to their 9 to 5 jobs on time is more important.
Going any further would require tabooing “task”.
I agree your reading explains the differences in responses given in the survey.
Creating an AI that does linguistic analysis of a given dataset better than me is easier than creating an AI that is a better linguist than me, because the latter requires additional tasks, such as writing academic papers.
If the AI is not better than you at the task “write an academic paper”, it is not at the level specified in the question.
If a task requires output of both the end result and the analysis used to reach it, both should be output. At least that is how I understand “better at every task”.
Thank you for the link.
Right, none of our models are philosophically grounded. But does that make them all equal? That’s what the post sounds like to me:
“Well maybe: deny the concept of objective truth, of which there can only be one, and affirm subjectivism and pluralism.”
To me, this seems like the ultimate Fallacy of Gray.
Then again, I am not well read in philosophy, so my comments might be isomorphic to “Yay pragmatism! Go objectivity!”, while those may or may not be compatible.
The IoT (internet of things) comes to mind. Why not experience WiFi connectivity issues while trying to use the washing machine?
Everything trying to become a subscription service is another example (possibly related to IoT). My favourite is a motorcycle lifesaving airbag vest which won’t activate during a crash if the user misses a monthly payment. The company is called Klim and, in fairness, the user can check whether the airbag is ready for use before getting on their bike.
Extractable internal data is only needed during troubleshooting. During normal operation, only the task result is needed.
As for the time/process-flow management, I already consider it a separate task—and probably the one that would benefit most drastically from being automated, at least in my case.
Yes, there probably is an in-universe explanation for why organic pilots are necessary. I think droids were shown to be worse fighters than clones (too slow/stupid?) in the Prequels.
However, the implied prediction that we will discover FTL travel before building AI pilots superior to humans still seems unlikely.
I don’t see how acknowledging that different models work in different contexts necessitates giving up the search for objective truth.
Let’s say that, in order to reduce complexity, we separate Physics into two fields—Relativistic Mechanics and Quantum Mechanics—whose models currently don’t mesh together. I think we can achieve that without appealing to subjectivity or abandoning the search for a unifying model. Acknowledging the limitations of our current models seems enough.
After the training begins, something like 80% of the recruits drop out during Hell Week. SEALs are selected for their motivation, which is not available to everyone headed for a warzone.
On the other hand, if you’d really like an existential threat to get you going, you may consider looking into the problem of goal alignment in AGI, or aging.
“The mystery is why the community doesn’t implement obvious solutions. Hiring PR people is an obvious solution. There’s a posting somewhere in which Anna Salamon argues that there is some sort of moral hazard involved in professional PR, but never explains why, and everyone agrees with her anyway.”
“‘You’, plural, the collective, can speak as freely as you like… in private.”
Suppose a large part of the community wants to speak as freely as it likes in public, and the mystery is solved.
We even managed to touch upon the moral hazard involved in professional PR—insofar as it is a filter between what you believe and what you say publicly.