The comment inside the Google Doc was made by me. I was not trolling you on purpose; I thought your writing was literally a test for me, and I had little time to spare at the time. Most importantly, I simply couldn’t figure out how to remove things once they were written. If you are not too spooked I’d like to retry reading it. No comments inside the Google Doc this time; I will just leave feedback right here if you want.
Eugen
What if it actually doesn’t, and their craft are really only limited by how fast their typical UFO discs can spin without killing the crew inside (apparently they are sponge-like inside), since unlike us they already know how to create anti-gravity to pull their ships forward? In that case, the reason we are not dead yet is that they still needed to figure out how to construct motherships fast enough for a full-scale Earth invasion after we apparently killed most of their messengers. Our strategy should then be to pool resources into defending the Earth against an alien invasion and make it so costly for them that they will instead consider a trade agreement with us, which may at some point become more attractive to them than an all-out war. Trade is the way forward. Of course that is only conjecture; I don’t really know if they exist, but assigning literally zero probability to this may be stupid.
A good example of the proposed mechanism at work can probably be seen in the variety of psychosomatic symptoms experienced and reported by soldiers who fought in the First World War (often diagnosed as “shell shock”, “male hysteria”, or “war neurosis”). Symptoms included hysterical blindness, deafness, mutism, and even paralyzed limbs, all without any apparent physical cause.
Also, the “thousand-yard stare” seems to be explainable by a similar mechanism: the module producing conscious experience seemingly “detaches” itself from the body it inhabits in an attempt to distance itself from the horrors. This looks very similar to depersonalization disorder, which can also be triggered by highly distressing experiences.
Agreed. I would further claim most sellers are not actually aware that they are just selling the representation.
The plain but known and studied “secret” to happiness is to adjust your expectations, refrain from unfavorable social comparisons, and keep a gratitude journal or write gratitude letters to people (whether you choose to send them or not), which no one ever does. In my case, watching a lot of nature and history documentaries on the BBC and being aware of how everyone and everything tends to have a life a lot shittier than mine helps me keep track of my relative fortune and put my minor miseries into stark perspective.
Not being depressed or having some other mental illness helps a lot with happiness too, I hear, but we can’t reliably help anyone with that.
As someone who has worked with online marketing services like Google AdWords, Facebook ads, and search engine optimization, I can absolutely assure you that your statement is not generally true. Advertising is definitely not a terrible waste of resources, though you are right insofar as it CAN be. However, if the basic conditions are right (product and price make sense, the website looks attractive and is easily navigable, etc.), ads can be amazing and increase your sales by several thousand percent. I’ve had customers that made a return of roughly 100€ for every 3€ invested (yes, including both the cost-per-click for the ad platform and our online marketing service priced at 75€/h; and no, people directly navigating to the URL rather than being led to a specific product or landing page are not counted either).
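To make the arithmetic behind that “100€ for every 3€” figure concrete, here is a minimal sketch with made-up campaign numbers (all values below are hypothetical, just chosen to land near that ratio):

```python
# Hypothetical campaign figures, chosen to roughly match "100 EUR back per 3 EUR spent".
ad_spend = 2250.0        # cost-per-click charges paid to the ad platform (EUR)
agency_hours = 10
agency_rate = 75.0       # marketing service billed at 75 EUR/h
revenue = 100_000.0      # sales attributed to the ads (direct URL traffic excluded)

total_cost = ad_spend + agency_hours * agency_rate
roas = revenue / total_cost   # return on ad spend, service fees included

print(f"total cost: {total_cost:.0f} EUR")
print(f"revenue:    {revenue:.0f} EUR")
print(f"ROAS:       {roas:.1f}x, i.e. about {100 / roas:.1f} EUR spent per 100 EUR back")
```

The point of tracking it this way is that every cost, including the agency hours, goes into the denominator, so the ratio is not flattered by hiding the service fee.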
In fact, next to the stock market, advertising may be one of the few examples of actual civilizational adequacy; I’m especially speaking of the ad platform providers themselves (like Google or Facebook), who have the resources to optimize the absolute crap out of their ad systems. If the company doing the online advertising (or providing the platform) is the same one that directly profits from it, then you have an alignment of incentives and the ability to experiment and continuously improve ads in ways you can almost perfectly quantify and track; this can be incredibly powerful.
Yes, there are companies that can afford not to advertise their product, and specific constellations where it may in fact even save a lot of money not to advertise, but consider that those are usually products that are completely unique, have high signaling value, are already well known, and/or are plainly ten times better than anything else the competition has on the market (e.g. Tesla). But if you basically sell the same thing other people are selling, then I’m afraid there is no other similarly reliable way for you to gain visibility and sales than to simply invest in ads. (And if you think otherwise, then please elaborate.)
I can’t comment on “old world” things like the efficiency of billboards or TV ads, but barring the odd dolt, you can be damn sure that the people making decisions about purchasing that ad space or airtime usually know their way around evaluating numbers, and I’d honestly be extremely surprised to learn that they are actually all fools burning their money.
The obvious question is how it is even possible that Wikipedia works at all. If Wikipedia didn’t exist in our universe, we would now be tempted to walk away from this with a high probability estimate that the concept is simply impossible to pull off, for the various reasons mentioned; yet here we live in a world where Wikipedia is clear evidence to the contrary, and to my knowledge it suffers from many of the problems you and Qiaochu_Yuan mentioned above. Are we to conclude, then, that the sequential nature of the Arbital content is the crux here?
As we all know, you can almost always dig up something on even the most obscure niche topic. So what is the core appeal for the vast number of content creators? Is it simply that Wikipedia is recognized as the internet’s “centralized encyclopedia” and contributing to it feels so high-status that one’s total anonymity is not perceived as a huge issue? That still would not explain how it got to where it is today: how did Wikipedia bootstrap itself to where it is now?
I love the concept; it’s very useful for one’s own mental hygiene to notice slipping into a justification mindset, and I expect that if you manage to put it to regular use in social situations, it can really become the equivalent of some oil on the gears of tedious and annoying conversations.
I sometimes have to deal with people who are always late and never ever get anything done on time or as promised, possibly due to procrastination, and their instinct is to justify it, because it seems their entire strategy for getting through life is built around dodging responsibility by always justifying everything. At this point, though, their constant justifications annoy me even more than the fact that they don’t get things done on time, which in such cases I had already assumed and factored in anyway. For such cases I find I’m already used to cutting their BS short with a very close variant of HWA: “Oh well, what’s done is done. How do you think we should deal with it?”
So basically it’s just saying something is “not great, but everything else is even worse” in a somewhat ethnically incorrect way? I don’t think I’m sold on becoming a frequent user of the phrase; I predict your mileage will vary with “trigger-happy” crowds who scorn the inherent political incorrectness of the term. Also, it’s not exactly a super complex thought you are trying to compress here; you are saving maybe five words at the cost of raised eyebrows and bewilderment. IMO not worth it.
I totally got that part; I’m saying your writing heavily implies the assumption that nerds in general are oblivious to this insight of yours, rather than acting contrarian on purpose by semi-conscious, calculated choice. I definitely consider myself a member of the nerd spectrum, but I was never blind to these social transactions. If someone talks nonsense of the kind that signals group membership, there are still many valid reasons to engage in an object-level discussion. I may try to signal to others of the SMART tribe, or even just to that person, that I’m not of his/her tribe and don’t care to belong to it or spend time with any of them. I may try to dominate and ridicule my opponent, or I may try to genuinely engage, because some people can actually be saved from their folly. People sometimes deconvert from their follies and their religions; they never tell it to your face, but sometimes you can plant a seed in just the right place and it happens a week later when the cognitive dissonance becomes unbearable.
EDIT: On the upside I should point out though that “How nerdy vs political is this person about this topic?” is not really a bad question to ask oneself before engaging. If you choose to defect by keeping your supposed object-level frame, make sure you are aware of the cost and the potential gains rather than going with your gut.
I’ve glanced over a few posts of yours recently and I feel your caricature of nerds is quite off, if we mean something even remotely similar by that word. I’ve rarely met a nerd with that level of clearly autistic (conscious and subconscious) obliviousness to the fact that each message has several sides to it, and that the object-level information is not all there is to what is going on in social settings. There are tribes and subtribes of smart people out there who make a big deal of object-level truth, but they are all simultaneously playing the social game too (hint: academia). And they do it quite proficiently, as measured and judged by the (sub)tribe they choose to belong to. If belonging to your tribe implies you had better pay attention to what kind of object-level information leaves your face (or fingertips), then you will tend to do so across situations and with other people as well, sometimes quite deliberately and in a contrarian manner, at the entirely calculated cost of “defecting” against people who don’t share that value.
There is a very obvious problem with [1] as well:
“The first strategy involves sending a hypothetical example’s equivalent back in time and using the present knowledge of the outcome as a justification for the validity or not of the argument.”
It has basically the same problem as any “reasoning by analogy”-type argument. Reality is built up from relatively simple components and becomes complex quickly the “further up” you go. What you do is take a slice from the middle, compare it to some other slice from the middle, and say: X is like Y; Z applies to Y; therefore Z also applies to X.
In a perfect world you’d never even have to rely on reasoning by analogy, because instead of comparing one slice of reality to some other slice you’d just explain something from the ground up. Often we can’t do that with sufficient detail, let alone with enough time, so reasoning by analogy is not always the wrong way to go; but the examples you picked are too far apart and too different, I think.
Here’s an example of reasoning by analogy I once put to a friend:
Your brain is like a vast field of wheat, and the paths that lead through the field are like your neural connections. The more often you walk down the same path, the more deeply that path becomes ingrained, and eventually habits form. Doing something you’ve never done before is like leaving the path and pushing through a thick patch of wheat: it requires a lot more energy from you, and it will for some time. But what you need to trust is that, as you know, eventually there will be a new path in the field if you just walk it often enough. In exactly the same fashion, your brain will develop new connections; you just have to trust that it will actually happen, just as you completely trust your intuition that eventually you’ll walk a deep path into the field.
And he replied: “So what if I took a tractor and just mowed the whole field down?” Wow, I never saw that one coming; I never even expected anyone could miss the point so completely...
Obviously I wasn’t claiming a brain IS LIKE a field in every way you can possibly think of. It just shares a few abstract features with fields, and those were the features I was interested in, so it seemed like a reasonable analogy.
Coming back to your story: the strength of an argument by analogy depends on how well you can actually connect the two and make a persuasive case that the two things work similarly in those features and structural similarities you are actually trying to compare. It’s not clear to me how your analogy helps your case. A superintelligent AI is the most intelligent thing you can (or can’t) imagine, yet it could turn the universe into paperclips, which I don’t care much for, so I for one do not value intelligence above literally ALL else.
If your friend says feature X is the most important thing we should value about humans, the obvious counterargument would be: “Perhaps it could be, yet there are many features we also care about in humans apart from their intelligence, and someone who is only intelligent but cannot or does not do Y, Z, and W would not be good company for any normal human, so these other things must matter too.”
Alternatively, you could try to transcend the whole argument and point out how meaningless it is. To explain how, here’s a more mathematical approach: if “human value” is the outcome variable of a function, your friend rips out one particular variable X and says this variable contributes most to the outcome. For him that may be true, or at least he may genuinely believe it, but the whole concept seems ridiculous: we all know we care about a lot of things in other humans, like loyalty and friendship and reciprocity and humor and whatnot.
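To spell that function framing out (the symbols below are placeholders I’m inventing for illustration, not anything your friend actually defined):

```latex
% "Human value" as a function of many traits; V, f, and the x_i are
% placeholder symbols used only to make the framing explicit.
\[
  V(\text{person}) = f(x_1, x_2, \dots, x_n),
  \qquad x_1 = \text{intelligence},\quad x_2 = \text{loyalty},\quad x_3 = \text{humor},\ \dots
\]
```

His claim then amounts to asserting that the x_1 term dominates everyone’s f, which is an empirical claim about other people’s value functions, not something he can settle by declaring it.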
To make this argument meaningful, he’s really the one who has to argue how exactly it helps to focus on only one of the very many parts that we clearly all value. What purpose does he think it would accomplish if he managed to make others believe it?
Now, I don’t think he or most people actually care; it seems like a classic case of “here’s random crap I believe, and you should believe it too for some clever reason I’m making up on the fly, both so I can dominate you by making you give in and so we can better relate”.
2) That depends entirely on the definition of meaning, just as AndHisHorse points out. It’s not clear to me what the most accepted definition of meaning is, not even among scientists, let alone laymen.
One could even define meaning as loosely as “a map/representation of something that is dependent on or entangled with reality”. Most people seem to use meaning and purpose interchangeably. In this case I like purpose better, because then we can ask “what is the purpose of this hammer?” and there is a reasonable answer to it that we all know. And if you further ask “why does it have purpose?”, you can say it is because humans made it to fulfill a certain function, which is therefore its purpose.
But be careful; “what is the purpose of a wing?” (or alternatively insert any other biologically evolved feature here) may be a deeply confused question. In the case of the hammer, “purpose” is a future-directed function or utility, because an agent shaped it. In the case of a wing, there is no future-directed function, but rather a past-directed reason for its existence. Therefore, “in order to fly” is not the correct answer to the question “why do wings exist?”; the correct answer has to be past-directed (something like “because it enabled many generations before to do X, Y, and Z and thus became selected for by the environment”). So questions about the purpose of hammers and the purpose of wings aren’t necessarily equally well-defined at all.
Humans, being biologically evolved beings, don’t have purpose in the sense of a hammer, but only in the sense of a wing. The difference, however, may be that we can exert more agency than a wing or a bird: we can actually create things with purpose and can thus possibly give ourselves or our lives purpose.
My answer then would be that we don’t have future-directed purpose apart from whatever purpose(s) we choose to give ourselves. Sure, we may be in a simulation, but there is little evidence that this simulation is in any way about us; we may just be a complete by-product of whatever purpose the simulation may have.
I thought a bit about whether the existence of such as-yet-unfelt feelings is plausible, and I believe I came up with one real-life example:
Depending on whether one is willing to qualify the following as a feeling or an emotion, feeling truly and completely anonymous may be one of those feelings that was not “fully” realizable for past generations but is now possible thanks to the internet. The most immersive example of that weird experience of anonymity so far may be something like VRChat, which incidentally seems to lead to some rather peculiar behaviors (e.g. look for some “VRchat uganda knuckles” videos on YouTube).
Great review and summary in one! I especially liked the scorecard section at the end as a quick recap. The issues you had with the content of the book were well-founded, and you explained your gripes with them well enough to make me nod in agreement. I especially buy your general “in addition to” argument.
Only thing I’d like to add is the following general insight:
In your scorecard section you write:
Ranked from most about Y to least about Y:
Food isn’t about Nutrition
[...]
I’ve worked in sales for websites and ads before, and similar to your shocking personal insight when it comes to medicare, I’ve had my own: you wouldn’t believe how much food isn’t about nutrition and is actually about signalling, judging by the absolutely shocking number of customers peddling their crackpot vegan food, raw food, and supplements I had to deal with during my time there. Sure, obviously food is about nutrition, too; however, I’d say food is only about nutrition if you actually don’t have enough food. If people actually have a choice of what to eat, then it’s no longer just about the nutrition; it becomes a potential playing field for signalling, just like pretty much anything else.
For me personally, though, I think I can say with a straight face that food was never about signalling, probably because I come from parents who actually experienced hunger and scarcity, so I eat whatever is on the plate and I’m really fine with whatever you dump on it, while I raise a judging eyebrow at people making a big deal of their moral or expensive or “sophisticated” food choices. I abhor all sorts of foodies with pretty much the same zeal with which I abhor modern art museums.
I think the particular insight here is generalizable to the following statement:
For people who choose to compete in the social status playing field for X, X is not about Y but mainly about Z.
In other words, the truth of the statement “Food isn’t about Nutrition” depends almost entirely on who you are actually talking to.
Signalling well takes effort, and the more elite the group you’re trying to signal to, the more effort is usually required of you. Thus, I predict hardly anyone actually tries to compete on all signalling playing fields simultaneously at all times. Even before being aware of hidden motives and signalling, I never even tried to compete in some fields (like food or sports or owning anything expensive) and instead focused entirely on competing in other areas, where I know I’m actually good and where I know my comparative advantage lies.
Not only that: I regularly find myself actively sh*tting on playing fields I myself am no good at. Damn all those pretentious artists and foodies and religious types and sports fans and people who can afford to spend lots of money on signalling! I’m obviously much better than all of those bastards, because science and academia are actually saving the world and that’s the only thing that really matters, and incidentally it’s also where I can signal all my SMART.
If you can articulate and better define the handful of core insights you actually hope to transmit, maybe you or someone else here can pinpoint better literature for what you are looking for.
It seems to me Eliezer’s “Probability is in the Mind” post may include at least part of what you are looking for. Maybe you can slightly edit and streamline it to make it more approachable for your audience.
Highlights from that post:
Quote #1
Jaynes was of the opinion that probabilities were in the mind, not in the environment—that probabilities express ignorance, states of partial information; and if I am ignorant of a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon.
Quote #2
The frequentist says, “No. Saying ‘probability 0.5’ means that the coin has an inherent propensity to come up heads as often as tails, so that if we flipped the coin infinitely many times, the ratio of heads to tails would approach 1:1. But we know that the coin is biased, so it can have any probability of coming up heads except 0.5.”
The Bayesian says, “Uncertainty exists in the map, not in the territory. In the real world, the coin has either come up heads, or come up tails. Any talk of ‘probability’ must refer to the information that I have about the coin—my state of partial ignorance and partial knowledge—not just the coin itself.
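To illustrate the point from those quotes in code (the scenario and the numbers are my own toy example, not from the post): three observers consider the very same coin flip but hold different information about it, so they write down different probabilities for the same physical event.

```python
import random

random.seed(0)

# One physical coin, biased 80% towards heads. The bias and the outcome are
# facts about the territory; the probabilities below are facts about maps.
TRUE_BIAS = 0.8
outcome = "heads" if random.random() < TRUE_BIAS else "tails"

# Observer A knows the coin is biased 80/20 but hasn't seen the flip.
p_heads_A = 0.8

# Observer B only knows "it's some coin" and hasn't seen the flip either.
p_heads_B = 0.5

# Observer C watched the flip land, so has no uncertainty left at all.
p_heads_C = 1.0 if outcome == "heads" else 0.0

print(f"actual outcome:            {outcome}")
print(f"A (knows the bias):        P(heads) = {p_heads_A}")
print(f"B (knows nothing special): P(heads) = {p_heads_B}")
print(f"C (saw it land):           P(heads) = {p_heads_C}")
# Same coin, same flip, three different probabilities, because each probability
# lives in that observer's state of information, not in the coin.
```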
Many years ago I was in the situation of having to learn stats for my B.Sc. in Psychology. Up until that point I’d always been crap at math and great at everything else in school.
What finally made stats (and to some extent math) click for me, and eventually made me pretty decent at it, was understanding what a formal language actually is, by reading Gödel, Escher, Bach: An Eternal Golden Braid.
Now, I wouldn’t recommend the book itself, because it’s extremely dense. I’m just saying: assume that people don’t understand what a formal language actually is, how it ties into things like axioms and definitions, and how it connects to logic. If that foundation isn’t in place, then I’d assume whatever you try placing on top of it is built on quicksand.
Unfortunately I don’t have good and concise reading suggestions on that point, I’m afraid. https://en.wikipedia.org/wiki/Formal_language unfortunately gets complex quickly and might cause despair. I think the core insight that needs to be in place is that if a formal language like math is logical, the strict rules of symbol-shuffling are obeyed, and the axioms are actually true, then what falls out the other end is Truth. Moreover, a formal system can be lots of different sets of rules (like different programming languages), but what makes math so special is that its rules are isomorphic to reality, and stats is in essence a subset of that system.
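If a concrete toy example helps, the MIU system from GEB itself is about the smallest possible demonstration of “strict symbol-shuffling from an axiom”. Here is a rough sketch of it in Python (my own paraphrase of the book’s rules, so treat the details as illustrative):

```python
# The MIU system from GEB: one axiom ("MI") and four purely syntactic rules.
# "Theorems" are exactly the strings reachable by shuffling symbols per the rules.

def apply_rules(s):
    """Return every string derivable from s by one rule application."""
    results = set()
    # Rule I: if the string ends in I, you may append a U.
    if s.endswith("I"):
        results.add(s + "U")
    # Rule II: Mx -> Mxx (double everything after the leading M).
    if s.startswith("M"):
        results.add("M" + s[1:] * 2)
    # Rule III: any occurrence of III may be replaced by U.
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            results.add(s[:i] + "U" + s[i + 3:])
    # Rule IV: any occurrence of UU may be dropped.
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            results.add(s[:i] + s[i + 2:])
    return results

# Everything reachable from the axiom "MI" within a few rule applications.
theorems = {"MI"}
for _ in range(4):
    theorems |= {t for s in theorems for t in apply_rules(s)}

print(sorted(theorems, key=len)[:10])
# Whatever you do, "MU" never shows up: it is provably unreachable under these
# rules, which is exactly the kind of statement a formal system lets you make.
```

The rules know nothing about meaning; they only push symbols around, and yet definite truths about the system fall out the other end, which is the insight I mean.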
There may be one more core idea that is obvious to us now but seemingly wasn’t very common in many societies: Things might actually get better and the world is not about to end soon.
Most religions have the basic arc of a story: there is a beginning, there is an ending, and lots of BS sandwiched in the middle. Since most civilizations didn’t realize how old the earth actually is, the apocalyptic ending part of the religious story was usually projected just a few hundred years out, especially by the dominant monotheistic civilizations during the last two millennia. It seems really weird to think that so many cultures lived in the ruins of the Roman Empire and didn’t seem to have the ambition to rediscover and adapt what was right in front of them, but it seems their zeitgeist was fairly fatalistic. Why build new aqueducts and a sewer system if the end is coming in a few centuries or even decades anyway?
Regarding the wheel
I don’t find it mysterious at all why wheels took “so long”. I’d expect the wheel to be conceptually discovered much earlier than its first practical use, because to be actually useful in the real world a wheel requires roads or vast flat plains to outperform carrying a backpack, let alone simply loading an animal with sacks, in case your civilization has figured out husbandry of at least one animal useful for transport.
Regarding printing
I was always a lot more surprised by how long the Gutenberg-type printing press (i.e. movable type press) took to invent and take off. According to Wikipedia, the earliest ones were invented around 1000-1200 AD in Asia: https://en.wikipedia.org/wiki/Printing_press
Compared to other feats of engineering of the time, you’d think arranging single letters on a slab and printing a few hundred pages rather than having monks do all the painstaking handwriting must have been an utterly obvious invention (though there may be a prestige factor in play here, since obviously handmade writing with illustrations is much more beautiful and would be preferred by people who could read, who were to a large extent also the people who’d be able to afford it in the first place).
However, upon looking into it, the invention of the “hand mould” and its manufacturing precision seem to have been crucial for the technology to take off. Precise manufacturing of type matters enormously, because if some of your letters are just a tiny bit shorter or longer than the others it won’t work at all, and you’ll get missing letters or otherwise low-quality print (e.g. if the letters are not at the same height). Since printing quality before Gutenberg was bad, people who could actually read presumably wouldn’t have wanted to waste their time with this low-status, low-quality nonsense anyway. Unfortunately there is no wiki article in English, but check out these pictures to get an idea of what the first hand moulds actually looked like: https://www.buch-kunst-papier.de/drucken/raritaeten/handgieinstrument.php
Basically, people cast the type pieces directly in their hand inside a small device that could be separated into two halves. The letter itself was stamped into a block of metal that would then fit inside the handheld mould device perfectly, achieving the needed standardized precision. Check out the pictures in this slider to get the idea: http://www.druckkunst-museum.de/de/schriftgiesserei.html
So it’s not that no one had conceived of it or tried it before; it’s just that the quality was too bad, and no one had figured out how to make perfectly standardized metal type. Gutenberg’s achievement wasn’t really the obvious basic idea of movable type; his real achievement was the invention of the hand mould, which is the much less obvious part of making the printing press actually work with enough efficiency to be worth the trouble.
Also, see https://en.wikipedia.org/wiki/Johannes_Gutenberg#Printing_method_with_moveable_type
Thus, they speculated that “the decisive factor for the birth of typography”, the use of reusable moulds for casting type, was a more progressive process than was previously thought.[32] They suggested that the additional step of using the punch to create a mould that could be reused many times was not taken until twenty years later, in the 1470s. Others have not accepted some or all of their suggestions, and have interpreted the evidence in other ways, and the truth of the matter remains uncertain.[33]
The situation in Sweden is rather similar.
However, in case you plan to do a psychological study in Sweden, don’t just factor in time and nerves; also be ready to pay up:
https://www.epn.se/media/1207/application_form__translated_.pdf
Page one says “merely processing personal data” will cost you 5000 kr (~610 US$). Other options on the menu may incur higher fees for wasting the Ethical Review Boards’ precious time. What exactly is personal data, you ask?
Ethical Review Act:
https://www.epn.se/media/2348/the_ethical_review_act.pdf
Section 3
This law shall apply to research that includes the handling of:
Sensitive personal data pursuant to Section 13 of the Personal Data Act (1998:204), or
Personal data regarding violations of law that include crimes [...]
Personal Data Act:
http://www.wipo.int/edocs/lexdocs/laws/en/se/se097en.pdf
Section 13
It is prohibited to process personal data that reveals
a) race or ethnic origin,
b) political opinions,
c) religious or philosophical beliefs, or
d) membership of a trade union.
It is also prohibited to process such personal data as concerns health or sex life. Information of the kind referred to in the first and second paragraphs is designated as sensitive personal data in this Act.
Don’t get me wrong, I wouldn’t condone repeating the Milgram or Stanford experiments, so I do acknowledge the need for some rules, but I’d prefer them to be straightforward rather than a convoluted bureaucratic mess, perhaps coupled with strict but straightforward data-handling rules.
Just out of curiosity: how probable do you think it is that any SETI contact will turn out to be AI-initiated as opposed to biological (in the broadest possible sense of that word)?